AWS Kinesis Data Streams vs SQS


Purpose

  • Amazon Kinesis Data Streams
    • allows real-time processing of streaming big data and the ability to read and replay records to multiple Amazon Kinesis Applications.
    • Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the same record processor, making it easier to build multiple applications that read from the same Amazon Kinesis stream (for example, to perform counting, aggregation, and filtering).
  • Amazon SQS
    • offers a reliable, highly-scalable hosted queue for storing messages as they travel between applications or microservices.
    • It moves data between distributed application components and helps decouple these components.
    • provides common middleware constructs such as dead-letter queues and poison-pill management.
    • provides a generic web services API and can be accessed by any programming language that the AWS SDK supports.
    • supports both standard and FIFO queues
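The per-partition-key routing that makes the KCL behavior above possible can be sketched locally. Kinesis hashes each partition key with MD5 into a 128-bit hash-key space that the shards tile; this toy version assumes evenly sized shard ranges, which is the default when a stream is created:

```python
import hashlib

def shard_for_key(partition_key: str, num_shards: int) -> int:
    """Map a partition key to a shard index the way Kinesis routes records:
    take the 128-bit MD5 of the key and find which shard's hash-key range
    it falls into (ranges split the space evenly in this sketch)."""
    hash_value = int(hashlib.md5(partition_key.encode("utf-8")).hexdigest(), 16)
    range_size = 2 ** 128 // num_shards
    return min(hash_value // range_size, num_shards - 1)

# The same key always lands on the same shard, so a single record
# processor sees all records for that key, in order.
assert shard_for_key("truck-42", 4) == shard_for_key("truck-42", 4)
```

This is why choosing a high-cardinality partition key matters: keys, not records, are what get spread across shards.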

Scaling

  • Kinesis Data Streams offers two capacity modes:
    • Provisioned Mode: Requires manual provisioning and scaling by increasing shards
    • On-Demand Mode (November 2021): Fully managed with automatic scaling – no manual shard management required. Default capacity of 4 MB/s write, scales up to 200 MB/s (or 1 GB/s with limit increase)
  • SQS is fully managed, highly scalable and requires no administrative overhead and little configuration
    • Standard Queue: Unlimited throughput, nearly unlimited transactions per second
    • FIFO Queue: Default 300 TPS per API action, up to 3,000 TPS with high throughput mode (up to 70,000 TPS in select regions)
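The provisioned-mode sizing implied above can be sketched as a quick capacity calculation from the per-shard write limits (1 MB/s and 1,000 records/s per shard):

```python
import math

def shards_needed(write_mb_per_sec: float, records_per_sec: float) -> int:
    """Estimate a provisioned-mode shard count from the two per-shard
    write limits; whichever dimension is the bottleneck wins."""
    by_throughput = math.ceil(write_mb_per_sec / 1.0)   # 1 MB/s per shard
    by_records = math.ceil(records_per_sec / 1000.0)    # 1,000 records/s per shard
    return max(by_throughput, by_records, 1)

# 4.5 MB/s needs 5 shards even though 3,000 records/s only needs 3.
assert shards_needed(4.5, 3000) == 5
```

On-Demand mode makes this arithmetic AWS's problem rather than yours, which is the core of the scaling difference.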

Ordering

  • Kinesis provides ordering of records per shard, as well as the ability to read and/or replay records in the same order to multiple Kinesis Applications
  • SQS Standard Queue does not guarantee data ordering and provides at-least-once delivery of messages
  • SQS FIFO Queue guarantees data ordering within a message group and exactly-once processing
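The FIFO guarantee above is scoped to the message group, which a toy consumer can illustrate: interleaved groups are separated, but order within each group is preserved (the group IDs and bodies here are made up):

```python
from collections import defaultdict, deque

# Toy illustration of SQS FIFO semantics: ordering is guaranteed only
# within a message group, so different groups can be consumed in parallel.
queue = [("orders", "o1"), ("invoices", "i1"), ("orders", "o2"), ("invoices", "i2")]

groups = defaultdict(deque)
for group_id, body in queue:
    groups[group_id].append(body)

assert list(groups["orders"]) == ["o1", "o2"]    # per-group order preserved
assert list(groups["invoices"]) == ["i1", "i2"]  # independent of other groups
```

This is also why FIFO throughput scales with the number of distinct message groups: each group is an independent ordered lane.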

Data Retention Period

  • Kinesis Data Streams stores the data for 24 hours, by default, which can be extended to 365 days (8760 hours)
  • SQS stores the message for 4 days, by default, can be configured from 1 minute to 14 days, and removes the message permanently once deleted by the consumer

Delivery Semantics

  • Kinesis and SQS Standard Queue both guarantee at least once delivery of the message.
  • SQS FIFO Queue guarantees exactly once delivery and processing
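Because at-least-once delivery can redeliver the same message, consumers are usually written to be idempotent. A minimal sketch of the dedup-by-message-ID pattern (the IDs and bodies below are made up for illustration):

```python
# Track IDs of already-processed messages so a redelivery is a no-op.
processed_ids = set()
results = []

def handle(message_id: str, body: str) -> None:
    """Idempotent handler: a duplicate redelivery is detected and skipped."""
    if message_id in processed_ids:
        return
    processed_ids.add(message_id)
    results.append(body)

# Simulate at-least-once delivery where "m1" arrives twice.
for mid, body in [("m1", "a"), ("m2", "b"), ("m1", "a")]:
    handle(mid, body)

assert results == ["a", "b"]  # the duplicate was processed only once
```

In production the "seen" set would live in a durable store (e.g. a database with a unique key), since an in-memory set does not survive consumer restarts.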

Parallel Clients

  • Kinesis supports multiple consumers reading from the same stream concurrently
    • Standard (shared throughput): 2 MB/sec per shard shared across all consumers
    • Enhanced fan-out: 2 MB/sec per shard per consumer (dedicated throughput)
  • SQS allows the messages to be delivered to only one consumer at a time and requires multiple queues to deliver messages to multiple consumers
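The throughput difference between the two Kinesis consumer models above reduces to simple arithmetic:

```python
def total_read_throughput(shards: int, consumers: int, enhanced_fan_out: bool) -> float:
    """Aggregate read throughput in MB/s for the two consumer models:
    shared 2 MB/s per shard split across consumers, versus a dedicated
    2 MB/s per shard for every enhanced fan-out consumer."""
    if enhanced_fan_out:
        return 2.0 * shards * consumers
    return 2.0 * shards  # shared across all consumers on the stream

assert total_read_throughput(4, 3, enhanced_fan_out=False) == 8.0   # contended
assert total_read_throughput(4, 3, enhanced_fan_out=True) == 24.0   # dedicated
```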

Use Cases

  • Kinesis use case requirements
    • Ordering of records.
    • Ability to consume records in the same order a few hours later
    • Ability for multiple applications to consume the same stream concurrently
    • Routing related records to the same record processor (as in streaming MapReduce)
    • Real-time analytics and processing
    • Data replay capability for reprocessing
  • SQS use case requirements
    • Messaging semantics like message-level ack/fail and visibility timeout
    • Leveraging SQS’s ability to scale transparently
    • Dynamically increasing concurrency/throughput at read time
    • Individually delaying messages with per-message delay timers
    • Decoupling application components
    • Simple message queuing without need for replay

Key Differences Summary

Feature            | Kinesis Data Streams                     | SQS
Purpose            | Real-time streaming data processing      | Message queuing and decoupling
Scaling            | Provisioned or On-Demand (auto-scaling)  | Fully managed (auto-scaling)
Ordering           | Guaranteed per shard                     | Standard: No; FIFO: Yes (per message group)
Retention          | 24 hours to 365 days                     | 1 minute to 14 days
Replay             | ✅ Supported                             | ❌ Not supported
Multiple Consumers | ✅ Yes (concurrent)                      | ❌ No (one at a time)
Delivery Semantics | At least once                            | Standard: At least once; FIFO: Exactly once
Latency            | ~70–200 ms                               | Single-digit milliseconds

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day, and both the questions and answers might become outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion, and correction.
  1. You are deploying an application to track GPS coordinates of delivery trucks in the United States. Coordinates are transmitted from each delivery truck once every three seconds. You need to design an architecture that will enable real-time processing of these coordinates from multiple consumers. Which service should you use to implement data ingestion?
    1. Amazon Kinesis
    2. AWS Data Pipeline
    3. Amazon AppStream
    4. Amazon Simple Queue Service
  2. Your customer is willing to consolidate their log streams (access logs, application logs, security logs, etc.) in one single system. Once consolidated, the customer wants to analyze these logs in real time based on heuristics. From time to time, the customer needs to validate heuristics, which requires going back to data samples extracted from the last 12 hours. What is the best approach to meet your customer’s requirements?
    1. Send all the log events to Amazon SQS. Setup an Auto Scaling group of EC2 servers to consume the logs and apply the heuristics.
    2. Send all the log events to Amazon Kinesis develop a client process to apply heuristics on the logs (Can perform real time analysis and stores data for 24 hours which can be extended to 365 days)
    3. Configure Amazon CloudTrail to receive custom logs, use EMR to apply heuristics the logs (CloudTrail is only for auditing)
    4. Setup an Auto Scaling group of EC2 syslogd servers, store the logs on S3 use EMR to apply heuristics on the logs (EMR is for batch analysis)
  3. A company needs to process streaming data with multiple independent consumers that need to read the same data concurrently. Which service should they use?
    1. SQS Standard Queue
    2. SQS FIFO Queue
    3. Kinesis Data Streams
    4. Amazon SNS
  4. A company wants to decouple microservices and needs exactly-once message processing with ordering guarantees. Which service should they use?
    1. Kinesis Data Streams
    2. SQS Standard Queue
    3. SQS FIFO Queue
    4. Amazon SNS
  5. A company wants to avoid manual shard management for their streaming data workload. Which Kinesis capacity mode should they use? (Assume November 2021 or later)
    1. Provisioned mode with Auto Scaling
    2. On-Demand mode
    3. Enhanced fan-out mode
    4. Standard mode

References

AWS Kinesis Data Streams – KDS


  • Amazon Kinesis Data Streams is a streaming data service that enables real-time processing of streaming data at a massive scale.
  • Kinesis Streams enables building of custom applications that process or analyze streaming data for specialized needs.
  • Kinesis Streams features
    • handles provisioning, deployment, ongoing-maintenance of hardware, software, or other services for the data streams.
    • manages the infrastructure, storage, networking, and configuration needed to stream the data at the level of required data throughput.
    • synchronously replicates data across three AZs in an AWS Region, providing high availability and data durability.
    • stores records of a stream for up to 24 hours, by default, from the time they are added to the stream. The limit can be raised to up to 365 days (8760 hours) by enabling extended data retention.
  • Data such as clickstreams, application logs, social media, etc can be added from multiple sources and within seconds is available for processing to the Kinesis Applications.
  • Kinesis provides the ordering of records, as well as the ability to read and/or replay records in the same order to multiple applications.
  • Kinesis is designed to process streaming big data and the pricing model allows heavy PUTs rate.
  • Multiple Kinesis Data Streams applications can consume data from a stream, so that multiple actions, like archiving and processing, can take place concurrently and independently.
  • Kinesis Data Streams application can start consuming the data from the stream almost immediately after the data is added and put-to-get delay is typically less than 1 second.
  • Kinesis Streams is useful for rapidly moving data off data producers and then continuously processing the data, be it to transform the data before emitting to a data store, run real-time metrics and analytics, or derive more complex data streams for further processing
    • Accelerated log and data feed intake: Data producers can push data to Kinesis stream as soon as it is produced, preventing any data loss and making it available for processing within seconds.
    • Real-time metrics and reporting: Metrics can be extracted and used to generate reports from data in real-time.
    • Real-time data analytics: Run real-time streaming data analytics.
    • Complex stream processing: Create Directed Acyclic Graphs (DAGs) of Kinesis Applications and data streams, with Kinesis applications adding to another Amazon Kinesis stream for further processing, enabling successive stages of stream processing.
  • Kinesis limits
    • stores records of a stream for up to 24 hours, by default, which can be extended to max 365 days (8760 hours)
    • maximum size of a data blob (the data payload before Base64-encoding) within one record is 1 megabyte (MB)
    • Each shard can support up to 1000 PUT records per second.
  • S3 is a cost-effective way to store the data, but not designed to handle a stream of data in real-time

Kinesis Data Streams Terminology

 

Kinesis Architecture

 

  • Data Record
    • A record is the unit of data stored in a Kinesis data stream.
    • A record is composed of a sequence number, partition key, and data blob, which is an immutable sequence of bytes.
    • Maximum size of a data blob is 1 MB
    • Partition key
      • Partition key is used to segregate and route records to different shards of a stream.
      • A partition key is specified by the data producer while adding data to a Kinesis stream.
    • Sequence number
      • A sequence number is a unique identifier for each record.
      • Kinesis assigns a Sequence number, when a data producer calls PutRecord or PutRecords operation to add data to a stream.
      • Sequence numbers for the same partition key generally increase over time; the longer the time period between PutRecord or PutRecords requests, the larger the sequence numbers become.
  • Data Stream
    • Data stream represents a group of data records.
    • Data records in a data stream are distributed into shards.
  • Shard
    • A shard is a uniquely identified sequence of data records in a stream.
    • Streams are made of shards, which are the base throughput unit of a Kinesis stream; pricing is on a per-shard basis.
    • Each shard supports up to 5 transactions per second for reads, up to a maximum total data read rate of 2 MB per second, and up to 1,000 records per second for writes, up to a maximum total data write rate of 1 MB per second (including partition keys)
    • Each shard provides a fixed unit of capacity. If the limits are exceeded, either by data throughput or the number of PUT records, the put data call will be rejected with a ProvisionedThroughputExceeded exception.
    • This can be handled by
      • Implementing a retry on the data producer side, if this is due to a temporary rise of the stream’s input data rate
      • Dynamically scaling the number of shards (resharding) to provide enough capacity for the put data calls to consistently succeed
  • Capacity Mode
    • A data stream capacity mode determines the pricing and how the capacity is managed
    • Kinesis Data Streams supports two capacity modes: On-Demand (launched November 2021) and Provisioned
      • On-demand mode (Launched November 2021)
        • KDS automatically manages the shards in order to provide the necessary throughput.
        • No capacity planning required – automatically scales to handle gigabytes of write and read throughput per minute.
        • Default capacity of 4 MB/s write (4000 records/s), can scale up to 200 MB/s (or 1 GB/s with limit increase via AWS Support).
        • You are charged only for the actual throughput used (per GB of data written and read).
        • KDS automatically accommodates the workloads’ throughput needs as they ramp up or down.
        • Ideal for unpredictable or variable workloads.
      • Provisioned mode
        • Number of shards for the data stream must be specified.
        • Total capacity of a data stream is the sum of the capacities of its shards.
        • Shards can be increased or decreased in a data stream as needed and you are charged for the number of shards at an hourly rate.
        • Provides predictable costs for steady workloads.
  • Retention Period
    • All data is stored for 24 hours, by default, and can be increased to 8760 hours (365 days) maximum.
    • Extended data retention (beyond 24 hours) and long-term data retention (beyond 7 days up to 365 days) incur additional charges.
  • Producers
    • A producer puts data records into Kinesis data streams.
    • To put data into the stream, the name of the stream, a partition key, and the data blob to be added to the stream should be specified.
    • Partition key is used to determine which shard in the stream the data record is added to.
  • Consumers
    • A consumer is an application built to read and process data records from Kinesis data streams.
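The producer fields above (stream name, partition key, data blob) map directly onto the Records list that a PutRecords call expects. A minimal sketch of shaping events into that request, assuming hypothetical event dicts with a `truck_id` field; with boto3 the resulting list would be passed as the `Records` parameter of `put_records`:

```python
import json

def build_put_records_entries(events, key_field):
    """Shape a batch of dict events into the Records list a PutRecords
    request expects: a Data blob plus the PartitionKey that determines
    which shard each record is routed to. `key_field` names whichever
    event attribute should drive the routing."""
    return [
        {
            "Data": json.dumps(event).encode("utf-8"),
            "PartitionKey": str(event[key_field]),
        }
        for event in events
    ]

# Hypothetical GPS events keyed by truck ID, so each truck's records
# stay in order on one shard.
entries = build_put_records_entries(
    [{"truck_id": "t-1", "lat": 40.7}, {"truck_id": "t-2", "lat": 41.2}],
    key_field="truck_id",
)
assert [e["PartitionKey"] for e in entries] == ["t-1", "t-2"]
```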

Kinesis Security

  • supports Server-side encryption using Key Management Service (KMS) for encrypting the data at rest.
  • supports writing encrypted data to a data stream by encrypting and decrypting on the client side.
  • supports encryption in transit using HTTPS endpoints.
  • supports Interface VPC endpoint to keep traffic between VPC and Kinesis Data Streams from leaving the Amazon network. Interface VPC endpoints don’t require an IGW, NAT device, VPN connection, or Direct Connect.
  • Cross-Region PrivateLink Support (November 2025): Interface VPC endpoints now support cross-region connectivity to Kinesis Data Streams within the same AWS partition.
  • FIPS 140-3 Support (September 2024): Kinesis Data Streams supports FIPS 140-3 enabled interface VPC endpoints for compliance requirements.
  • IPv6 Support (May 2025): Kinesis Data Streams supports IPv6 with dual-stack AWS PrivateLink interface VPC endpoints.
  • integrated with IAM to control access to Kinesis Data Streams resources.
  • integrated with CloudTrail, which provides a record of actions taken by a user, role, or an AWS service in Kinesis Data Streams.

Kinesis Producer

Data to Kinesis Data Streams can be added via API/SDK (PutRecord and PutRecords) operations, Kinesis Producer Library (KPL), or Kinesis Agent.

  • API
    • PutRecord & PutRecords are synchronous operations that send a single record or multiple records, respectively, to the stream per HTTP request.
    • use PutRecords to achieve a higher throughput per data producer
    • helps manage many aspects of Kinesis Data Streams (including creating streams, resharding, and putting and getting records)
  • Kinesis Agent
    • is a pre-built Java application that offers an easy way to collect and send data to the Kinesis stream.
    • can be installed on Linux-based server environments such as web servers, log servers, and database servers
    • can be configured to monitor certain files on the disk and then continuously send new data to the Kinesis stream
  • Kinesis Producer Library (KPL)
    • is an easy-to-use and highly configurable library that helps to put data into a Kinesis stream.
    • provides a layer of abstraction specifically for ingesting data
    • presents a simple, asynchronous, and reliable interface that helps achieve high producer throughput with minimal client resources.
    • batches messages, as it aggregates records to increase payload size and improve throughput.
    • Collects records and uses PutRecords to write multiple records to multiple shards per request
    • Writes to one or more Kinesis data streams with an automatic and configurable retry mechanism.
    • Integrates seamlessly with the Kinesis Client Library (KCL) to de-aggregate batched records on the consumer
    • Submits CloudWatch metrics to provide visibility into performance
  • Third Party and Open source
    • Log4j appender
    • Apache Kafka
    • Flume, fluentd, etc.
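The KPL's record collection can be approximated locally. This sketch groups (partition key, data) pairs into batches that respect the PutRecords per-request limits of 500 records and 5 MB; it illustrates the batching idea only, not the KPL's actual aggregation wire format:

```python
MAX_RECORDS_PER_CALL = 500             # PutRecords per-request record limit
MAX_BYTES_PER_CALL = 5 * 1024 * 1024   # PutRecords per-request size limit

def batch_records(records):
    """Group (partition_key, data) pairs into PutRecords-sized batches,
    roughly what the KPL's collection feature does before each request."""
    batches, current, current_bytes = [], [], 0
    for key, data in records:
        size = len(key) + len(data)
        # Flush the current batch when either limit would be exceeded.
        if current and (len(current) == MAX_RECORDS_PER_CALL
                        or current_bytes + size > MAX_BYTES_PER_CALL):
            batches.append(current)
            current, current_bytes = [], 0
        current.append((key, data))
        current_bytes += size
    if current:
        batches.append(current)
    return batches

# 1,200 small records split on the 500-record limit: 500 + 500 + 200.
records = [("k", b"x" * 1024)] * 1200
assert [len(b) for b in batch_records(records)] == [500, 500, 200]
```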

Kinesis Consumers

  • Kinesis Application is a data consumer that reads and processes data from a Kinesis Data Stream and can be built using either Kinesis API or Kinesis Client Library (KCL)
  • Shards in a stream provide 2 MB/sec of read throughput per shard, by default, which is shared by all the consumers reading from a given shard.
  • Kinesis Client Library (KCL)
    • is a pre-built library with multiple language support
    • delivers all records for a given partition key to the same record processor
    • makes it easier to build multiple applications reading from the same stream for e.g. to perform counting, aggregation, and filtering
    • handles complex issues such as adapting to changes in stream volume, load-balancing streaming data, coordinating distributed services, and processing data with fault-tolerance
    • uses a unique DynamoDB table to keep track of the application’s state, so if the Kinesis Data Streams application receives provisioned-throughput exceptions, increase the provisioned throughput for the DynamoDB table
  • Kinesis Connector Library
    • is a pre-built library that helps you easily integrate Kinesis Streams with other AWS services and third-party tools.
    • Kinesis Client Library is required for Kinesis Connector Library
    • is legacy and can be replaced by Lambda or Kinesis Data Firehose
  • Kinesis Storm Spout is a pre-built library that helps you easily integrate Kinesis Streams with Apache Storm
  • AWS Lambda, Kinesis Data Firehose, and Kinesis Data Analytics also act as consumers for Kinesis Data Streams

Kinesis Enhanced fan-out

  • allows customers to scale the number of consumers reading from a data stream in parallel, while maintaining high performance and without contending for read throughput with other consumers.
  • provides logical 2 MB/sec throughput pipes between consumers and shards for Kinesis Data Streams Consumers.
  • Each enhanced fan-out consumer gets dedicated 2 MB/sec per shard, independent of other consumers.
  • Reduces latency to ~70 ms compared to ~200 ms for shared throughput consumers.
  • Charged per consumer-shard-hour plus data retrieval charges.

AWS Kinesis Shared Throughput vs Enhanced Fan-out

Kinesis Data Streams Sharding

  • Resharding helps to increase or decrease the number of shards in a stream in order to adapt to changes in the rate of data flowing through the stream.
  • Resharding operations support shard split and shard merge.
    • Shard split helps divide a single shard into two shards. It increases the capacity and the cost.
    • Shard merge helps combine two shards into a single shard. It reduces the capacity and the cost.
  • Resharding is always pairwise and always involves two shards.
  • The shard or pair of shards that the resharding operation acts on are referred to as parent shards. The shard or pair of shards that result from the resharding operation are referred to as child shards.
  • Kinesis Client Library tracks the shards in the stream using a DynamoDB table and discovers the new shards and populates new rows in the table.
  • KCL ensures that any data that existed in shards prior to the resharding is processed before the data from the new shards, thereby, preserving the order in which data records were added to the stream for a particular partition key.
  • Data records in the parent shard are accessible from the time they are added to the stream to the current retention period.
  • Note: With On-Demand mode, resharding is handled automatically by AWS, eliminating manual shard management.
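The pairwise split described above can be sketched as arithmetic on the parent shard's 128-bit hash-key range (splitting at the midpoint here for illustration; the split point is actually chosen by the caller):

```python
def split_shard(start: int, end: int):
    """Divide a parent shard's hash-key range [start, end] into two
    contiguous child ranges at the midpoint."""
    mid = (start + end) // 2
    return (start, mid), (mid + 1, end)

# A single shard owning the full 128-bit hash-key space, as Kinesis
# assigns it for a one-shard stream.
parent = (0, 2 ** 128 - 1)
child_a, child_b = split_shard(*parent)

# The children tile the parent's range exactly: no gap, no overlap.
assert child_a[0] == parent[0] and child_b[1] == parent[1]
assert child_a[1] + 1 == child_b[0]
```

A shard merge is the inverse: two adjacent child ranges are recombined into one parent range.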

Kinesis Data Streams vs Kinesis Firehose

Refer post @ Kinesis Data Streams vs Kinesis Firehose

Kinesis Data Streams vs. Firehose

Kinesis Data Streams vs SQS

Refer post @ Kinesis Data Streams vs SQS

Kinesis vs S3

Amazon Kinesis vs S3

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day, and both the questions and answers might become outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion, and correction.
  1. You are deploying an application to track GPS coordinates of delivery trucks in the United States. Coordinates are transmitted from each delivery truck once every three seconds. You need to design an architecture that will enable real-time processing of these coordinates from multiple consumers. Which service should you use to implement data ingestion?
    1. Amazon Kinesis
    2. AWS Data Pipeline
    3. Amazon AppStream
    4. Amazon Simple Queue Service
  2. You are deploying an application to collect votes for a very popular television show. Millions of users will submit votes using mobile devices. The votes must be collected into a durable, scalable, and highly available data store for real-time public tabulation. Which service should you use?
    1. Amazon DynamoDB
    2. Amazon Redshift
    3. Amazon Kinesis
    4. Amazon Simple Queue Service
  3. Your company is in the process of developing a next-generation pet collar that collects biometric information to assist families with promoting healthy lifestyles for their pets. Each collar will push 30 KB of biometric data in JSON format every 2 seconds to a collection platform that will process and analyze the data, providing health trending information back to the pet owners and veterinarians via a web portal. Management has tasked you to architect the collection platform, ensuring the following requirements are met: provide the ability for real-time analytics of the inbound biometric data; ensure processing of the biometric data is highly durable, elastic, and parallel; and persist the results of the analytic processing for data mining. Which architecture outlined below will meet the initial requirements for the collection platform?
    1. Utilize S3 to collect the inbound sensor data analyze the data from S3 with a daily scheduled Data Pipeline and save the results to a Redshift Cluster.
    2. Utilize Amazon Kinesis to collect the inbound sensor data, analyze the data with Kinesis clients and save the results to a Redshift cluster using EMR. (refer link)
    3. Utilize SQS to collect the inbound sensor data analyze the data from SQS with Amazon Kinesis and save the results to a Microsoft SQL Server RDS instance.
    4. Utilize EMR to collect the inbound sensor data, analyze the data from EMR with Amazon Kinesis and save the results to DynamoDB.
  4. Your customer is willing to consolidate their log streams (access logs, application logs, security logs, etc.) in one single system. Once consolidated, the customer wants to analyze these logs in real time based on heuristics. From time to time, the customer needs to validate heuristics, which requires going back to data samples extracted from the last 12 hours. What is the best approach to meet your customer’s requirements?
    1. Send all the log events to Amazon SQS. Setup an Auto Scaling group of EC2 servers to consume the logs and apply the heuristics.
    2. Send all the log events to Amazon Kinesis develop a client process to apply heuristics on the logs (Can perform real time analysis and stores data for 24 hours which can be extended to 365 days)
    3. Configure Amazon CloudTrail to receive custom logs, use EMR to apply heuristics the logs (CloudTrail is only for auditing)
    4. Setup an Auto Scaling group of EC2 syslogd servers, store the logs on S3 use EMR to apply heuristics on the logs (EMR is for batch analysis)
  5. You require the ability to analyze a customer’s clickstream data on a website so they can do behavioral analysis. Your customer needs to know what sequence of pages and ads their customer clicked on. This data will be used in real time to modify the page layouts as customers click through the site to increase stickiness and advertising click-through. Which option meets the requirements for capturing and analyzing this data?
    1. Log clicks in weblogs by URL store to Amazon S3, and then analyze with Elastic MapReduce
    2. Push web clicks by session to Amazon Kinesis and analyze behavior using Kinesis workers
    3. Write click events directly to Amazon Redshift and then analyze with SQL
    4. Publish web clicks by session to an Amazon SQS queue, then periodically drain these events to Amazon RDS and analyze with SQL
  6. Your social media monitoring application uses a Python app running on AWS Elastic Beanstalk to inject tweets, Facebook updates and RSS feeds into an Amazon Kinesis stream. A second AWS Elastic Beanstalk app generates key performance indicators into an Amazon DynamoDB table and powers a dashboard application. What is the most efficient option to prevent any data loss for this application?
    1. Use AWS Data Pipeline to replicate your DynamoDB tables into another region.
    2. Use the second AWS Elastic Beanstalk app to store a backup of Kinesis data onto Amazon Elastic Block Store (EBS), and then create snapshots from your Amazon EBS volumes.
    3. Add a second Amazon Kinesis stream in another Availability Zone and use AWS data pipeline to replicate data across Kinesis streams.
    4. Add a third AWS Elastic Beanstalk app that uses the Amazon Kinesis S3 connector to archive data from Amazon Kinesis into Amazon S3.
  7. You need to replicate API calls across two systems in real time. What tool should you use as a buffer and transport mechanism for API call events?
    1. AWS SQS
    2. AWS Lambda
    3. AWS Kinesis (AWS Kinesis is an event stream service. Streams can act as buffers and transport across systems for in-order programmatic events, making it ideal for replicating API calls across systems)
    4. AWS SNS
  8. You need to perform ad-hoc business analytics queries on well-structured data. Data comes in constantly at a high velocity. Your business intelligence team can understand SQL. What AWS service(s) should you look to first?
    1. Kinesis Firehose + RDS
    2. Kinesis Firehose + RedShift (Kinesis Firehose provides a managed service for aggregating streaming data and inserting it into RedShift. RedShift also supports ad-hoc queries over well-structured data using a SQL-compliant wire protocol, so the business team should be able to adopt this system easily. Refer link)
    3. EMR using Hive
    4. EMR running Apache Spark
  9. A company wants to avoid manual shard management for their Kinesis Data Streams. Which capacity mode should they use? (Assume November 2021 or later)
    1. Provisioned mode with Auto Scaling
    2. On-Demand mode
    3. Enhanced fan-out mode
    4. Standard mode
  10. A company needs to access Kinesis Data Streams from an on-premises data center privately without traversing the public internet. Which solution should they use? (Assume November 2025 or later)
    1. Use public Kinesis endpoints over VPN
    2. Create Interface VPC endpoints with cross-region PrivateLink support
    3. Use NAT Gateway
    4. Use Internet Gateway with security groups

References

 

AWS Kinesis Data Streams vs Kinesis Data Firehose


Purpose

  • Kinesis Data Streams is highly customizable and best suited for developers building custom applications or streaming data for specialized needs.
  • Kinesis Data Firehose handles loading data streams directly into AWS products for processing. Firehose also allows for streaming to S3, OpenSearch Service, Redshift, Apache Iceberg tables, Snowflake, and other destinations, where data can be copied for processing through additional services.

Provisioning & Scaling

  • Kinesis Data Streams offers two capacity modes:
    • Provisioned Mode: Requires manual configuration of shards and scaling. You specify the number of shards needed based on expected throughput.
    • On-Demand Mode (Launched November 2021): Automatically scales to handle gigabytes of write and read throughput per minute without manual shard management. Default capacity of 4 MB/s write (4000 records/s), can scale up to 200 MB/s (or 1 GB/s with limit increase).
  • Kinesis Data Firehose is fully managed and sends data to S3, Redshift, OpenSearch, Apache Iceberg tables, Snowflake, and other destinations. Scaling is handled automatically, up to gigabytes per second, and allows for batching, encrypting, and compressing.

Processing Delay

  • Kinesis Data Streams provides real-time processing with ~200 ms for shared throughput classic single consumer and ~70 ms for the enhanced fan-out consumer.
  • Kinesis Data Firehose provides near real-time processing:
    • Standard Buffering: Minimum buffer time of 60 seconds (1 min), maximum 900 seconds (15 min)
    • Zero Buffering (Announced December 2023): Delivers data within ~5 seconds with no buffering delay, enabling real-time use cases
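The buffering trade-off above can be sketched as a simple flush predicate: deliver when either the size hint or the interval hint is reached, whichever comes first. The 5 MB size and 60-second interval below are illustrative values; the real service exposes both as configurable buffering hints per destination:

```python
def should_flush(buffered_bytes: int, seconds_since_flush: float,
                 buffer_size_mb: int = 5, buffer_interval_s: int = 60) -> bool:
    """Firehose-style buffering: flush when either hint is reached.
    Zero buffering effectively removes the interval wait, so data is
    delivered within seconds of arrival."""
    return (buffered_bytes >= buffer_size_mb * 1024 * 1024
            or seconds_since_flush >= buffer_interval_s)

assert should_flush(6 * 1024 * 1024, 10)   # size hint reached first
assert should_flush(1024, 60)              # interval hint reached
assert not should_flush(1024, 10)          # neither hint reached yet
```

This is the structural reason Firehose is "near real-time" while Kinesis Data Streams is real-time: records wait in a buffer until a threshold trips.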

Data Storage

  • Kinesis Data Streams provides data storage. Data is typically available in a stream for 24 hours, but for an additional cost, users can gain data availability for up to 365 days (8760 hours).
  • Kinesis Data Firehose does not provide data storage.

Replay

  • Kinesis Data Streams supports replay capability
  • Kinesis Data Firehose does not support replay capability

Producers & Consumers

  • Kinesis Data Streams & Kinesis Data Firehose support multiple producer options including SDK, KPL, Kinesis Agent, IoT, etc.
  • Kinesis Data Streams supports multiple consumer options, including SDK, KCL, and Lambda, and can write data to multiple destinations; however, the consumers have to be coded.
  • Kinesis Data Firehose consumers are close-ended and support destinations including:
    • Amazon S3
    • Amazon Redshift
    • Amazon OpenSearch Service
    • Amazon OpenSearch Serverless
    • Apache Iceberg Tables (Added October 2024) – Stream data directly into Iceberg format tables in S3
    • Snowflake (with Snowpipe Streaming) – Real-time streaming to Snowflake
    • Splunk
    • Third-party HTTP endpoints (Datadog, Dynatrace, New Relic, MongoDB, Coralogix, Elastic, etc.)

Key Differences Summary

Feature        | Kinesis Data Streams                    | Kinesis Data Firehose
Capacity Mode  | Provisioned or On-Demand                | Fully managed (automatic)
Latency        | Real-time (~70–200 ms)                  | Near real-time (60 s default, ~5 s with zero buffering)
Data Retention | 24 hours to 365 days                    | No storage
Replay         | ✅ Supported                            | ❌ Not supported
Consumers      | Custom (SDK, KCL, Lambda)               | Pre-defined (S3, Redshift, OpenSearch, Iceberg, Snowflake, etc.)
Use Case       | Custom processing, real-time analytics  | ETL, loading to data stores

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Your organization needs to ingest a big data stream into its data lake on Amazon S3. The data may stream in at a rate of hundreds of megabytes per second. What AWS service will accomplish the goal with the least amount of management?
    1. Amazon Kinesis Firehose
    2. Amazon Kinesis Streams
    3. Amazon CloudFront
    4. Amazon SQS
  2. Your organization is looking for a solution that can help the business with streaming data several services will require access to read and process the same stream concurrently. What AWS service meets the business requirements?
    1. Amazon Kinesis Firehose
    2. Amazon Kinesis Streams
    3. Amazon CloudFront
    4. Amazon SQS
  3. Your application generates a 1 KB JSON payload that needs to be queued and delivered to EC2 instances for applications. At the end of the day, the application needs to replay the data for the past 24 hours. In the near future, you also need the ability for other multiple EC2 applications to consume the same stream concurrently. What is the best solution for this?
    1. Kinesis Data Streams
    2. Kinesis Firehose
    3. SNS
    4. SQS
  4. A company needs to stream data to Amazon S3 with the lowest possible latency (under 10 seconds). Which Kinesis service and configuration should they use? (Assume December 2023 or later)
    1. Kinesis Data Streams with Lambda consumer
    2. Kinesis Data Firehose with zero buffering enabled
    3. Kinesis Data Firehose with 60-second buffer
    4. Kinesis Data Streams with KCL consumer
  5. A company wants to avoid manual shard management for their Kinesis Data Streams. Which capacity mode should they use? (Assume November 2021 or later)
    1. Provisioned mode with Auto Scaling
    2. On-Demand mode
    3. Enhanced fan-out mode
    4. Standard mode
  6. A data analytics team needs to stream real-time data into Apache Iceberg tables in S3 for analytics. Which AWS service supports this natively? (Assume October 2024 or later)
    1. Kinesis Data Streams
    2. Kinesis Data Firehose
    3. AWS Glue Streaming
    4. Amazon MSK

References

AWS Kinesis Data Streams vs Kinesis Data Firehose

  • Kinesis acts as a highly available conduit to stream messages between data producers and data consumers.
  • Data producers can be almost any source of data: system or web log data, social network data, financial trading information, geospatial data, mobile app data, or telemetry from connected IoT devices.
  • Data consumers will typically fall into the category of data processing and storage applications such as Apache Hadoop, Apache Storm, S3, and Elasticsearch (now OpenSearch).

Kinesis Data Streams vs. Firehose

Purpose

  • Kinesis Data Streams is highly customizable and best suited for developers building custom applications or streaming data for specialized needs.
  • Kinesis Data Firehose handles loading data streams directly into AWS products for processing. Firehose also allows for streaming to S3, OpenSearch Service, Redshift, Apache Iceberg tables, Snowflake, and other destinations, where data can be copied for processing through additional services.

Provisioning & Scaling

  • Kinesis Data Streams offers two capacity modes:
    • Provisioned Mode: Requires manual configuration of shards and scaling. You specify the number of shards needed based on expected throughput.
    • On-Demand Mode (Launched November 2021): Automatically scales to handle gigabytes of write and read throughput per minute without manual shard management. Default capacity of 4 MB/s write (4000 records/s), can scale up to 200 MB/s (or 1 GB/s with limit increase).
  • Kinesis Data Firehose is fully managed and sends data to S3, Redshift, OpenSearch, Apache Iceberg tables, Snowflake, and other destinations. Scaling is handled automatically, up to gigabytes per second, and allows for batching, encrypting, and compressing.
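In provisioned mode, the shard count has to be derived manually from the published per-shard limits (1 MB/s or 1,000 records/s for writes, 2 MB/s for reads). A small sizing sketch using those limits:

```python
import math

# Published per-shard limits for Kinesis Data Streams (provisioned mode):
# writes: 1 MB/s and 1,000 records/s; reads: 2 MB/s shared across consumers.
WRITE_MB_PER_SHARD = 1
WRITE_RECORDS_PER_SHARD = 1000
READ_MB_PER_SHARD = 2

def shards_needed(write_mb_s: float, records_s: float, read_mb_s: float) -> int:
    """Smallest shard count that satisfies all three per-shard limits."""
    return max(
        math.ceil(write_mb_s / WRITE_MB_PER_SHARD),
        math.ceil(records_s / WRITE_RECORDS_PER_SHARD),
        math.ceil(read_mb_s / READ_MB_PER_SHARD),
        1,  # a stream always has at least one shard
    )

print(shards_needed(write_mb_s=10, records_s=4000, read_mb_s=30))  # 15: read throughput dominates
```

On-demand mode removes exactly this calculation; the sketch shows why provisioned mode requires re-sizing whenever throughput grows.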

Processing Delay

  • Kinesis Data Streams provides real-time processing, with ~200 ms latency for the classic shared-throughput consumer and ~70 ms for enhanced fan-out consumers.
  • Kinesis Data Firehose provides near real-time processing:
    • Standard Buffering: Minimum buffer time of 60 seconds (1 min), maximum 900 seconds (15 min)
    • Zero Buffering (Announced December 2023): Delivers data within ~5 seconds with no buffering delay, enabling real-time use cases

Data Storage

  • Kinesis Data Streams provides data storage. Data is available in the stream for 24 hours by default, and for an additional cost retention can be extended up to 365 days (8,760 hours).
  • Kinesis Data Firehose does not provide data storage.

Replay

  • Kinesis Data Streams supports replay capability
  • Kinesis Data Firehose does not support replay capability

Producers & Consumers

  • Kinesis Data Streams & Kinesis Data Firehose support multiple producer options, including the SDK, KPL, Kinesis Agent, AWS IoT, etc.
  • Kinesis Data Streams supports multiple consumer options, including the SDK, KCL, and Lambda, and can write data to multiple destinations; however, the consumers have to be coded.
  • Kinesis Data Firehose is close-ended and delivers only to a fixed set of destinations, including:
    • Amazon S3
    • Amazon Redshift
    • Amazon OpenSearch Service
    • Amazon OpenSearch Serverless
    • Apache Iceberg Tables (Added October 2024) – Stream data directly into Iceberg format tables in S3
    • Snowflake (with Snowpipe Streaming) – Real-time streaming to Snowflake
    • Splunk
    • Third-party HTTP endpoints (Datadog, Dynatrace, New Relic, MongoDB, Coralogix, Elastic, etc.)
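Per-key ordering in custom consumers works because Kinesis MD5-hashes each partition key into a 128-bit key space that is divided among shards, so all records for one key land on one shard. A sketch of that routing, assuming evenly split hash key ranges (real streams can have uneven ranges after splits and merges):

```python
import hashlib

HASH_KEY_SPACE = 2 ** 128  # Kinesis hashes partition keys into a 128-bit key space

def shard_for_key(partition_key: str, num_shards: int) -> int:
    """Map a partition key to a shard index, assuming evenly split
    hash key ranges. (A sketch, not the service implementation.)"""
    hash_key = int(hashlib.md5(partition_key.encode("utf-8")).hexdigest(), 16)
    return hash_key * num_shards // HASH_KEY_SPACE

# All records for one partition key land on one shard, so a single
# consumer (e.g. a KCL record processor) sees them in order.
assert shard_for_key("device-42", 8) == shard_for_key("device-42", 8)
print({key: shard_for_key(key, 4) for key in ("a", "b", "c")})
```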

Key Differences Summary

| Feature | Kinesis Data Streams | Kinesis Data Firehose |
|---|---|---|
| Capacity Mode | Provisioned or On-Demand | Fully managed (automatic) |
| Latency | Real-time (~70-200 ms) | Near real-time (60 s minimum standard buffer, ~5 s with zero buffering) |
| Data Retention | 24 hours to 365 days | No storage |
| Replay | ✅ Supported | ❌ Not supported |
| Consumers | Custom (SDK, KCL, Lambda) | Pre-defined (S3, Redshift, OpenSearch, Iceberg, Snowflake, etc.) |
| Use Case | Custom processing, real-time analytics | ETL, loading to data stores |

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Your organization needs to ingest a big data stream into its data lake on Amazon S3. The data may stream in at a rate of hundreds of megabytes per second. What AWS service will accomplish the goal with the least amount of management?
    1. Amazon Kinesis Firehose
    2. Amazon Kinesis Streams
    3. Amazon CloudFront
    4. Amazon SQS
  2. Your organization is looking for a solution that can help the business with streaming data; several services will require access to read and process the same stream concurrently. What AWS service meets the business requirements?
    1. Amazon Kinesis Firehose
    2. Amazon Kinesis Streams
    3. Amazon CloudFront
    4. Amazon SQS
  3. Your application generates a 1 KB JSON payload that needs to be queued and delivered to applications running on EC2 instances. At the end of the day, the application needs to replay the data for the past 24 hours. In the near future, multiple other EC2 applications will also need to consume the same stream concurrently. What is the best solution for this?
    1. Kinesis Data Streams
    2. Kinesis Firehose
    3. SNS
    4. SQS
  4. A company needs to stream data to Amazon S3 with the lowest possible latency (under 10 seconds). Which Kinesis service and configuration should they use? (Assume December 2023 or later)
    1. Kinesis Data Streams with Lambda consumer
    2. Kinesis Data Firehose with zero buffering enabled
    3. Kinesis Data Firehose with 60-second buffer
    4. Kinesis Data Streams with KCL consumer
  5. A company wants to avoid manual shard management for their Kinesis Data Streams. Which capacity mode should they use? (Assume November 2021 or later)
    1. Provisioned mode with Auto Scaling
    2. On-Demand mode
    3. Enhanced fan-out mode
    4. Standard mode
  6. A data analytics team needs to stream real-time data into Apache Iceberg tables in S3 for analytics. Which AWS service supports this natively? (Assume October 2024 or later)
    1. Kinesis Data Streams
    2. Kinesis Data Firehose
    3. AWS Glue Streaming
    4. Amazon MSK

References

Amazon Data Firehose – KDF


Amazon Data Firehose (formerly Kinesis Data Firehose)

📢 Service Renamed (February 2024): Amazon Kinesis Data Firehose has been renamed to Amazon Data Firehose. The functionality remains the same.

  • Amazon Data Firehose is a fully managed service for delivering real-time streaming data
  • Amazon Data Firehose automatically scales to match the throughput of the data and requires no ongoing administration, no applications to write, and no resources to manage
  • is a data transfer solution for delivering real-time streaming data to destinations such as S3, Redshift, OpenSearch Service, OpenSearch Serverless, Apache Iceberg tables, Snowflake, Splunk, and third-party HTTP endpoints.
  • is NOT real-time, but near real-time, as it supports batching and buffers streaming data to a certain size (buffer size in MB) or for a certain period of time (buffer interval in seconds) before delivering it to destinations.
    • Zero Buffering (December 2023): Firehose now supports zero buffering, delivering data within ~5 seconds with no buffering delay for real-time use cases.
  • supports data compression, minimizing the amount of storage used at the destination. It currently supports GZIP, ZIP, and SNAPPY compression formats. Only GZIP is supported if the data is further loaded to Redshift.
  • supports data at rest encryption using KMS after the data is delivered to the S3 bucket.
  • supports multiple producers as data sources, including a Kinesis data stream, the Kinesis Agent, the Data Firehose API using the AWS SDK, CloudWatch Logs, CloudWatch Events, and AWS IoT
  • supports out-of-the-box data transformation as well as custom transformation using a Lambda function to transform incoming source data and deliver the transformed data to destinations
  • supports source record backup with custom data transformation with Lambda, where Data Firehose will deliver the un-transformed incoming data to a separate S3 bucket.
  • uses at least once semantics for data delivery. In rare circumstances such as request timeout upon data delivery attempt, delivery retry by Firehose could introduce duplicates if the previous request eventually goes through.
  • supports interface VPC endpoints (AWS PrivateLink) to keep traffic between the VPC and Data Firehose from leaving the Amazon network.
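The custom-transformation Lambda mentioned above receives batched, base64-encoded records and must return each recordId with a result of Ok, Dropped, or ProcessingFailed. A minimal handler sketch (the field names follow the documented Firehose transformation event format; the JSON `message` payload it upper-cases is just an illustrative assumption):

```python
import base64
import json

def handler(event, context):
    """Firehose data-transformation Lambda: upper-case a 'message' field.

    Every output record must echo the incoming recordId and report a
    result of 'Ok', 'Dropped', or 'ProcessingFailed'.
    """
    output = []
    for record in event["records"]:
        try:
            payload = json.loads(base64.b64decode(record["data"]))
            payload["message"] = payload["message"].upper()
            output.append({
                "recordId": record["recordId"],
                "result": "Ok",
                "data": base64.b64encode(json.dumps(payload).encode()).decode(),
            })
        except (ValueError, KeyError):
            # Malformed records are marked failed; with source record backup
            # enabled, Firehose delivers the original data to a separate S3 bucket.
            output.append({"recordId": record["recordId"],
                           "result": "ProcessingFailed",
                           "data": record["data"]})
    return {"records": output}

event = {"records": [{"recordId": "1",
                      "data": base64.b64encode(b'{"message": "hello"}').decode()}]}
print(handler(event, None))
```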


Amazon Data Firehose Key Concepts

  • Data Firehose delivery stream
    • Underlying entity of Data Firehose, where the data is sent
  • Record
    • Data sent by data producer to a Data Firehose delivery stream
    • Maximum size of a record (before Base64-encoding) is 1024 KB.
  • Data producer
    • Producers send records to Data Firehose delivery streams.
  • Buffer size and buffer interval
    • Data Firehose buffers incoming streaming data to a certain size or for a certain time period before delivering it to destinations
    • Buffer size and buffer interval can be configured while creating the delivery stream
    • Buffer size is in MB and ranges from 1 MB to 128 MB for the S3 destination and 1 MB to 100 MB for the OpenSearch Service destination.
    • Buffer interval is in seconds and ranges from 60 to 900 seconds (standard buffering)
    • Zero Buffering (December 2023): Set buffer interval to 0 seconds to deliver data within ~5 seconds with no buffering delay
    • If data delivery to the destination falls behind data writing to the delivery stream, Firehose raises the buffer size dynamically to catch up and make sure that all data is delivered
    • Buffer size is applied before compression.
  • Destination
    • A destination is the data store where the data will be delivered.
    • supports the following destinations:
      • Amazon S3
      • Amazon Redshift
      • Amazon OpenSearch Service
      • Amazon OpenSearch Serverless (added November 2022)
      • Apache Iceberg Tables (added October 2024) – Stream data directly into Iceberg format tables in S3
      • Snowflake – Real-time streaming to Snowflake via Snowpipe Streaming
      • Splunk
      • Third-party HTTP endpoints – Datadog, Dynatrace, New Relic, MongoDB, Coralogix, Elastic, and others
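The buffering behavior above can be pictured as flushing on whichever hint is reached first, size or interval. A toy simulation of that rule (not the actual service logic, which also grows the buffer dynamically under backpressure):

```python
class BufferSimulator:
    """Toy model of Firehose buffering: flush when the size hint (MB)
    or the interval hint (seconds) is reached, whichever comes first."""

    def __init__(self, size_mb: float = 5, interval_s: int = 300):
        self.size_limit = size_mb * 1024 * 1024
        self.interval_s = interval_s
        self.buffered_bytes = 0
        self.first_record_at = None
        self.flushes = []  # bytes delivered per flush

    def put(self, nbytes: int, now_s: float):
        if self.first_record_at is None:
            self.first_record_at = now_s
        self.buffered_bytes += nbytes
        if (self.buffered_bytes >= self.size_limit
                or now_s - self.first_record_at >= self.interval_s):
            self.flushes.append(self.buffered_bytes)
            self.buffered_bytes, self.first_record_at = 0, None

sim = BufferSimulator(size_mb=1, interval_s=60)
sim.put(600_000, now_s=0)    # below both thresholds: held in the buffer
sim.put(600_000, now_s=10)   # 1.2 MB total: size threshold reached, flush
sim.put(1_000, now_s=20)
sim.put(1_000, now_s=85)     # 65 s since first buffered record: interval flush
print(sim.flushes)  # [1200000, 2000]
```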

Zero Buffering (December 2023)

  • Amazon Data Firehose now supports zero buffering for real-time data delivery
  • Delivers data within ~5 seconds with no buffering delay
  • Available for destinations: S3, OpenSearch Service, Redshift, and third-party HTTP endpoints
  • Enables real-time use cases that previously required Kinesis Data Streams
  • Trade-off: More frequent deliveries may result in more small files and higher costs
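The small-files trade-off is easy to quantify: the delivery interval directly bounds how many S3 objects a stream produces. A back-of-the-envelope helper (assumes one object per flush, a simplification):

```python
def s3_objects_per_hour(delivery_interval_s: float) -> int:
    """Upper bound on S3 objects written per hour, assuming one object
    per flush and a flush every `delivery_interval_s` seconds."""
    return int(3600 / delivery_interval_s)

# Zero buffering (~5 s deliveries) vs a 300 s standard buffer:
print(s3_objects_per_hour(5))    # 720
print(s3_objects_per_hour(300))  # 12
```

Sixty times more objects per hour means more PUT requests and more small files for downstream query engines to scan, which is exactly the cost trade-off noted above.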

Apache Iceberg Tables Support (October 2024)

  • Amazon Data Firehose can now stream data directly into Apache Iceberg tables in S3
  • Iceberg brings SQL table reliability and ACID transactions to S3 data lakes
  • Supports automatic schema management, partitioning, and compaction
  • Compatible with Athena, EMR, Redshift, Spark, Flink, and other analytics engines
  • Simplifies data lake ingestion without custom ETL code
  • Use cases: Real-time data lake ingestion, streaming analytics, CDC to data lake

Amazon Data Firehose vs Kinesis Data Streams


AWS Certification Exam Practice Questions

  1. A user is designing a new service that receives location updates from 3,600 rental cars every hour. The cars' locations need to be uploaded to an Amazon S3 bucket. Each location must also be checked for distance from the original rental location. Which services will process the updates and automatically scale?
    1. Amazon EC2 and Amazon EBS
    2. Amazon Data Firehose and Amazon S3
    3. Amazon ECS and Amazon RDS
    4. Amazon S3 events and AWS Lambda
  2. You need to perform ad-hoc SQL queries on massive amounts of well-structured data. Additional data comes in constantly at a high velocity, and you don’t want to have to manage the infrastructure processing it if possible. Which solution should you use?
    1. Data Firehose and RDS
    2. EMR running Apache Spark
    3. Data Firehose and Redshift
    4. EMR using Hive
  3. Your organization needs to ingest a big data stream into their data lake on Amazon S3. The data may stream in at a rate of hundreds of megabytes per second. What AWS service will accomplish the goal with the least amount of management?
    1. Amazon Data Firehose
    2. Amazon Kinesis Data Streams
    3. Amazon CloudFront
    4. Amazon SQS
  4. A startup company is building an application to track the high scores for a popular video game. Their Solution Architect is tasked with designing a solution to allow real-time processing of scores from millions of players worldwide. Which AWS service should the Architect use to provide reliable data ingestion from the video game into the datastore?
    1. AWS Data Pipeline
    2. Amazon Data Firehose
    3. Amazon DynamoDB Streams
    4. Amazon Elasticsearch Service
  5. A company has an infrastructure that consists of machines that send log information every 5 minutes. The number of machines can run into the thousands, and the data must be available for analysis at a later stage. Which of the following would help in fulfilling this requirement?
    1. Use Data Firehose with S3 to take the logs and store them in S3 for further processing.
    2. Launch an Elastic Beanstalk application to take the processing job of the logs.
    3. Launch an EC2 instance with enough EBS volumes to consume the logs which can be used for further processing.
    4. Use CloudTrail to store all the logs which can be analyzed at a later stage.
  6. A company needs to stream data to Amazon S3 with the lowest possible latency (under 10 seconds). Which configuration should they use? (Assume December 2023 or later)
    1. Data Firehose with 60-second buffer
    2. Data Firehose with zero buffering enabled
    3. Kinesis Data Streams with Lambda consumer
    4. Direct PUT to S3
  7. A data analytics team needs to stream real-time data into Apache Iceberg tables in S3 for analytics. Which AWS service supports this natively? (Assume October 2024 or later)
    1. Kinesis Data Streams
    2. Amazon Data Firehose
    3. AWS Glue Streaming
    4. Amazon MSK

References