Amazon Kinesis enables real-time processing of streaming data at massive scale
Kinesis Streams enables building of custom applications that process or analyze streaming data for specialized needs
Kinesis Streams features
handles the provisioning, deployment, and ongoing maintenance of the hardware, software, and other services needed for the data streams
manages the infrastructure, storage, networking, and configuration needed to stream the data at the level of required data throughput
synchronously replicates data across three facilities in an AWS Region, providing high availability and data durability
stores records of a stream for up to 24 hours, by default, from the time they are added to the stream. The limit can be raised to up to 7 days by enabling extended data retention
Data such as clickstreams, application logs, social media feeds, etc. can be added from multiple sources and, within seconds, is available for processing by Amazon Kinesis Applications
Kinesis provides ordering of records, as well as the ability to read and/or replay records in the same order to multiple Kinesis applications.
Kinesis Streams is useful for rapidly moving data off data producers and then continuously processing the data, be it to transform the data before emitting to a data store, run real-time metrics and analytics, or derive more complex data streams for further processing
Accelerated log and data feed intake: Data producers can push data to Kinesis stream as soon as it is produced, preventing any data loss and making it available for processing within seconds.
Real-time metrics and reporting: Metrics can be extracted and used to generate reports from data in real-time.
Real-time data analytics: Run real-time streaming data analytics.
Complex stream processing: Create Directed Acyclic Graphs (DAGs) of Kinesis Applications and data streams, with Kinesis applications adding to another Amazon Kinesis stream for further processing, enabling successive stages of stream processing.
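The accelerated-intake pattern above can be sketched with the PutRecord API via boto3; the stream name "clickstream" and the event shape below are assumptions for illustration, not anything specific to the exam material.

```python
import json

def build_record(event: dict, partition_key: str) -> dict:
    # Kinesis expects the payload as bytes; the partition key determines
    # which shard receives the record (blob limit: 1 MB before Base64-encoding).
    return {
        "StreamName": "clickstream",   # hypothetical stream name
        "Data": json.dumps(event).encode("utf-8"),
        "PartitionKey": partition_key,
    }

def send(kinesis_client, event: dict, partition_key: str):
    # kinesis_client would be boto3.client("kinesis"); PutRecord returns
    # the shard ID and the sequence number Kinesis assigned to the record.
    return kinesis_client.put_record(**build_record(event, partition_key))
```

Once put, the record is available to consumers within seconds, which is what enables the real-time metrics, analytics, and DAG-style processing described above.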
stores records of a stream for up to 24 hours, by default, which can be extended to max 7 days
maximum size of a data blob (the data payload before Base64-encoding) within one record is 1 megabyte (MB)
Each shard can support up to 1000 PUT records per second
Each account can provision 10 shards per region, by default, which can be increased further through a limit increase request
Amazon Kinesis is designed to process streaming big data, and its pricing model accommodates a heavy PUT rate.
Amazon S3 is a cost-effective way to store data, but it is not designed to handle a stream of data in real time
Streams are made up of shards; a shard is the base throughput unit of a Kinesis stream.
Each shard provides a capacity of 1MB/sec data input and 2MB/sec data output
Each shard can support up to 1000 PUT records per second
All data is stored for 24 hours, by default
Replay data inside a 24-hour window
Shards define the capacity limits. If the limits are exceeded, either by data throughput or by the number of PUT records, the put data call will be rejected with a ProvisionedThroughputExceededException.
This can be handled by
Implementing a retry on the data producer side, if this is due to a temporary rise of the stream’s input data rate
Dynamically scaling the number of shards (resharding) to provide enough capacity for the put data calls to consistently succeed
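The producer-side retry suggested above might look like the following sketch; the exception class is a stand-in for the ProvisionedThroughputExceededException that botocore raises, and the backoff constants are arbitrary choices.

```python
import random
import time

class ProvisionedThroughputExceededError(Exception):
    """Stand-in for botocore's ProvisionedThroughputExceededException."""

def put_with_backoff(put_fn, record, max_attempts: int = 5):
    # Retry the put with exponential backoff plus jitter. If the rate
    # increase is not temporary, resharding (adding shards) is the fix
    # rather than retrying harder.
    for attempt in range(max_attempts):
        try:
            return put_fn(record)
        except ProvisionedThroughputExceededError:
            if attempt == max_attempts - 1:
                raise  # give up after the last attempt
            time.sleep(min(0.1 * 2 ** attempt, 5.0) * random.random())
```

Here put_fn would wrap the actual boto3 put_record call; passing it in keeps the retry logic independent of the client.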
A record is the unit of data stored in an Amazon Kinesis stream.
A record is composed of a sequence number, partition key, and data blob.
Data blob is the data of interest your data producer adds to a stream.
Maximum size of a data blob (the data payload before Base64-encoding) is 1 MB
Partition key is used to segregate and route records to different shards of a stream.
A partition key is specified by your data producer while adding data to an Amazon Kinesis stream
A sequence number is a unique identifier for each record.
Sequence number is assigned by Amazon Kinesis when a data producer calls the PutRecord or PutRecords operation to add data to an Amazon Kinesis stream.
Sequence numbers for the same partition key generally increase over time; the longer the time period between PutRecord or PutRecords requests, the larger the sequence numbers become.
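Kinesis routes a record by MD5-hashing its partition key into a 128-bit hash-key space, with each shard owning a range of that space. The sketch below assumes shards evenly divide the space (as an even initial split would) to illustrate how the same partition key always lands on the same shard.

```python
import hashlib

def hash_key(partition_key: str) -> int:
    # MD5 maps the partition key to a 128-bit integer hash key.
    return int.from_bytes(
        hashlib.md5(partition_key.encode("utf-8")).digest(), "big"
    )

def pick_shard(partition_key: str, num_shards: int) -> int:
    # Illustrative only: assumes the shards evenly divide the
    # 128-bit hash-key space into contiguous ranges.
    return hash_key(partition_key) * num_shards // 2 ** 128
```

Because the mapping is deterministic, all records with the same partition key go to the same shard, which is what preserves per-key ordering of sequence numbers.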
Kinesis Streams enables real-time processing of streaming big data, while SQS offers a reliable, highly scalable hosted queue for storing messages and moving data between distributed application components
Kinesis provides ordering of records, as well as the ability to read and/or replay records in the same order to multiple Amazon Kinesis Applications while SQS does not guarantee data ordering and provides at least once delivery of messages
Kinesis stores the data up to 24 hours, by default, and can be extended to 7 days while SQS stores the message up to 4 days, by default, and can be configured from 1 minute to 14 days but clears the message once deleted by the consumer
Kinesis and SQS both guarantee at-least-once delivery of messages
Kinesis supports multiple consumers while SQS allows the messages to be delivered to only one consumer at a time and requires multiple queues to deliver message to multiple consumers
Kinesis use cases requirements
Ordering of records.
Ability to consume records in the same order a few hours later
Ability for multiple applications to consume the same stream concurrently
Routing related records to the same record processor (as in streaming MapReduce)
SQS use cases requirements
Messaging semantics like message-level ack/fail and visibility timeout
Leveraging SQS’s ability to scale transparently
Dynamically increasing concurrency/throughput at read time
Individual message delay, where delivery of specific messages can be postponed
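The SQS semantics listed above can be sketched with boto3-style calls; the queue URL, handler, and timeout values below are placeholder assumptions.

```python
def delayed_message(body: str, delay_seconds: int) -> dict:
    # SQS supports a per-message delay of 0-900 seconds (15 minutes max).
    if not 0 <= delay_seconds <= 900:
        raise ValueError("DelaySeconds must be between 0 and 900")
    return {"MessageBody": body, "DelaySeconds": delay_seconds}

def drain(sqs_client, queue_url: str, handler) -> None:
    # Receive a batch; messages stay invisible to other consumers for
    # VisibilityTimeout seconds while they are being processed.
    resp = sqs_client.receive_message(
        QueueUrl=queue_url, MaxNumberOfMessages=10, VisibilityTimeout=30
    )
    for msg in resp.get("Messages", []):
        handler(msg["Body"])
        # "Ack" = delete; if processing fails before the delete, the
        # message reappears after the visibility timeout and is delivered
        # again (at-least-once delivery).
        sqs_client.delete_message(
            QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"]
        )
```

Note the contrast with Kinesis: a deleted SQS message is gone, whereas Kinesis records remain replayable within the retention window.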
AWS Certification Exam Practice Questions
Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
Open to further feedback, discussion and correction.
You are deploying an application to track GPS coordinates of delivery trucks in the United States. Coordinates are transmitted from each delivery truck once every three seconds. You need to design an architecture that will enable real-time processing of these coordinates from multiple consumers. Which service should you use to implement data ingestion?
AWS Data Pipeline
Amazon Simple Queue Service
You are deploying an application to collect votes for a very popular television show. Millions of users will submit votes using mobile devices. The votes must be collected into a durable, scalable, and highly available data store for real-time public tabulation. Which service should you use?
Amazon Simple Queue Service
Your company is in the process of developing a next generation pet collar that collects biometric information to assist families with promoting healthy lifestyles for their pets. Each collar will push 30 KB of biometric data in JSON format every 2 seconds to a collection platform that will process and analyze the data, providing health trending information back to the pet owners and veterinarians via a web portal. Management has tasked you to architect the collection platform ensuring the following requirements are met: provide the ability for real-time analytics of the inbound biometric data; ensure processing of the biometric data is highly durable, elastic, and parallel; the results of the analytic processing should be persisted for data mining. Which architecture outlined below will meet the initial requirements for the collection platform?
Utilize S3 to collect the inbound sensor data, analyze the data from S3 with a daily scheduled Data Pipeline and save the results to a Redshift Cluster.
Utilize Amazon Kinesis to collect the inbound sensor data, analyze the data with Kinesis clients and save the results to a Redshift cluster using EMR. (refer link)
Utilize SQS to collect the inbound sensor data, analyze the data from SQS with Amazon Kinesis and save the results to a Microsoft SQL Server RDS instance.
Utilize EMR to collect the inbound sensor data, analyze the data from EMR with Amazon Kinesis and save the results to DynamoDB.
Your customer is willing to consolidate their log streams (access logs, application logs, security logs, etc.) in one single system. Once consolidated, the customer wants to analyze these logs in real time based on heuristics. From time to time, the customer needs to validate heuristics, which requires going back to data samples extracted from the last 12 hours. What is the best approach to meet your customer’s requirements?
Send all the log events to Amazon SQS. Setup an Auto Scaling group of EC2 servers to consume the logs and apply the heuristics.
Send all the log events to Amazon Kinesis and develop a client process to apply heuristics on the logs (can perform real-time analysis and stores data for 24 hours, which can be extended to 7 days)
Configure Amazon CloudTrail to receive custom logs, use EMR to apply heuristics to the logs (CloudTrail is only for auditing)
Setup an Auto Scaling group of EC2 syslogd servers, store the logs on S3 use EMR to apply heuristics on the logs (EMR is for batch analysis)
You require the ability to analyze a customer’s clickstream data on a website so they can do behavioral analysis. Your customer needs to know what sequence of pages and ads their customer clicked on. This data will be used in real time to modify the page layouts as customers click through the site to increase stickiness and advertising click-through. Which option meets the requirements for capturing and analyzing this data?
Log clicks in weblogs by URL, store to Amazon S3, and then analyze with Elastic MapReduce
Push web clicks by session to Amazon Kinesis and analyze behavior using Kinesis workers
Write click events directly to Amazon Redshift and then analyze with SQL
Publish web clicks by session to an Amazon SQS queue, then periodically drain these events to Amazon RDS and analyze with SQL
Your social media monitoring application uses a Python app running on AWS Elastic Beanstalk to ingest tweets, Facebook updates and RSS feeds into an Amazon Kinesis stream. A second AWS Elastic Beanstalk app generates key performance indicators into an Amazon DynamoDB table and powers a dashboard application. What is the most efficient option to prevent any data loss for this application?
Use AWS Data Pipeline to replicate your DynamoDB tables into another region.
Use the second AWS Elastic Beanstalk app to store a backup of Kinesis data onto Amazon Elastic Block Store (EBS), and then create snapshots from your Amazon EBS volumes.
Add a second Amazon Kinesis stream in another Availability Zone and use AWS data pipeline to replicate data across Kinesis streams.
Add a third AWS Elastic Beanstalk app that uses the Amazon Kinesis S3 connector to archive data from Amazon Kinesis into Amazon S3.