enables real-time processing of streaming data at massive scale
provides ordering of records per shard
provides an ability to read and/or replay records in the same order
allows multiple applications to consume the same data
data is replicated across three data centers within a region
data is preserved for 24 hours, by default, and can be extended to 7 days
once data is inserted into Kinesis, it can’t be deleted (immutability); it only expires
streams can be scaled using multiple shards, based on the partition key
each shard provides the capacity of 1MB/sec data input and 2MB/sec data output with 1000 PUT requests per second
Kinesis vs SQS
real-time processing of streaming big data vs reliable, highly scalable hosted queue for storing messages
ordered records, with the ability to read and/or replay records in the same order vs no guarantee on data ordering with standard queues (FIFO queues do preserve ordering)
data retention of 24 hours by default, extendable to 7 days vs retention configurable from 1 minute up to 14 days, with messages removed once deleted by the consumer
supports multiple consumers vs a single consumer at a time, requiring multiple queues to deliver the same message to multiple consumers
Kinesis Producer
API
PutRecord and PutRecords are synchronous
PutRecords uses batching and increases throughput
might experience ProvisionedThroughputExceeded exceptions when sending more data than the provisioned throughput allows; use retries with exponential backoff, reshard the stream, or choose a better-distributed partition key (see the PutRecords sketch after this section)
KPL
producer supports synchronous or asynchronous use cases
supports inbuilt batching and retry mechanism
Kinesis Agent can help monitor log files and send them to KDS
supports third-party libraries like Spark, Flume, Kafka Connect, etc.
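As an illustration of the API notes above, here is a minimal boto3 sketch of batching with PutRecords and retrying only the throttled records with exponential backoff; the stream name, record shape, and partition key choice are assumptions for the example.

```python
import json
import time
import boto3

kinesis = boto3.client("kinesis")

def put_batch(records, stream_name="my-stream", max_retries=5):
    """Send a batch with PutRecords; retry only the failed entries with backoff."""
    entries = [
        {"Data": json.dumps(r).encode("utf-8"), "PartitionKey": str(r["device_id"])}
        for r in records
    ]
    for attempt in range(max_retries):
        resp = kinesis.put_records(StreamName=stream_name, Records=entries)
        if resp["FailedRecordCount"] == 0:
            return
        # keep only the entries that failed, e.g. with ProvisionedThroughputExceededException
        entries = [e for e, result in zip(entries, resp["Records"]) if "ErrorCode" in result]
        time.sleep((2 ** attempt) * 0.1)  # exponential backoff before retrying
    raise RuntimeError(f"{len(entries)} records still failing after {max_retries} retries")
```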
Kinesis Consumers
Kinesis SDK
Records are polled by consumers from a shard (see the GetRecords sketch after this section)
Kinesis Client Library (KCL)
Read records from Kinesis produced with the KPL (de-aggregation)
supports checkpointing feature to keep track of the application’s state and resume progress using DynamoDB table
if the KCL application receives provisioned-throughput exceptions, increase the provisioned throughput for the checkpointing DynamoDB table
Kinesis Connector Library – can be replaced using Firehose or Lambda
Third party libraries: Spark, Log4J Appenders, Flume, Kafka Connect…
Kinesis Firehose, AWS Lambda
Kinesis Consumer Enhanced Fan-Out
supports Multiple Consumer applications for the same Stream
provides Low Latency ~70ms
Higher costs
Default limit of 5 consumers using enhanced fan-out per data stream
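A minimal SDK consumer sketch for the polling model mentioned above, reading one shard with GetShardIterator/GetRecords. The stream name and the TRIM_HORIZON start position are assumptions; in practice the KCL handles shard discovery, load balancing, and checkpointing for you.

```python
import time
import boto3

kinesis = boto3.client("kinesis")
stream_name = "my-stream"

# read from the first shard only, starting at the oldest available record
shard_id = kinesis.describe_stream(StreamName=stream_name)["StreamDescription"]["Shards"][0]["ShardId"]
iterator = kinesis.get_shard_iterator(
    StreamName=stream_name, ShardId=shard_id, ShardIteratorType="TRIM_HORIZON"
)["ShardIterator"]

while iterator:
    resp = kinesis.get_records(ShardIterator=iterator, Limit=100)
    for record in resp["Records"]:
        print(record["SequenceNumber"], record["Data"])
    iterator = resp.get("NextShardIterator")
    time.sleep(1)  # stay within the per-shard GetRecords limits
```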
Kinesis Security
allows access / authorization control using IAM policies
supports Encryption in flight using HTTPS endpoints
supports Data encryption at rest either using client side encryption before pushing the data to data streams or server side encryption
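A small sketch of enabling server-side encryption at rest on an existing stream; the stream name and the AWS managed key alias are assumptions.

```python
import boto3

kinesis = boto3.client("kinesis")
kinesis.start_stream_encryption(
    StreamName="my-stream",
    EncryptionType="KMS",
    KeyId="alias/aws/kinesis",  # a customer managed CMK ARN can be used instead
)
```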
Kinesis Data Firehose
is a data transfer solution for delivering real-time streaming data to destinations such as S3, Redshift, Elasticsearch Service, and Splunk.
is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration
is Near Real Time (min. 60 secs) as it buffers incoming streaming data to a certain size or for a certain period of time before delivering it
supports batching, compression, and encryption of the data before loading it, minimizing the amount of storage used at the destination and increasing security
supports data compression, minimizing the amount of storage used at the destination. It currently supports GZIP, ZIP, and SNAPPY compression formats. Only GZIP is supported if the data is further loaded to Redshift.
supports out-of-the-box data transformation as well as custom transformation using a Lambda function to transform incoming source data and deliver the transformed data to destinations (see the Lambda sketch after this list)
uses at least once semantics for data delivery.
supports multiple producers as the data source, including a Kinesis data stream, the KPL, Kinesis Agent, the Kinesis Data Firehose API using the AWS SDK, CloudWatch Logs, CloudWatch Events, or AWS IoT
does NOT support consumers like Spark and KCL
supports interface VPC endpoint to keep traffic between the VPC and Kinesis Data Firehose from leaving the Amazon network.
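A hedged sketch of a Lambda function Firehose could invoke for custom transformation; the transformation itself (upper-casing a JSON field) is an assumption, while the recordId/result/data response shape is what Firehose expects back.

```python
import base64
import json

def lambda_handler(event, context):
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        payload["status"] = payload.get("status", "").upper()  # illustrative transform
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",  # or "Dropped" / "ProcessingFailed"
            "data": base64.b64encode(
                (json.dumps(payload) + "\n").encode("utf-8")
            ).decode("utf-8"),
        })
    return {"records": output}
```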
Kinesis Data Streams vs Kinesis Data Firehose
Kinesis Data Analytics
helps analyze streaming data, gain actionable insights, and respond to the business and customer needs in real time.
reduces the complexity of building, managing, and integrating streaming applications with other AWS service
provides simple and cost-effective solution to analyze all the data using standard SQL and the existing Business Intelligence (BI) tools.
Redshift
manages the work needed to set up, operate, and scale a data warehouse, from provisioning the infrastructure capacity to automating ongoing administrative tasks such as backups and patching.
automatically monitors your nodes and drives to help you recover from failures.
only supports Single-AZ deployments.
replicates all the data within the data warehouse cluster when it is loaded and also continuously backs up your data to S3.
attempts to maintain at least three copies of your data (the original and replica on the compute nodes and a backup in S3).
supports cross-region snapshot replication to another region for disaster recovery
Compound sort key
is made up of all of the columns listed in the sort key definition, in the order they are listed, and is most efficient when query predicates, such as filters and joins, use a prefix of the sort key, i.e. a subset of the sort key columns in order (a DDL sketch follows these sort key notes).
Interleaved sort key
gives equal weight to each column in the sort key, so query predicates can use any subset of the columns that make up the sort key, in any order.
Not ideal for monotonically increasing attributes
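A sketch of the two sort key styles as DDL, issued through the Redshift Data API; the cluster identifier, database, user, and table definitions are assumptions for the example.

```python
import boto3

rsd = boto3.client("redshift-data")

compound_ddl = """
CREATE TABLE events_compound (
    event_time  TIMESTAMP,
    customer_id BIGINT,
    event_type  VARCHAR(32)
)
-- most efficient when filters use a prefix of (event_time, customer_id)
COMPOUND SORTKEY (event_time, customer_id);
"""

interleaved_ddl = """
CREATE TABLE events_interleaved (
    event_time  TIMESTAMP,
    customer_id BIGINT,
    event_type  VARCHAR(32)
)
-- equal weight per column; avoid monotonically increasing columns
INTERLEAVED SORTKEY (customer_id, event_type);
"""

for ddl in (compound_ddl, interleaved_ddl):
    rsd.execute_statement(
        ClusterIdentifier="my-cluster", Database="dev", DbUser="awsuser", Sql=ddl
    )
```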
Column encodings CANNOT be changed once created.
supports query queues for Workload Management, in order to manage concurrency and resource planning. It is a best practice to have separate queues for long-running, resource-intensive queries and for fast queries that don’t require large amounts of memory and CPU
COPY command
supports loading data from S3, EMR, DynamoDB, and remote hosts using SSH
parallelized and efficient
can decrypt data as it is loaded from S3
DON’T use multiple concurrent COPY commands to load one table from multiple files as Redshift is forced to perform a serialized load, which is much slower.
supports data decryption when loading, if the data is encrypted
supports decompressing data when loading, if the data is compressed.
Split the Load Data into Multiple Files
Load the data in sort key order to avoid needing to vacuum.
Use a Manifest File
provides Data consistency, to avoid S3 eventual consistency issues
helps specify different S3 locations in a more efficient way than with the use of S3 prefixes (see the manifest COPY sketch below).
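A sketch of loading split, compressed files through a manifest with a single COPY, again via the Redshift Data API; the bucket, file names, IAM role, and target table are assumptions.

```python
import boto3

rsd = boto3.client("redshift-data")

# The manifest, uploaded separately to s3://my-bucket/load/manifest.json, might look like:
# {"entries": [
#     {"url": "s3://my-bucket/load/part-0000.gz", "mandatory": true},
#     {"url": "s3://my-bucket/load/part-0001.gz", "mandatory": true}
# ]}

copy_sql = """
COPY sales
FROM 's3://my-bucket/load/manifest.json'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
MANIFEST
GZIP;
"""

rsd.execute_statement(
    ClusterIdentifier="my-cluster", Database="dev", DbUser="awsuser", Sql=copy_sql
)
```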
AWS Glue
is a fully-managed ETL service that automates the time-consuming steps of data preparation for analytics
is serverless and supports pay-as-you-go model.
recommends and generates ETL code to transform the source data into target schemas, and runs the ETL jobs on a fully managed, scale-out Apache Spark environment to load the data into its destination (a sample job script follows this list).
helps set up, orchestrate, and monitor complex data flows.
natively supports RDS, Redshift, S3 and databases on EC2 instances.
supports server side encryption for data at rest and SSL for data in motion.
provides development endpoints to edit, debug, and test the code it generates.
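A sketch of the kind of job script Glue generates and runs on its managed Spark environment: read a cataloged table, apply a mapping, and write Parquet to S3. The database, table, and output path are assumptions.

```python
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# read the source table registered in the Glue Data Catalog
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)
# map source columns to the target schema
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[("order_id", "string", "order_id", "string"),
              ("amount", "double", "amount", "double")],
)
# write the transformed data to the destination
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://my-bucket/curated/orders/"},
    format="parquet",
)
job.commit()
```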
AWS Glue Data Catalog
is a central repository to store structural and operational metadata for all the data assets.
automatically discovers and profiles the data
automatically discover both structured and semi-structured data stored in the data lake on S3, Redshift, and other databases
provides a unified view of the data that is available for ETL, querying and reporting using services like Athena, EMR, and Redshift Spectrum.
Each AWS account has one AWS Glue Data Catalog per region.
AWS Glue crawler
connects to a data store, progresses through a prioritized list of classifiers to extract the schema of the data and other statistics, and then populates the Glue Data Catalog with this metadata
can be scheduled to run periodically so that the metadata is always up-to-date and in-sync with the underlying data.
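A boto3 sketch of creating and scheduling such a crawler over an S3 prefix; the crawler name, IAM role, database, path, and schedule are assumptions.

```python
import boto3

glue = boto3.client("glue")
glue.create_crawler(
    Name="raw-orders-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",
    DatabaseName="sales_db",
    Targets={"S3Targets": [{"Path": "s3://my-bucket/raw/orders/"}]},
    Schedule="cron(0 2 * * ? *)",  # run daily so the catalog stays in sync with the data
)
glue.start_crawler(Name="raw-orders-crawler")  # or wait for the first scheduled run
```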
QuickSight
is a very fast, easy-to-use, cloud-powered business analytics service that makes it easy to build visualizations, perform ad-hoc analysis, and quickly get business insights from data, anytime, on any device.
delivers fast and responsive query performance by using a robust in-memory engine (SPICE).
“SPICE” stands for Super-fast, Parallel, In-memory Calculation Engine
can also be configured to keep the data in SPICE up-to-date as the data in the underlying sources change.
automatically replicates data for high availability and enables QuickSight to scale to support many users performing simultaneous, fast, interactive analysis across a wide variety of AWS data sources.
supports
Excel files and flat files like CSV, TSV, CLF, ELF
on-premises databases like PostgreSQL, SQL Server and MySQL
SaaS applications like Salesforce
and AWS data sources such as Redshift, RDS, Aurora, Athena, and S3
supports various functions to format and transform the data.
supports assorted visualizations that facilitate different analytical approaches:
Comparison and distribution – Bar charts (several assorted variants)
Elasticsearch Service is a managed service that makes it easy to deploy, operate, and scale Elasticsearch clusters in the AWS Cloud.
Elasticsearch Service provides
real-time, distributed search and analytics engine
ability to provision all the resources for Elasticsearch cluster and launches the cluster
easy to use cluster scaling options. Scaling Elasticsearch Service domain by adding or modifying instances, and storage volumes is an online operation that does not require any downtime.
provides self-healing clusters that automatically detect and replace failed Elasticsearch nodes, reducing the overhead associated with self-managed infrastructure
domain snapshots to back up and restore ES domains and replicate domains across AZs
enhanced security with IAM, Network, Domain access policies, and fine-grained access control
storage volumes for the data using EBS volumes
ability to span cluster nodes across multiple AZs in the same region, known as zone awareness, for high availability and redundancy. Elasticsearch Service automatically distributes the primary and replica shards across instances in different AZs.
dedicated master nodes to improve cluster stability
data visualization using the Kibana tool
integration with CloudWatch for monitoring ES domain metrics
integration with CloudTrail for auditing configuration API calls to ES domains
integration with S3, Kinesis, and DynamoDB for loading streaming data
ability to handle structured and unstructured data
supports encryption at rest through KMS, node-to-node encryption over TLS, and the ability to require clients to communicate over HTTPS
Amazon EMR is a web service that utilizes a hosted Hadoop framework running on the web-scale infrastructure of EC2 and S3
EMR enables businesses, researchers, data analysts, and developers to easily and cost-effectively process vast amounts of data
EMR
uses Apache Hadoop as its distributed data processing engine, which is an open source, Java software that supports data-intensive distributed applications running on large clusters of commodity hardware
is ideal for problems that necessitate fast and efficient processing of large amounts of data
lets the focus be on crunching or analyzing big data without having to worry about time-consuming set-up, management or tuning of Hadoop clusters or the compute capacity
can help perform data-intensive tasks for applications such as web indexing, data mining, log file analysis, machine learning, financial analysis, scientific simulation, and bioinformatics research etc
provides web service interface to launch the clusters and monitor processing-intensive computation on clusters
is a batch-processing framework where typical processing durations range from hours to days; if the use case needs real-time processing or results within minutes, Apache Spark or Storm would be a better option
EMR seamlessly supports On-Demand, Spot, and Reserved Instances
EMR launches all nodes for a given cluster in the same EC2 Availability Zone, which improves performance as it provides higher data access rate
EMR supports different EC2 instance types including Standard, High CPU, High Memory, Cluster Compute, High I/O, and High Storage
Standard Instances have memory to CPU ratios suitable for most general-purpose applications.
High CPU instances have proportionally more CPU resources than memory (RAM) and are well suited for compute-intensive applications
High Memory instances offer large memory sizes for high throughput applications
Cluster Compute instances have proportionally high CPU with increased network performance and are well suited for High Performance Compute (HPC) applications and other demanding network-bound applications
High Storage instances offer 48 TB of storage across 24 disks and are ideal for applications that require sequential access to very large data sets such as data warehousing and log processing
EMR charges in hourly increments i.e. once the cluster is running, charges apply for the entire hour
EMR integrates with CloudTrail to record AWS API calls
NOTE: Topic mainly for Solution Architect Professional & Data Analytics – Speciality Exam Only
EMR Architecture
Amazon EMR uses industry proven, fault-tolerant Hadoop software as its data processing engine
Hadoop is an open source, Java software that supports data-intensive distributed applications running on large clusters of commodity hardware
Hadoop splits the data into multiple subsets and assigns each subset to more than one EC2 instance. So, if an EC2 instance fails to process one subset of data, the results of another Amazon EC2 instance can be used
EMR consists of a Master node and one or more Slave nodes
Master Node
EMR currently does not support automatic failover of the master nodes or master node state recovery
If master node goes down, the EMR cluster will be terminated and the job needs to be re-executed
Slave Nodes – Core nodes and Task nodes
Core nodes
host persistent data using Hadoop Distributed File System (HDFS) and run Hadoop tasks
can be increased in an existing cluster
Task nodes
only run Hadoop tasks
can be increased or decreased in an existing cluster
EMR is fault tolerant for slave failures and continues job execution if a slave node goes down.
Currently, EMR does not automatically provision another node to take over failed slaves
EMR supports Bootstrap actions which allow
users a way to run custom set-up prior to the execution of the cluster.
can be used to install software or configure instances before running the cluster
EMR Security
EMR cluster starts with different security groups for Master and Slaves
Master security group
has a port open for communication with the service.
has a SSH port open to allow direct SSH into the instances, using the key specified at startup
Slave security group
only allows interaction with the master instance
SSH to the slave nodes can be done by first SSHing into the master node and then into the slave node
Security groups can be configured with different access rules
EMR enables use of security configuration
which helps to encrypt data at-rest, data in-transit, or both (a sample security configuration sketch follows these notes)
can be used to specify settings for S3 encryption with EMR file system (EMRFS), local disk encryption, and in-transit encryption
is stored in EMR rather than in the cluster configuration, making it reusable
gives flexibility to choose from several options, including keys managed by AWS KMS, keys managed by S3, and keys and certificates from custom providers that you supply
At-rest Encryption for S3 with EMRFS
EMRFS supports Server-side (SSE-S3, SSE-KMS) and Client-side encryption (CSE-KMS or CSE-Custom)
S3 SSE and CSE encryption with EMRFS are mutually exclusive; either one can be selected but not both
Transport layer security (TLS) encrypts EMRFS objects in-transit between EMR cluster nodes & S3
At-rest Encryption for Local Disks
Open-source HDFS Encryption
HDFS exchanges data between cluster instances during distributed processing, and also reads from and writes data to instance store volumes and the EBS volumes attached to instances
Open-source Hadoop encryption options are activated
Secure Hadoop RPC is set to “Privacy”, which uses the Simple Authentication and Security Layer (SASL).
Data encryption on HDFS block data transfer is set to true and is configured to use AES 256 encryption.
LUKS – in addition to HDFS encryption, the Amazon EC2 instance store volumes (except boot volumes) and the attached Amazon EBS volumes of cluster instances are encrypted using LUKS
In-Transit Data Encryption
Encryption artifacts used for in-transit encryption can be supplied in one of two ways:
either by providing a zipped file of certificates that you upload to S3,
or by referencing a custom Java class that provides encryption artifacts
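A sketch of creating such a reusable security configuration with boto3, covering EMRFS (S3) encryption, local disk encryption, and in-transit TLS; the KMS key ARNs and the certificate zip location are assumptions.

```python
import json
import boto3

emr = boto3.client("emr")
emr.create_security_configuration(
    Name="encrypt-at-rest-and-in-transit",
    SecurityConfiguration=json.dumps({
        "EncryptionConfiguration": {
            "EnableAtRestEncryption": True,
            "EnableInTransitEncryption": True,
            "AtRestEncryptionConfiguration": {
                "S3EncryptionConfiguration": {
                    "EncryptionMode": "SSE-KMS",
                    "AwsKmsKey": "arn:aws:kms:us-east-1:123456789012:key/<emrfs-key-id>",
                },
                "LocalDiskEncryptionConfiguration": {
                    "EncryptionKeyProviderType": "AwsKms",
                    "AwsKmsKey": "arn:aws:kms:us-east-1:123456789012:key/<local-disk-key-id>",
                },
            },
            "InTransitEncryptionConfiguration": {
                "TLSCertificateConfiguration": {
                    "CertificateProviderType": "PEM",
                    "S3Object": "s3://my-bucket/certs/emr-certs.zip",
                },
            },
        }
    }),
)
```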
EMR Cluster Types
EMR has two cluster types, transient and persistent
Transient EMR Clusters
Transient EMR clusters are clusters that shut down when the job or the steps (series of jobs) are complete
Transient EMR clusters can be used in situations
where the total number of EMR processing hours per day is less than 24 hours and it’s beneficial to shut down the cluster when it’s not being used.
HDFS is not being used as the primary data storage (data is kept in S3).
job processing is intensive, iterative data processing.
Persistent EMR Clusters
Persistent EMR clusters continue to run after the data processing job is complete
Persistent EMR clusters can be used in situations
frequently run processing jobs where it’s beneficial to keep the cluster running after the previous job.
processing jobs have an input-output dependency on one another.
In rare cases when it is more cost effective to store the data on HDFS instead of S3
EMR Best Practices
Data Migration
Two tools – S3DistCp and DistCp – can be used to move data from local (data center) HDFS storage to S3, from S3 to HDFS, and between S3 and local disk (non-HDFS) storage (see the S3DistCp step sketch below)
AWS Import/Export and Direct Connect can also be considered for moving data
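A sketch of running S3DistCp as an EMR step to copy data from S3 into HDFS via boto3; the cluster id, bucket, and HDFS target are assumptions.

```python
import boto3

emr = boto3.client("emr")
emr.add_job_flow_steps(
    JobFlowId="j-XXXXXXXXXXXXX",  # existing cluster id
    Steps=[{
        "Name": "copy-raw-data-to-hdfs",
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["s3-dist-cp", "--src", "s3://my-bucket/raw/", "--dest", "hdfs:///data/raw/"],
        },
    }],
)
```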
Data Collection
Apache Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, & moving large amounts of log data
Flume agents can be installed on the data sources (web-servers, app servers etc) and data shipped to the collectors which can then be stored in persistent storage like S3 or HDFS
Data Aggregation
Data aggregation refers to techniques for gathering individual data records (for e.g. log records) and combining them into a large bundle of data files i.e. creating a large file from small files
Hadoop, on which EMR runs, generally performs better with fewer large files compared to many small files
Hadoop splits the file on HDFS on multiple nodes, while for the data in S3 it uses the HTTP Range header query to split the files which helps improve performance by supporting parallelization
Log collectors like Flume and Fluentd can be used to aggregate data before copying it to the final destination (S3 or HDFS)
Data aggregation has following benefits
Improves data ingest scalability by reducing the number of times needed to upload data to AWS
Reduces the number of files stored on S3 (or HDFS), which inherently helps provide better performance when processing data
Provides a better compression ratio as compressing large, highly compressible files is often more effective than compressing a large number of smaller files.
Data compression
Data compression can be used at the input as well as intermediate outputs from the mappers
Data compression helps
Lower storage costs
Lower bandwidth cost for data transfer
Better data processing performance by moving less data between data storage locations, mappers, and reducers
Better data processing performance by compressing the data that EMR writes to disk, i.e. achieving better performance by writing to disk less frequently
Data compression can have an impact on the Hadoop data splitting logic, as some compression formats like gzip are not splittable
Data Partitioning
Data partitioning helps in data optimization and lets you create unique buckets of data, eliminating the need for a data processing job to read the entire data set (see the S3 layout sketch after this list)
Data can be partitioned by
Data type (time series)
Data processing frequency (per hour, per day, etc.)
Data access and query pattern (query on time vs. query on geo location)
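A sketch of writing aggregated, compressed, date-partitioned objects to S3 so a processing job can read only the partitions it needs; the bucket and prefix layout are assumptions.

```python
import datetime
import gzip
import json
import boto3

s3 = boto3.client("s3")

def write_hourly_batch(records, bucket="my-log-bucket"):
    """Aggregate small records into one compressed object under a date/hour partition."""
    now = datetime.datetime.now(datetime.timezone.utc)
    key = (f"logs/year={now:%Y}/month={now:%m}/day={now:%d}/hour={now:%H}/"
           f"batch-{now:%Y%m%dT%H%M%S}.json.gz")
    body = gzip.compress("\n".join(json.dumps(r) for r in records).encode("utf-8"))
    s3.put_object(Bucket=bucket, Key=key, Body=body)
    return key
```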
Cost Optimization
AWS offers different pricing models for EC2 instances
On-Demand instances
are a good option if using transient EMR jobs or if the EMR hourly usage is less than 17% of the time
Reserved instances
are a good option for a persistent EMR cluster or if the EMR hourly usage is more than 17% of the time, as they are more cost-effective
Spot instances
can be a cost effective mechanism to add compute capacity
can be used where the data persists on S3
can be used to add extra task capacity with Task nodes, and
are not suited for the Master node, as losing it means losing the cluster, nor for Core nodes (data nodes), as they host data which, if lost, needs to be recovered to rebalance the HDFS cluster
A common architecture pattern can be used (sketched below):
Run master node on On-Demand or Reserved Instances (if running persistent EMR clusters).
Run a portion of the EMR cluster on core nodes using On-Demand or Reserved Instances and
the rest of the cluster on task nodes using Spot Instances.
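A boto3 sketch of the pattern above: a transient cluster with On-Demand master and core nodes, Spot task nodes, a bootstrap action, and data kept in S3; the names, instance types, counts, bid price, and bootstrap script are assumptions.

```python
import boto3

emr = boto3.client("emr")
emr.run_job_flow(
    Name="nightly-analytics",
    ReleaseLabel="emr-6.15.0",
    Applications=[{"Name": "Spark"}],
    LogUri="s3://my-bucket/emr-logs/",
    ServiceRole="EMR_DefaultRole",
    JobFlowRole="EMR_EC2_DefaultRole",
    Instances={
        "Ec2KeyName": "my-key",
        "KeepJobFlowAliveWhenNoSteps": False,  # transient: shut down once the steps finish
        "InstanceGroups": [
            {"Name": "master", "InstanceRole": "MASTER", "Market": "ON_DEMAND",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "core", "InstanceRole": "CORE", "Market": "ON_DEMAND",
             "InstanceType": "m5.xlarge", "InstanceCount": 2},
            {"Name": "task", "InstanceRole": "TASK", "Market": "SPOT",
             "InstanceType": "m5.xlarge", "InstanceCount": 4, "BidPrice": "0.10"},
        ],
    },
    BootstrapActions=[{
        "Name": "install-deps",
        "ScriptBootstrapAction": {"Path": "s3://my-bucket/bootstrap/install-deps.sh"},
    }],
)
```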
EMR – S3 vs HDFS
Storing data on S3 provides several benefits
inherent features like high availability, durability, lifecycle management, data encryption, and archival of data to Glacier
cost effective as storing data in S3 is cheaper as compared to HDFS with the replication factor
ability to use Transient EMR cluster and shutdown the clusters after the job is completed, with data being maintained in S3
ability to use Spot instances and not having to worry about losing the spot instances any time
provides data durability even when HDFS node failures exceed the HDFS replication factor
data ingestion with high throughput data stream to S3 is much easier than ingesting to HDFS
AWS Certification Exam Practice Questions
Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
Open to further feedback, discussion and correction.
You require the ability to analyze a large amount of data, which is stored on Amazon S3 using Amazon Elastic Map Reduce. You are using the cc2.8xlarge instance type, whose CPUs are mostly idle during processing. Which of the below would be the most cost efficient way to reduce the runtime of the job? [PROFESSIONAL]
Create smaller files on Amazon S3.
Add additional cc2.8xlarge instances by introducing a task group.
Use smaller instances that have higher aggregate I/O performance.
Create fewer, larger files on Amazon S3.
A customer’s nightly EMR job processes a single 2-TB data file stored on Amazon Simple Storage Service (S3). The Amazon Elastic Map Reduce (EMR) job runs on two On-Demand core nodes and three On-Demand task nodes. Which of the following may help reduce the EMR job completion time? Choose 2 answers
Use three Spot Instances rather than three On-Demand instances for the task nodes.
Change the input split size in the MapReduce job configuration.
Use a bootstrap action to present the S3 bucket as a local filesystem.
Launch the core nodes and task nodes within an Amazon Virtual Cloud.
Adjust the number of simultaneous mapper tasks.
Enable termination protection for the job flow.
Your department creates regular analytics reports from your company’s log files. All log data is collected in Amazon S3 and processed by daily Amazon Elastic Map Reduce (EMR) jobs that generate daily PDF reports and aggregated tables in CSV format for an Amazon Redshift data warehouse. Your CFO requests that you optimize the cost structure for this system. Which of the following alternatives will lower costs without compromising average performance of the system or data integrity for the raw data? [PROFESSIONAL]
Use reduced redundancy storage (RRS) for PDF and CSV data in Amazon S3. Add Spot instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift. (Only Spot instances impacts performance)
Use reduced redundancy storage (RRS) for all data in S3. Use a combination of Spot instances and Reserved Instances for Amazon EMR jobs. Use Reserved instances for Amazon Redshift (The combination of Spot and Reserved Instances guarantees performance and helps reduce cost. Also, RRS would reduce cost and guarantee data integrity, which is different from data durability)
Use reduced redundancy storage (RRS) for all data in Amazon S3. Add Spot Instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift (Only Spot instances impacts performance)
Use reduced redundancy storage (RRS) for PDF and CSV data in S3. Add Spot Instances to EMR jobs. Use Spot Instances for Amazon Redshift. (Spot instances impacts performance and Spot instance not available for Redshift)
A research scientist is planning for the one-time launch of an Elastic MapReduce cluster and is encouraged by her manager to minimize the costs. The cluster is designed to ingest 200TB of genomics data with a total of 100 Amazon EC2 instances and is expected to run for around four hours. The resulting data set must be stored temporarily until archived into an Amazon RDS Oracle instance. Which option will help save the most money while meeting requirements? [PROFESSIONAL]
Store ingest and output files in Amazon S3. Deploy on-demand for the master and core nodes and spot for the task nodes.
Optimize by deploying a combination of on-demand, RI and spot-pricing models for the master, core and task nodes. Store ingest and output files in Amazon S3 with a lifecycle policy that archives them to Amazon Glacier. (Master and Core must be RI or On Demand. Cannot be Spot)
Store the ingest files in Amazon S3 RRS and store the output files in S3. Deploy Reserved Instances for the master and core nodes and on-demand for the task nodes. (Need better durability for ingest file. Spot instances can be used for task nodes for cost saving. RI will not provide cost saving in this case)
Deploy on-demand master, core and task nodes and store ingest and output files in Amazon S3 RRS (Input should be in S3 standard, as re-ingesting the input data might end up being more costly then holding the data for limited time in standard S3)
Your company sells consumer devices and needs to record the first activation of all sold devices. Devices are not activated until the information is written on a persistent database. Activation data is very important for your company and must be analyzed daily with a MapReduce job. The execution time of the data analysis process must be less than three hours per day. Devices are usually sold evenly during the year, but when a new device model is out, there is a predictable peak in activations, that is, for a few days there are 10 times or even 100 times more activations than on an average day. Which of the following databases and analysis framework would you implement to better optimize costs and performance for this workload? [PROFESSIONAL]
Amazon RDS and Amazon Elastic MapReduce with Spot instances.
Amazon DynamoDB and Amazon Elastic MapReduce with Spot instances.
Amazon RDS and Amazon Elastic MapReduce with Reserved instances.
Amazon DynamoDB and Amazon Elastic MapReduce with Reserved instances