AWS Certification – Analytics Services – Cheat Sheet

Kinesis Data Streams – KDS

  • enables real-time processing of streaming data at massive scale
  • provides ordering of records per shard
  • provides an ability to read and/or replay records in the same order
  • allows multiple applications to consume the same data
  • data is replicated across three data centers within a region
  • data is preserved for 24 hours, by default, and can be extended to 7 days
  • once data is inserted into Kinesis, it can’t be deleted (immutability); it only expires
  • streams can be scaled using multiple shards, based on the partition key
  • each shard provides a capacity of 1MB/sec data input and 2MB/sec data output, with up to 1,000 PUT records per second
  • Kinesis vs SQS
    • real-time processing of streaming big data vs reliable, highly scalable hosted queue for storing messages
    • ordered records, as well as the ability to read and/or replay records in the same order vs no guarantee on data ordering (with the standard queues before the FIFO queue feature was released)
    • data storage up to 24 hours, extendable to 7 days vs retention configurable from 1 minute up to 14 days, with messages removed once deleted by the consumer
    • supports multiple consumers vs single consumer at a time and requires multiple queues to deliver message to multiple consumers
  • Kinesis Producer
    • API
      • PutRecord and PutRecords are synchronous
      • PutRecords uses batching and increases throughput
      • might experience ProvisionedThroughputExceeded exceptions when sending more data than provisioned; handle with retries with backoff, resharding, or a better-distributed partition key (see the sketch after this list)
    • KPL
      • producer supports synchronous or asynchronous use cases
      • supports inbuilt batching and retry mechanism
    • Kinesis Agent can help monitor log files and send them to KDS
    • supports third-party libraries like Spark, Flume, Kafka Connect, etc.
  • Kinesis Consumers
    • Kinesis SDK
      • Records are polled by consumers from a shard
    • Kinesis Client Library (KCL)
      • reads records from Kinesis produced with the KPL (de-aggregation)
      • supports checkpointing feature to keep track of the application’s state and resume progress using DynamoDB table
      • if the KCL application receives provisioned-throughput exceptions, increase the provisioned throughput for the DynamoDB checkpoint table
    • Kinesis Connector Library – can be replaced using Firehose or Lambda
    • Third party libraries: Spark, Log4J Appenders, Flume, Kafka Connect…
    • Kinesis Firehose, AWS Lambda
    • Kinesis Consumer Enhanced Fan-Out
      • supports multiple consumer applications for the same stream
      • provides low latency (~70 ms)
      • incurs higher costs
      • Default limit of 5 consumers using enhanced fan-out per data stream
  • Kinesis Security
    • allows access / authorization control using IAM policies
    • supports Encryption in flight using HTTPS endpoints
    • supports Data encryption at rest either using client side encryption before pushing the data to data streams or server side encryption
    • supports VPC Endpoints to access within VPC
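
The snippet below is a minimal sketch of the PutRecords producer pattern noted above: batch writes, retrying only the failed records with exponential backoff (e.g. on ProvisionedThroughputExceededException). The stream name, partition keys, and payloads are illustrative assumptions.

```python
import time
import boto3

kinesis = boto3.client("kinesis")

def put_with_backoff(records, stream_name="example-stream", max_retries=5):
    """Batch-put records, retrying only the failed ones with exponential backoff."""
    pending = [{"Data": r["data"], "PartitionKey": r["key"]} for r in records]
    for attempt in range(max_retries):
        response = kinesis.put_records(StreamName=stream_name, Records=pending)
        if response["FailedRecordCount"] == 0:
            return
        # Keep only the records that failed (result entries carry an ErrorCode)
        pending = [rec for rec, result in zip(pending, response["Records"])
                   if "ErrorCode" in result]
        time.sleep(0.1 * (2 ** attempt))  # exponential backoff before retrying
    raise RuntimeError(f"{len(pending)} records still failing after {max_retries} retries")

put_with_backoff([{"data": b"reading-1", "key": "sensor-42"}])
```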

Kinesis Data Firehose – KDF

  • data transfer solution for delivering real-time streaming data to destinations such as S3, Redshift, Elasticsearch Service, and Splunk.
  • is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration
  • is Near Real Time (min. 60 secs) as it buffers incoming streaming data to a certain size or for a certain period of time before delivering it
  • supports batching, compression, and encryption of the data before loading it, minimizing the amount of storage used at the destination and increasing security
  • supports GZIP, ZIP, and SNAPPY compression formats; only GZIP is supported if the data is further loaded to Redshift
  • supports out of box data transformation as well as custom transformation using Lambda function to transform incoming source data and deliver the transformed data to destinations
  • uses at least once semantics for data delivery.
  • supports multiple producers as data sources, including a Kinesis data stream, the KPL, the Kinesis Agent, the Kinesis Data Firehose API using the AWS SDK, CloudWatch Logs, CloudWatch Events, and AWS IoT (see the sketch after this list)
  • does NOT support consumers like Spark and KCL
  • supports interface VPC endpoint to keep traffic between the VPC and Kinesis Data Firehose from leaving the Amazon network.
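
Below is a minimal sketch of pushing records to a Firehose delivery stream via the API; the delivery stream name and payloads are illustrative assumptions, and the stream is presumed to already exist with an S3 (or Redshift/Elasticsearch/Splunk) destination.

```python
import boto3

firehose = boto3.client("firehose")

records = [{"Data": (f'{{"sensor":"s-{i}","value":{i}}}\n').encode()} for i in range(10)]

response = firehose.put_record_batch(
    DeliveryStreamName="example-delivery-stream",
    Records=records,
)

# Firehose buffers records by size/time before delivery (near real time),
# so a successful call means the records were accepted, not yet delivered.
print("Failed records:", response["FailedPutCount"])
```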

Kinesis Data Streams vs Kinesis Data Firehose

  • KDS provides real-time processing, stores data (24 hours by default, extendable to 7 days), supports replay and multiple consumers, but requires managing shards and writing consumer applications
  • KDF is near real time (min. 60 secs buffering), fully managed with automatic scaling, does not store data, and delivers directly to S3, Redshift, Elasticsearch Service, and Splunk

Kinesis Data Analytics

  • helps analyze streaming data, gain actionable insights, and respond to business and customer needs in real time.
  • reduces the complexity of building, managing, and integrating streaming applications with other AWS services

Redshift

  • Redshift is a fast, fully managed data warehouse
  • provides a simple and cost-effective solution to analyze all the data using standard SQL and the existing Business Intelligence (BI) tools.
  • manages the work needed to set up, operate, and scale a data warehouse, from provisioning the infrastructure capacity to automating ongoing administrative tasks such as backups and patching.
  • automatically monitors your nodes and drives to help you recover from failures.
  • only supports Single-AZ deployments.
  • replicates all the data within the data warehouse cluster when it is loaded and also continuously backs up your data to S3.
  • attempts to maintain at least three copies of your data (the original and replica on the compute nodes and a backup in S3).
  • supports cross-region snapshot replication to another region for disaster recovery
  • Redshift supports four distribution styles: AUTO, EVEN, KEY, and ALL.
    • KEY distribution uses a single column as distribution key (DISTKEY) and helps place matching values on the same node slice
    • EVEN distribution distributes the rows across the slices in a round-robin fashion, regardless of the values in any particular column
    • ALL distribution replicates the whole table on every compute node.
    • AUTO distribution lets Redshift assign an optimal distribution style based on the size of the table data
  • Redshift supports Compound and Interleaved sort keys
    • Compound key
      • is made up of all of the columns listed in the sort key definition, in the order they are listed; it is most efficient when query predicates (filters and joins) use a prefix of the sort key columns, in order.
    • Interleaved sort key
      • gives equal weight to each column in the sort key, so query predicates can use any subset of the columns that make up the sort key, in any order.
      • Not ideal for monotonically increasing attributes
  • Column encodings CANNOT be changed once created.
  • supports query queues for Workload Management, in order to manage concurrency and resource planning. It is a best practice to have separate queues for long running resource-intensive queries and fast queries that don’t require big amounts of memory and CPU
  • Supports Enhanced VPC routing
  • Import/Export Data
    • UNLOAD helps copy data from Redshift table to S3
    • COPY command
      • helps copy data from S3 to Redshift (see the sketch after this list)
      • also supports EMR, DynamoDB, remote hosts using SSH
      • parallelized and efficient
      • can decrypt data as it is loaded from S3
      • DON’T use multiple concurrent COPY commands to load one table from multiple files as Redshift is forced to perform a serialized load, which is much slower.
      • supports decompressing data, if the data is compressed.
    • Split the Load Data into Multiple Files
    • Load the data in sort key order to avoid needing to vacuum.
    • Use a Manifest File
      • provides Data consistency, to avoid S3 eventual consistency issues
      • helps specify different S3 locations in a more efficient way than with the use of S3 prefixes.
  • Redshift Spectrum
    • helps query and retrieve structured and semistructured data from files in S3 without having to load the data into Redshift tables
    • Redshift Spectrum external tables are read-only. You can’t COPY or INSERT to an external table.
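
The sketch below illustrates the DDL and COPY practices above (KEY distribution, compound sort key, manifest file, GZIP decompression) using the Redshift Data API; the cluster, database, user, table, bucket, and IAM role names are illustrative assumptions.

```python
import boto3

rsd = boto3.client("redshift-data")

ddl = """
CREATE TABLE IF NOT EXISTS sales (
    customer_id BIGINT,
    order_date  DATE,
    amount      DECIMAL(12,2)
)
DISTKEY (customer_id)          -- KEY distribution: co-locate matching values
COMPOUND SORTKEY (order_date); -- compound sort key: efficient for prefix filters
"""

copy_cmd = """
COPY sales
FROM 's3://example-bucket/sales/manifest.json'
IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-role'
MANIFEST                  -- explicit file list, avoids relying on S3 prefixes
GZIP FORMAT AS CSV;       -- COPY decompresses GZIP data as it loads
"""

for sql in (ddl, copy_cmd):
    rsd.execute_statement(
        ClusterIdentifier="example-cluster",
        Database="dev",
        DbUser="awsuser",
        Sql=sql,
    )
```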

EMR

  • is a web service that utilizes a hosted Hadoop framework running on the web-scale infrastructure of EC2 and S3
  • launches all nodes for a given cluster in the same Availability Zone, which improves performance as it provides higher data access rate
  • seamlessly supports Reserved, On-Demand and Spot Instances
  • consists of a Master node for management and Slave nodes, which consist of Core nodes (which hold data) and Task nodes (which perform tasks only)
  • is fault tolerant for slave node failures and continues job execution if a slave node goes down
  • does not automatically provision another node to take over failed slaves
  • supports Persistent and Transient cluster types (see the sketch after this list)
    • Persistent clusters continue to run after the processing is complete
    • Transient clusters terminate once the job steps are completed
  • supports EMRFS, which allows S3 to be used as a durable, highly available data store
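
Below is a minimal sketch of launching a transient cluster with Master, Core, and Spot Task instance groups that terminates once its steps complete; the release label, instance types, S3 paths, roles, and the Spark step are illustrative assumptions.

```python
import boto3

emr = boto3.client("emr")

response = emr.run_job_flow(
    Name="example-transient-cluster",
    ReleaseLabel="emr-6.10.0",
    Applications=[{"Name": "Spark"}],
    LogUri="s3://example-bucket/emr-logs/",
    Instances={
        "InstanceGroups": [
            {"Name": "master", "InstanceRole": "MASTER", "Market": "ON_DEMAND",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "core", "InstanceRole": "CORE", "Market": "ON_DEMAND",
             "InstanceType": "m5.xlarge", "InstanceCount": 2},
            {"Name": "task", "InstanceRole": "TASK", "Market": "SPOT",
             "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,  # transient: terminate after steps finish
    },
    Steps=[{
        "Name": "example-spark-step",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "s3://example-bucket/jobs/job.py"],
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])
```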

Glue

  • fully managed ETL service that automates the time-consuming steps of data preparation for analytics
  • is serverless and supports a pay-as-you-go model.
  • recommends and generates ETL code to transform the source data into target schemas, and runs the ETL jobs on a fully managed, scale-out Apache Spark environment to load your data into its destination.
  • helps setup, orchestrate, and monitor complex data flows.
  • natively supports RDS, Redshift, S3 and databases on EC2 instances.
  • supports server side encryption for data at rest and SSL for data in motion.
  • provides development endpoints to edit, debug, and test the code it generates.
  • AWS Glue Data Catalog
    • is a central repository to store structural and operational metadata for all the data assets.
    • automatically discovers and profiles the data
    • automatically discovers both structured and semi-structured data stored in the data lake on S3, in Redshift, and in other databases
    • provides a unified view of the data that is available for ETL, querying and reporting using services like Athena, EMR, and Redshift Spectrum.
    • Each AWS account has one AWS Glue Data Catalog per region.
  • AWS Glue crawler
    • connects to a data store, progresses through a prioritized list of classifiers to extract the schema of the data and other statistics, and then populates the Glue Data Catalog with this metadata (see the sketch after this list)
    • can be scheduled to run periodically so that the metadata is always up-to-date and in-sync with the underlying data.
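
Below is a minimal sketch of creating and starting a scheduled crawler over an S3 path; the crawler name, IAM role, catalog database, bucket, and schedule are illustrative assumptions.

```python
import boto3

glue = boto3.client("glue")

glue.create_crawler(
    Name="example-crawler",
    Role="arn:aws:iam::123456789012:role/example-glue-role",
    DatabaseName="example_catalog_db",
    Targets={"S3Targets": [{"Path": "s3://example-bucket/raw-data/"}]},
    Schedule="cron(0 2 * * ? *)",  # keep the Data Catalog in sync nightly
)

# Run it immediately as well; discovered tables/partitions land in the Glue Data Catalog
glue.start_crawler(Name="example-crawler")
```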

QuickSight

  • is a very fast, easy-to-use, cloud-powered business analytics service that makes it easy to build visualizations, perform ad-hoc analysis, and quickly get business insights from the data, anytime, on any device.
  • delivers fast and responsive query performance by using a robust in-memory engine (SPICE).
    • SPICE stands for Super-fast, Parallel, In-memory Calculation Engine
    • can also be configured to keep the data in SPICE up-to-date as the data in the underlying sources changes.
    • automatically replicates data for high availability and enables QuickSight to scale to support users to perform simultaneous fast interactive analysis across a wide variety of AWS data sources.
  • supports
    • Excel files and flat files like CSV, TSV, CLF, ELF
    • on-premises databases like PostgreSQL, SQL Server and MySQL
    • SaaS applications like Salesforce
    • and AWS data sources such as Redshift, RDS, Aurora, Athena, and S3
  • supports various functions to format and transform the data.
  • supports assorted visualizations that facilitate different analytical approaches:
    • Comparison and distribution – Bar charts (several assorted variants)
    • Changes over time – Line graphs, Area line charts
    • Correlation – Scatter plots, Heat maps
    • Aggregation – Pie graphs, Tree maps
    • Tabular – Pivot tables

Data Pipeline

  • orchestration service that helps define data-driven workflows to automate and schedule regular data movement and data processing activities
  • integrates with on-premises and cloud-based storage systems
  • allows scheduling, retry, and failure logic for the workflows

Elasticsearch

  • Elasticsearch Service is a managed service that makes it easy to deploy, operate, and scale Elasticsearch clusters in the AWS Cloud.
  • Elasticsearch provides
    • real-time, distributed search and analytics engine
    • ability to provision all the resources for an Elasticsearch cluster and launch the cluster
    • easy-to-use cluster scaling options; scaling an Elasticsearch Service domain by adding or modifying instances and storage volumes is an online operation that does not require any downtime
    • self-healing clusters, which automatically detect and replace failed Elasticsearch nodes, reducing the overhead associated with self-managed infrastructure
    • domain snapshots to back up and restore ES domains and replicate domains across AZs
    • enhanced security with IAM, Network, Domain access policies, and fine-grained access control
    • storage volumes for the data using EBS volumes
    • ability to span cluster nodes across multiple AZs in the same region, known as zone awareness, for high availability and redundancy. Elasticsearch Service automatically distributes the primary and replica shards across instances in different AZs.
    • dedicated master nodes to improve cluster stability
    • data visualization using the Kibana tool
    • integration with CloudWatch for monitoring ES domain metrics
    • integration with CloudTrail for auditing configuration API calls to ES domains
    • integration with S3, Kinesis, and DynamoDB for loading streaming data
    • ability to handle structured and Unstructured data
    • support for encryption at rest through KMS, node-to-node encryption over TLS, and the ability to require clients to communicate over HTTPS (see the sketch after this list)
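
Below is a minimal sketch of creating a domain with zone awareness, dedicated master nodes, EBS storage, and encryption enabled; the domain name, version, and instance types are illustrative assumptions.

```python
import boto3

es = boto3.client("es")

es.create_elasticsearch_domain(
    DomainName="example-domain",
    ElasticsearchVersion="7.10",
    ElasticsearchClusterConfig={
        "InstanceType": "r5.large.elasticsearch",
        "InstanceCount": 2,
        "DedicatedMasterEnabled": True,            # dedicated masters for cluster stability
        "DedicatedMasterType": "c5.large.elasticsearch",
        "DedicatedMasterCount": 3,
        "ZoneAwarenessEnabled": True,              # spread primary/replica shards across AZs
        "ZoneAwarenessConfig": {"AvailabilityZoneCount": 2},
    },
    EBSOptions={"EBSEnabled": True, "VolumeType": "gp2", "VolumeSize": 20},
    EncryptionAtRestOptions={"Enabled": True},      # KMS-backed encryption at rest
    NodeToNodeEncryptionOptions={"Enabled": True},  # TLS between nodes
    DomainEndpointOptions={"EnforceHTTPS": True},   # require clients to use HTTPS
)
```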

AWS Redshift

  • Amazon Redshift is a fully managed, fast and powerful, petabyte scale data warehouse service
  • Redshift automatically helps
    • set up, operate, and scale a data warehouse, from provisioning the infrastructure capacity
    • patch and back up the data warehouse, storing the backups for a user-defined retention period
    • monitor the nodes and drives to help recovery from failures
    • significantly lower the cost of a data warehouse, while making it easy to analyze large amounts of data very quickly
    • provide fast querying capabilities over structured data using familiar SQL-based clients and business intelligence (BI) tools using standard ODBC and JDBC connections
    • use replication and continuous backups to enhance availability and improve data durability, and automatically recover from node and component failures
    • scale up or down with a few clicks in the AWS Management Console or with a single API call
    • distribute & parallelize queries across multiple physical resources
    • protect the data in transit and at rest using VPC, SSL, AES-256 encryption, and Hardware Security Modules (HSMs)
  • Redshift only supports Single-AZ deployments and the nodes are available within the same AZ, if the AZ supports Redshift clusters
  • Redshift provides monitoring using CloudWatch; metrics for compute utilization, storage utilization, and read/write traffic to the cluster are available, with the ability to add user-defined custom metrics
  • Redshift provides audit logging and AWS CloudTrail integration (see the sketch after this list)
  • Redshift can be easily enabled to a second region for disaster recovery.
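
Below is a minimal sketch of enabling audit logging to S3 for an existing cluster; the cluster identifier, bucket, and prefix are illustrative assumptions (the bucket policy must allow the Redshift logging service to write to it).

```python
import boto3

redshift = boto3.client("redshift")

redshift.enable_logging(
    ClusterIdentifier="example-cluster",
    BucketName="example-audit-logs-bucket",
    S3KeyPrefix="redshift/audit/",
)
```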

Redshift Architecture

Redshift Performance

  • Massively Parallel Processing (MPP)
    • automatically distributes data and query load across all nodes.
    • makes it easy to add nodes to the data warehouse and enables fast query performance as the data warehouse grows.
  • Columnar Data Storage
    • organizes the data by column, as column-based systems are ideal for data warehousing and analytics, where queries often involve aggregates performed over large data sets
    • columnar data is stored sequentially on the storage media and requires far fewer I/Os, greatly improving query performance
  • Advanced Compression
    • Columnar data stores can be compressed much more than row-based data stores because similar data is stored sequentially on disk.
    • employs multiple compression techniques and can often achieve significant compression relative to traditional relational data stores.
    • doesn’t require indexes or materialized views and so uses less space than traditional relational database systems.
    • automatically samples the data and selects the most appropriate compression scheme when the data is loaded into an empty table (see the sketch after this list)
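
Below is a minimal sketch of asking Redshift which column encodings it would recommend for an existing table, using ANALYZE COMPRESSION via the Redshift Data API; the cluster, database, user, and table names are illustrative assumptions.

```python
import boto3

rsd = boto3.client("redshift-data")

stmt = rsd.execute_statement(
    ClusterIdentifier="example-cluster",
    Database="dev",
    DbUser="awsuser",
    Sql="ANALYZE COMPRESSION sales;",  # samples the data and suggests an encoding per column
)

# The suggested encodings can be fetched later with
# rsd.describe_statement(...) / rsd.get_statement_result(...)
print(stmt["Id"])
```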

Redshift Single vs Multi-Node Cluster

  • Single Node
    • single node configuration enables getting started quickly and cost-effectively & scale up to a multi-node configuration as the needs grow
  • Multi-Node
    • Multi-node configuration requires a leader node that manages client connections and receives queries, and two or more compute nodes that store data and perform queries and computations (see the sketch after this list)
    • Leader node
      • provisioned automatically and not charged for
      • receives queries from client applications, parses the queries and develops execution plans, which are an ordered set of steps to process these queries.
      • coordinates the parallel execution of these plans with the compute nodes, aggregates the intermediate results from these nodes and finally returns the results back to the client applications.
    • Compute node
      • a cluster can contain from 1 to 128 compute nodes, depending on the node type
      • compute nodes execute the steps specified in the execution plans and transmit data among themselves to serve these queries.
      • intermediate results are sent back to the leader node for aggregation before being sent back to the client applications.
      • supports Dense Storage (DS) or Dense Compute (DC) instance types
        • Dense Storage (DS) allow creation of very large data warehouses using hard disk drives (HDDs) for a very low price point
        • Dense Compute (DC) allow creation of very high performance data warehouses using fast CPUs, large amounts of RAM and solid-state disks (SSDs)
      • direct access to compute nodes is not allowed
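
Below is a minimal sketch of provisioning a multi-node cluster (the leader node is added automatically and is not charged for); the identifiers, node type, and credentials are illustrative assumptions.

```python
import boto3

redshift = boto3.client("redshift")

redshift.create_cluster(
    ClusterIdentifier="example-cluster",
    ClusterType="multi-node",
    NodeType="dc2.large",   # Dense Compute; Dense Storage (ds2) families also exist
    NumberOfNodes=3,        # compute nodes only; the leader node is implicit
    DBName="dev",
    MasterUsername="awsuser",
    MasterUserPassword="ExamplePassw0rd!",
)
```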

Redshift Availability & Durability

  • Redshift replicates the data within the data warehouse cluster and continuously backs up the data to S3 (11 9’s durability)
  • Redshift mirrors each drive’s data to other nodes within the cluster.
  • Redshift will automatically detect and replace a failed drive or node
  • If a drive fails, Redshift
    • cluster will remain available in the event of a drive failure
    • the queries will continue with a slight latency increase while Redshift rebuilds the drive from the replica of the data on that drive, which is stored on other drives within that node
    • single node clusters do not support data replication and the cluster needs to be restored from snapshot on S3
  • In case of node failure(s), Redshift
    • automatically provisions new node(s) and begins restoring data from other drives within the cluster or from S3
    • prioritizes restoring the most frequently queried data so the most frequently executed queries will become performant quickly
    • cluster will be unavailable for queries and updates until a replacement node is provisioned and added to the cluster
  • If the AZ of the Redshift cluster goes down, Redshift
    • cluster is unavailable until power and network access to the AZ are restored
    • cluster’s data is preserved and can be used once AZ becomes available
    • cluster can be restored from any existing snapshots to a new AZ within the same region

Redshift Backup & Restore

  • Redshift replicates all the data within the data warehouse cluster when it is loaded and also continuously backs up the data to S3
  • Redshift always attempts to maintain at least three copies of the data
  • Redshift enables automated backups of the data warehouse cluster with a 1-day retention period, by default, which can be extended to a max of 35 days
  • Automated backups can be turned off by setting the retention period to 0
  • Redshift can also asynchronously replicate the snapshots to S3 in another region for disaster recovery (see the sketch after this list)
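
Below is a minimal sketch of extending automated backup retention and enabling cross-region snapshot copy for disaster recovery; the cluster identifier and destination region are illustrative assumptions.

```python
import boto3

redshift = boto3.client("redshift")

# Automated backups: default 1-day retention, extendable up to 35 days (0 disables them)
redshift.modify_cluster(
    ClusterIdentifier="example-cluster",
    AutomatedSnapshotRetentionPeriod=35,
)

# Asynchronously copy snapshots to another region for DR
redshift.enable_snapshot_copy(
    ClusterIdentifier="example-cluster",
    DestinationRegion="us-west-2",
    RetentionPeriod=7,
)
```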

Redshift Scalability

  • Redshift allows scaling of the cluster either by
    • increasing the node instance type (Vertical scaling)
    • increasing the number of nodes (Horizontal scaling)
  • Redshift scaling changes are usually applied during the maintenance window or can be applied immediately (see the sketch after this list)
  • Redshift scaling process
    • existing cluster remains available for read operations only, while a new data warehouse cluster gets created during scaling operations
    • data from the compute nodes in the existing data warehouse cluster is moved in parallel to the compute nodes in the new cluster
    • when the new data warehouse cluster is ready, the existing cluster will be temporarily unavailable while the canonical name record of the existing cluster is flipped to point to the new data warehouse cluster
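
Below is a minimal sketch of a classic resize that changes the node count (and optionally the node type); identifiers and sizes are illustrative assumptions, and during the resize the existing cluster stays available for reads only, as described above.

```python
import boto3

redshift = boto3.client("redshift")

redshift.resize_cluster(
    ClusterIdentifier="example-cluster",
    ClusterType="multi-node",
    NodeType="dc2.large",   # keep or change the node type (vertical scaling)
    NumberOfNodes=6,        # change the node count (horizontal scaling)
    Classic=True,           # classic resize provisions a new cluster and copies the data over
)
```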

Redshift vs EMR vs RDS

  • RDS is ideal for
    • structured data and running traditional relational databases while offloading database administration
    • for online-transaction processing (OLTP) and for reporting and analysis
  • Redshift is ideal for
    • large volumes of structured data that needs to be persisted and queried using standard SQL and existing BI tools
    • analytic and reporting workloads against very large data sets by harnessing the scale and resources of multiple nodes and using a variety of optimizations to provide improvements over RDS
    • preventing reporting and analytic processing from interfering with the performance of the OLTP workload
  • EMR is ideal for
    • processing and transforming unstructured or semi-structured data to bring into Amazon Redshift, and
    • for data sets that are relatively transitory, not stored for long-term use.

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. With which AWS services can CloudHSM be used? (Select 2)
    1. S3
    2. DynamoDB
    3. RDS
    4. ElastiCache
    5. Amazon Redshift
  2. You have recently joined a startup company building sensors to measure street noise and air quality in urban areas. The company has been running a pilot deployment of around 100 sensors for 3 months. Each sensor uploads 1KB of sensor data every minute to a backend hosted on AWS. During the pilot, you measured a peak of 10 IOPS on the database, and you stored an average of 3GB of sensor data per month in the database. The current deployment consists of a load-balanced, auto-scaled ingestion layer using EC2 instances and a PostgreSQL RDS database with 500GB standard storage. The pilot is considered a success and your CEO has managed to get the attention of some potential investors. The business plan requires a deployment of at least 100K sensors, which needs to be supported by the backend. You also need to store sensor data for at least two years to be able to compare year-over-year improvements. To secure funding, you have to make sure that the platform meets these requirements and leaves room for further scaling. Which setup will meet the requirements?
    1. Add an SQS queue to the ingestion layer to buffer writes to the RDS instance (RDS instance will not support data for 2 years)
    2. Ingest data into a DynamoDB table and move old data to a Redshift cluster (Handle 10K IOPS ingestion and store data into Redshift for analysis)
    3. Replace the RDS instance with a 6 node Redshift cluster with 96TB of storage (Does not handle the ingestion issue)
    4. Keep the current architecture but upgrade RDS storage to 3TB and 10K provisioned IOPS (RDS instance will not support data for 2 years)
  3. Which two AWS services provide out-of-the-box user configurable automatic backup-as-a-service and backup rotation options? Choose 2 answers
    1. Amazon S3
    2. Amazon RDS
    3. Amazon EBS
    4. Amazon Redshift
  4. Your department creates regular analytics reports from your company’s log files. All log data is collected in Amazon S3 and processed by daily Amazon Elastic Map Reduce (EMR) jobs that generate daily PDF reports and aggregated tables in CSV format for an Amazon Redshift data warehouse. Your CFO requests that you optimize the cost structure for this system. Which of the following alternatives will lower costs without compromising average performance of the system or data integrity for the raw data?
    1. Use reduced redundancy storage (RRS) for PDF and CSV data in Amazon S3. Add Spot instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift. (Spot instances impact performance)
    2. Use reduced redundancy storage (RRS) for all data in S3. Use a combination of Spot instances and Reserved Instances for Amazon EMR jobs. Use Reserved Instances for Amazon Redshift (A combination of Spot and Reserved Instances guarantees performance and helps reduce cost. Also, RRS would reduce cost without compromising data integrity, which is different from data durability)
    3. Use reduced redundancy storage (RRS) for all data in Amazon S3. Add Spot Instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift (Spot instances impact performance)
    4. Use reduced redundancy storage (RRS) for PDF and CSV data in S3. Add Spot Instances to EMR jobs. Use Spot Instances for Amazon Redshift. (Spot instances impact performance and Spot instances are not available for Redshift)
