AWS Kinesis Data Streams vs SQS

Kinesis Data Streams vs SQS

Purpose

  • Amazon Kinesis Data Streams
    • allows real-time processing of streaming big data and the ability to read and replay records to multiple Amazon Kinesis Applications.
    • Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the same record processor, making it easier to build multiple applications that read from the same Amazon Kinesis stream (for example, to perform counting, aggregation, and filtering).
  • Amazon SQS
    • offers a reliable, highly-scalable hosted queue for storing messages as they travel between applications or microservices.
    • It moves data between distributed application components and helps decouple these components.
    • provides common middleware constructs such as dead-letter queues and poison-pill management.
    • provides a generic web services API and can be accessed by any programming language that the AWS SDK supports.
    • supports both standard and FIFO queues
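
For a concrete feel of the two APIs, the minimal Python (boto3) sketch below publishes the same payload to a Kinesis data stream and to an SQS queue; the stream name and queue URL are hypothetical. Kinesis needs a partition key to route the record to a shard, while SQS only needs the queue URL and a message body.

  import json
  import boto3

  kinesis = boto3.client("kinesis")
  sqs = boto3.client("sqs")

  payload = {"truck_id": "T-100", "lat": 47.6, "lon": -122.3}

  # Kinesis: the partition key decides which shard receives the record
  kinesis.put_record(
      StreamName="gps-stream",                 # hypothetical stream name
      Data=json.dumps(payload).encode(),
      PartitionKey=payload["truck_id"],
  )

  # SQS: a simple message on a standard queue
  sqs.send_message(
      QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/gps-queue",  # hypothetical
      MessageBody=json.dumps(payload),
  )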

Scaling

  • Kinesis Data Streams is not fully managed in provisioned mode and requires manual provisioning and scaling by increasing or decreasing shards; on-demand mode manages shard capacity automatically
  • SQS is fully managed, highly scalable and requires no administrative overhead and little configuration

Ordering

  • Kinesis provides ordering of records, as well as the ability to read and/or replay records in the same order to multiple Kinesis Applications
  • SQS Standard Queue does not guarantee data ordering and provides at least once delivery of messages
  • SQS FIFO Queue guarantees data ordering within the message group
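
As a small illustration of FIFO ordering, the sketch below (boto3, hypothetical queue URL) sends messages with the same MessageGroupId, which is the scope within which an SQS FIFO queue preserves order.

  import boto3

  sqs = boto3.client("sqs")
  QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo"  # hypothetical

  # Messages sharing a MessageGroupId are delivered in the order they were sent.
  for i in range(3):
      sqs.send_message(
          QueueUrl=QUEUE_URL,
          MessageBody=f"event-{i}",
          MessageGroupId="customer-42",          # ordering scope
          MessageDeduplicationId=f"event-{i}",   # or enable content-based deduplication
      )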

Data Retention Period

  • Kinesis Data Streams stores the data for up to 24 hours, by default, and can be extended to 365 days
  • SQS stores the message for up to 4 days, by default, and can be configured from 1 minute to 14 days but clears the message once deleted by the consumer

Delivery Semantics

  • Kinesis and SQS Standard Queue both guarantee at-least-once delivery of messages.
  • SQS FIFO Queue guarantees exactly-once processing

Parallel Clients

  • Kinesis supports multiple consumers
  • SQS allows the messages to be delivered to only one consumer at a time and requires multiple queues to deliver messages to multiple consumers

Use Cases

  • Kinesis use case requirements
    • Ordering of records.
    • Ability to consume records in the same order a few hours later
    • Ability for multiple applications to consume the same stream concurrently
    • Routing related records to the same record processor (as in streaming MapReduce)
  • SQS use case requirements
    • Messaging semantics like message-level ack/fail and visibility timeout
    • Leveraging SQS’s ability to scale transparently
    • Dynamically increasing concurrency/throughput at read time
    • Individual message delay (a message can be delayed by up to 15 minutes)
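
A minimal sketch of those SQS messaging semantics, assuming a hypothetical queue URL and a placeholder processing function: the delay applies per message, a successful consumer deletes the message (ack), and a failed consumer resets the visibility timeout (fail) so the message becomes available again.

  import boto3

  sqs = boto3.client("sqs")
  QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/work-queue"  # hypothetical queue

  def process(body):
      print("processing", body)  # placeholder for real work

  # Individual message delay: the message stays invisible for 60 seconds after sending.
  sqs.send_message(QueueUrl=QUEUE_URL, MessageBody="task-1", DelaySeconds=60)

  # Message-level ack/fail built on the visibility timeout.
  resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20)
  for msg in resp.get("Messages", []):
      try:
          process(msg["Body"])
          # "ack": delete the message so it is not delivered again
          sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
      except Exception:
          # "fail": make the message visible again immediately for another attempt
          sqs.change_message_visibility(
              QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"], VisibilityTimeout=0
          )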

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. You are deploying an application to track GPS coordinates of delivery trucks in the United States. Coordinates are transmitted from each delivery truck once every three seconds. You need to design an architecture that will enable real-time processing of these coordinates from multiple consumers. Which service should you use to implement data ingestion?
    1. Amazon Kinesis
    2. AWS Data Pipeline
    3. Amazon AppStream
    4. Amazon Simple Queue Service
  2. Your customer is willing to consolidate their log streams (access logs, application logs, security logs etc.) in one single system. Once consolidated, the customer wants to analyze these logs in real time based on heuristics. From time to time, the customer needs to validate heuristics, which requires going back to data samples extracted from the last 12 hours. What is the best approach to meet your customer’s requirements?
    1. Send all the log events to Amazon SQS. Setup an Auto Scaling group of EC2 servers to consume the logs and apply the heuristics.
    2. Send all the log events to Amazon Kinesis; develop a client process to apply heuristics on the logs (Can perform real-time analysis and stores data for 24 hours, extendable up to 365 days)
    3. Configure Amazon CloudTrail to receive custom logs, use EMR to apply heuristics the logs (CloudTrail is only for auditing)
    4. Setup an Auto Scaling group of EC2 syslogd servers, store the logs on S3 use EMR to apply heuristics on the logs (EMR is for batch analysis)

References

Kinesis Data Streams – Comparison with other services

AWS Kinesis Data Streams – KDS

  • Amazon Kinesis Data Streams is a streaming data service that enables real-time processing of streaming data at a massive scale.
  • Kinesis Streams enables building of custom applications that process or analyze streaming data for specialized needs.
  • Kinesis Streams features
    • handles provisioning, deployment, ongoing-maintenance of hardware, software, or other services for the data streams.
    • manages the infrastructure, storage, networking, and configuration needed to stream the data at the level of required data throughput.
    • synchronously replicates data across three AZs in an AWS Region, providing high availability and data durability.
    • stores records of a stream for 24 hours, by default, from the time they are added to the stream. The limit can be raised to 7 days with extended data retention and up to 365 days with long-term data retention.
  • Data such as clickstreams, application logs, social media, etc can be added from multiple sources and within seconds is available for processing to the Kinesis Applications.
  • Kinesis provides the ordering of records, as well as the ability to read and/or replay records in the same order to multiple applications.
  • Kinesis is designed to process streaming big data and the pricing model allows heavy PUTs rate.
  • Multiple Kinesis Data Streams applications can consume data from a stream, so that multiple actions, like archiving and processing, can take place concurrently and independently.
  • Kinesis Data Streams application can start consuming the data from the stream almost immediately after the data is added and put-to-get delay is typically less than 1 second.
  • Kinesis Streams is useful for rapidly moving data off data producers and then continuously processing the data, be it to transform the data before emitting to a data store, run real-time metrics and analytics, or derive more complex data streams for further processing
    • Accelerated log and data feed intake: Data producers can push data to Kinesis stream as soon as it is produced, preventing any data loss and making it available for processing within seconds.
    • Real-time metrics and reporting: Metrics can be extracted and used to generate reports from data in real-time.
    • Real-time data analytics: Run real-time streaming data analytics.
    • Complex stream processing: Create Directed Acyclic Graphs (DAGs) of Kinesis Applications and data streams, with Kinesis applications adding to another Amazon Kinesis stream for further processing, enabling successive stages of stream processing.
  • Kinesis limits
    • stores records of a stream for up to 24 hours, by default, which can be extended to max 365 days
    • maximum size of a data blob (the data payload before Base64-encoding) within one record is 1 megabyte (MB)
    • Each shard can support up to 1000 PUT records per second.
  • S3 is a cost-effective way to store the data, but not designed to handle a stream of data in real-time
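
The per-shard limits above translate directly into a rough shard count. A back-of-the-envelope sizing sketch with hypothetical traffic numbers:

  import math

  # Hypothetical workload
  records_per_second = 5_000   # incoming records/sec
  avg_record_kb = 2            # average record size in KB
  consumers = 2                # applications reading the full stream (shared throughput)

  # Write limits per shard: 1,000 records/sec and 1 MB/sec
  shards_for_record_rate = math.ceil(records_per_second / 1_000)
  shards_for_write_mb = math.ceil(records_per_second * avg_record_kb / 1_024)

  # Read limit per shard: 2 MB/sec shared across all consumers of the shard
  read_mb_per_sec = records_per_second * avg_record_kb / 1_024 * consumers
  shards_for_read_mb = math.ceil(read_mb_per_sec / 2)

  shards = max(shards_for_record_rate, shards_for_write_mb, shards_for_read_mb)
  print(shards)   # -> 10 shards for this hypothetical workload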

Kinesis Data Streams Terminology

 

Kinesis Architecture

 

  • Data Record
    • A record is the unit of data stored in a Kinesis data stream.
    • A record is composed of a sequence number, partition key, and data blob, which is an immutable sequence of bytes.
    • Maximum size of a data blob is 1 MB
    • Partition key
      • Partition key is used to segregate and route records to different shards of a stream.
      • A partition key is specified by the data producer while adding data to a Kinesis stream.
    • Sequence number
      • A sequence number is a unique identifier for each record.
      • Kinesis assigns a Sequence number, when a data producer calls PutRecord or PutRecords operation to add data to a stream.
      • Sequence numbers for the same partition key generally increase over time; the longer the time period between PutRecord or PutRecords requests, the larger the sequence numbers become.
  • Data Stream
    • Data stream represents a group of data records.
    • Data records in a data stream are distributed into shards.
  • Shard
    • A shard is a uniquely identified sequence of data records in a stream.
    • Streams are made of shards; a shard is the base throughput unit of a Kinesis stream, and pricing is on a per-shard basis.
    • Each shard supports up to 5 transactions per second for reads, up to a maximum total data read rate of 2 MB per second, and up to 1,000 records per second for writes, up to a maximum total data write rate of 1 MB per second (including partition keys)
    • Each shard provides a fixed unit of capacity. If the limits are exceeded, either by data throughput or the number of PUT records, the put data call will be rejected with a ProvisionedThroughputExceeded exception.
    • This can be handled by
      • Implementing a retry on the data producer side, if this is due to a temporary rise of the stream’s input data rate (see the sketch after this list)
      • Dynamically scaling the number of shards (resharding) to provide enough capacity for the put data calls to consistently succeed
  • Capacity Mode
    • A data stream capacity mode determines the pricing and how the capacity is managed
    • Kinesis Data Streams currently supports an on-demand mode and a provisioned mode
      • On-demand mode,
        • KDS automatically manages the shards in order to provide the necessary throughput.
        • You are charged only for the actual throughput used and KDS automatically accommodates the workloads’ throughput needs as they ramp up or down.
      • Provisioned mode
        • Number of shards for the data stream must be specified.
        • Total capacity of a data stream is the sum of the capacities of its shards.
        • Shards can be increased or decreased in a data stream as needed and you are charged for the number of shards at an hourly rate.
  • Retention Period
    • All data is stored for 24 hours, by default, and can be increased to a maximum of 8760 hours (365 days).
  • Producers
    • A producer puts data records into Kinesis data streams. 
    • To put data into the stream, the name of the stream, a partition key, and the data blob to be added to the stream should be specified.
    • Partition key is used to determine which shard in the stream the data record is added to.
  • Consumers
    • A consumer is an application built to read and process data records from Kinesis data streams.
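
To make the producer-side behavior concrete, here is a minimal sketch (boto3, hypothetical stream name) that writes a record with a partition key and retries with backoff when a provisioned-mode stream throws ProvisionedThroughputExceededException, as mentioned in the shard bullets above; Kinesis returns the assigned shard ID and sequence number.

  import json
  import time
  import boto3
  from botocore.exceptions import ClientError

  kinesis = boto3.client("kinesis")
  STREAM = "sensor-stream"   # hypothetical stream name

  def put_with_retry(record: dict, partition_key: str, attempts: int = 5) -> dict:
      """PutRecord with simple exponential backoff on throughput exceptions."""
      for attempt in range(attempts):
          try:
              return kinesis.put_record(
                  StreamName=STREAM,
                  Data=json.dumps(record).encode(),
                  PartitionKey=partition_key,
              )
          except ClientError as err:
              if err.response["Error"]["Code"] != "ProvisionedThroughputExceededException":
                  raise
              time.sleep(2 ** attempt * 0.1)   # back off: 0.1s, 0.2s, 0.4s, ...
      raise RuntimeError("record could not be written after retries")

  resp = put_with_retry({"sensor": "s1", "value": 42}, partition_key="s1")
  print(resp["ShardId"], resp["SequenceNumber"])   # Kinesis assigns the sequence number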

Kinesis Security

  • supports Server-side encryption using Key Management Service (KMS) for encrypting the data at rest.
  • supports writing encrypted data to a data stream by encrypting and decrypting on the client side.
  • supports encryption in transit using HTTPS endpoints.
  • supports Interface VPC endpoint to keep traffic between VPC and Kinesis Data Streams from leaving the Amazon network. Interface VPC endpoints don’t require an IGW, NAT device, VPN connection, or Direct Connect.
  • integrated with IAM to control access to Kinesis Data Streams resources.
  • integrated with CloudTrail, which provides a record of actions taken by a user, role, or an AWS service in Kinesis Data Streams.

Kinesis Producer

Data to Kinesis Data Streams can be added via API/SDK (PutRecord and PutRecords) operations, Kinesis Producer Library (KPL), or Kinesis Agent.

  • API
    • PutRecord and PutRecords are synchronous operations that send a single record or multiple records, respectively, to the stream per HTTP request.
    • use PutRecords to achieve a higher throughput per data producer (see the batching sketch after this list)
    • helps manage many aspects of Kinesis Data Streams (including creating streams, resharding, and putting and getting records)
  • Kinesis Agent
    • is a pre-built Java application that offers an easy way to collect and send data to the Kinesis stream.
    • can be installed on Linux-based server environments such as web servers, log servers, and database servers
    • can be configured to monitor certain files on the disk and then continuously send new data to the Kinesis stream
  • Kinesis Producer Library (KPL)
    • is an easy-to-use and highly configurable library that helps to put data into a Kinesis stream.
    • provides a layer of abstraction specifically for ingesting data
    • presents a simple, asynchronous, and reliable interface that helps achieve high producer throughput with minimal client resources.
    • batches messages, as it aggregates records to increase payload size and improve throughput.
    • Collects records and uses PutRecords to write multiple records to multiple shards per request 
    • Writes to one or more Kinesis data streams with an automatic and configurable retry mechanism.
    • Integrates seamlessly with the Kinesis Client Library (KCL) to de-aggregate batched records on the consumer
    • Submits CloudWatch metrics to provide visibility into performance
  • Third Party and Open source
    • Log4j appender
    • Apache Kafka
    • Flume, fluentd, etc.
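
For the raw API path, the sketch below (boto3, hypothetical stream name) batches records with PutRecords for higher producer throughput, roughly what the KPL automates; note that PutRecords is not all-or-nothing, so failed entries have to be retried.

  import json
  import boto3

  kinesis = boto3.client("kinesis")

  # Batch up to 500 records per PutRecords call for higher producer throughput.
  batch = [
      {
          "Data": json.dumps({"event_id": i}).encode(),
          "PartitionKey": f"key-{i % 10}",     # spreads records across shards
      }
      for i in range(100)
  ]

  resp = kinesis.put_records(StreamName="app-events", Records=batch)  # hypothetical stream

  # Re-send only the entries that failed.
  if resp["FailedRecordCount"]:
      retry = [rec for rec, result in zip(batch, resp["Records"]) if "ErrorCode" in result]
      kinesis.put_records(StreamName="app-events", Records=retry)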

Kinesis Consumers

  • Kinesis Application is a data consumer that reads and processes data from a Kinesis Data Stream and can be built using either Kinesis API or Kinesis Client Library (KCL)
  • Shards in a stream provide 2 MB/sec of read throughput per shard, by default, which is shared by all the consumers reading from a given shard.
  • Kinesis Client Library (KCL)
    • is a pre-built library with multiple language support
    • delivers all records for a given partition key to same record processor
    • makes it easier to build multiple applications reading from the same stream for e.g. to perform counting, aggregation, and filtering
    • handles complex issues such as adapting to changes in stream volume, load-balancing streaming data, coordinating distributed services, and processing data with fault-tolerance
    • uses a unique DynamoDB table to keep track of the application’s state, so if the Kinesis Data Streams application receives provisioned-throughput exceptions, increase the provisioned throughput for the DynamoDB table
  • Kinesis Connector Library
    • is a pre-built library that helps you easily integrate Kinesis Streams with other AWS services and third-party tools.
    • requires the Kinesis Client Library to be used with it
    • is legacy and can be replaced by Lambda or Kinesis Data Firehose
  • Kinesis Storm Spout is a pre-built library that helps you easily integrate Kinesis Streams with Apache Storm
  • AWS Lambda, Kinesis Data Firehose, and Kinesis Data Analytics also act as consumers for Kinesis Data Streams
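
A minimal polling consumer built directly on the Kinesis API is sketched below (boto3, hypothetical stream name); this is the plumbing that the KCL normally handles for you, including checkpointing and shard discovery, which are omitted here.

  import time
  import boto3

  kinesis = boto3.client("kinesis")
  STREAM = "app-events"   # hypothetical stream name

  # Read one shard from the oldest available record (TRIM_HORIZON).
  shard_id = kinesis.list_shards(StreamName=STREAM)["Shards"][0]["ShardId"]
  iterator = kinesis.get_shard_iterator(
      StreamName=STREAM,
      ShardId=shard_id,
      ShardIteratorType="TRIM_HORIZON",
  )["ShardIterator"]

  while iterator:
      resp = kinesis.get_records(ShardIterator=iterator, Limit=100)
      for record in resp["Records"]:
          print(record["SequenceNumber"], record["PartitionKey"], record["Data"])
      iterator = resp["NextShardIterator"]
      time.sleep(1)   # stay under the 5 reads/sec per-shard limit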

Kinesis Enhanced fan-out

  • allows customers to scale the number of consumers reading from a data stream in parallel, while maintaining high performance and without contending for read throughput with other consumers.
  • provides logical 2 MB/sec throughput pipes between consumers and shards for Kinesis Data Streams Consumers.
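
Registering an enhanced fan-out consumer is a one-time call per consuming application; a minimal sketch with a hypothetical stream ARN and consumer name:

  import boto3

  kinesis = boto3.client("kinesis")

  # Stream ARN of an existing stream (hypothetical account/stream).
  stream_arn = "arn:aws:kinesis:us-east-1:123456789012:stream/app-events"

  # Each registered consumer gets its own 2 MB/sec pipe per shard.
  consumer = kinesis.register_stream_consumer(
      StreamARN=stream_arn,
      ConsumerName="analytics-service",
  )["Consumer"]
  print(consumer["ConsumerARN"], consumer["ConsumerStatus"])

  # Records are then read with the SubscribeToShard push API, typically via KCL 2.x,
  # rather than with the polling GetRecords call used for shared throughput.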

AWS Kinesis Shared Throughput vs Enhanced Fan-out

Kinesis Data Streams Sharding

  • Resharding helps to increase or decrease the number of shards in a stream in order to adapt to changes in the rate of data flowing through the stream.
  • Resharding operations support shard split and shard merge.
    • Shard split helps divide a single shard into two shards. It increases the capacity and the cost.
    • Shard merge helps combine two shards into a single shard. It reduces the capacity and the cost.
  • Resharding is always pairwise: a single operation can split one shard into two or merge two shards into one, but cannot involve more than two shards.
  • The shard or pair of shards that the resharding operation acts on are referred to as parent shards. The shard or pair of shards that result from the resharding operation are referred to as child shards.
  • Kinesis Client Library tracks the shards in the stream using a DynamoDB table and discovers the new shards and populates new rows in the table.
  • KCL ensures that any data that existed in shards prior to the resharding is processed before the data from the new shards, thereby, preserving the order in which data records were added to the stream for a particular partition key.
  • Data records in the parent shard are accessible from the time they are added to the stream to the current retention period.
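
A minimal resharding sketch (boto3, hypothetical stream and shard IDs): a split needs a new starting hash key inside the parent shard's range, and a merge needs two shards whose hash key ranges are adjacent.

  import boto3

  kinesis = boto3.client("kinesis")
  STREAM = "app-events"   # hypothetical stream name

  shards = kinesis.list_shards(StreamName=STREAM)["Shards"]
  parent = shards[0]

  # Shard split: divide the parent's hash key range roughly in half (capacity and cost go up).
  start = int(parent["HashKeyRange"]["StartingHashKey"])
  end = int(parent["HashKeyRange"]["EndingHashKey"])
  kinesis.split_shard(
      StreamName=STREAM,
      ShardToSplit=parent["ShardId"],
      NewStartingHashKey=str((start + end) // 2),
  )

  # Shard merge: combine two adjacent shards (capacity and cost go down).
  # kinesis.merge_shards(
  #     StreamName=STREAM,
  #     ShardToMerge="shardId-000000000001",          # hypothetical shard IDs
  #     AdjacentShardToMerge="shardId-000000000002",
  # )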

Kinesis Data Streams vs Kinesis Firehose

Refer post @ Kinesis Data Streams vs Kinesis Firehose

Kinesis Data Streams vs. Firehose

Kinesis Data Streams vs SQS

Refer post @ Kinesis Data Streams vs SQS

Kinesis vs S3

Amazon Kinesis vs S3

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. You are deploying an application to track GPS coordinates of delivery trucks in the United States. Coordinates are transmitted from each delivery truck once every three seconds. You need to design an architecture that will enable real-time processing of these coordinates from multiple consumers. Which service should you use to implement data ingestion?
    1. Amazon Kinesis
    2. AWS Data Pipeline
    3. Amazon AppStream
    4. Amazon Simple Queue Service
  2. You are deploying an application to collect votes for a very popular television show. Millions of users will submit votes using mobile devices. The votes must be collected into a durable, scalable, and highly available data store for real-time public tabulation. Which service should you use?
    1. Amazon DynamoDB
    2. Amazon Redshift
    3. Amazon Kinesis
    4. Amazon Simple Queue Service
  3. Your company is in the process of developing a next generation pet collar that collects biometric information to assist families with promoting healthy lifestyles for their pets. Each collar will push 30KB of biometric data in JSON format every 2 seconds to a collection platform that will process and analyze the data, providing health trending information back to the pet owners and veterinarians via a web portal. Management has tasked you to architect the collection platform ensuring the following requirements are met: provide the ability for real-time analytics of the inbound biometric data; ensure processing of the biometric data is highly durable, elastic and parallel. The results of the analytic processing should be persisted for data mining. Which architecture outlined below will meet the initial requirements for the collection platform?
    1. Utilize S3 to collect the inbound sensor data, analyze the data from S3 with a daily scheduled Data Pipeline and save the results to a Redshift Cluster.
    2. Utilize Amazon Kinesis to collect the inbound sensor data, analyze the data with Kinesis clients and save the results to a Redshift cluster using EMR. (refer link)
    3. Utilize SQS to collect the inbound sensor data, analyze the data from SQS with Amazon Kinesis and save the results to a Microsoft SQL Server RDS instance.
    4. Utilize EMR to collect the inbound sensor data, analyze the data from EMR with Amazon Kinesis and save the results to DynamoDB.
  4. Your customer is willing to consolidate their log streams (access logs, application logs, security logs etc.) in one single system. Once consolidated, the customer wants to analyze these logs in real time based on heuristics. From time to time, the customer needs to validate heuristics, which requires going back to data samples extracted from the last 12 hours. What is the best approach to meet your customer’s requirements?
    1. Send all the log events to Amazon SQS. Setup an Auto Scaling group of EC2 servers to consume the logs and apply the heuristics.
    2. Send all the log events to Amazon Kinesis; develop a client process to apply heuristics on the logs (Can perform real-time analysis and stores data for 24 hours, extendable up to 365 days)
    3. Configure Amazon CloudTrail to receive custom logs, use EMR to apply heuristics the logs (CloudTrail is only for auditing)
    4. Setup an Auto Scaling group of EC2 syslogd servers, store the logs on S3 use EMR to apply heuristics on the logs (EMR is for batch analysis)
  5. You require the ability to analyze a customer’s clickstream data on a website so they can do behavioral analysis. Your customer needs to know what sequence of pages and ads their customer clicked on. This data will be used in real time to modify the page layouts as customers click through the site to increase stickiness and advertising click-through. Which option meets the requirements for capturing and analyzing this data?
    1. Log clicks in weblogs by URL store to Amazon S3, and then analyze with Elastic MapReduce
    2. Push web clicks by session to Amazon Kinesis and analyze behavior using Kinesis workers
    3. Write click events directly to Amazon Redshift and then analyze with SQL
    4. Publish web clicks by session to an Amazon SQS queue, then periodically drain these events to Amazon RDS and analyze with SQL
  6. Your social media monitoring application uses a Python app running on AWS Elastic Beanstalk to ingest tweets, Facebook updates and RSS feeds into an Amazon Kinesis stream. A second AWS Elastic Beanstalk app generates key performance indicators into an Amazon DynamoDB table and powers a dashboard application. What is the most efficient option to prevent any data loss for this application?
    1. Use AWS Data Pipeline to replicate your DynamoDB tables into another region.
    2. Use the second AWS Elastic Beanstalk app to store a backup of Kinesis data onto Amazon Elastic Block Store (EBS), and then create snapshots from your Amazon EBS volumes.
    3. Add a second Amazon Kinesis stream in another Availability Zone and use AWS data pipeline to replicate data across Kinesis streams.
    4. Add a third AWS Elastic Beanstalk app that uses the Amazon Kinesis S3 connector to archive data from Amazon Kinesis into Amazon S3.
  7. You need to replicate API calls across two systems in real time. What tool should you use as a buffer and transport mechanism for API call events?
    1. AWS SQS
    2. AWS Lambda
    3. AWS Kinesis (AWS Kinesis is an event stream service. Streams can act as buffers and transport across systems for in-order programmatic events, making it ideal for replicating API calls across systems)
    4. AWS SNS
  8. You need to perform ad-hoc business analytics queries on well-structured data. Data comes in constantly at a high velocity. Your business intelligence team can understand SQL. What AWS service(s) should you look to first?
    1. Kinesis Firehose + RDS
    2. Kinesis Firehose + RedShift (Kinesis Firehose provides a managed service for aggregating streaming data and inserting it into RedShift. RedShift also supports ad-hoc queries over well-structured data using a SQL-compliant wire protocol, so the business team should be able to adopt this system easily. Refer link)
    3. EMR using Hive
    4. EMR running Apache Spark

References

AWS Redshift

  • Amazon Redshift is a fully managed, fast, and powerful, petabyte-scale data warehouse service.
  • Redshift is an OLAP data warehouse solution based on PostgreSQL.
  • Redshift automatically helps
    • set up, operate, and scale the data warehouse, from provisioning the infrastructure capacity
    • patch and back up the data warehouse, storing the backups for a user-defined retention period
    • monitor the nodes and drives to help recovery from failures
    • lower the cost of a data warehouse significantly, while making it easy to analyze large amounts of data very quickly
    • provide fast querying capabilities over structured and semi-structured data using familiar SQL-based clients and business intelligence (BI) tools using standard ODBC and JDBC connections
    • use replication and continuous backups to enhance availability and improve data durability, recovering automatically from node and component failures
    • scale up or down with a few clicks in the AWS Management Console or with a single API call
    • distribute and parallelize queries across multiple physical resources
    • protect data in transit and at rest with VPC, SSL, AES-256 encryption, and Hardware Security Modules (HSMs)
  • Redshift earlier supported only Single-AZ deployments, with all the nodes of a cluster located in the same AZ (if the AZ supports Redshift clusters). However, Multi-AZ deployments are now supported for RA3 node types.
  • Redshift provides monitoring using CloudWatch and metrics for compute utilization, storage utilization, and read/write traffic to the cluster are available with the ability to add user-defined custom metrics
  • Redshift provides Audit logging and AWS CloudTrail integration
  • Redshift can be easily enabled to a second region for disaster recovery.

Redshift Performance

  • Massively Parallel Processing (MPP)
    • automatically distributes data and query load across all nodes.
    • makes it easy to add nodes to the data warehouse and enables fast query performance as the data warehouse grows.
  • Columnar Data Storage
    • organizes the data by column, as column-based systems are ideal for data warehousing and analytics, where queries often involve aggregates performed over large data sets
    • columnar data is stored sequentially on the storage media and requires far fewer I/Os, greatly improving query performance
  • Advanced Compression
    • Columnar data stores can be compressed much more than row-based data stores because similar data is stored sequentially on a disk.
    • employs multiple compression techniques and can often achieve significant compression relative to traditional relational data stores.
    • doesn’t require indexes or materialized views and so uses less space than traditional relational database systems.
    • automatically samples the data and selects the most appropriate compression scheme, when the data is loaded into an empty table
  • Query Optimizer
    • Redshift query run engine incorporates a query optimizer that is MPP-aware and also takes advantage of columnar-oriented data storage.
  • Result Caching
    • Redshift caches the results of certain types of queries in memory on the leader node.
    • When a user submits a query, Redshift checks the results cache for a valid, cached copy of the query results. If a match is found in the result cache, Redshift uses the cached results and doesn’t run the query.
    • Result caching is transparent to the user.
  • Compiled Code
    • Leader node distributes fully optimized compiled code across all of the nodes of a cluster. Compiling the query decreases the overhead associated with an interpreter and therefore increases the runtime speed, especially for complex queries.

Redshift Architecture

  • Clusters
    • Core infrastructure component of a Redshift data warehouse
    • Cluster is composed of one or more compute nodes.
    • If a cluster is provisioned with two or more compute nodes, an additional leader node coordinates the compute nodes and handles external communication.
    • Client applications interact directly only with the leader node.
    • Compute nodes are transparent to external applications.
  • Leader node
    • Leader node manages communications with client programs and all communication with compute nodes.
    • It parses and develops execution plans to carry out database operations
    • Based on the execution plan, the leader node compiles code, distributes the compiled code to the compute nodes, and assigns a portion of the data to each compute node.
    • Leader node distributes SQL statements to the compute nodes only when a query references tables that are stored on the compute nodes. All other queries run exclusively on the leader node.
  • Compute nodes
    • Leader node compiles code for individual elements of the execution plan and assigns the code to individual compute nodes.
    • Compute nodes execute the compiled code and send intermediate results back to the leader node for final aggregation.
    • Each compute node has its own dedicated CPU, memory, and attached disk storage, which is determined by the node type.
    • As the workload grows, the compute and storage capacity of a cluster can be increased by increasing the number of nodes, upgrading the node type, or both.
  • Node slices
    • A compute node is partitioned into slices.
    • Each slice is allocated a portion of the node’s memory and disk space, where it processes a portion of the workload assigned to the node.
    • Leader node manages distributing data to the slices and apportions the workload for any queries or other database operations to the slices. The slices then work in parallel to complete the operation.
    • Number of slices per node is determined by the node size of the cluster.
    • When a table is created, one column can optionally be specified as the distribution key. When the table is loaded with data, the rows are distributed to the node slices according to the distribution key that is defined for a table.
    • A good distribution key enables Redshift to use parallel processing to load data and execute queries efficiently (see the DDL sketch after this list).
  • Managed Storage
    • Data warehouse data is stored in a separate storage tier Redshift Managed Storage (RMS).
    • RMS provides the ability to scale the storage to petabytes using S3 storage.
    • RMS enables scale, pay for compute and storage independently so that the cluster can be sized based only on the computing needs.
    • RMS automatically uses high-performance SSD-based local storage as tier-1 cache.
    • It also takes advantage of optimizations, such as data block temperature, data block age, and workload patterns to deliver high performance while scaling storage automatically to S3 when needed without requiring any action.
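
To illustrate the distribution key mentioned above, here is a minimal sketch that issues Redshift DDL through the Redshift Data API (boto3 redshift-data client); the cluster, database, user, and table names are hypothetical, while DISTKEY and SORTKEY are standard Redshift table attributes.

  import boto3

  rsd = boto3.client("redshift-data")

  ddl = """
  CREATE TABLE page_views (
      user_id    BIGINT,
      viewed_at  TIMESTAMP,
      url        VARCHAR(2048)
  )
  DISTKEY (user_id)        -- rows are distributed to node slices by user_id
  SORTKEY (viewed_at);     -- sorted on disk to speed up range-restricted scans
  """

  resp = rsd.execute_statement(
      ClusterIdentifier="analytics-cluster",   # hypothetical cluster
      Database="dev",
      DbUser="awsuser",                        # or use SecretArn for Secrets Manager auth
      Sql=ddl,
  )
  print(resp["Id"])   # statement id; poll with describe_statement for status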

Redshift Serverless

  • Redshift Serverless is a serverless option of Redshift that makes it more efficient to run and scale analytics in seconds without the need to set up and manage data warehouse infrastructure.
  • Redshift Serverless automatically provisions and intelligently scales data warehouse capacity to deliver high performance for demanding and unpredictable workloads.
  • Redshift Serverless helps any user to get insights from data by simply loading and querying data in the data warehouse.
  • Redshift Serverless supports the Concurrency Scaling feature that can support unlimited concurrent users and concurrent queries, with consistently fast query performance.
  • When concurrency scaling is enabled, Redshift automatically adds cluster capacity when the cluster experiences an increase in query queuing.
  • Redshift Serverless measures data warehouse capacity in Redshift Processing Units (RPUs). RPUs are resources used to handle workloads.
  • Redshift Serverless supports workgroups and namespaces to isolate workloads and manage different resources.

Redshift Single vs Multi-Node Cluster

  • Single Node
    • Single node configuration enables getting started quickly and cost-effectively & scale up to a multi-node configuration as the needs grow
  • Multi-Node
    • Multi-node configuration requires a leader node that manages client connections and receives queries, and two or more compute nodes that store data and perform queries and computations.
    • Leader node
      • provisioned automatically and not charged for
      • receives queries from client applications, parses the queries, and develops execution plans, which are an ordered set of steps to process these queries.
      • coordinates the parallel execution of these plans with the compute nodes, aggregates the intermediate results from these nodes, and finally returns the results back to the client applications.
    • Compute node
      • can contain from 1-128 compute nodes, depending on the node type
      • executes the steps specified in the execution plans and transmits data among themselves to serve these queries.
      • intermediate results are sent back to the leader node for aggregation before being sent back to the client applications.
      • supports Dense Storage (DS) or Dense Compute (DC) node types
        • Dense Storage (DS) allows the creation of very large data warehouses using hard disk drives (HDDs) for a very low price point
        • Dense Compute (DC) allows the creation of very high-performance data warehouses using fast CPUs, large amounts of RAM and solid-state disks (SSDs)
      • direct access to compute nodes is not allowed

Redshift Multi-AZ

  • Redshift Multi-AZ deployment runs the data warehouse in multiple AWS AZs simultaneously and continues operating in unforeseen failure scenarios.
  • Multi-AZ deployment is managed as a single data warehouse with one endpoint and does not require any application changes.
  • Multi-AZ deployments support high availability requirements and reduce recovery time by guaranteeing capacity to automatically recover and are intended for customers with business-critical analytics applications that require the highest levels of availability and resiliency to AZ failures.
  • Redshift Multi-AZ supports RPO = 0 meaning data is guaranteed to be current and up to date in the event of a failure. RTO is under a minute.

Redshift Availability & Durability

  • Redshift replicates the data within the data warehouse cluster and continuously backs up the data to S3 (11 9’s durability).
  • Redshift mirrors each drive’s data to other nodes within the cluster.
  • Redshift will automatically detect and replace a failed drive or node.
  • RA3 clusters and Redshift serverless are not impacted the same way since the data is stored in S3 and the local drive is just used as a data cache.
  • If a drive fails,
    • cluster will remain available in the event of a drive failure.
    • the queries will continue with a slight latency increase while Redshift rebuilds the drive from the replica of the data on that drive which is stored on other drives within that node.
    • single node clusters do not support data replication and the cluster needs to be restored from a snapshot on S3.
  • In case of node failure(s), Redshift
    • automatically provisions new node(s) and begins restoring data from other drives within the cluster or from S3.
    • prioritizes restoring the most frequently queried data so the most frequently executed queries will become performant quickly.
    • cluster will be unavailable for queries and updates until a replacement node is provisioned and added to the cluster.
  • In case of Redshift cluster AZ goes down, Redshift
    • cluster is unavailable until power and network access to the AZ are restored
    • cluster’s data is preserved and can be used once AZ becomes available
    • cluster can be restored from any existing snapshots to a new AZ within the same region

Redshift Backup & Restore

  • Redshift always attempts to maintain at least three copies of the data – Original, Replica on the compute nodes, and a backup in S3.
  • Redshift replicates all the data within the data warehouse cluster when it is loaded and also continuously backs up the data to S3.
  • Redshift enables automated backups of the data warehouse cluster with a 1-day retention period, by default, which can be extended to max 35 days.
  • Automated backups can be turned off by setting the retention period as 0.
  • Redshift can also asynchronously replicate the snapshots to S3 in another region for disaster recovery.
  • Redshift only backs up data that has changed.
  • Restoring the backup will provision a new data warehouse cluster.
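
A minimal snapshot-and-restore sketch (boto3, hypothetical identifiers); note that a restore always provisions a new cluster rather than overwriting the existing one.

  import boto3

  redshift = boto3.client("redshift")

  # Manual snapshot of an existing cluster (hypothetical identifiers).
  redshift.create_cluster_snapshot(
      SnapshotIdentifier="analytics-before-upgrade",
      ClusterIdentifier="analytics-cluster",
  )

  # Restoring always provisions a NEW cluster from the snapshot.
  redshift.restore_from_cluster_snapshot(
      ClusterIdentifier="analytics-cluster-restored",
      SnapshotIdentifier="analytics-before-upgrade",
  )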

Redshift Scalability

  • Redshift allows scaling of the cluster either by
    • increasing the node instance type (Vertical scaling)
    • increasing the number of nodes (Horizontal scaling)
  • Redshift scaling changes are usually applied during the maintenance window or can be applied immediately
  • Redshift scaling process
    • existing cluster remains available for read operations only, while a new data warehouse cluster gets created during scaling operations
    • data from the compute nodes in the existing data warehouse cluster is moved in parallel to the compute nodes in the new cluster
    • when the new data warehouse cluster is ready, the existing cluster will be temporarily unavailable while the canonical name record of the existing cluster is flipped to point to the new data warehouse cluster
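
For reference, a resize can also be triggered programmatically; a minimal sketch with hypothetical identifiers is below. The read-only-then-cutover process described above corresponds to a classic resize, while an elastic resize changes the node count or type in place with a much shorter disruption.

  import boto3

  redshift = boto3.client("redshift")

  # Resize a provisioned cluster (hypothetical identifiers and sizes):
  # horizontal scaling changes the node count, vertical scaling changes the node type.
  redshift.resize_cluster(
      ClusterIdentifier="analytics-cluster",
      NodeType="ra3.4xlarge",
      NumberOfNodes=4,
      Classic=False,   # False = elastic resize; True = classic resize (new cluster + data copy)
  )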

Redshift Security

  • Redshift supports encryption at rest and in transit
  • Redshift provides support for role-based access control (RBAC), which helps assign one or more roles to a user and assign system and object permissions by role.
  • Redshift supports Lambda user-defined functions (UDFs) to enable external tokenization, data masking, identification or de-identification of data by integrating with vendors like Protegrity, and to protect or unprotect sensitive data based on a user’s permissions and groups at query time.
  • Redshift supports Single Sign-On SSO and integrates with other third-party corporate or other SAML-compliant identity providers.
  • Redshift supports multi-factor authentication (MFA) for additional security when authenticating to the Redshift cluster.
  • Redshift supports encrypting an unencrypted cluster using KMS. However, you can’t enable hardware security module (HSM) encryption by modifying the cluster. Instead, create a new, HSM-encrypted cluster and migrate your data to the new cluster.
  • Redshift enhanced VPC routing forces all COPY and UNLOAD traffic between the cluster and the data repositories through the VPC.

Redshift Advanced Topics

Redshift Best Practices

Refer blog post Redshift Best Practices

Redshift vs EMR vs RDS

  • RDS is ideal for
    • structured data and running traditional relational databases while offloading database administration
    • for online-transaction processing (OLTP) and for reporting and analysis
  • Redshift is ideal for
    • large volumes of structured data that needs to be persisted and queried using standard SQL and existing BI tools
    • analytic and reporting workloads against very large data sets by harnessing the scale and resources of multiple nodes and using a variety of optimizations to provide improvements over RDS
    • preventing reporting and analytic processing from interfering with the performance of the OLTP workload
  • EMR is ideal for
    • processing and transforming unstructured or semi-structured data to bring in to Amazon Redshift and
    • for data sets that are relatively transitory, not stored for long-term use.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. With which AWS services can CloudHSM be used? (Select 2)
    1. S3
    2. DynamoDB
    3. RDS
    4. ElastiCache
    5. Amazon Redshift
  2. You have recently joined a startup company building sensors to measure street noise and air quality in urban areas. The company has been running a pilot deployment of around 100 sensors for 3 months. Each sensor uploads 1KB of sensor data every minute to a backend hosted on AWS. During the pilot, you measured a peak of 10 IOPS on the database, and you stored an average of 3GB of sensor data per month in the database. The current deployment consists of a load-balanced, auto-scaled ingestion layer using EC2 instances and a PostgreSQL RDS database with 500GB standard storage. The pilot is considered a success and your CEO has managed to get the attention of some potential investors. The business plan requires a deployment of at least 100K sensors, which needs to be supported by the backend. You also need to store sensor data for at least two years to be able to compare year-over-year improvements. To secure funding, you have to make sure that the platform meets these requirements and leaves room for further scaling. Which setup will meet the requirements?
    1. Add an SQS queue to the ingestion layer to buffer writes to the RDS instance (RDS instance will not support data for 2 years)
    2. Ingest data into a DynamoDB table and move old data to a Redshift cluster (Handle 10K IOPS ingestion and store data into Redshift for analysis)
    3. Replace the RDS instance with a 6 node Redshift cluster with 96TB of storage (Does not handle the ingestion issue)
    4. Keep the current architecture but upgrade RDS storage to 3TB and 10K provisioned IOPS (RDS instance will not support data for 2 years)
  3. Which two AWS services provide out-of-the-box user configurable automatic backup-as-a-service and backup rotation options? Choose 2 answers
    1. Amazon S3
    2. Amazon RDS
    3. Amazon EBS
    4. Amazon Redshift
  4. Your department creates regular analytics reports from your company’s log files. All log data is collected in Amazon S3 and processed by daily Amazon Elastic Map Reduce (EMR) jobs that generate daily PDF reports and aggregated tables in CSV format for an Amazon Redshift data warehouse. Your CFO requests that you optimize the cost structure for this system. Which of the following alternatives will lower costs without compromising average performance of the system or data integrity for the raw data?
    1. Use reduced redundancy storage (RRS) for PDF and CSV data in Amazon S3. Add Spot instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift. (Spot instances impacts performance)
    2. Use reduced redundancy storage (RRS) for all data in S3. Use a combination of Spot Instances and Reserved Instances for Amazon EMR jobs. Use Reserved Instances for Amazon Redshift (A combination of Spot and Reserved Instances will guarantee performance and help reduce cost. Also, RRS would reduce cost and guarantee data integrity, which is different from data durability)
    3. Use reduced redundancy storage (RRS) for all data in Amazon S3. Add Spot Instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift (Spot instances impacts performance)
    4. Use reduced redundancy storage (RRS) for PDF and CSV data in S3. Add Spot Instances to EMR jobs. Use Spot Instances for Amazon Redshift. (Spot instances impacts performance and Spot instance not available for Redshift)

References

AWS Kinesis Data Streams vs Kinesis Data Firehose

Kinesis Data Streams vs. Kinesis Data Firehose

  • Kinesis acts as a highly available conduit to stream messages between data producers and data consumers.
  • Data producers can be almost any source of data: system or web log data, social network data, financial trading information, geospatial data, mobile app data, or telemetry from connected IoT devices.
  • Data consumers will typically fall into the category of data processing and storage applications such as Apache Hadoop, Apache Storm, S3, and ElasticSearch.

Kinesis Data Streams vs. Firehose

Purpose

  • Kinesis Data Streams is highly customizable and best suited for developers building custom applications or streaming data for specialized needs.
  • Kinesis Data Firehose handles loading data streams directly into AWS products for processing. Firehose allows for streaming to S3, OpenSearch Service, or Redshift, where data can be copied for processing through additional services.

Provisioning & Scaling

  • Kinesis Data Streams needs you to configure shards and write custom applications for both producers and consumers. In provisioned mode, KDS requires manual provisioning and scaling by resharding.
  • Kinesis Data Firehose is fully managed and sends data to S3, Redshift, and OpenSearch. Scaling is handled automatically, up to gigabytes per second, and allows for batching, encrypting, and compressing.

Processing Delay

  • Kinesis Data Streams provides real-time processing with ~200 ms for shared throughput classic single consumer and ~70 ms for the enhanced fan-out consumer.
  • Kinesis Data Firehose provides near real-time processing with the lowest buffer time of 1 min.

Data Storage

  • Kinesis Data Streams provides data storage. Data typically is made available in a stream for 24 hours, but for an additional cost, users can gain data availability for up to 365 days.
  • Kinesis Data Firehose does not provide data storage.

Replay

  • Kinesis Data Streams supports replay capability
  • Kinesis Data Firehose does not support replay capability

Producers & Consumers

  • Kinesis Data Streams & Kinesis Data Firehose support multiple producer options including SDK, KPL, Kinesis Agent, IoT, etc.
  • Kinesis Data Streams supports multiple consumer options including SDK, KCL, and Lambda, and can write data to multiple destinations; however, the consumers have to be coded. Kinesis Data Firehose consumers are close-ended and support destinations like S3, Redshift, OpenSearch, and other third-party tools.
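
As a producer-side example of the SDK option, the sketch below (boto3, hypothetical delivery stream name) writes a record directly to a Firehose delivery stream; Firehose then buffers and delivers it to the configured destination.

  import json
  import boto3

  firehose = boto3.client("firehose")

  record = {"user_id": 42, "action": "click"}

  # Direct PUT to a delivery stream (hypothetical name); Firehose buffers the record
  # and later delivers the batch to the configured destination (e.g. S3).
  firehose.put_record(
      DeliveryStreamName="clickstream-to-s3",
      Record={"Data": (json.dumps(record) + "\n").encode()},
  )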

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. Your organization needs to ingest a big data stream into its data lake on Amazon S3. The data may stream in at a rate of hundreds of megabytes per second. What AWS service will accomplish the goal with the least amount of management?
    1. Amazon Kinesis Firehose
    2. Amazon Kinesis Streams
    3. Amazon CloudFront
    4. Amazon SQS
  2. Your organization is looking for a solution that can help the business with streaming data several services will require access to read and process the same stream concurrently. What AWS service meets the business requirements?
    1. Amazon Kinesis Firehose
    2. Amazon Kinesis Streams
    3. Amazon CloudFront
    4. Amazon SQS
  3. Your application generates a 1 KB JSON payload that needs to be queued and delivered to EC2 instances for applications. At the end of the day, the application needs to replay the data for the past 24 hours. In the near future, you also need the ability for other multiple EC2 applications to consume the same stream concurrently. What is the best solution for this?
    1. Kinesis Data Streams
    2. Kinesis Firehose
    3. SNS
    4. SQS

References

AWS Kinesis Data Firehose – KDF

  • Amazon Kinesis Data Firehose is a fully managed service for delivering real-time streaming data
  • Kinesis Data Firehose automatically scales to match the throughput of the data and requires no ongoing administration or need to write applications or manage resources
  • is a data transfer solution for delivering real-time streaming data to destinations such as S3, Redshift, OpenSearch Service (Elasticsearch), and Splunk.
  • is NOT Real Time, but Near Real Time as it supports batching and buffers streaming data to a certain size (Buffer Size in MBs) or for a certain period of time (Buffer Interval in seconds) before delivering it to destinations.
  • supports data compression, minimizing the amount of storage used at the destination. It currently supports GZIP, ZIP, and SNAPPY compression formats. Only GZIP is supported if the data is further loaded to Redshift.
  • supports data at rest encryption using KMS after the data is delivered to the S3 bucket.
  • supports multiple producers as data sources, which include a Kinesis data stream, the Kinesis Agent, or the Kinesis Data Firehose API using the AWS SDK, CloudWatch Logs, CloudWatch Events, or AWS IoT
  • supports out of box data transformation as well as custom transformation using the Lambda function to transform incoming source data and deliver the transformed data to destinations
  • supports source record backup with custom data transformation with Lambda, where Kinesis Data Firehose will deliver the un-transformed incoming data to a separate S3 bucket.
  • uses at least once semantics for data delivery. In rare circumstances such as request timeout upon data delivery attempt, delivery retry by Firehose could introduce duplicates if the previous request eventually goes through.
  • supports Interface VPC Interface Endpoint (AWS Private Link) to keep traffic between the VPC and Kinesis Data Firehose from leaving the Amazon network.

Kinesis Data Firehose

Kinesis Key Concepts

  • Kinesis Data Firehose delivery stream
    • Underlying entity of Kinesis Data Firehose, where the data is sent
  • Record
    • Data sent by data producer to a Kinesis Data Firehose delivery stream
    • Maximum size of a record (before Base64-encoding) is 1024 KB.
  • Data producer
    • Producers send records to Kinesis Data Firehose delivery streams.
  • Buffer size and buffer interval
    • Kinesis Data Firehose buffers incoming streaming data to a certain size or for a certain time period before delivering it to destinations
    • Buffer size and buffer interval can be configured while creating the delivery stream (see the sketch after this list)
    • Buffer size is in MBs and ranges from 1MB to 128MB for the S3 destination and 1MB to 100MB for the OpenSearch Service destination.
    • Buffer interval is in seconds and ranges from 60 secs to 900 secs
    • Firehose raises buffer size dynamically to catch up and make sure that all data is delivered to the destination, if data delivery to the destination is falling behind data writing to the delivery stream
    • Buffer size is applied before compression.
  • Destination
    • A destination is the data store where the data will be delivered.
    • supports S3, Redshift, OpenSearch Service (Elasticsearch), and Splunk as destinations.
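
A minimal sketch of creating a Direct PUT delivery stream to S3 with explicit buffering hints (boto3; the role ARN, bucket ARN, and stream name are hypothetical), as referenced in the buffer configuration bullet above:

  import boto3

  firehose = boto3.client("firehose")

  firehose.create_delivery_stream(
      DeliveryStreamName="clickstream-to-s3",
      DeliveryStreamType="DirectPut",
      ExtendedS3DestinationConfiguration={
          "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
          "BucketARN": "arn:aws:s3:::my-clickstream-bucket",
          "BufferingHints": {
              "SizeInMBs": 64,           # flush when 64 MB is buffered ...
              "IntervalInSeconds": 120,  # ... or after 120 seconds, whichever comes first
          },
          "CompressionFormat": "GZIP",   # compression is applied after the buffer size check
      },
  )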

Kinesis Data Firehose vs Kinesis Data Streams

Kinesis Data Streams vs. Kinesis Data Firehose

AWS Certification Exam Practice Questions

  1. A user is designing a new service that receives location updates from 3600 rental cars every hour. The cars’ location needs to be uploaded to an Amazon S3 bucket. Each location must also be checked for distance from the original rental location. Which services will process the updates and automatically scale?
    1. Amazon EC2 and Amazon EBS
    2. Amazon Kinesis Firehose and Amazon S3
    3. Amazon ECS and Amazon RDS
    4. Amazon S3 events and AWS Lambda
  2. You need to perform ad-hoc SQL queries on massive amounts of well-structured data. Additional data comes in constantly at a high velocity, and you don’t want to have to manage the infrastructure processing it if possible. Which solution should you use?
    1. Kinesis Firehose and RDS
    2. EMR running Apache Spark
    3. Kinesis Firehose and Redshift
    4. EMR using Hive
  3. Your organization needs to ingest a big data stream into their data lake on Amazon S3. The data may stream in at a rate of hundreds of megabytes per second. What AWS service will accomplish the goal with the least amount of management?
    1. Amazon Kinesis Firehose
    2. Amazon Kinesis Streams
    3. Amazon CloudFront
    4. Amazon SQS
  4. A startup company is building an application to track the high scores for a popular video game. Their Solution Architect is tasked with designing a solution to allow real-time processing of scores from millions of players worldwide. Which AWS service should the Architect use to provide reliable data ingestion from the video game into the datastore?
    1. AWS Data Pipeline
    2. Amazon Kinesis Firehose
    3. Amazon DynamoDB Streams
    4. Amazon Elasticsearch Service
  5. A company has an infrastructure that consists of machines which keep sending log information every 5 minutes. The number of these machines can run into thousands and it is required to ensure that the data can be analyzed at a later stage. Which of the following would help in fulfilling this requirement?
    1. Use Kinesis Firehose with S3 to take the logs and store them in S3 for further processing.
    2. Launch an Elastic Beanstalk application to take the processing job of the logs.
    3. Launch an EC2 instance with enough EBS volumes to consume the logs which can be used for further processing.
    4. Use CloudTrail to store all the logs which can be analyzed at a later stage.

References

Amazon QuickSight

  • QuickSight is a very fast, easy-to-use, cloud-powered business analytics service that makes it easy to build visualizations, perform ad-hoc analysis, and quickly get business insights from data, anytime, on any device.
  • enables organizations to scale their business analytics capabilities to hundreds of thousands of users, and delivers fast and responsive query performance by using SPICE – a robust in-memory engine.
  • supports various datasources including
    • Excel files and flat files like CSV, TSV, CLF, ELF
    • On-premises databases like PostgreSQL, SQL Server and MySQL
    • SaaS applications like Salesforce
    • AWS data sources such as Redshift, RDS, Aurora, Athena, and S3
  • supports various functions to format and transform the data.
    • alias data fields and change data types.
    • subset the data using built in filters and perform database join operations using drag and drop.
    • create calculated fields using mathematical operations and built-in functions such as conditional statements, string, numerical and date functions
  • supports assorted visualizations that facilitate different analytical approaches:
    • Comparison and distribution – Bar charts (several assorted variants)
    • Changes over time – Line graphs, Area line charts
    • Correlation – Scatter plots, Heat maps
    • Aggregation – Pie graphs, Tree maps
    • Tabular – Pivot tables
  • comes with a built-in suggestion engine that provides suggested visualizations based on the properties of the underlying datasets
  • supports Stories, which provide guided tours through specific views of an analysis. They are used to convey key points, a thought process, or the evolution of an analysis for collaboration.

QuickSight Architecture

Super-fast, Parallel, In-memory Calculation Engine – SPICE

  • QuickSight is built with “SPICE” – a Super-fast, Parallel, In-memory Calculation Engine
  • SPICE uses a combination of columnar storage, in-memory technologies enabled through the latest hardware innovations and machine code generation to run interactive queries on large datasets and get rapid responses.
  • SPICE supports rich data discovery, calculations, and business analytics capabilities to help customers derive valuable insights from their data without worrying about provisioning or managing infrastructure.
  • Data in SPICE is persisted until it is explicitly deleted by the user.
  • QuickSight can also be configured to keep the data in SPICE up-to-date as the data in the underlying sources change.
  • SPICE automatically replicates data for high availability and enables QuickSight to scale to hundreds of thousands of users who can all simultaneously perform fast interactive analysis across a wide variety of AWS data sources.

QuickSight Authors and Readers

  • QuickSight Author is a user who
    • can connect to data sources (within AWS or outside), create visuals and analyze data.
    • can create interactive dashboards using advanced QuickSight capabilities such as parameters and calculated fields, and publish and share dashboards with other users in the account.
  • QuickSight Reader is a user who
    • consumes interactive dashboards.
    • can log in via their organization’s preferred authentication mechanism (QuickSight username/password, SAML portal or AD auth), view shared dashboards, filter data, drill down to details or export data as a CSV file, using a web browser or mobile app.
    • Readers do not have any allocated SPICE capacity.

QuickSight Security

  • QuickSight supports multi-factor authentication (MFA) for the AWS account via the AWS Management console.
  • For a VPC with public connectivity, QuickSight’s IP address range can be added to the database instances’ security group rules to enable traffic flow into the VPC and database instances.
  • QuickSight supports Row-level security (RLS)
    • RLS enables dataset owners to control access to data at row granularity based on permissions associated with the user interacting with the data.
    • With RLS, QuickSight users only need to manage a single set of data and apply appropriate row-level dataset rules to it.
    • All associated dashboards and analyses will enforce these rules, simplifying dataset management and removing the need to maintain multiple datasets for users with different data access privileges.
  • QuickSight supports Private VPC (Virtual Private Cloud) Access, which uses an Elastic Network Interface (ENI) for secure, private communication with data sources in a VPC. It also allows the use of AWS Direct Connect to create a secure, private link with the on-premises resources.
  • QuickSight supports users defined via IAM or email signup.

QuickSight Enterprise Edition

  • QuickSight Enterprise Edition offers enhanced functionality which includes
    • QuickSight Readers,
    • Connectivity to data sources in Private VPC,
    • Row-level security,
    • Hourly refresh of SPICE data
    • Encryption at Rest
    • AD connectivity and group-based management of assets for AD accounts

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. You are using QuickSight to identify demand trends over multiple months for your top five product lines. Which type of visualization do you choose?
    1. Scatter Plot
    2. Pie Chart
    3. Pivot Table
    4. Line Chart
  2. You need to provide customers with rich visualizations that allow you to easily connect multiple disparate data sources in S3, Redshift, and several CSV files. Which tool should you use that requires the least setup?
    1. Hue on EMR
    2. Redshift
    3. QuickSight
    4. Elasticsearch

References

Amazon_QuickSight

 

AWS Database Migration Service – DMS

DMS Migration

  • AWS Database Migration Service (DMS) enables quick and secure data migration to AWS with minimal to zero downtime. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database.
  • AWS DMS can migrate
    • relational databases, data warehouses, NoSQL databases, and other types of data stores
    • data to and from the most widely used commercial and open-source databases.
  • DMS supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations (using SCT) between different database platforms, such as Oracle or Microsoft SQL Server to Aurora.
  • AWS Schema Conversion Tool helps in heterogeneous database migrations by automatically converting the source database schema and a majority of the database code objects, including views, stored procedures, and functions, to a format compatible with the target database.
  • DMS enables both one-time migration and continuous data replication with high availability and consolidates databases into a petabyte-scale data warehouse by streaming data to Redshift and S3.
  • DMS continually monitors source and target databases, network connectivity, and the replication instance.
  • DMS automatically manages all of the infrastructure that supports the migration server, including hardware and software, software patching, and error reporting.
  • DMS is highly resilient and self-healing. If the primary replication server fails for any reason, a backup replication server can take over with little or no interruption of service.
  • In case of interruption, DMS automatically restarts the process and continues the migration from where it was halted.
  • AWS DMS supports the Multi-AZ option to provide high availability for database migration and continuous data replication by enabling redundant replication instances.
  • AWS DMS ensures that the data migration is secure. Data at rest is encrypted with AWS KMS encryption. During migration, SSL can be used to encrypt the in-flight data as it travels from source to target.
  • AWS DMS Fleet Advisor is a free, fully managed capability of AWS DMS that automates migration planning and helps you migrate database and analytics fleets to the cloud at scale with minimal effort.

Database Migration Service Components

DMS Migration

DMS Replication Instance

  • A DMS replication instance performs the actual data migration between the source and the target.
  • DMS replication instance is a managed EC2 instance that hosts one or more replication tasks.
  • The replication instance also caches the transaction logs during the migration.
  • CPU and memory capacity of the replication instance influences the overall time required for the migration.
  • DMS can provide high availability and failover support using a Multi-AZ deployment.
    • In a Multi-AZ deployment, DMS automatically provisions and maintains a standby replica of the replication instance in a different AZ
    • Primary replication instance is synchronously replicated to the standby replica.
    • If the primary replication instance fails or becomes unresponsive, the standby resumes any running tasks with minimal interruption.
    • Because the primary is constantly replicating its state to the standby, Multi-AZ deployment does incur some performance overhead.

Endpoints

  • AWS DMS uses an endpoint to access the source or target data store.

Replication tasks

  • DMS replication task helps move a set of data from the source endpoint to the target endpoint.
  • A replication task requires a replication instance and source and target endpoints
  • A replication task supports the following migration type options (see the sketch after this list)
    • Full load (Migrate existing data) – Migrate the data from the source to the target database as a one-time migration.
    • CDC only (Replicate data changes only) – Replicate only changes, while using native export tools for performing bulk data load.
    • Full load + CDC (Migrate existing data and replicate ongoing changes) –  Performs a full data load while capturing changes on the source. After the full load is complete, captured changes are applied to the target. Once the changes reach a steady state, the applications can be switched over to the target.
  • LOB mode options
    • Don’t include LOB columns – LOB columns are excluded
    • Full LOB mode – Migrate complete LOBs regardless of size. AWS DMS migrates LOBs piecewise in chunks controlled by the Max LOB Size parameter. This mode is slower than using limited LOB mode.
    • Limited LOB mode – Truncate LOBs to the value specified by the Max LOB Size parameter. This mode is faster than using full LOB mode.
  • Data validation – optionally verifies that the data was migrated accurately from the source to the target once the migration has been completed.
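
A minimal boto3 sketch of creating a replication task with the Full load + CDC migration type; the task name, ARNs, and table-mapping rules below are placeholders and would need to match your own endpoints and replication instance.

```python
# Hypothetical sketch: create a DMS "Full load + CDC" replication task with boto3.
# All identifiers and ARNs are placeholders.
import json
import boto3

dms = boto3.client("dms")

# Selection rule that includes every table in every schema (adjust as needed).
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-all",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]
}

response = dms.create_replication_task(
    ReplicationTaskIdentifier="orders-migration",                              # placeholder name
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SOURCE",    # placeholder
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TARGET",    # placeholder
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INSTANCE",  # placeholder
    MigrationType="full-load-and-cdc",   # or "full-load" / "cdc"
    TableMappings=json.dumps(table_mappings),
)
print(response["ReplicationTask"]["Status"])
```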

AWS Schema Conversion Tool

  • AWS Schema Conversion Tool helps in heterogeneous database migrations by automatically converting the source database schema and a majority of the database code objects, including views, stored procedures, and functions, to a format compatible with the target database.
  • DMS and SCT work in conjunction to migrate databases and support ongoing replication for a variety of uses such as populating data marts, synchronizing systems, etc.
  • SCT can copy database schemas for homogeneous migrations and convert them for heterogeneous migrations.
  • SCT clearly marks any objects that cannot be automatically converted so that they can be manually converted to complete the migration.
  • SCT can scan the application source code for embedded SQL statements and convert them as part of a database schema conversion project.
  • SCT performs cloud-native code optimization by converting legacy Oracle and SQL Server functions to their AWS equivalents, thus helping modernize the applications during database migration.
  • Once schema conversion is complete, SCT can help migrate data from a range of data warehouses to Redshift using built-in data migration agents.

AWS SCT DMS Heterogeneous Migration

Database Migration Service Best Practices

  • DMS Performance
    • In full load, multiple tables are loaded in parallel and it is recommended to drop primary key indexes, secondary indexes, referential integrity constraints, and data manipulation language (DML) triggers.
    • For a full load + CDC task, it is recommended to add secondary indexes before the CDC phase. Because AWS DMS uses logical replication, secondary indexes that support DML operations should be in place to prevent full table scans.
    • Replication task can be paused before the CDC phase to build indexes, create triggers, and create referential integrity constraints
    • Use multiple tasks for a single migration to improve performance
    • Disable backups and Multi-AZ on the target until ready to cut over.
  • Migration LOBs
    • DMS migrates LOBs in a two-step process
      • it creates a new row in the target table and populates the row with all data except the associated LOB value.
      • it then updates the row in the target table with the LOB data.
    • All LOB columns on the target table must be nullable
    • Limited LOB mode
      • default for all migration tasks
      • migrates all LOB values up to a user-specified size limit (default 32 KB)
      • LOB values larger than the size limit must be migrated manually
      • typically provides the best performance
      • Ensure that the Max LOB size parameter setting is set to the largest LOB size for all the tables.
    • Full LOB mode
      • migrates all LOB data in the tables, regardless of size.
      • provides the convenience of moving all LOB data in the tables, but the process can have a significant impact on performance.
  • Migrating Large Tables
    • Break the migration into more than one task.
    • Using row filtering, use a key or a partition key to create multiple tasks
  • Convert schema
    • Use SCT to convert the source objects, table, indexes, views, triggers, and other system objects into the target DDL format
    • DMS doesn’t perform schema or code conversion
  • Replication
    • Enable Multi-AZ for ongoing replication (for high availability and failover support)
  • DMS can read/write from/to encrypted DBs

DMS Fleet Advisor

  • AWS DMS Fleet Advisor is a free, fully managed capability of AWS DMS that automates migration planning and helps you migrate database and analytics fleets to the cloud at scale with minimal effort.
  • DMS Fleet Advisor is intended for users looking to migrate a large number of database and analytics servers to AWS.
  • AWS DMS Fleet Advisor helps discover and analyze the OLTP and OLAP database workloads and allows building a customized migration plan by determining the complexity of migrating the source databases to target services in AWS.

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Which AWS service would simplify the migration of a database to AWS?
    1. AWS Storage Gateway
    2. AWS Database Migration Service (AWS DMS)
    3. Amazon Elastic Compute Cloud (Amazon EC2)
    4. Amazon AppStream 2.0

References

AWS_Database_Migration_Service

AWS SQS Standard Queue

  • SQS offers standard as the default queue type.
  • Standard queues support at-least-once message delivery. However, occasionally (because of the highly distributed architecture that allows nearly unlimited throughput), more than one copy of a message might be delivered out of order.
  • Standard queues support a nearly unlimited number of API calls per second, per API action (SendMessage, ReceiveMessage, or DeleteMessage).
  • Standard queues provide best-effort ordering which ensures that messages are generally delivered in the same order as they’re sent.

SQS Standard Queue Features

Redundant infrastructure

  • offers reliable and scalable hosted queues for storing messages
  • is engineered to always be available and deliver messages
  • provides the ability to store messages in a fail-safe queue
  • provides highly concurrent access to messages

At-Least-Once delivery

  • ensures delivery of each message at least once
  • stores copies of the messages on multiple servers for redundancy and high availability
  • might deliver a duplicate copy of a message if the server storing a copy of the message is unavailable when you receive or delete the message, so the copy is not deleted on that unavailable server
  • Applications should be designed to be idempotent, with the ability to handle duplicate messages and not be adversely affected if they process the same message more than once

Message Attributes

  • SQS messages can contain up to 10 metadata attributes.
  • take the form of name-type-value triples
  • can be used to separate the body of a message from the metadata that describes it.
  • helps process and store information with greater speed and efficiency because the applications don’t have to inspect an entire message before understanding how to process it
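
As a rough illustration of the point above, a producer can attach routing metadata as message attributes so that consumers can decide how to handle a message without parsing its body; the queue URL and attribute names here are placeholders.

```python
# Hypothetical sketch: send an SQS message with metadata attributes (boto3).
import boto3

sqs = boto3.client("sqs")

sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/111122223333/orders",  # placeholder
    MessageBody='{"order_id": 42}',
    MessageAttributes={
        # name-type-value triples describing the message
        "eventType": {"DataType": "String", "StringValue": "OrderCreated"},
        "retryCount": {"DataType": "Number", "StringValue": "0"},
    },
)
```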

Message Sampling

  • behavior of retrieving messages from the queue depends on whether short (standard) polling, the default behavior, or long polling is used
  • With short polling,
    • samples only a subset of the servers (based on a weighted random distribution) and returns messages from just those servers.
    • A receive request might not return all the messages in the queue, but a subsequent receive request would return them
  • With Long polling,
    • the request persists for the time specified and returns as soon as a message is available, thereby reducing costs and the time the message has to dwell in the queue
    • long polling doesn’t return a response until a message arrives in the message queue, or the long poll times out.
    • makes it inexpensive to retrieve messages from the SQS queue as soon as the messages are available.
    • might help reduce the cost of using SQS, as the number of empty receives is reduced (see the polling sketch below)
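
A minimal sketch of the difference, assuming boto3 and a placeholder queue URL: WaitTimeSeconds=0 gives short polling, while a non-zero value (up to 20 seconds) enables long polling on that receive call.

```python
# Hypothetical sketch: short polling vs long polling with boto3.
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/orders"  # placeholder

# Short polling: samples a subset of servers and may return empty even if messages exist.
short = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=0)

# Long polling: waits up to 20 seconds for a message, reducing empty receives (and cost).
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)

for msg in resp.get("Messages", []):
    # ... process the message, then delete it so it is not redelivered ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```

Long polling can also be enabled for the whole queue by setting the ReceiveMessageWaitTimeSeconds queue attribute.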

Batching

  • SQS allows send, receive, and delete batching, which helps club up to 10 messages into a single batch while being billed as a single request (see the batching sketch below)
  • helps lower costs and also increases throughput
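
A minimal batching sketch, assuming boto3 and a placeholder queue URL; each batch can carry up to 10 entries (and up to 256 KB in total).

```python
# Hypothetical sketch: send up to 10 messages in one SendMessageBatch call.
import boto3

sqs = boto3.client("sqs")

sqs.send_message_batch(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/111122223333/orders",  # placeholder
    Entries=[{"Id": str(i), "MessageBody": f"message-{i}"} for i in range(10)],
)
```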

Configurable settings per queue

  • All queues don’t have to be alike

Order

  • makes a best effort to preserve the order of messages but does not guarantee first-in, first-out delivery
  • ordering can be handled by placing sequencing information within the message and performing the ordering on the client side

Loose coupling

  • removes tight coupling between components
  • provides the ability to move data between distributed components of the applications that perform different tasks without losing messages or requiring each component to be always available

Multiple writers and readers

  • supports multiple readers and writers interacting with the same queue at the same time
  • locks the message during processing, using the visibility timeout, preventing it from being processed by any other consumer

Variable message size

  • supports messages in any format, up to 256 KB of text.
  • messages larger than 256 KB can be managed using S3 or DynamoDB, with SQS holding a pointer to the S3 object

Access Control

  • Access can be controlled for who can produce and consume messages to each queue

Delay Queues

  • Delay queue allows the user to set a default delay on a queue such that delivery of all messages enqueued is postponed for that time duration

Dead Letter Queues

  • A dead-letter queue is a queue for messages that could not be processed after a maximum number of attempts
  • useful to isolate messages that can’t be processed, for later analysis (see the sketch below).
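
A minimal sketch, assuming boto3 and illustrative queue names, that creates a dead-letter queue and a main queue with both a default delay and a redrive policy pointing at the DLQ.

```python
# Hypothetical sketch: delay queue + dead-letter queue via a redrive policy.
import json
import boto3

sqs = boto3.client("sqs")

# Dead-letter queue that receives messages after too many failed receives.
dlq_url = sqs.create_queue(QueueName="orders-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Main queue: every message is postponed by 60 seconds, and after 5 failed
# receives a message is moved to the dead-letter queue.
sqs.create_queue(
    QueueName="orders",
    Attributes={
        "DelaySeconds": "60",
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
        ),
    },
)
```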

PCI Compliance

  • supports the processing, storage, and transmission of credit card data by a merchant or service provider, and has been validated as being PCI-DSS (Payment Card Industry – Data Security Standard) compliant

SQS Standard Queues vs SQS FIFO Queues

SQS Standard vs FIFO Queues

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.

References

AWS_SQS_Standard_Queue

AWS Data Analytics Services Cheat Sheet

AWS Data Analytics Services

Kinesis Data Streams – KDS

  • enables real-time processing of streaming data at a massive scale
  • provides ordering of records per shard
  • provides an ability to read and/or replay records in the same order
  • allows multiple applications to consume the same data
  • data is replicated across three data centers within a region
  • data is preserved for 24 hours, by default, and can be extended to 365 days
  • once data is inserted into Kinesis, it can’t be deleted (immutability); it only expires
  • streams can be scaled using multiple shards, based on the partition key
  • each shard provides a capacity of 1 MB/sec data input and 2 MB/sec data output with 1,000 PUT records per second
  • Kinesis vs SQS
    • real-time processing of streaming big data vs reliable, highly scalable hosted queue for storing messages
    • ordered records, as well as the ability to read and/or replay records in the same order vs no guarantee on data ordering (with the standard queues before the FIFO queue feature was released)
    • data storage up to 24 hours, extendable to 365 days vs 1 minute to 14 days, but cleared once deleted by the consumer.
    • supports multiple consumers vs a single consumer at a time and requires multiple queues to deliver messages to multiple consumers.
  • Kinesis Producer
    • API
      • PutRecord and PutRecords are synchronous
      • PutRecords uses batching and increases throughput
      • might experience ProvisionedThroughputExceededException when sending more data than provisioned; use retries with backoff, resharding, or a better-distributed partition key (see the producer sketch after this list).
    • KPL
      • producer supports synchronous or asynchronous use cases
      • supports inbuilt batching and retry mechanism
    • Kinesis Agent can help monitor log files and send them to KDS
    • supports third-party libraries like Spark, Flume, Kafka Connect, etc.
  • Kinesis Consumers
    • Kinesis SDK
      • Records are polled by consumers from a shard
    • Kinesis Client Library (KCL)
      • Read records from Kinesis produced with the KPL (de-aggregation)
      • supports the checkpointing feature to keep track of the application’s state and resume progress using the DynamoDB table.
      • if application receives provisioned-throughput exceptions, increase the provisioned throughput for the DynamoDB table
    • Kinesis Connector Library – can be replaced using Firehose or Lambda
    • Third-party libraries: Spark, Log4J Appenders, Flume, Kafka Connect…
    • Kinesis Firehose, AWS Lambda
    • Kinesis Consumer Enhanced Fan-Out
      • supports Multiple Consumer applications for the same Stream
      • provides Low Latency ~70ms
      • Higher costs
  • Kinesis Security
    • allows access/authorization control using IAM policies
    • supports Encryption in flight using HTTPS endpoints
    • supports data encryption at rest using either server-side encryption with KMS or using client-side encryption before pushing the data to data streams.
    • supports VPC Endpoints to access within VPC
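
A minimal producer sketch, assuming boto3 and a placeholder stream name: PutRecords batches records, and the partition key decides which shard (and therefore which ordered sequence) each record lands on.

```python
# Hypothetical sketch: write a batch of records to a Kinesis Data Stream.
import json
import boto3

kinesis = boto3.client("kinesis")

records = [
    {
        "Data": json.dumps({"sensor_id": i, "reading": 20.5 + i}).encode(),
        "PartitionKey": f"sensor-{i}",  # records with the same key go to the same shard
    }
    for i in range(10)
]

resp = kinesis.put_records(StreamName="sensor-stream", Records=records)  # placeholder name
if resp["FailedRecordCount"]:
    # Typically caused by ProvisionedThroughputExceededException on hot shards:
    # retry with backoff, reshard, or pick a better-distributed partition key.
    print("records failed:", resp["FailedRecordCount"])
```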

Kinesis Data Firehose – KDF

  • data transfer solution for delivering near real-time streaming data to destinations such as S3, Redshift, OpenSearch Service, and Splunk.
  • is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration
  • is Near Real Time (min. 60 secs) as it buffers incoming streaming data to a certain size or for a certain period of time before delivering it
  • supports batching, compression, and encryption of the data before loading it, minimizing the amount of storage used at the destination and increasing security
  • supports data compression, minimizing the amount of storage used at the destination. It currently supports GZIP, ZIP, and SNAPPY compression formats. Only GZIP is supported if the data is further loaded to Redshift.
  • supports out of box data transformation as well as custom transformation using Lambda function to transform incoming source data and deliver the transformed data to destinations
  • uses at-least-once semantics for data delivery.
  • supports multiple producers as data sources, which include a Kinesis data stream, the KPL, Kinesis Agent, the Kinesis Data Firehose API using the AWS SDK, CloudWatch Logs, CloudWatch Events, or AWS IoT
  • does NOT support consumers like Spark and KCL
  • supports interface VPC endpoint to keep traffic between the VPC and Kinesis Data Firehose from leaving the Amazon network.
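
A minimal producer-side sketch, assuming boto3 and a placeholder delivery stream that has already been configured with an S3 destination; Firehose buffers the records and delivers them once the buffer size or interval is reached.

```python
# Hypothetical sketch: push a record into a Kinesis Data Firehose delivery stream.
import json
import boto3

firehose = boto3.client("firehose")

firehose.put_record(
    DeliveryStreamName="logs-to-s3",  # placeholder delivery stream
    Record={"Data": (json.dumps({"level": "INFO", "msg": "user signed in"}) + "\n").encode()},
)
```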

Kinesis Data Streams vs Kinesis Data Firehose

Kinesis Data Analytics

  • helps analyze streaming data, gain actionable insights, and respond to the business and customer needs in real time.
  • reduces the complexity of building, managing, and integrating streaming applications with other AWS services

Managed Streaming for Kafka – MSK

  • Managed Streaming for Kafka (MSK) is an AWS streaming data service that manages Apache Kafka infrastructure and operations.
  • makes it easy for developers and DevOps managers to run Kafka applications and Kafka Connect connectors on AWS, without the need to become experts in operating Kafka.
  • operates, maintains, and scales Kafka clusters, provides enterprise-grade security features out of the box, and has built-in AWS integrations that accelerate development of streaming data applications.
  • always runs within a VPC managed by MSK and is available to your own selected VPC, subnet, and security group when the cluster is set up.
  • IP addresses from the VPC are attached to the MSK resources through elastic network interfaces (ENIs), and all network traffic stays within the AWS network and is not accessible to the internet by default.
  • integrates with CloudWatch for monitoring, metrics, and logging.
  • MSK Serverless is a cluster type for MSK that makes it easy for you to run Apache Kafka clusters without having to manage compute and storage capacity.
  • supports EBS server-side encryption using KMS to encrypt storage.
  • supports encryption in transit enabled via TLS for inter-broker communication.
  • For provisioned clusters, you have three options:
    • IAM Access Control for both AuthN/Z (recommended),
    • TLS certificate authentication for AuthN and access control lists for AuthZ
    • SASL/SCRAM for AuthN and access control lists for AuthZ.
  • For serverless clusters, IAM Access Control can be used for both authentication and authorization.

Redshift

  • Redshift is a fast, fully managed data warehouse
  • provides simple and cost-effective solutions to analyze all the data using standard SQL and the existing Business Intelligence (BI) tools.
  • manages the work needed to set up, operate, and scale a data warehouse, from provisioning the infrastructure capacity to automating ongoing administrative tasks such as backups, and patching.
  • automatically monitors your nodes and drives to help you recover from failures.
  • earlier supported only Single-AZ deployments but now supports Multi-AZ deployments as well.
  • replicates all the data within the data warehouse cluster when it is loaded and also continuously backs up your data to S3.
  • attempts to maintain at least three copies of your data (the original and replica on the compute nodes and a backup in S3).
  • supports cross-region snapshot replication to another region for disaster recovery
  • Redshift supports four distribution styles: AUTO, EVEN, KEY, or ALL.
    • KEY distribution uses a single column as the distribution key (DISTKEY) and helps place matching values on the same node slice
    • EVEN distribution distributes the rows across the slices in a round-robin fashion, regardless of the values in any particular column
    • ALL distribution replicates the whole table to every compute node.
    • AUTO distribution lets Redshift assign an optimal distribution style based on the size of the table data
  • Redshift supports Compound and Interleaved sort keys
    • Compound key
      • is made up of all the columns listed in the sort key definition, in the order they are listed, and is more efficient when query predicates use a prefix of the sort key, i.e., when the query’s filters and joins apply conditions on a subset of the sort key columns in order.
    • Interleaved sort key
      • gives equal weight to each column in the sort key, so query predicates can use any subset of the columns that make up the sort key, in any order.
      • Not ideal for monotonically increasing attributes
  • Import/Export Data
    • UNLOAD helps copy data from Redshift table to S3
    • COPY command
      • helps copy data from S3 to Redshift
      • also supports EMR, DynamoDB, remote hosts using SSH
      • parallelized and efficient
      • can decrypt data as it is loaded from S3
      • DON’T use multiple concurrent COPY commands to load one table from multiple files as Redshift is forced to perform a serialized load, which is much slower.
      • supports data decryption when loading data, if the data is encrypted
      • supports decompressing data, if the data is compressed.
    • Split the Load Data into Multiple Files
    • Load the data in sort key order to avoid needing to vacuum.
    • Use a Manifest File
      • provides Data consistency, to avoid S3 eventual consistency issues
      • helps specify different S3 locations in a more efficient way than with the use of S3 prefixes (a sample COPY command follows this list).
  • Redshift Distribution Style determines how data is distributed across compute nodes and helps minimize the impact of the redistribution step by locating the data where it needs to be before the query is executed.
  • Redshift Enhanced VPC routing forces all COPY and UNLOAD traffic between the cluster and the data repositories through the VPC.
  • Workload management (WLM) enables users to flexibly manage priorities within workloads so that short, fast-running queries won’t get stuck in queues behind long-running queries.
  • Redshift Spectrum helps query and retrieve structured and semistructured data from files in S3 without having to load the data into Redshift tables.
    • Redshift Spectrum external tables are read-only. You can’t COPY or INSERT to an external table.
  • Federated Query feature allows querying and analyzing data across operational databases, data warehouses, and data lakes.
  • Short query acceleration (SQA) prioritizes selected short-running queries ahead of longer-running queries.
  • Redshift Serverless is a serverless option of Redshift that makes it more efficient to run and scale analytics in seconds without the need to set up and manage data warehouse infrastructure.
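
A sample COPY command (referenced in the Import/Export notes above), issued here through the Redshift Data API; the cluster, database, IAM role, table, and S3 manifest path are all placeholders, and the MANIFEST/GZIP options assume compressed, manifest-listed CSV files.

```python
# Hypothetical sketch: run a parallel COPY from S3 into Redshift via the Data API.
import boto3

rsd = boto3.client("redshift-data")

copy_sql = """
COPY sales
FROM 's3://my-bucket/sales/manifest.json'
IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftCopyRole'
MANIFEST
GZIP
FORMAT AS CSV;
"""

rsd.execute_statement(
    ClusterIdentifier="analytics-cluster",  # placeholder
    Database="dev",
    DbUser="awsuser",
    Sql=copy_sql,
)
```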

EMR

  • is a web service that utilizes a hosted Hadoop framework running on the web-scale infrastructure of EC2 and S3
  • launches all nodes for a given cluster in the same Availability Zone, which improves performance as it provides a higher data access rate.
  • seamlessly supports Reserved, On-Demand, and Spot Instances
  • consists of a Master/Primary node for management and Slave nodes, which consist of Core nodes (holding data and providing compute) and Task nodes (performing tasks only).
  • is fault tolerant for slave node failures and continues job execution if a slave node goes down
  • supports Persistent and Transient cluster types
    • Persistent EMR clusters continue to run after the data processing job is complete
    • Transient EMR clusters shut down when the job or the steps (series of jobs) are complete
  • supports EMRFS, which allows S3 to be used as durable, highly available data storage
  • EMR Serverless helps run big data frameworks such as Apache Spark and Apache Hive without configuring, managing, and scaling clusters.
  • EMR Studio is an IDE that helps data scientists and data engineers to develop, visualize, and debug data engineering and data science applications written in R, Python, Scala, and PySpark.
  • EMR Notebooks provide a managed environment, based on Jupyter Notebook, that helps prepare and visualize data, collaborate with peers, build applications, and perform interactive analysis using EMR clusters.

Glue

  • AWS Glue is a fully managed ETL service that automates the time-consuming steps of data preparation for analytics.
  • is serverless and supports a pay-as-you-go model.
  • handles provisioning, configuration, and scaling of the resources required to run the ETL jobs on a fully managed, scale-out Apache Spark environment.
  • helps setup, orchestrate, and monitor complex data flows.
  • supports custom Scala or Python code and importing custom libraries and JAR files into the AWS Glue ETL jobs to access data sources not natively supported by AWS Glue.
  • supports server side encryption for data at rest and SSL for data in motion.
  • provides development endpoints to edit, debug, and test the code it generates.
  • AWS Glue natively supports data stored in RDS, Redshift, DynamoDB, S3, MySQL, Oracle, Microsoft SQL Server, and PostgreSQL databases in the VPC running on EC2 and Data streams from MSK, Kinesis Data Streams, and Apache Kafka.
  • Glue ETL engine Extracts, Transforms, and Loads data and can automatically generate Scala or Python code.
  • Glue Data Catalog is a central repository and persistent metadata store to store structural and operational metadata for all the data assets.
  • Glue Crawlers scan various data stores to automatically infer schemas and partition structures to populate the Data Catalog with corresponding table definitions and statistics.
  • Glue Job Bookmark tracks data that has already been processed during a previous run of an ETL job by persisting state information from the job run.
  • AWS Glue Streaming ETL enables performing ETL operations on streaming data using continuously-running jobs.
  • Glue provides a flexible scheduler that handles dependency resolution, job monitoring, and retries.
  • Glue Studio offers a graphical interface for authoring AWS Glue jobs to process data allowing you to define the flow of the data sources, transformations, and targets in the visual interface and generating Apache Spark code on your behalf.
  • Glue Data Quality helps reduce manual data quality effort by automatically measuring and monitoring the quality of data in data lakes and pipelines.
  • Glue DataBrew is a visual data preparation tool that makes it easy for data analysts and data scientists to prepare, visualize, clean, and normalize terabytes, and even petabytes of data directly from your data lake, data warehouses, and databases, including S3, Redshift, Aurora, and RDS.
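
A minimal sketch of the crawler flow described above, assuming boto3 and placeholder names: the crawler scans an S3 prefix, infers the schema, and writes the table definition into the Glue Data Catalog.

```python
# Hypothetical sketch: create and start a Glue crawler over an S3 prefix.
import boto3

glue = boto3.client("glue")

glue.create_crawler(
    Name="sales-crawler",                                       # placeholder
    Role="arn:aws:iam::111122223333:role/GlueCrawlerRole",      # placeholder IAM role
    DatabaseName="sales_db",                                    # catalog database to populate
    Targets={"S3Targets": [{"Path": "s3://my-data-lake/sales/"}]},
)
glue.start_crawler(Name="sales-crawler")
```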

Lake Formation

  • AWS Lake Formation helps create secure data lakes, making data available for wide-ranging analytics.
  • is an integrated data lake service that helps to discover, ingest, clean, catalog, transform, and secure data and make it available for analysis and ML.
  • automatically manages access to the registered data in S3 through services including AWS Glue, Athena, Redshift, QuickSight, and EMR using Zeppelin notebooks with Apache Spark to ensure compliance with your defined policies.
  • helps configure and manage your data lake without manually integrating multiple underlying AWS services.
  • uses a shared infrastructure with AWS Glue, including console controls, ETL code creation and job monitoring, blueprints to create workflows for data ingest, the same data catalog, and a serverless architecture.
  • can manage data ingestion through AWS Glue. Data is automatically classified, and relevant data definitions, schema, and metadata are stored in the central Glue Data Catalog. Once the data is in the S3 data lake, access policies, including table-and-column-level access controls can be defined, and encryption for data at rest enforced.
  • integrates with IAM so authenticated users and roles can be automatically mapped to data protection policies that are stored in the data catalog. The IAM integration also supports Microsoft Active Directory or LDAP to federate into IAM using SAML.
  • helps centralize data access policy controls. Users and roles can be defined to control access, down to the table and column level.
  • supports private endpoints in the VPC and records all activity in AWS CloudTrail for network isolation and auditability.

QuickSight

  • is a very fast, easy-to-use, cloud-powered business analytics service that makes it easy to build visualizations, perform ad-hoc analysis, and quickly get business insights from data, anytime, on any device.
  • delivers fast and responsive query performance by using a robust in-memory engine (SPICE).
    • “SPICE” stands for a Super-fast, Parallel, In-memory Calculation Engine
    • can also be configured to keep the data in SPICE up-to-date as the data in the underlying sources change.
    • automatically replicates data for high availability and enables QuickSight to scale to support users to perform simultaneous fast interactive analysis across a wide variety of AWS data sources.
  • supports
    • Excel files and flat files like CSV, TSV, CLF, ELF
    • on-premises databases like PostgreSQL, SQL Server and MySQL
    • SaaS applications like Salesforce
    • and AWS data sources such as Redshift, RDS, Aurora, Athena, and S3
  • supports various functions to format and transform the data.
  • supports assorted visualizations that facilitate different analytical approaches:
    • Comparison and distribution – Bar charts (several assorted variants)
    • Changes over time – Line graphs, Area line charts
    • Correlation – Scatter plots, Heat maps
    • Aggregation – Pie graphs, Tree maps
    • Tabular – Pivot tables

Data Pipeline

  • orchestration service that helps define data-driven workflows to automate and schedule regular data movement and data processing activities
  • integrates with on-premises and cloud-based storage systems
  • allows scheduling, retry, and failure logic for the workflows

Elasticsearch

  • Elasticsearch Service is a managed service that makes it easy to deploy, operate, and scale Elasticsearch clusters in the AWS Cloud.
  • Elasticsearch provides
    • real-time, distributed search and analytics engine
    • ability to provision all the resources for an Elasticsearch cluster and launch the cluster
    • easy-to-use cluster scaling options. Scaling an Elasticsearch Service domain by adding or modifying instances and storage volumes is an online operation that does not require any downtime.
    • self-healing clusters, which automatically detect and replace failed Elasticsearch nodes, reducing the overhead associated with self-managed infrastructures
    • domain snapshots to back up and restore ES domains and replicate domains across AZs
    • enhanced security with IAM, Network, Domain access policies, and fine-grained access control
    • storage volumes for the data using EBS volumes
    • ability to span cluster nodes across multiple AZs in the same region, known as zone awareness, for high availability and redundancy. Elasticsearch Service automatically distributes the primary and replica shards across instances in different AZs.
    • dedicated master nodes to improve cluster stability
    • data visualization using the Kibana tool
    • integration with CloudWatch for monitoring ES domain metrics
    • integration with CloudTrail for auditing configuration API calls to ES domains
    • integration with S3, Kinesis, and DynamoDB for loading streaming data
    • ability to handle structured and unstructured data
    • supports encryption at rest through KMS, node-to-node encryption over TLS, and the ability to require clients to communicate with HTTPS

Athena

  • Amazon Athena is a serverless, interactive analytics service built on open-source frameworks, supporting open-table and file formats.
  • provides a simplified, flexible way to analyze petabytes of data in an S3 data lake and 30 data sources, including on-premises data sources or other cloud systems using SQL or Python without loading the data.
  • is built on open-source Trino and Presto engines and Apache Spark frameworks, with no provisioning or configuration effort required.
  • is highly available and runs queries using compute resources across multiple facilities, automatically routing queries appropriately if a particular facility is unreachable
  • can process unstructured, semi-structured, and structured datasets.
  • integrates with QuickSight for visualizing the data or creating dashboards.
  • supports various standard data formats, including CSV, TSV, JSON, ORC, Avro, and Parquet.
  • supports compressed data in Snappy, Zlib, LZO, and GZIP formats. You can improve performance and reduce costs by compressing, partitioning, and using columnar formats.
  • can handle complex analysis, including large joins, window functions, and arrays
  • uses a managed Glue Data Catalog to store information and schemas about the databases and tables that you create for the data stored in S3
  • uses schema-on-read technology, which means that the table definitions are applied to the data in S3 when queries are being applied. There’s no data loading or transformation required. Table definitions and schema can be deleted without impacting the underlying data stored in S3.
  • supports fine-grained access control with AWS Lake Formation which allows for centrally managing permissions and access control for data catalog resources in the S3 data lake.
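
A minimal sketch of the schema-on-read flow, assuming boto3, a Glue Data Catalog database, and placeholder table and output locations: the query runs directly against the data in S3 and the results are written to the configured S3 output location.

```python
# Hypothetical sketch: run an Athena query and fetch the results.
import time
import boto3

athena = boto3.client("athena")

qid = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS hits FROM access_logs GROUP BY status",
    QueryExecutionContext={"Database": "weblogs"},                      # placeholder database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # placeholder bucket
)["QueryExecutionId"]

# Poll until the query finishes.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    print(rows)
```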

AWS SQS FIFO Queue

  • SQS FIFO Queue provides enhanced messaging between applications with the additional features
    • FIFO (First-In-First-Out) delivery
      • order in which messages are sent and received is strictly preserved
      • key when the order of operations & events is critical
    • Exactly-once processing
      • a message is delivered once and remains available until a consumer processes and deletes it
      • key when duplicates can’t be tolerated.
      • throughput is limited to 300 transactions per second (TPS), or 3,000 TPS with batching
  • FIFO queues provide all the capabilities of standard queues and improve upon and complement them.
  • FIFO queues support message groups that allow multiple ordered message groups within a single queue.
  • FIFO Queue name should end with .fifo
  • SQS Buffered Asynchronous Client doesn’t currently support FIFO queues
  • Not all AWS services support FIFO queues, for example:
    • Auto Scaling Lifecycle Hooks
    • AWS IoT Rule Actions
    • AWS Lambda Dead-Letter Queues
    • Amazon S3 Event Notifications
  • SQS FIFO supports one or more producers and messages are stored in the order that they were successfully received by SQS.
  • SQS FIFO queues don’t serve messages from the same message group to more than one consumer at a time.

Message Deduplication

  • SQS APIs provide deduplication functionality that prevents message producers from sending duplicates.
  • Message deduplication ID is the token used for the deduplication of sent messages.
  • If a message with a particular message deduplication ID is sent successfully, any messages sent with the same message deduplication ID are accepted successfully but aren’t delivered during the 5-minute deduplication interval.
  • So basically, any duplicates introduced by the message producer are removed within a 5-minute deduplication interval
  • Message deduplication applies to an entire queue, not to individual message groups

Message groups

  • Messages are grouped into distinct, ordered “bundles” within a FIFO queue
  • Message group ID is the tag that specifies that a message belongs to a specific message group
  • For each message group ID, all messages are sent and received in strict order
  • However, messages with different message group ID values might be sent and received out of order.
  • Every message must be associated with a message group ID, without which the action fails.
  • SQS delivers the messages in the order in which they arrive for processing if multiple hosts (or different threads on the same host) send messages with the same message group ID
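
A minimal FIFO sketch, assuming boto3 and placeholder names: the queue name ends with .fifo, content-based deduplication is enabled, and the message group ID scopes the strict ordering.

```python
# Hypothetical sketch: create a FIFO queue and send an ordered message.
import boto3

sqs = boto3.client("sqs")

queue_url = sqs.create_queue(
    QueueName="bids.fifo",  # FIFO queue names must end with .fifo
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)["QueueUrl"]

sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"auction": 42, "bid": 101}',
    MessageGroupId="auction-42",                  # strict ordering within this group
    MessageDeduplicationId="auction-42-bid-101",  # optional when content-based dedup is on
)
```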

SQS Standard Queues vs SQS FIFO Queues

SQS Standard vs FIFO Queues

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. A restaurant reservation application needs the ability to maintain a waiting list. When a customer tries to reserve a table, and none are available, the customer must be put on the waiting list, and the application must notify the customer when a table becomes free. What service should the Solutions Architect recommend to ensure that the system respects the order in which the customer requests are put onto the waiting list?
    1. Amazon SNS
    2. AWS Lambda with sequential dispatch
    3. A FIFO queue in Amazon SQS
    4. A standard queue in Amazon SQS
  2. In relation to Amazon SQS, how can you ensure that messages are delivered in order? Select 2 answers
    1. Increase the size of your queue
    2. Send them with a timestamp
    3. Using FIFO queues
    4. Give each message a unique id
    5. Use sequence number within the messages with Standard queues
  3. A company has run a major auction platform where people buy and sell a wide range of products. The platform requires that transactions from buyers and sellers get processed in exactly the order received. At the moment, the platform is implemented using RabbitMQ, which is a lightweight queue system. The company consulted you to migrate the on-premise platform to AWS. How should you design the migration plan? (Select TWO)
    1. When the bids are received, send the bids to an SQS FIFO queue before they are processed.
    2. When the users have submitted the bids from frontend, the backend service delivers the messages to an SQS standard queue.
    3. Add a message group ID to the messages before they are sent to the SQS queue so that the message processing is in a strict order.  
    4. Use an EC2 or Lambda to add a deduplication ID to the messages before the messages are sent to the SQS queue to ensure that bids are processed in the right order.

References

AWS_SQS_FIFO_Queue