AWS Kinesis Data Streams – KDS

  • Amazon Kinesis Data Streams is a streaming data service that enables real-time processing of streaming data at a massive scale.
  • Kinesis Streams enables building custom applications that process or analyze streaming data for specialized needs.
  • Kinesis Streams features
    • handles the provisioning, deployment, and ongoing maintenance of the hardware, software, and other services for the data streams.
    • manages the infrastructure, storage, networking, and configuration needed to stream the data at the level of required data throughput.
    • synchronously replicates data across three AZs in an AWS Region, providing high availability and data durability.
    • stores records of a stream for up to 24 hours, by default, from the time they are added to the stream. The limit can be raised to up to 365 days by enabling extended data retention.
  • Data such as clickstreams, application logs, social media, etc. can be added from multiple sources and within seconds is available for processing to the Kinesis Applications.
  • Kinesis provides the ordering of records, as well as the ability to read and/or replay records in the same order to multiple applications.
  • Kinesis is designed to process streaming big data and the pricing model is designed for a heavy PUT rate.
  • Multiple Kinesis Data Streams applications can consume data from a stream, so that multiple actions, like archiving and processing, can take place concurrently and independently.
  • A Kinesis Data Streams application can start consuming the data from the stream almost immediately after the data is added, and the put-to-get delay is typically less than 1 second.
  • Kinesis Streams is useful for rapidly moving data off data producers and then continuously processing the data, be it to transform the data before emitting to a data store, run real-time metrics and analytics, or derive more complex data streams for further processing.
    • Accelerated log and data feed intake: Data producers can push data to Kinesis stream as soon as it is produced, preventing any data loss and making it available for processing within seconds.
    • Real-time metrics and reporting: Metrics can be extracted and used to generate reports from data in real-time.
    • Real-time data analytics: Run real-time streaming data analytics.
    • Complex stream processing: Create Directed Acyclic Graphs (DAGs) of Kinesis Applications and data streams, with Kinesis applications adding to another Amazon Kinesis stream for further processing, enabling successive stages of stream processing.
  • Kinesis limits
    • stores records of a stream for up to 24 hours, by default, which can be extended to max 365 days
    • maximum size of a data blob (the data payload before Base64-encoding) within one record is 1 megabyte (MB)
    • Each shard can support up to 1000 PUT records per second.
  • S3 is a cost-effective way to store the data, but not designed to handle a stream of data in real-time

Kinesis Data Streams Terminology

Kinesis Architecture

  • Data Record
    • A record is the unit of data stored in a Kinesis data stream.
    • A record is composed of a sequence number, partition key, and data blob, which is an immutable sequence of bytes.
    • Maximum size of a data blob is 1 MB
    • Partition key
      • Partition key is used to segregate and route records to different shards of a stream.
      • A partition key is specified by the data producer while adding data to a Kinesis stream.
    • Sequence number
      • A sequence number is a unique identifier for each record.
      • Kinesis assigns a sequence number when a data producer calls the PutRecord or PutRecords operation to add data to a stream.
      • Sequence numbers for the same partition key generally increase over time; the longer the time period between PutRecord or PutRecords requests, the larger the sequence numbers become.
  • Data Stream
    • Data stream represents a group of data records.
    • Data records in a data stream are distributed into shards.
  • Shard
    • A shard is a uniquely identified sequence of data records in a stream.
    • Streams are made of shards, which are the base throughput unit of a Kinesis stream; pricing is on a per-shard basis.
    • Each shard supports up to 5 transactions per second for reads, up to a maximum total data read rate of 2 MB per second, and up to 1,000 records per second for writes, up to a maximum total data write rate of 1 MB per second (including partition keys)
    • Each shard provides a fixed unit of capacity. If the limits are exceeded, either by data throughput or the number of PUT records, the put data call will be rejected with a ProvisionedThroughputExceeded exception.
    • This can be handled by
      • Implementing a retry on the data producer side, if this is due to a temporary rise of the stream’s input data rate
      • Dynamically scaling the number of shards (resharding) to provide enough capacity for the put data calls to consistently succeed
  • Capacity Mode
    • A data stream capacity mode determines the pricing and how the capacity is managed
    • Kinesis Data Streams currently supports an on-demand mode and a provisioned mode
      • On-demand mode
        • KDS automatically manages the shards in order to provide the necessary throughput.
        • You are charged only for the actual throughput used and KDS automatically accommodates the workloads’ throughput needs as they ramp up or down.
      • Provisioned mode
        • Number of shards for the data stream must be specified.
        • Total capacity of a data stream is the sum of the capacities of its shards.
        • Shards can be increased or decreased in a data stream as needed and you are charged for the number of shards at an hourly rate.
  • Retention Period
    • All data is stored for 24 hours, by default, and the retention can be increased up to a maximum of 8760 hours (365 days).
  • Producers
    • A producer puts data records into Kinesis data streams (see the producer sketch after this list).
    • To put data into the stream, the name of the stream, a partition key, and the data blob to be added to the stream should be specified.
    • Partition key is used to determine which shard in the stream the data record is added to.
  • Consumers
    • A consumer is an application built to read and process data records from Kinesis data streams.
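
For illustration, here is a minimal producer sketch using boto3 showing how the partition key and sequence number fit together; the stream name and payload are hypothetical:

```python
import json

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Hypothetical stream name and payload.
response = kinesis.put_record(
    StreamName="clickstream-events",
    Data=json.dumps({"user": "u-123", "page": "/home"}).encode("utf-8"),
    # Records sharing a partition key are routed to the same shard,
    # which preserves their relative order.
    PartitionKey="u-123",
)

# Kinesis picks the shard and assigns the sequence number.
print(response["ShardId"], response["SequenceNumber"])
```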

Kinesis Security

  • supports Server-side encryption using Key Management Service (KMS) for encrypting the data at rest.
  • supports writing encrypted data to a data stream by encrypting and decrypting on the client side.
  • supports encryption in transit using HTTPS endpoints.
  • supports Interface VPC endpoint to keep traffic between VPC and Kinesis Data Streams from leaving the Amazon network. Interface VPC endpoints don’t require an IGW, NAT device, VPN connection, or Direct Connect.
  • integrated with IAM to control access to Kinesis Data Streams resources.
  • integrated with CloudTrail, which provides a record of actions taken by a user, role, or an AWS service in Kinesis Data Streams.
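
As a sketch, server-side encryption can be enabled on an existing stream with a single boto3 call (the stream name is hypothetical; the AWS-managed key is used here, but a customer-managed KMS key works as well):

```python
import boto3

kinesis = boto3.client("kinesis")

# Enable SSE at rest; records are encrypted as they enter the stream.
kinesis.start_stream_encryption(
    StreamName="clickstream-events",  # hypothetical stream name
    EncryptionType="KMS",
    KeyId="alias/aws/kinesis",  # or a customer-managed key ARN/alias
)
```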

Kinesis Producer

Data to Kinesis Data Streams can be added via API/SDK (PutRecord and PutRecords) operations, Kinesis Producer Library (KPL), or Kinesis Agent.

  • API
    • PutRecord and PutRecords operations are synchronous operations that send a single record or multiple records, respectively, to the stream per HTTP request.
    • use PutRecords to achieve a higher throughput per data producer (see the batching sketch after this list)
    • helps manage many aspects of Kinesis Data Streams (including creating streams, resharding, and putting and getting records)
  • Kinesis Agent
    • is a pre-built Java application that offers an easy way to collect and send data to the Kinesis stream.
    • can be installed on Linux-based server environments such as web servers, log servers, and database servers
    • can be configured to monitor certain files on the disk and then continuously send new data to the Kinesis stream
  • Kinesis Producer Library (KPL)
    • is an easy-to-use and highly configurable library that helps to put data into a Kinesis stream.
    • provides a layer of abstraction specifically for ingesting data
    • presents a simple, asynchronous, and reliable interface that helps achieve high producer throughput with minimal client resources.
    • batches messages, as it aggregates records to increase payload size and improve throughput.
    • Collects records and uses PutRecords to write multiple records to multiple shards per request 
    • Writes to one or more Kinesis data streams with an automatic and configurable retry mechanism.
    • Integrates seamlessly with the Kinesis Client Library (KCL) to de-aggregate batched records on the consumer
    • Submits CloudWatch metrics to provide visibility into performance
  • Third Party and Open source
    • Log4j appender
    • Apache Kafka
    • Flume, fluentd, etc.
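
A rough sketch of PutRecords-based batching with boto3 (names and payloads are hypothetical); note that PutRecords is not atomic, so failed entries should be retried individually:

```python
import json

import boto3

kinesis = boto3.client("kinesis")

records = [
    {
        "Data": json.dumps({"event_id": i}).encode("utf-8"),
        "PartitionKey": f"key-{i}",
    }
    for i in range(100)
]

# One PutRecords request can carry up to 500 records (5 MB total).
response = kinesis.put_records(StreamName="clickstream-events", Records=records)

# Retry only the entries that failed (each failed entry carries an ErrorCode).
if response["FailedRecordCount"]:
    failed = [records[i] for i, r in enumerate(response["Records"]) if "ErrorCode" in r]
    kinesis.put_records(StreamName="clickstream-events", Records=failed)
```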

Kinesis Consumers

  • Kinesis Application is a data consumer that reads and processes data from a Kinesis Data Stream and can be built using either Kinesis API or Kinesis Client Library (KCL)
  • Shards in a stream provide 2 MB/sec of read throughput per shard, by default, which is shared by all the consumers reading from a given shard.
  • Kinesis Client Library (KCL)
    • is a pre-built library with multiple language support
    • delivers all records for a given partition key to the same record processor
    • makes it easier to build multiple applications reading from the same stream for e.g. to perform counting, aggregation, and filtering
    • handles complex issues such as adapting to changes in stream volume, load-balancing streaming data, coordinating distributed services, and processing data with fault-tolerance
    • uses a unique DynamoDB table to keep track of the application’s state, so if the Kinesis Data Streams application receives provisioned-throughput exceptions, increase the provisioned throughput for the DynamoDB table
  • Kinesis Connector Library
    • is a pre-built library that helps you easily integrate Kinesis Streams with other AWS services and third-party tools.
    • requires the Kinesis Client Library
    • is legacy and can be replaced by Lambda or Kinesis Data Firehose
  • Kinesis Storm Spout is a pre-built library that helps you easily integrate Kinesis Streams with Apache Storm
  • AWS Lambda, Kinesis Data Firehose, and Kinesis Data Analytics also act as consumers for Kinesis Data Streams
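
For a feel of the raw consumer API (the KCL hides all of this), a minimal boto3 polling loop over a single shard might look like the following; the stream name is hypothetical, and a real application would also handle resharding and checkpointing:

```python
import time

import boto3

kinesis = boto3.client("kinesis")
stream = "clickstream-events"  # hypothetical stream name

# Start from the oldest retained record in the first shard.
shard_id = kinesis.describe_stream(StreamName=stream)["StreamDescription"]["Shards"][0]["ShardId"]
iterator = kinesis.get_shard_iterator(
    StreamName=stream,
    ShardId=shard_id,
    ShardIteratorType="TRIM_HORIZON",
)["ShardIterator"]

while iterator:
    out = kinesis.get_records(ShardIterator=iterator, Limit=100)
    for record in out["Records"]:
        print(record["SequenceNumber"], record["Data"])
    iterator = out["NextShardIterator"]
    time.sleep(1)  # stay under the 5 reads/sec per-shard limit
```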

Kinesis Enhanced fan-out

  • allows customers to scale the number of consumers reading from a data stream in parallel, while maintaining high performance and without contending for read throughput with other consumers.
  • provides logical 2 MB/sec throughput pipes between consumers and shards for Kinesis Data Streams Consumers.
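
A sketch of registering an enhanced fan-out consumer and subscribing to a shard with boto3 (stream, consumer, and shard names are hypothetical; the consumer must reach the ACTIVE state before subscribing, and each subscription lasts up to 5 minutes before it must be renewed):

```python
import boto3

kinesis = boto3.client("kinesis")

stream_arn = kinesis.describe_stream(StreamName="clickstream-events")[
    "StreamDescription"
]["StreamARN"]

# Each registered consumer gets its own 2 MB/sec pipe per shard.
consumer = kinesis.register_stream_consumer(
    StreamARN=stream_arn,
    ConsumerName="analytics-app",  # hypothetical consumer name
)["Consumer"]

# Records are pushed over HTTP/2 instead of being polled.
subscription = kinesis.subscribe_to_shard(
    ConsumerARN=consumer["ConsumerARN"],
    ShardId="shardId-000000000000",  # hypothetical shard id
    StartingPosition={"Type": "LATEST"},
)
for event in subscription["EventStream"]:
    print(event["SubscribeToShardEvent"]["Records"])
```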

AWS Kinesis Shared Throughput vs Enhanced Fan-out

Kinesis Data Streams Sharding

  • Resharding helps to increase or decrease the number of shards in a stream in order to adapt to changes in the rate of data flowing through the stream.
  • Resharding operations support shard split and shard merge.
    • Shard split helps divide a single shard into two shards. It increases the capacity and the cost.
    • Shard merge helps combine two shards into a single shard. It reduces the capacity and the cost.
  • Resharding is always pairwise and always involves two shards.
  • The shard or pair of shards that the resharding operation acts on are referred to as parent shards. The shard or pair of shards that result from the resharding operation are referred to as child shards.
  • Kinesis Client Library tracks the shards in the stream using a DynamoDB table and discovers the new shards and populates new rows in the table.
  • KCL ensures that any data that existed in shards prior to the resharding is processed before the data from the new shards, thereby, preserving the order in which data records were added to the stream for a particular partition key.
  • Data records in the parent shard are accessible from the time they are added to the stream to the current retention period.
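
Both resharding operations are single boto3 calls; the sketch below splits the first shard at the midpoint of its hash key range (stream and shard ids are hypothetical):

```python
import boto3

kinesis = boto3.client("kinesis")
stream = "clickstream-events"  # hypothetical stream name

# Split a hot parent shard into two children at the midpoint of its hash key range.
shard = kinesis.describe_stream(StreamName=stream)["StreamDescription"]["Shards"][0]
start = int(shard["HashKeyRange"]["StartingHashKey"])
end = int(shard["HashKeyRange"]["EndingHashKey"])
kinesis.split_shard(
    StreamName=stream,
    ShardToSplit=shard["ShardId"],
    NewStartingHashKey=str((start + end) // 2),
)

# Merging is the inverse pairwise operation on two adjacent shards:
# kinesis.merge_shards(
#     StreamName=stream,
#     ShardToMerge="shardId-000000000001",
#     AdjacentShardToMerge="shardId-000000000002",
# )
```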

Kinesis Data Streams vs Kinesis Firehose

Refer post @ Kinesis Data Streams vs Kinesis Firehose

Kinesis Data Streams vs SQS

Refer post @ Kinesis Data Streams vs SQS

Kinesis vs S3

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. You are deploying an application to track GPS coordinates of delivery trucks in the United States. Coordinates are transmitted from each delivery truck once every three seconds. You need to design an architecture that will enable real-time processing of these coordinates from multiple consumers. Which service should you use to implement data ingestion?
    1. Amazon Kinesis
    2. AWS Data Pipeline
    3. Amazon AppStream
    4. Amazon Simple Queue Service
  2. You are deploying an application to collect votes for a very popular television show. Millions of users will submit votes using mobile devices. The votes must be collected into a durable, scalable, and highly available data store for real-time public tabulation. Which service should you use?
    1. Amazon DynamoDB
    2. Amazon Redshift
    3. Amazon Kinesis
    4. Amazon Simple Queue Service
  2. Your company is in the process of developing a next generation pet collar that collects biometric information to assist families with promoting healthy lifestyles for their pets. Each collar will push 30kb of biometric data in JSON format every 2 seconds to a collection platform that will process and analyze the data, providing health trending information back to the pet owners and veterinarians via a web portal. Management has tasked you to architect the collection platform ensuring the following requirements are met: provide the ability for real-time analytics of the inbound biometric data; ensure processing of the biometric data is highly durable, elastic, and parallel; the results of the analytic processing should be persisted for data mining. Which architecture outlined below will meet the initial requirements for the collection platform?
    1. Utilize S3 to collect the inbound sensor data analyze the data from S3 with a daily scheduled Data Pipeline and save the results to a Redshift Cluster.
    2. Utilize Amazon Kinesis to collect the inbound sensor data, analyze the data with Kinesis clients and save the results to a Redshift cluster using EMR. (refer link)
    3. Utilize SQS to collect the inbound sensor data analyze the data from SQS with Amazon Kinesis and save the results to a Microsoft SQL Server RDS instance.
    4. Utilize EMR to collect the inbound sensor data, analyze the data from EMR with Amazon Kinesis and save the results to DynamoDB.
  4. Your customer is willing to consolidate their log streams (access logs, application logs, security logs etc.) in one single system. Once consolidated, the customer wants to analyze these logs in real time based on heuristics. From time to time, the customer needs to validate heuristics, which requires going back to data samples extracted from the last 12 hours. What is the best approach to meet your customer’s requirements?
    1. Send all the log events to Amazon SQS. Setup an Auto Scaling group of EC2 servers to consume the logs and apply the heuristics.
    2. Send all the log events to Amazon Kinesis develop a client process to apply heuristics on the logs (Can perform real time analysis and stores data for 24 hours which can be extended to 7 days)
    3. Configure Amazon CloudTrail to receive custom logs, use EMR to apply heuristics on the logs (CloudTrail is only for auditing)
    4. Setup an Auto Scaling group of EC2 syslogd servers, store the logs on S3 use EMR to apply heuristics on the logs (EMR is for batch analysis)
  5. You require the ability to analyze a customer’s clickstream data on a website so they can do behavioral analysis. Your customer needs to know what sequence of pages and ads their customer clicked on. This data will be used in real time to modify the page layouts as customers click through the site to increase stickiness and advertising click-through. Which option meets the requirements for capturing and analyzing this data?
    1. Log clicks in weblogs by URL store to Amazon S3, and then analyze with Elastic MapReduce
    2. Push web clicks by session to Amazon Kinesis and analyze behavior using Kinesis workers
    3. Write click events directly to Amazon Redshift and then analyze with SQL
    4. Publish web clicks by session to an Amazon SQS queue then periodically drain these events to Amazon RDS and analyze with SQL
  6. Your social media monitoring application uses a Python app running on AWS Elastic Beanstalk to ingest tweets, Facebook updates, and RSS feeds into an Amazon Kinesis stream. A second AWS Elastic Beanstalk app generates key performance indicators into an Amazon DynamoDB table and powers a dashboard application. What is the most efficient option to prevent any data loss for this application?
    1. Use AWS Data Pipeline to replicate your DynamoDB tables into another region.
    2. Use the second AWS Elastic Beanstalk app to store a backup of Kinesis data onto Amazon Elastic Block Store (EBS), and then create snapshots from your Amazon EBS volumes.
    3. Add a second Amazon Kinesis stream in another Availability Zone and use AWS data pipeline to replicate data across Kinesis streams.
    4. Add a third AWS Elastic Beanstalk app that uses the Amazon Kinesis S3 connector to archive data from Amazon Kinesis into Amazon S3.
  7. You need to replicate API calls across two systems in real time. What tool should you use as a buffer and transport mechanism for API call events?
    1. AWS SQS
    2. AWS Lambda
    3. AWS Kinesis (AWS Kinesis is an event stream service. Streams can act as buffers and transport across systems for in-order programmatic events, making it ideal for replicating API calls across systems)
    4. AWS SNS
  8. You need to perform ad-hoc business analytics queries on well-structured data. Data comes in constantly at a high velocity. Your business intelligence team can understand SQL. What AWS service(s) should you look to first?
    1. Kinesis Firehose + RDS
    2. Kinesis Firehose + RedShift (Kinesis Firehose provides a managed service for aggregating streaming data and inserting it into RedShift. RedShift also supports ad-hoc queries over well-structured data using a SQL-compliant wire protocol, so the business team should be able to adopt this system easily. Refer link)
    3. EMR using Hive
    4. EMR running Apache Spark

AWS Kinesis Data Streams vs Kinesis Data Firehose

  • Kinesis acts as a highly available conduit to stream messages between data producers and data consumers.
  • Data producers can be almost any source of data: system or web log data, social network data, financial trading information, geospatial data, mobile app data, or telemetry from connected IoT devices.
  • Data consumers will typically fall into the category of data processing and storage applications such as Apache Hadoop, Apache Storm, S3, and ElasticSearch.

Kinesis Data Streams vs. Firehose

Purpose

  • Kinesis Data Streams is highly customizable and best suited for developers building custom applications or streaming data for specialized needs.
  • Kinesis Data Firehose handles loading data streams directly into AWS products for processing. Firehose also allows for streaming to S3, OpenSearch Service, or Redshift, where data can be copied for processing through additional services.

Provisioning & Scaling

  • Kinesis Data Streams needs you to configure shards, scale, and write your own custom applications for both producers and consumers. KDS requires manual scaling and provisioning.
  • Kinesis Data Firehose is fully managed and sends data to S3, Redshift, and OpenSearch. Scaling is handled automatically, up to gigabytes per second, and allows for batching, encrypting, and compressing.

Processing Delay

  • Kinesis Data Streams provides real-time processing with ~200 ms latency for the classic shared-throughput consumer and ~70 ms for the enhanced fan-out consumer.
  • Kinesis Data Firehose provides near real-time processing with the lowest buffer time of 1 min.

Data Storage

  • Kinesis Data Streams provides data storage. Data typically is made available in a stream for 24 hours, but for an additional cost, users can gain data availability for up to 365 days.
  • Kinesis Data Firehose does not provide data storage.

Replay

  • Kinesis Data Streams supports replay capability
  • Kinesis Data Firehose does not support replay capability

Producers & Consumers

  • Kinesis Data Streams & Kinesis Data Firehose support multiple producer options including SDK, KPL, Kinesis Agent, IoT, etc.
  • Kinesis Data Streams supports multiple consumer options including SDK, KCL, and Lambda, and can write data to multiple destinations; however, the consumers have to be coded. Kinesis Data Firehose consumers are close-ended and support destinations like S3, Redshift, OpenSearch, and other third-party tools.

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Your organization needs to ingest a big data stream into its data lake on Amazon S3. The data may stream in at a rate of hundreds of megabytes per second. What AWS service will accomplish the goal with the least amount of management?
    1. Amazon Kinesis Firehose
    2. Amazon Kinesis Streams
    3. Amazon CloudFront
    4. Amazon SQS
  2. Your organization is looking for a solution that can help the business with streaming data; several services will require access to read and process the same stream concurrently. What AWS service meets the business requirements?
    1. Amazon Kinesis Firehose
    2. Amazon Kinesis Streams
    3. Amazon CloudFront
    4. Amazon SQS
  3. Your application generates a 1 KB JSON payload that needs to be queued and delivered to EC2 instances for applications. At the end of the day, the application needs to replay the data for the past 24 hours. In the near future, you also need the ability for other multiple EC2 applications to consume the same stream concurrently. What is the best solution for this?
    1. Kinesis Data Streams
    2. Kinesis Firehose
    3. SNS
    4. SQS

AWS Kinesis Data Firehose – KDF

  • Amazon Kinesis Data Firehose is a fully managed service for delivering real-time streaming data
  • Kinesis Data Firehose automatically scales to match the throughput of the data and requires no ongoing administration or need to write applications or manage resources
  • is a data transfer solution for delivering real-time streaming data to destinations such as S3, Redshift, Elasticsearch Service, and Splunk.
  • is NOT Real Time, but Near Real Time as it supports batching and buffers streaming data to a certain size (Buffer Size in MBs) or for a certain period of time (Buffer Interval in seconds) before delivering it to destinations.
  • supports data compression, minimizing the amount of storage used at the destination. It currently supports GZIP, ZIP, and SNAPPY compression formats. Only GZIP is supported if the data is further loaded to Redshift.
  • supports data at rest encryption using KMS after the data is delivered to the S3 bucket.
  • supports multiple producers as data sources, which include a Kinesis data stream, the Kinesis Agent, or the Kinesis Data Firehose API using the AWS SDK, CloudWatch Logs, CloudWatch Events, or AWS IoT
  • supports out of box data transformation as well as custom transformation using the Lambda function to transform incoming source data and deliver the transformed data to destinations
  • supports source record backup with custom data transformation with Lambda, where Kinesis Data Firehose will deliver the un-transformed incoming data to a separate S3 bucket.
  • uses at least once semantics for data delivery. In rare circumstances such as request timeout upon data delivery attempt, delivery retry by Firehose could introduce duplicates if the previous request eventually goes through.
  • supports Interface VPC Endpoints (AWS PrivateLink) to keep traffic between the VPC and Kinesis Data Firehose from leaving the Amazon network.
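
Writing to a delivery stream is a single call; a minimal boto3 sketch follows (the delivery stream name and payload are hypothetical, and the trailing newline is a common convention so records land line-delimited in S3):

```python
import json

import boto3

firehose = boto3.client("firehose")

# Firehose buffers the record and delivers it to the configured
# destination once the buffer size or interval threshold is reached.
firehose.put_record(
    DeliveryStreamName="weblogs-to-s3",  # hypothetical delivery stream
    Record={"Data": json.dumps({"status": 200, "path": "/home"}).encode() + b"\n"},
)
```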

Kinesis Key Concepts

  • Kinesis Data Firehose delivery stream
    • Underlying entity of Kinesis Data Firehose, where the data is sent
  • Record
    • Data sent by data producer to a Kinesis Data Firehose delivery stream
    • Maximum size of a record (before Base64-encoding) is 1,000 KiB.
  • Data producer
    • Producers send records to Kinesis Data Firehose delivery streams.
  • Buffer size and buffer interval
    • Kinesis Data Firehose buffers incoming streaming data to a certain size or for a certain time period before delivering it to destinations
    • Buffer size and buffer interval can be configured while creating the delivery stream
    • Buffer size is in MBs and ranges from 1 MB to 128 MB for the S3 destination and 1 MB to 100 MB for the OpenSearch Service destination.
    • Buffer interval is in seconds and ranges from 60 secs to 900 secs
    • Firehose raises buffer size dynamically to catch up and make sure that all data is delivered to the destination, if data delivery to the destination is falling behind data writing to the delivery stream
    • Buffer size is applied before compression.
  • Destination
    • A destination is the data store where the data will be delivered.
    • supports S3, Redshift, Elasticsearch, and Splunk as destinations.
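
As a sketch, buffer size, buffer interval, and compression are all set when creating the delivery stream; the names and ARNs below are placeholders:

```python
import boto3

firehose = boto3.client("firehose")

firehose.create_delivery_stream(
    DeliveryStreamName="weblogs-to-s3",  # placeholder name
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
        "BucketARN": "arn:aws:s3:::example-weblogs-bucket",
        # Deliver whichever comes first: 64 MB buffered or 300 seconds elapsed.
        "BufferingHints": {"SizeInMBs": 64, "IntervalInSeconds": 300},
        "CompressionFormat": "GZIP",  # compression is applied after buffering
    },
)
```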

Kinesis Data Firehose vs Kinesis Data Streams

AWS Certification Exam Practice Questions

  1. A user is designing a new service that receives location updates from 3600 rental cars every hour. The cars’ locations need to be uploaded to an Amazon S3 bucket. Each location must also be checked for distance from the original rental location. Which services will process the updates and automatically scale?
    1. Amazon EC2 and Amazon EBS
    2. Amazon Kinesis Firehose and Amazon S3
    3. Amazon ECS and Amazon RDS
    4. Amazon S3 events and AWS Lambda
  2. You need to perform ad-hoc SQL queries on massive amounts of well-structured data. Additional data comes in constantly at a high velocity, and you don’t want to have to manage the infrastructure processing it if possible. Which solution should you use?
    1. Kinesis Firehose and RDS
    2. EMR running Apache Spark
    3. Kinesis Firehose and Redshift
    4. EMR using Hive
  3. Your organization needs to ingest a big data stream into their data lake on Amazon S3. The data may stream in at a rate of hundreds of megabytes per second. What AWS service will accomplish the goal with the least amount of management?
    1. Amazon Kinesis Firehose
    2. Amazon Kinesis Streams
    3. Amazon CloudFront
    4. Amazon SQS
  4. A startup company is building an application to track the high scores for a popular video game. Their Solution Architect is tasked with designing a solution to allow real-time processing of scores from millions of players worldwide. Which AWS service should the Architect use to provide reliable data ingestion from the video game into the datastore?
    1. AWS Data Pipeline
    2. Amazon Kinesis Firehose
    3. Amazon DynamoDB Streams
    4. Amazon Elasticsearch Service
  5. A company has an infrastructure that consists of machines which keep sending log information every 5 minutes. The number of these machines can run into thousands and it is required to ensure that the data can be analyzed at a later stage. Which of the following would help in fulfilling this requirement?
    1. Use Kinesis Firehose with S3 to take the logs and store them in S3 for further processing.
    2. Launch an Elastic Beanstalk application to take the processing job of the logs.
    3. Launch an EC2 instance with enough EBS volumes to consume the logs which can be used for further processing.
    4. Use CloudTrail to store all the logs which can be analyzed at a later stage.

AWS SQS Standard Queue

  • SQS offers standard as the default queue type.
  • Standard queues support at-least-once message delivery. However, occasionally (because of the highly distributed architecture that allows nearly unlimited throughput), more than one copy of a message might be delivered, and messages may arrive out of order.
  • Standard queues support a nearly unlimited number of API calls per second, per API action (SendMessage, ReceiveMessage, or DeleteMessage).
  • Standard queues provide best-effort ordering which ensures that messages are generally delivered in the same order as they’re sent.

SQS Standard Queue Features

Redundant infrastructure

  • offers reliable and scalable hosted queues for storing messages
  • is engineered to always be available and deliver messages
  • provides the ability to store messages in a fail-safe queue
  • highly concurrent access to messages

At-Least-Once delivery

  • ensures delivery of each message at least once
  • stores copies of the messages on multiple servers for redundancy and high availability
  • might deliver a duplicate copy of a message if the server storing a copy of the message is unavailable when you receive or delete the message, and the copy is not deleted on that unavailable server
  • Applications should be designed to be idempotent with the ability to handle duplicate messages and not be adversely affected if it processes the same message more than once
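
A minimal sketch of an idempotent consumer in boto3 (the queue URL and business logic are hypothetical; a production dedupe store would be durable and shared, e.g. a DynamoDB table with conditional writes, rather than an in-memory set):

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # placeholder


def handle_order(body):
    print("processing", body)  # hypothetical business logic


processed_ids = set()  # illustration only; use a durable store in production

messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10).get("Messages", [])
for msg in messages:
    if msg["MessageId"] not in processed_ids:  # skip duplicate deliveries
        handle_order(msg["Body"])
        processed_ids.add(msg["MessageId"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```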

Message Attributes

  • SQS messages can contain up to 10 metadata attributes.
  • take the form of name-type-value triples
  • can be used to separate the body of a message from the metadata that describes it.
  • helps process and store information with greater speed and efficiency because the applications don’t have to inspect an entire message before understanding how to process it
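
For example, attributes ride alongside the body as name-type-value triples, so a consumer can decide how to handle a message without parsing the payload (the queue URL and attribute names are hypothetical):

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # placeholder

sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"order_id": 42}',
    MessageAttributes={
        "contentType": {"DataType": "String", "StringValue": "application/json"},
        "priority": {"DataType": "Number", "StringValue": "1"},
    },
)
```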

Message Sampling

  • behavior of retrieving messages from the queue depends on whether short (standard) polling, the default behavior, or long polling is used
  • With short polling,
    • samples only a subset of the servers (based on a weighted random distribution) and returns messages from just those servers.
    • A receive request might not return all the messages in the queue. But a subsequent receive request would return the message
  • With Long polling,
    • the request persists for the time specified and returns as soon as a message is available, thereby reducing cost and the time the message has to dwell in the queue
    • long polling doesn’t return a response until a message arrives in the message queue, or the long poll times out.
    • makes it inexpensive to retrieve messages from the SQS queue as soon as the messages are available.
    • might help reduce the cost of using SQS, as the number of empty receives are reduced
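
Long polling is just a parameter on the receive call; a minimal sketch (the queue URL is a placeholder):

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # placeholder

# WaitTimeSeconds > 0 switches to long polling: the call returns as soon
# as a message is available, or after 20 seconds, cutting empty receives.
response = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,
)
for msg in response.get("Messages", []):
    print(msg["Body"])
```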

Batching

  • SQS allows send, receive, and delete batching, which helps club up to 10 messages into a single batch while charging the price of a single request
  • helps lower cost and also increases the throughput
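
A batching sketch with boto3 (the queue URL is a placeholder); since a batch is not atomic, failed entries are reported individually and should be retried:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # placeholder

# Up to 10 messages travel in one request, billed as a single call.
response = sqs.send_message_batch(
    QueueUrl=queue_url,
    Entries=[{"Id": str(i), "MessageBody": f"payload-{i}"} for i in range(10)],
)
for failure in response.get("Failed", []):
    print("retry entry", failure["Id"], failure["Code"])
```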

Configurable settings per queue

  • All queues don’t have to be alike

Order

  • makes a best effort to preserve order in messages but does not guarantee first-in, first-out delivery of messages
  • can be handled by placing sequencing information within the message and performing the ordering on the client side

Loose coupling

  • removes tight coupling between components
  • provides the ability to move data between distributed components of the applications that perform different tasks without losing messages or requiring each component to be always available

Multiple writers and readers

  • supports multiple readers and writers interacting with the same queue at the same time
  • locks the message during processing, using Visibility Timeout, preventing it from being processed by any other consumer

Variable message size

  • supports messages in any format up to 256 KB of text.
  • messages larger than 256 KB can be managed using S3 or DynamoDB, with SQS holding a pointer to the S3 object (see the sketch below)
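
A sketch of the S3 pointer pattern for oversized payloads (the bucket and queue URL are placeholders; the Amazon SQS Extended Client Library implements the same idea for Java):

```python
import json
import uuid

import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

bucket = "example-large-payloads"  # placeholder bucket
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # placeholder

large_payload = b"x" * (1024 * 1024)  # 1 MB, over the 256 KB SQS limit

# Store the payload in S3 and enqueue only a small pointer to it;
# the consumer reads the pointer and fetches the object from S3.
key = f"payloads/{uuid.uuid4()}"
s3.put_object(Bucket=bucket, Key=key, Body=large_payload)
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"s3_bucket": bucket, "s3_key": key}),
)
```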

Access Control

  • Access can be controlled for who can produce and consume messages to each queue

Delay Queues

  • Delay queue allows the user to set a default delay on a queue such that delivery of all enqueued messages is postponed for that time duration (see the configuration sketch after the Dead Letter Queues section)

Dead Letter Queues

  • Dead letter queue is a queue for messages that could not be processed successfully after a maximum number of attempts
  • useful to isolate messages that can’t be processed for later analysis.
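
Both features are plain queue attributes; a configuration sketch with boto3 (queue names are placeholders):

```python
import json

import boto3

sqs = boto3.client("sqs")

# Create the dead letter queue first and look up its ARN.
dlq_url = sqs.create_queue(QueueName="orders-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

sqs.create_queue(
    QueueName="orders",
    Attributes={
        # Delay queue: every message becomes visible 60s after enqueue.
        "DelaySeconds": "60",
        # Dead letter queue: after 5 failed receives, SQS moves the message aside.
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
        ),
    },
)
```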

PCI Compliance

  • supports the processing, storage, and transmission of credit card data by a merchant or service provider, and has been validated as being PCI-DSS (Payment Card Industry – Data Security Standard) compliant

SQS Standard Queues vs SQS FIFO Queues

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.


AWS Certified SysOps Administrator – Associate (SOA-C02) Exam Learning Path

  • I recently recertified for the AWS Certified SysOps Administrator – Associate (SOA-C02) exam.
  • SOA-C02 is the updated version of the SOA-C01 exam and was the first AWS exam to include hands-on labs.

NOTE: As of March 28, 2023, the AWS Certified SysOps Administrator – Associate exam will not include exam labs until further notice. This removal of exam labs is temporary while we evaluate the exam labs and make improvements to provide an optimal candidate experience.

AWS Certified SysOps Administrator – Associate (SOA-C02) Exam Content

  • AWS SysOps Administrator – Associate SOA-C02 is intended for system administrators in a cloud operations role.
  • SOA-C02 validates a candidate’s ability to deploy, manage, and operate workloads on AWS which includes
    • Deploy, manage, and operate workloads on AWS
    • Support and maintain AWS workloads according to the AWS Well-Architected Framework
    • Perform operations by using the AWS Management Console and the AWS CLI
    • Implement security controls to meet compliance requirements
    • Monitor, log, and troubleshoot systems
    • Apply networking concepts (for example, DNS, TCP/IP, firewalls)
    • Implement architectural requirements (for example, high availability, performance, capacity)
    • Perform business continuity and disaster recovery procedures
    • Identify, classify, and remediate incidents

Refer AWS Certified SysOps – Associate (SOA-C02) Exam Guide

SOA-C02 Exam Domains

AWS Certified SysOps Administrator – Associate (SOA-C02) Exam Summary

  • SOA-C02 is the first AWS exam that included 2 sections
    • Objective questions
    • Hands-on labs
  • With Labs
    • SOA-C02 Exam consists of around 50 objective-type questions and 3 Hands-on labs to be answered in 190 minutes.
    • Labs are performed in a separate instance. Copy-paste works, so make sure you copy the exact names on resource creation.
    • Labs are pretty easy if you have worked on AWS.
    • Plan to leave 20 minutes to complete each exam lab.
    • NOTE: Once you complete a section and click next, you cannot go back to it. The same applies to the labs: once a lab is completed, you cannot return to it.
    • Practice the Sample Lab provided when you book the exam, which would give you a feel of how the hands-on exam would actually be.
  • Without Labs
    • SOA-C02 exam consists of 65 questions in 130 minutes, and the time is more than sufficient if you are well-prepared.
  • SOA-C02 exam includes two types of questions, multiple-choice and multiple-response.
  • SOA-C02 has a scaled score between 100 and 1,000. The scaled score needed to pass the exam is 720.
  • Associate exams currently cost $150 + tax.
  • You can get an additional 30 minutes if English is your second language by requesting Exam Accommodations. It might not be needed for Associate exams but is helpful for Professional and Specialty ones.
  • AWS exams can be taken either at a testing center or online. I prefer to take them online as it provides a lot of flexibility. Just make sure you have a proper place to take the exam with no disturbance and nothing around you.
  • Also, if you are taking the AWS Online exam for the first time try to join at least 30 minutes before the actual time as I have had issues with both PSI and Pearson with long wait times.

AWS Certified SysOps Administrator – Associate (SOA-C02) Exam Resources

AWS Certified SysOps Administrator – Associate (SOA-C02) Exam Topics

SOA-C02 mainly focuses on SysOps and DevOps tools in AWS and the ability to deploy, manage, operate, and automate workloads on AWS.

Management & Governance Tools

  • CloudFormation
    • provides an easy way to create and manage a collection of related AWS resources, provision and update them in an orderly and predictable fashion.
    • CloudFormation Concepts cover
      • Templates act as a blueprint for provisioning of AWS resources
      • Stacks are collections of resources managed as a single unit, which can be created, updated, and deleted by creating, updating, and deleting stacks.
      • Change Sets present a summary or preview of the proposed changes that CloudFormation will make when a stack is updated.
      • Nested stacks are stacks created as part of other stacks.
    • CloudFormation template anatomy consists of resources, parameters, outputs, and mappings.
    • CloudFormation supports multiple features
      • Drift detection enables you to detect whether a stack’s actual configuration differs, or has drifted, from its expected configuration.
      • Termination protection helps prevent a stack from being accidentally deleted.
      • Stack policy can prevent stack resources from being unintentionally updated or deleted during a stack update.
      • StackSets help create, update, or delete stacks across multiple accounts and Regions with a single operation.
      • Helper scripts with creation policies can help wait for the completion of events before provisioning or marking resources complete.
      • DependsOn attribute can specify the resource creation order and ensure the creation of a specific resource follows another.
      • Update policy supports rolling and replacing updates with AutoScaling.
      • Deletion policies to help retain or backup resources during stack deletion.
      • Custom resources can be configured for use cases not natively supported, e.g., to retrieve AMI IDs or interact with external services
    • Understand CloudFormation Best Practices esp. Nested Stacks and logical grouping
  • Elastic Beanstalk helps to quickly deploy and manage applications in the AWS Cloud without having to worry about the infrastructure that runs those applications. 
  • OpsWorks is a configuration management service that helps to configure and operate applications in a cloud enterprise by using Chef.
  • Understand CloudFormation vs Elastic Beanstalk vs OpsWorks
  • AWS Organizations
    • Difference between Service Control Policies and IAM Policies
    • SCP provides the maximum permission that a user can have; however, the user still needs to be explicitly granted permissions through IAM policies.
    • Consolidated billing enables consolidating payments from multiple AWS accounts and includes combined usage and volume discounts including sharing of Reserved Instances across accounts.
  • Systems Manager is the operations hub and provides various services like parameter store, patch manager
    • Parameter Store provides secure, scalable, centralized, hierarchical storage for configuration data and secret management. Does not support secrets rotation. Use Secrets Manager instead
    • Session Manager provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys.
    • Patch Manager helps automate the process of patching managed instances with both security-related and other types of updates.
  • CloudWatch
    • collects monitoring and operational data in the form of logs, metrics, and events, and visualizes it.
      • Default EC2 metrics track disk, network, CPU, and status checks but do not capture metrics like memory, disk swap, disk storage, etc.
      • CloudWatch unified agent can be used to gather custom metrics like memory, disk swap, disk storage, etc.
      • CloudWatch Alarm actions can be configured to perform actions based on various metrics for e.g. CPU below 5%
      • CloudWatch alarm can monitor StatusCheckFailed_System status on an EC2 instance and automatically recover the instance if it becomes impaired due to an underlying hardware failure or a problem that requires AWS involvement to repair.
      • Know ELB monitoring
        • Load Balancer metrics SurgeQueueLength and SpilloverCount
        • HealthyHostCount, UnHealthyHostCount determines the number of healthy and unhealthy instances registered with the load balancer.
        • Reasons for 4XX and 5XX errors
    • CloudWatch logs can be used to monitor, store, and access log files from EC2 instances, CloudTrail, Route 53, and other sources. You can create metric filters over the logs.
    • CloudWatch Subscription Filters can be used to send logs to Kinesis Data Streams, Lambda, or Kinesis Data Firehose.
    • EventBridge (CloudWatch Events) is a serverless event bus service that makes it easy to connect applications with data from a variety of sources.
    • EventBridge or CloudWatch events can be used as a trigger for periodically scheduled events.
    • CloudWatch unified agent helps collect metrics and logs from EC2 instances and on-premises servers and push them to CloudWatch.
  • CloudTrail for audit and governance
    • With Organizations, the trail can be configured to log CloudTrail from all accounts to a central account.
    • CloudTrail log file integrity validation can be used to check whether a log file was modified, deleted, or unchanged after being delivered.
  • AWS Config is a fully managed service that provides AWS resource inventory, configuration history, and configuration change notifications to enable security, compliance, and governance.
    • supports managed as well as custom rules that can be evaluated on periodic basis or as the event occurs for compliance and trigger automatic remediation
    • Conformance pack is a collection of AWS Config rules and remediation actions that can be easily deployed as a single entity in an account and a Region or across an organization in AWS Organizations.
  • Control Tower
    • to setup, govern, and secure a multi-account environment
    • strongly recommended guardrails cover EBS encryption
  • Service Catalog
    • allows organizations to create and manage catalogues of IT services that are approved for use on AWS with minimal permissions.
  • Trusted Advisor provides recommendations that help follow AWS best practices covering security, performance, cost, fault tolerance & service limits.
  • AWS Health Dashboard is the single place to learn about the availability and operations of AWS services.
  • Cost allocation tags can be used to differentiate resource costs and analyzed using Cost Explorer or on a Cost Allocation report.
  • Understand how to setup Billing Alerts using CloudWatch

Networking & Content Delivery

  • VPC – Virtual Private Cloud is a virtual network in AWS
    • Understand Public Subnet (has access to the Internet) vs Private Subnet (no access to the Internet)
    • Route table defines rules, termed as routes, which determine where network traffic from the subnet would be routed
    • Internet Gateway enables access to the internet
    • Bastion host – allows access to instances in the private subnet without directly exposing them to the internet.
    • NAT helps route traffic from private subnets to the internet
    • NAT instance vs NAT Gateway
    • Virtual Private Gateway – Connectivity between on-premises and VPC
    • Egress-Only Internet Gateway – relevant to IPv6 only to allow egress traffic from private subnet to internet, without allowing ingress traffic
    • VPC Flow Logs enables you to capture information about the IP traffic going to and from network interfaces in the VPC and can help in monitoring the traffic or troubleshooting any connectivity issues
    • Security Groups vs NACLs esp. Security Groups are stateful and NACLs are stateless.
    • VPC Peering provides a connection between two VPCs that enables routing of traffic between them using private IP addresses.
    • VPC Endpoints enables the creation of a private connection between VPC to supported AWS services and VPC endpoint services powered by PrivateLink using its private IP address
    • Ability to debug networking issues, e.g. an EC2 instance that is not accessible or not able to communicate with other instances or the Internet.
  • Route 53 provides a scalable DNS system
    • supports the ALIAS record type, which helps map zone apex records to ELB, CloudFront, and S3 endpoints.
    • Understand Routing Policies and their use cases
      • Failover routing policy helps to configure active-passive failover.
      • Geolocation routing policy helps route traffic based on the location of the users.
      • Geoproximity routing policy helps route traffic based on the location of the resources and, optionally, shift traffic from resources in one location to resources in another.
      • Latency routing policy helps route traffic to the Region that provides the best latency (lowest round-trip time) when resources are in multiple AWS Regions.
      • Weighted routing policy helps route traffic to multiple resources in specified proportions.
    • Focus on Weighted, Latency routing policies
  • Understand ELB, ALB, and NLB and what features they provide like
    • Understand keys differences ELB vs ALB vs NLB
    • ALB provides content and path routing
    • NLB provides the ability to give static IPs to the load balancer esp. if there is a requirement to whitelist IPs.
    • LB access logs provide the source IP address
    • supports Sticky sessions to enable the load balancer to bind a user’s session to a specific target.
  • Understand CloudFront and use cases
    • CloudFront can be used with S3 to expose static data and website
  • Know VPN and Direct Connect to provide AWS to on-premises connectivity. Not covered in detail.

Compute

  • Understand EC2 in depth
    • Understand EC2 instance types and use cases.
    • Understand EC2 purchase options esp. spot instances and improved reserved instances options.
    • Understand EC2 Metadata & Userdata.
    • Understand EC2 Security. 
      • Use IAM Role work with EC2 instances to access services
      • IAM Role can now be attached to stopped and runnings instances
    • AMIs provide the information required to launch an instance, which is a virtual server in the cloud.
      • AMIs are regional and can be shared publicly or with other accounts
      • Only AMIs with unencrypted volumes or encrypted with a CMK (customer-managed keys) can be shared.
      • The best practice is to use prebaked or golden images to reduce startup time for the applications. Leverage EC2 Image Builder.
    • Troubleshooting EC2 issues
      • RequestLimitExceeded
      • InstanceLimitExceeded – Concurrent running instance limit, default is 20, has been reached in a region. Request increase in limits.
      • InsufficientInstanceCapacity – AWS does not currently have enough available capacity to service the request. Change AZ or Instance Type.
    • Monitoring EC2 instances
      • System status checks failure – Stop and Start
      • Instance status checks failure – Reboot
    • EC2 supports Instance Recovery where the recovered instance is identical to the original instance, including the instance ID, private IP addresses, Elastic IP addresses, and all instance metadata.
    • EC2 Image Builder can be used to pre-baked images with software to speed up booting and launching time.
  • Understand Placement groups
    • Cluster Placement Group provides low latency, High-Performance Computing via the logical grouping of instances within a single AZ
    • Spread Placement Group is a group of instances that are each placed on distinct underlying hardware, i.e. each instance on a distinct rack, across AZs
    • Partition Placement Group is a group of instances spread across partitions, i.e. groups of instances spread across racks, across AZs
  • Understand Auto Scaling
    • Auto Scaling can be configured with multiple AZs for high availability to launch instances across multiple AZs
    • Auto Scaling attempts to distribute instances evenly between the AZs that are enabled for the Auto Scaling group
    • Auto Scaling supports
      • Dynamic scaling, which allows you to scale automatically in response to the changing demand
      • Schedule scaling, which allows you to scale the application in response to predictable load changes
      • Manual scaling can be performed by changing the desired capacity or adding and removing instances
    • Auto Scaling life cycle hooks can be used to perform activities before instance termination.
  • Understand Lambda and its use cases
    • Lambda functions can be hosted in VPC with internet access controlled by a NAT instance.
    • RDS Proxy acts as an intermediary between the application and an RDS database. RDS Proxy establishes and manages the necessary connection pools to the database so that the application creates fewer database connections.

Storage

  • S3 provides an object storage service
    • Understand storage classes with lifecycle policies
    • S3 data protection provides encryption at rest and encryption in transit
      • S3 default encryption can be used to encrypt the data with S3 bucket policies to prevent or reject unencrypted object uploads.
    • Multi-part handling for fault-tolerant and performant large file uploads
    • static website hosting, CORS
    • S3 Versioning can help recover from accidental deletes and overwrites.
    • Pre-Signed URLs for both upload and download
    • S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between the client and an S3 bucket using globally distributed edge locations in CloudFront.
  • Understand Glacier as archival storage. Glacier does not provide immediate access to the data even with expedited retrievals.
  • Understand EBS storage option
  • Storage Gateway allows storage of data in the AWS cloud for scalable and cost-effective storage while maintaining data security.
    • Gateway-cached volumes store data in S3 and retain a copy of recently read data locally for low-latency access to the frequently accessed data
    • Gateway-stored volumes maintain the entire data set locally to provide low-latency access
  • EFS is a cost-optimized, serverless, scalable, and fully managed file storage for use with AWS Cloud and on-premises resources.
    • supports data at rest encryption only during creation. After creation, the file system cannot be encrypted and the data must be copied over to a new encrypted file system.
    • supports General purpose and Max I/O performance mode.
    • If hitting the PercentIOLimit, move to the Max I/O performance mode.
  • FSx makes it easy and cost-effective to launch, run, and scale feature-rich, high-performance file systems in the cloud
  • FSx for Windows supports SMB protocol and a Multi-AZ file system to provide high availability across multiple AZs.
  • AWS Backup can be used to automate backup for EC2 instances and EFS file systems
  • Data Lifecycle Manager to automate the creation, retention, and deletion of EBS snapshots and EBS-backed AMIs.
  • AWS DataSync automates moving data between on-premises storage and S3 or Elastic File System (EFS).

Databases

  • RDS provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks.
    • Understand RDS Multi-AZ vs Read Replicas and use cases
    • Multi-AZ deployment provides high availability, durability, and failover support
    • Read replicas enable increased scalability and database availability in the case of an AZ failure.
    • Automated backups and database change logs enable point-in-time recovery of the database during the backup retention period, up to the last five minutes of database usage.
  • Aurora is a fully managed, MySQL- and PostgreSQL-compatible, relational database engine
    • Backtracking “rewinds” the DB cluster to the specified time and performs in-place restore and does not create a new instance.
    • Automated Backups that help restore the DB as a new instance
  • Know ElastiCache use cases, mainly for caching performance
    • Understand ElastiCache Redis vs Memcached
    • Redis provides Multi-AZ support helps provide high availability across AZs and Online resharding to dynamically scale.
    • ElastiCache can be used as a caching layer for RDS.
  • Know DynamoDB. Not covered in detail

Security

  • IAM provides Identity and Access Management services.
  • S3 Encryption supports data at rest and in transit encryption
    • Understand S3 with SSE, SSE-C, SSE-KMS
    • S3 default encryption can help encrypt objects, however, it does not encrypt existing objects before the setting was enabled. You can use S3 Inventory to list the objects and S3 Batch to encrypt them.
  • Understand KMS for key management and envelope encryption
    • KMS with imported customer key material does not support rotation and has to be done manually.
  • AWS WAF – Web Application Firewall helps protect the applications against common web exploits like XSS or SQL Injection and bots that may affect availability, compromise security, or consume excessive resources
  • AWS GuardDuty is a threat detection service that continuously monitors the AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation.
  • AWS Secrets Manager can help securely expose credentials as well as rotate them.
    • Secrets Manager integrates with Lambda and supports credentials rotation
  • AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS
  • Amazon Inspector
    • is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS.
    • automatically assesses applications for exposure, vulnerabilities, and deviations from best practices.
  • AWS Certificate Manager (ACM) handles the complexity of creating, storing, and renewing public and private SSL/TLS X.509 certificates and keys that protect the AWS websites and applications.
  • Know AWS Artifact as on-demand access to compliance reports
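
For the point about S3 default encryption not covering pre-existing objects, here is a minimal boto3 sketch of the in-place re-encryption copy; the bucket name is hypothetical, and at scale S3 Inventory plus S3 Batch Operations would replace the loop:

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"  # hypothetical

for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        # Copying an object onto itself is allowed when something changes -
        # here the server-side encryption setting.
        s3.copy_object(
            Bucket=bucket,
            Key=obj["Key"],
            CopySource={"Bucket": bucket, "Key": obj["Key"]},
            ServerSideEncryption="aws:kms",
        )
```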

Analytics

  • Amazon Athena can be used to query data in S3 in place using standard SQL, without duplicating the data (see the sketch after this list)
  • OpenSearch (Elasticsearch) service is a distributed search and analytics engine built on Apache Lucene.
    • A typical OpenSearch production setup is 3 AZs, 3 dedicated master nodes, and 6 data nodes (two per AZ) with replica shards distributed across the AZs.
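
A minimal boto3 sketch of the Athena flow, assuming a hypothetical weblogs database and an S3 output location for results:

```python
import boto3

athena = boto3.client("athena")

resp = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM access_logs GROUP BY status",
    QueryExecutionContext={"Database": "weblogs"},                # hypothetical
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print(resp["QueryExecutionId"])  # poll get_query_execution() until it completes
```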

Integration Tools

  • Understand SQS as a message queuing service and SNS as pub/sub notification service
    • Focus on SQS as a decoupling service
    • Understand SQS FIFO, make sure you know the differences between standard and FIFO
  • Understand CloudWatch integration with SNS for notifications (see the sketch after this list)
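
A minimal boto3 sketch of the CloudWatch-to-SNS pattern, with a hypothetical instance ID and topic ARN; the same alarm shape is exercised in the labs below:

```python
import boto3

cw = boto3.client("cloudwatch")

cw.put_metric_alarm(
    AlarmName="high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,        # 2 consecutive 5-minute periods
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # hypothetical
)
```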

Practice Labs

  • Create IAM users, IAM roles with specific limited policies.
  • Create a private S3 bucket
    • enable versioning
    • enable default encryption
    • enable lifecycle policies to transition and expire the objects (see the sketch after this list)
    • enable same region replication
  • Create a public S3 bucket with static website hosting
  • Set up a VPC with public and private subnets with Routes, SGs, NACLs.
  • Set up a VPC with public and private subnets and enable communication from private subnets to the Internet using NAT gateway
  • Create EC2 instance, create a Snapshot and restore it as a new instance.
  • Set up Security Groups for ALB and Target Groups, and create ALB, Launch Template, Auto Scaling Group, and target groups with sample applications. Test the flow.
  • Create Multi-AZ RDS instance and instance force failover.
  • Set up SNS topic. Use CloudWatch Metrics to create a CloudWatch alarm on specific thresholds and send notifications to the SNS topic.
  • Set up SNS topic. Use CloudWatch Logs to create a CloudWatch alarm on log patterns and send notifications to the SNS topic.
  • Update a CloudFormation template and re-run the stack and check the impact.
  • Use AWS Data Lifecycle Manager to define snapshot lifecycle.
  • Use AWS Backup to define EFS backup with hourly and daily backup rules.
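
For the lifecycle-policy lab, a minimal boto3 sketch; the bucket name, day counts, and storage class are illustrative choices:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-private-bucket",  # hypothetical
    LifecycleConfiguration={"Rules": [{
        "ID": "transition-then-expire",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},                     # all objects
        "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
        "Expiration": {"Days": 365},
    }]},
)
```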

AWS Certified SysOps Administrator – Associate (SOA-C02) Exam Day

  • Make sure you are relaxed and get a good night’s sleep. The exam is not tough if you are well-prepared.
  • If you are taking the AWS Online exam
    • Try to join at least 30 minutes before the actual time as I have had issues with both PSI and Pearson with long wait times.
    • The online verification process does take some time and usually, there are glitches.
    • Remember, you would not be allowed to take the exam if you are late by more than 30 minutes.
    • Make sure your desk is clear with no wristwatches or external monitors, keep your phone away, and ensure nobody can enter the room.

Finally, All the Best 🙂

 

AWS Certified DevOps Engineer – Professional (DOP-C02) Exam Learning Path

AWS DevOps - Professional DOP-C02 Certificate

AWS Certified DevOps Engineer – Professional (DOP-C02) Exam Learning Path

  • AWS Certified DevOps Engineer – Professional (DOP-C02) exam is the upgraded version of the DevOps Engineer – Professional (DOP-C01) exam and was released in March 2023.
  • I recently attempted the latest pattern and DOP-C02 is quite similar to DOP-C01 with the inclusion of new services and features.

AWS Certified DevOps Engineer – Professional (DOP-C02) Exam Content

  • AWS Certified DevOps Engineer – Professional (DOP-C02) exam is intended for individuals who perform a DevOps engineer role and focuses on provisioning, operating, and managing distributed systems and services on AWS.
  • DOP-C02 basically validates
    • Implement and manage continuous delivery systems and methodologies on AWS
    • Implement and automate security controls, governance processes, and compliance validation
    • Define and deploy monitoring, metrics, and logging systems on AWS
    • Implement systems that are highly available, scalable, and self-healing on the AWS platform
    • Design, manage, and maintain tools to automate operational processes

Refer to AWS Certified DevOps Engineer – Professional Exam Guide

AWS DevOps - Professional DOP-C02 Exam Domains

AWS Certified DevOps Engineer – Professional (DOP-C02) Exam Resources

AWS Certified DevOps Engineer – Professional (DOP-C02) Exam Summary

  • Professional exams are tough, lengthy, and tiresome. Most of the questions and answer options have a lot of prose and a lot of reading that needs to be done, so be sure you are prepared and manage your time well.
  • Each solution involves multiple AWS services.
  • DOP-C02 exam has 75 questions to be solved in 170 minutes. Only 65 affect your score; the remaining 10 are unscored and used to evaluate questions for future use.
  • DOP-C02 exam includes two types of questions, multiple-choice and multiple-response.
  • DOP-C02 has a scaled score between 100 and 1,000. The scaled score needed to pass the exam is 750.
  • Each question mainly touches multiple AWS services.
  • Professional exams currently cost $300 + tax.
  • You can get an additional 30 minutes if English is your second language by requesting Exam Accommodations. It might not be needed for Associate exams but is helpful for Professional and Specialty ones.
  • As always, mark the questions for review and move on and come back to them after you are done with all.
  • As always, having a rough architecture or mental picture of the setup helps focus on the areas that you need to improve. Trust me, you will be able to eliminate 2 answers for sure and then need to focus only on the other two. Read those 2 answers closely to check where they differ; that would help you reach the right answer or at least have a 50% chance of getting it right.
  • AWS exams can be taken either in person at a test center or online. I prefer to take them online as it provides a lot of flexibility. Just make sure you have a proper place to take the exam with no disturbance and nothing around you.
  • Also, if you are taking the AWS Online exam for the first time try to join at least 30 minutes before the actual time as I have had issues with both PSI and Pearson with long wait times.

AWS Certified DevOps Engineer – Professional (DOP-C02) Exam Topics

  • AWS Certified DevOps Engineer – Professional exam covers a lot of concepts and services related to Automation, Deployments, Disaster Recovery, HA, Monitoring, Logging, and Troubleshooting. It also covers security- and compliance-related topics.

Management & Governance tools

  • CloudFormation
    • provides an easy way to create and manage a collection of related AWS resources, provision and update them in an orderly and predictable fashion.
    • Make sure you have gone through and executed a CloudFormation template to provision AWS resources.
    • CloudFormation Concepts cover
      • Templates act as a blueprint for provisioning of AWS resources
      • Stacks are collections of related resources managed as a single unit; they are created, updated, and deleted by creating, updating, and deleting stacks.
      • Change Sets present a summary or preview of the proposed changes that CloudFormation will make when a stack is updated.
      • Nested stacks are stacks created as part of other stacks.
    • CloudFormation template anatomy consists of sections like resources (the only required one), parameters, mappings, conditions, and outputs (a minimal template sketch follows at the end of this section).
    • CloudFormation supports multiple features
      • Drift detection enables you to detect whether a stack’s actual configuration differs, or has drifted, from its expected configuration.
      • Termination protection helps prevent a stack from being accidentally deleted.
      • Stack policy can prevent stack resources from being unintentionally updated or deleted during a stack update.
      • StackSets help create, update, or delete stacks across multiple accounts and Regions with a single operation.
      • Creation policies with helper scripts (e.g., cfn-signal) can make CloudFormation wait for completion signals before marking resources as complete.
      • Update policy supports rolling and replacing updates with AutoScaling.
      • Deletion policies to help retain or backup resources during stack deletion.
      • Custom resources can be configured for use cases not natively supported, e.g., to retrieve AMI IDs or interact with external services
    • Understand CloudFormation Best Practices esp. Nested Stacks and logical grouping
  • Elastic Beanstalk
    • helps to quickly deploy and manage applications in the AWS Cloud without having to worry about the infrastructure that runs those applications. 
    • Understand Elastic Beanstalk overall – Applications, Versions, and Environments
    • Deployment strategies with their advantages and disadvantages
  • OpsWorks
    • is a configuration management service that helps to configure and operate applications in a cloud enterprise by using Chef.
    • Understand OpsWorks overall – stacks, layers, recipes
    • Understand OpsWorks Lifecycle events esp. the Configure event and how it can be used.
    • Understand OpsWorks Deployment Strategies
    • Know OpsWorks auto-healing and how to be notified for it.
  • Understand CloudFormation vs Elastic Beanstalk vs OpsWorks
  • AWS Organizations
  • Systems Manager
    • AWS Systems Manager and its various services like parameter store, patch manager
    • Parameter Store provides secure, scalable, centralized, hierarchical storage for configuration data and secret management. It does not support automatic secrets rotation; use Secrets Manager instead.
    • Session Manager provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys.
    • Patch Manager helps automate the process of patching managed instances with both security-related and other types of updates.
  • CloudWatch
    • supports monitoring, logging, and alerting.
    • CloudWatch logs can be used to monitor, store, and access log files from EC2 instances, CloudTrail, Route 53, and other sources. You can create metric filters over the logs.
    • CloudWatch Subscription Filters can be used to send logs to Kinesis Data Streams, Lambda, or Kinesis Data Firehose.
    • EventBridge (CloudWatch Events) is a serverless event bus service that makes it easy to connect applications with data from a variety of sources.
    • EventBridge or CloudWatch events can be used as a trigger for periodically scheduled events.
    • CloudWatch unified agent helps collect metrics and logs from EC2 instances and on-premises servers and push them to CloudWatch.
    • CloudWatch Synthetics helps create canaries, configurable scripts that run on a schedule, to monitor your endpoints and APIs
  • CloudTrail
    • for audit and governance
    • With Organizations, the trail can be configured to log CloudTrail from all accounts to a central account.
  • Config is a fully managed service that provides AWS resource inventory, configuration history, and configuration change notifications to enable security, compliance, and governance.
    • supports managed as well as custom rules that can be evaluated on a periodic basis or as events occur for compliance, and can trigger automatic remediation
    • Conformance pack is a collection of AWS Config rules and remediation actions that can be easily deployed as a single entity in an account and a Region or across an organization in AWS Organizations.
  • Control Tower
    • to setup, govern, and secure a multi-account environment
    • strongly recommended guardrails cover EBS encryption
  • Service Catalog
    • allows organizations to create and manage catalogues of IT services that are approved for use on AWS with minimal permissions.
  • Trusted Advisor
    • helps with cost optimization and service limits in addition to security, performance, and fault tolerance.
  • AWS Health Dashboard is the single place to learn about the availability and operations of AWS services.
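
To tie the CloudFormation concepts together, here is a minimal template sketch (as a Python dict deployed with boto3) showing parameters, resources, outputs, a deletion policy, and termination protection; all names are hypothetical:

```python
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {"BucketName": {"Type": "String"}},
    "Resources": {
        "DataBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": {"Ref": "BucketName"}},
            "DeletionPolicy": "Retain",  # keep the bucket when the stack is deleted
        }
    },
    "Outputs": {"BucketArn": {"Value": {"Fn::GetAtt": ["DataBucket", "Arn"]}}},
}

cfn = boto3.client("cloudformation")
cfn.create_stack(
    StackName="demo-stack",  # hypothetical
    TemplateBody=json.dumps(template),
    Parameters=[{"ParameterKey": "BucketName", "ParameterValue": "my-demo-bucket"}],
    EnableTerminationProtection=True,  # guards against accidental deletion
)
```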

Developer Tools

  • Know AWS Developer tools
  • CodeCommit is a secure, scalable, fully-managed source control service that helps to host secure and highly scalable private Git repositories.
    • can help handle deployments of code to different environments using the same repository and different branches.
  • CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy.
  • CodeDeploy helps automate code deployments to any instance, including EC2 instances and instances running on-premises, Lambda, and ECS.
  • CodePipeline is a fully managed continuous delivery service that helps automate the release pipelines for fast and reliable application and infrastructure updates.
    • CodePipeline pipeline structure (Hint: run actions in parallel by giving them the same runOrder; see the sketch after this list)
    • Understand how to configure notifications on events and failures
    • CodePipeline supports Manual Approval
  • CodeArtifact is a fully managed artifact repository service that makes it easy for organizations of any size to securely store, publish, and share software packages used in their software development process.
  • CodeGuru provides intelligent recommendations to improve code quality and identify an application’s most expensive lines of code. Reviewer helps improve code quality and Profiler helps optimize performance for applications
  • EC2 Image Builder helps to automate the creation, management, and deployment of customized, secure, and up-to-date server images that are pre-installed and pre-configured with software and settings to meet specific IT standards.
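
For the runOrder hint above, here is a sketch of a CodePipeline stage definition in which two CodeBuild actions share runOrder=1 and therefore run in parallel; the project and artifact names are hypothetical, and the fragment would sit inside the full pipeline structure passed to create_pipeline:

```python
build_stage = {
    "name": "Build",
    "actions": [
        {
            "name": "UnitTests",
            "actionTypeId": {"category": "Build", "owner": "AWS",
                             "provider": "CodeBuild", "version": "1"},
            "configuration": {"ProjectName": "unit-tests"},   # hypothetical
            "inputArtifacts": [{"name": "SourceOutput"}],
            "runOrder": 1,  # same runOrder => runs in parallel
        },
        {
            "name": "Package",
            "actionTypeId": {"category": "Build", "owner": "AWS",
                             "provider": "CodeBuild", "version": "1"},
            "configuration": {"ProjectName": "package"},      # hypothetical
            "inputArtifacts": [{"name": "SourceOutput"}],
            "runOrder": 1,  # runs alongside UnitTests
        },
    ],
}
```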

Disaster Recovery

  • Disaster recovery is mainly covered as a part of resilient cloud solutions.
  • The Disaster Recovery whitepaper, although outdated, is worth a read; make sure you understand the differences and implementation of each type, esp. pilot light and warm standby w.r.t. RTO and RPO.
  • Compute
    • Make components available in an alternate region,
    • Backup and Restore using either snapshots or AMIs that can be restored.
    • Use minimal low-scale capacity running that can be scaled out once the failover happens.
    • Use fully running compute in an active-active configuration with health checks.
    • CloudFormation to create, and scale infra as needed
  • Storage
    • S3 and EFS support cross-region replication
    • DynamoDB supports Global tables for multi-master, active-active inter-region storage needs.
    • Aurora Global Database provides cross-region read replicas and failover capabilities.
    • RDS supports cross-region read replicas which can be promoted to primary in case of a disaster; the failover can be automated using Route 53, CloudWatch, and Lambda functions.
  • Network
    • Route 53 failover routing with health checks to fail over across regions (see the sketch after this list).
    • CloudFront Origin Groups support primary and secondary endpoints with failover.
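
A minimal boto3 sketch of the Route 53 failover record mentioned above; the zone ID, health check ID, and IP are placeholders, and a matching record with Failover="SECONDARY" would point at the standby region:

```python
import boto3

r53 = boto3.client("route53")

r53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",  # hypothetical
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": "primary",
            "Failover": "PRIMARY",          # traffic goes here while healthy
            "TTL": 60,
            "ResourceRecords": [{"Value": "203.0.113.10"}],
            "HealthCheckId": "abcd1234-example",  # drives the failover
        },
    }]},
)
```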

Networking & Content Delivery

  • Networking is covered very lightly.
  • VPC – Virtual Private Cloud
    • Security Groups, NACLs
      • NACLs are stateless and need to open ephemeral ports for response traffic.
    • VPC Gateway Endpoints provide access to S3 and DynamoDB (see the endpoint sketch after this list)
    • VPC Interface Endpoints or PrivateLink provide access to a variety of services like SQS, Kinesis, or Private APIs exposed through NLB.
    • VPC Peering to enable communication between VPCs within the same or different regions.
    • VPC Peering does not support overlapping CIDRs, while PrivateLink works even with overlapping CIDRs as only the endpoint is exposed.
    • VPC Flow Logs to track network traffic and can be published to CloudWatch Logs, S3, or Kinesis Data Firehose.
    • NAT Gateway provides managed NAT service that provides better availability, higher bandwidth, and requires less administrative effort.
  • Route 53
    • Routing Policies
      • focus on Weighted, Latency, and failover routing policies
      • failover routing provides active-passive configuration for disaster recovery while the others are active-active configurations.
  • CloudFront
    • fully managed, fast CDN service that speeds up the distribution of static, dynamic web or streaming content to end-users.
  • Load Balancer – ELB, ALB and NLB
    • ELB with Auto Scaling to provide scalable and highly available applications
    • Understand ALB vs NLB and their use cases.
    • Access logs need to be enabled explicitly and can be delivered only to S3.
  • Direct Connect & VPN
    • provide on-premises to AWS connectivity
    • Understand Direct Connect vs VPN
    • VPN can provide a cost-effective, quick failover for Direct Connect.
    • VPN over Direct Connect provides a secure dedicated connection and requires a public virtual interface.
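
A minimal boto3 sketch contrasting the two endpoint types discussed above; all resource IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint: S3/DynamoDB only, wired into route tables, no hourly charge
ec2.create_vpc_endpoint(
    VpcId="vpc-0123",
    VpcEndpointType="Gateway",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123"],
)

# Interface endpoint (PrivateLink): an ENI in the subnet, for most other services
ec2.create_vpc_endpoint(
    VpcId="vpc-0123",
    VpcEndpointType="Interface",
    ServiceName="com.amazonaws.us-east-1.sqs",
    SubnetIds=["subnet-0123"],
    SecurityGroupIds=["sg-0123"],
    PrivateDnsEnabled=True,
)
```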

Security, Identity & Compliance

  • AWS Identity and Access Management
  • AWS WAF
    • protects from common attack techniques like SQL injection and XSS; conditions can be based on IP addresses, HTTP headers, HTTP body, and URI strings.
    • integrates with CloudFront, ALB, and API Gateway.
  • AWS KMS – Key Management Service
    • managed encryption service that allows the creation and control of encryption keys to enable data encryption (an envelope encryption sketch follows this list).
  • Secrets Manager
    • helps protect secrets needed to access applications, services, and IT resources.
  • AWS GuardDuty
    • is a threat detection service that continuously monitors the AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation.
  • AWS Security Hub is a cloud security posture management service that performs security best practice checks, aggregates alerts and enables automated remediation.
  • Firewall Manager helps centrally configure and manage firewall rules across the accounts and applications in AWS Organizations which includes a variety of protections, including WAF, Shield Advanced, VPC security groups, Network Firewall, and Route 53 Resolver DNS Firewall.
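
A minimal envelope-encryption sketch with KMS: KMS issues a data key, the payload is encrypted locally, and only the encrypted copy of the data key is persisted. The key alias is hypothetical, and the local cipher uses the third-party cryptography package purely for illustration:

```python
import base64
import boto3
from cryptography.fernet import Fernet

kms = boto3.client("kms")

# 1. Get a data key: plaintext for local use + an encrypted blob to store
key = kms.generate_data_key(KeyId="alias/my-app-key", KeySpec="AES_256")
fernet = Fernet(base64.urlsafe_b64encode(key["Plaintext"]))
ciphertext = fernet.encrypt(b"secret payload")
stored_blob = key["CiphertextBlob"]  # persist with the data; discard the plaintext key

# 2. Later: KMS decrypts the data key, then the data is decrypted locally
plain_key = kms.decrypt(CiphertextBlob=stored_blob)["Plaintext"]
print(Fernet(base64.urlsafe_b64encode(plain_key)).decrypt(ciphertext))
```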

Storage

Database

Compute

  • EC2
  • Auto Scaling provides the ability to ensure a correct number of EC2 instances are always running to handle the load of the application
    • Auto Scaling Lifecycle events enable performing custom actions by pausing instances as an ASG launches or terminates them.
    • Blue/green deployments with Auto Scaling – With new launch configurations, new auto-scaling groups, or CloudFormation update policies.
  • Lambda
    • offers Serverless computing 
    • supports reserved concurrency limits to cap a function’s concurrency and reduce the impact on downstream resources
    • Lambda Aliases support canary deployments via weighted traffic shifting between two versions (see the sketch after this list)
    • Reserved Concurrency guarantees the maximum number of concurrent instances for the function
    • Provisioned Concurrency
      • provides greater control over the performance of serverless applications and helps keep functions initialized and hyper-ready to respond in double-digit milliseconds.
      • supports Application Auto Scaling.
  • Step Functions helps developers use AWS services to build distributed applications, automate processes, orchestrate microservices, and create data and machine learning (ML) pipelines.
  • ECS – Elastic Container Service
    • container management service that supports Docker containers
    • supports two launch types
      • EC2 and
      • Fargate which provides the serverless capability
  • ECR provides a fully managed, secure, scalable, reliable container image registry service. It supports lifecycle policies for images.
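
A minimal boto3 sketch of the Lambda alias canary mentioned above, shifting roughly 10% of traffic to version 2; the function and alias names are hypothetical:

```python
import boto3

lam = boto3.client("lambda")

lam.create_alias(
    FunctionName="orders-api",   # hypothetical
    Name="live",
    FunctionVersion="1",         # ~90% of invocations
    RoutingConfig={"AdditionalVersionWeights": {"2": 0.10}},  # ~10% canary
)
# Once the canary looks healthy, call update_alias() with FunctionVersion="2"
# and an empty RoutingConfig to complete the shift.
```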

Integration Tools

  • SQS in terms of loose coupling and scaling.
    • Difference between SQS Standard and FIFO esp. with throughput and order
    • SQS supports dead-letter queues and a redrive policy, which specifies the source queue, the dead-letter queue, and the conditions under which SQS moves messages from the former to the latter if the consumer of the source queue fails to process a message a specified number of times (see the sketch after this list).
  • CloudWatch integration with SNS and Lambda for notifications.
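
A minimal boto3 sketch of the redrive policy described above; the queue URL and ARN are placeholders:

```python
import json
import boto3

sqs = boto3.client("sqs")

sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/orders",  # hypothetical
    Attributes={"RedrivePolicy": json.dumps({
        "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:orders-dlq",
        "maxReceiveCount": "5",   # move to the DLQ after 5 failed receives
    })},
)
```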

Analytics

Whitepapers

AWS Certified DevOps Engineer – Professional (DOP-C02) Exam Day

  • Make sure you are relaxed and get a good night’s sleep. The exam is not tough if you are well-prepared.
  • If you are taking the AWS Online exam
    • Try to join at least 30 minutes before the actual time as I have had issues with both PSI and Pearson with long wait times.
    • The online verification process does take some time and usually, there are glitches.
    • Remember, you would not be allowed to take the exam if you are late by more than 30 minutes.
    • Make sure your desk is clear with no wristwatches or external monitors, keep your phone away, and ensure nobody can enter the room.

Finally, All the Best 🙂


AWS Compute Optimizer

AWS Compute Optimizer

  • AWS Compute Optimizer helps analyze the configuration and utilization metrics of the AWS resources.
  • reports whether the resources are optimal, and generates optimization recommendations to reduce the cost and improve the performance of the workloads.
  • delivers intuitive and easily actionable resource recommendations to help quickly identify optimal AWS resources for the workloads without requiring specialized expertise or investing substantial time and money.
  • provides a global, cross-account view of all resources
  • analyzes the specifications and the utilization metrics of the resources from CloudWatch for the last 14 days.
  • provides graphs showing recent utilization metric history data, as well as projected utilization for recommendations, which can be used to evaluate which recommendation provides the best price-performance trade-off.
  • Analysis and visualization of the usage patterns can help decide when to move or resize the running resources, and still meet your performance and capacity requirements.
  • generates recommendations for resources like EC2 instances, Auto Scaling groups, EBS volumes, Lambda functions, and ECS services on Fargate (see the sketch below).
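
A minimal boto3 sketch of pulling EC2 rightsizing findings, assuming the account has already opted in to Compute Optimizer:

```python
import boto3

co = boto3.client("compute-optimizer")

for rec in co.get_ec2_instance_recommendations()["instanceRecommendations"]:
    print(rec["instanceArn"], rec["finding"])  # e.g. OVER_PROVISIONED
    for option in rec["recommendationOptions"]:
        print("  candidate instance type:", option["instanceType"])
```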

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. A company must assess the business’s EC2 instances and Elastic Block Store (EBS) volumes to determine how effectively the business is using resources. The company has not detected a pattern in how these EC2 instances are used by the apps that access the databases. Which option best fits these criteria in terms of cost-effectiveness?
    1. Use AWS Systems Manager OpsCenter.
    2. Use Amazon CloudWatch for detailed monitoring.
    3. Use AWS Compute Optimizer.
    4. Sign up for the AWS Enterprise Support plan. Turn on AWS Trusted Advisor.

References

AWS_Compute_Optimizer

AWS Auto Scaling Launch Template vs Launch Configuration

Auto Scaling Launch Template vs Launch Configuration


Launch Configuration

  • Launch configuration is an instance configuration template that an Auto Scaling Group uses to launch EC2 instances.
  • Launch configuration is similar to EC2 configuration and involves the selection of the Amazon Machine Image (AMI), block devices, key pair, instance type, security groups, user data, EC2 instance monitoring, instance profile, kernel, ramdisk, the instance tenancy, whether the instance has a public IP address, and is EBS-optimized.
  • Launch configuration can be associated with multiple ASGs
  • Launch configuration can’t be modified after creation; a new one needs to be created if any modification is required.
  • Basic or detailed monitoring for the instances in the ASG can be enabled when a launch configuration is created.
  • By default, basic monitoring is enabled when you create the launch configuration using the AWS Management Console, and detailed monitoring is enabled when you create the launch configuration using the AWS CLI or an API
  • AWS recommends using Launch Template instead.

Launch Template

  • A Launch Template is similar to a launch configuration, with additional features, and is recommended by AWS.
  • Launch Template allows multiple versions of a template to be defined (see the sketch after this list).
  • With versioning, a subset of the full set of parameters can be created and then reused to create other templates or template versions; e.g., a default template that defines common configuration parameters can be created, allowing the other parameters to be specified as part of another version of the same template.
  • Launch Template allows the selection of both Spot and On-Demand Instances or multiple instance types.
  • Launch templates support EC2 Dedicated Hosts. Dedicated Hosts are physical servers with EC2 instance capacity that are dedicated to your use.
  • Launch templates provide the following features
    • Support for multiple instance types and purchase options in a single ASG.
    • Launching Spot Instances with the capacity-optimized allocation strategy.
    • Support for launching instances into existing Capacity Reservations through an ASG.
    • Support for unlimited mode for burstable performance instances.
    • Support for Dedicated Hosts.
    • Combining CPU architectures such as Intel, AMD, and ARM (Graviton2)
    • Improved governance through IAM controls and versioning.
    • Automating instance deployment with Instance Refresh.
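
A minimal boto3 sketch of template versioning: a base template, then a new version that inherits from it and overrides a single field; the template name and AMI ID are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_launch_template(
    LaunchTemplateName="web-template",  # hypothetical
    LaunchTemplateData={"ImageId": "ami-0123456789abcdef0",
                        "InstanceType": "t3.micro"},
)

ec2.create_launch_template_version(
    LaunchTemplateName="web-template",
    SourceVersion="1",                                 # inherit everything from v1
    LaunchTemplateData={"InstanceType": "t3.small"},   # override just this field
)
```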

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. A company is launching a new workload. The workload will run on Amazon EC2 instances in an Amazon EC2 Auto Scaling group. The company needs to maintain different versions of the EC2 configurations. The company also needs the Auto Scaling group to automatically scale to maintain CPU utilization of 60%. How can a SysOps administrator meet these requirements?
    1. Configure the Auto Scaling group to use a launch configuration with a target tracking scaling policy.
    2. Configure the Auto Scaling group to use a launch configuration with a simple scaling policy.
    3. Configure the Auto Scaling group to use a launch template with a target tracking scaling policy.
    4. Configure the Auto Scaling group to use a launch template with a simple scaling policy.

References

AWS_Launch_Template

AWS Certified Developer – Associate DVA-C02 Exam Learning Path

AWS Certified Developer - Associate Certification

AWS Certified Developer – Associate DVA-C02 Exam Learning Path

  • AWS Certified Developer – Associate DVA-C02 exam is the latest AWS exam released on 27th February 2023 and has replaced the previous AWS Developer – Associate DVA-C01 certification exam.
  • I passed the AWS Developer – Associate DVA-C02 exam with a score of 835/1000.

AWS Certified Developer – Associate DVA-C02 Exam Content

  • DVA-C02 validates a candidate’s ability to demonstrate proficiency in developing, testing, deploying, and debugging AWS cloud-based applications.
  • DVA-C02 also validates a candidate’s ability to complete the following tasks:
    • Develop and optimize applications on AWS
    • Package and deploy by using continuous integration and continuous delivery (CI/CD) workflows
    • Secure application code and data
    • Identify and resolve application issues

Refer AWS Certified Developer – Associate Exam Blue Print

AWS Certified Developer - Associate Domains

AWS Certified Developer – Associate DVA-C02 Summary

  • DVA-C02 exam consists of 65 questions in 130 minutes, and the time is more than sufficient if you are well-prepared.
  • DVA-C02 exam includes two types of questions, multiple-choice and multiple-response.
  • DVA-C02 has a scaled score between 100 and 1,000. The scaled score needed to pass the exam is 720.
  • Associate exams currently cost $150 + tax.
  • You can get an additional 30 minutes if English is your second language by requesting Exam Accommodations. It might not be needed for Associate exams but is helpful for Professional and Specialty ones.
  • AWS exams can be taken either in person at a test center or online. I prefer to take them online as it provides a lot of flexibility. Just make sure you have a proper place to take the exam with no disturbance and nothing around you.
  • Also, if you are taking the AWS Online exam for the first time try to join at least 30 minutes before the actual time as I have had issues with both PSI and Pearson with long wait times.

AWS Certified Developer – Associate DVA-C02 Exam Resources

AWS Certified Developer – Associate DVA-C02 Exam Topics

  • AWS DVA-C02 exam concepts cover solutions that fall within AWS Well-Architected framework to cover scalable, highly available, cost-effective, performant, and resilient pillars.
  • AWS Certified Developer – Associate DVA-C02 exam covers a lot of the newer AWS services like Amplify and X-Ray, while focusing mainly on core services like Lambda, DynamoDB, Elastic Beanstalk, S3, and EC2.
  • AWS Certified Developer – Associate DVA-C02 exam is similar to DVA-C01 with more focus on the hands-on development and deployment concepts rather than just the architectural concepts.
  • If you had been preparing for the DVA-C01, DVA-C02 is pretty much similar except for the addition of some new services covering Amplify, X-Ray, etc.

Compute

  • Elastic Cloud Compute – EC2
  • Auto Scaling and ELB
    • Auto Scaling provides the ability to ensure a correct number of EC2 instances are always running to handle the load of the application
    • Elastic Load Balancer allows the incoming traffic to be distributed automatically across multiple healthy EC2 instances
  • Auto Scaling & ELB
    • work together to provide High Availability and Scalability.
    • Span both ELB and Auto Scaling across Multi-AZs to provide High Availability
    • Do not span across regions. Use Route 53 or Global Accelerator to route traffic across regions.
  • Lambda and serverless architecture, its features, and use cases.
    • Lambda integrated with API Gateway to provide a serverless, highly scalable, cost-effective architecture.
    • Lambda execution role needs the required permissions to integrate with other AWS services.
    • Environment variables keep functions configurable (see the handler sketch after this list).
    • Lambda Layers provide a convenient way to package libraries and other dependencies that you can use with your Lambda functions.
    • Function versions can be used to manage the deployment of the functions.
    • Function Alias supports creating aliases, which are mutable, for each function version.
    • provides /tmp ephemeral scratch storage.
    • Integrates with X-Ray for distributed tracing.
    • Use RDS proxy for connection pooling.
  • Elastic Container Service – ECS with its ability to deploy containers and microservices architecture.
    • ECS role for tasks can be provided through taskRoleArn
    • ALB provides dynamic port mapping to allow multiple same tasks on the same node.
  • Elastic Kubernetes Service – EKS
    • managed Kubernetes service to run Kubernetes in the AWS cloud and on-premises data centers
    • ideal for migration of an existing workload on Kubernetes
  • Elastic Beanstalk
    • at a high level, what it provides, and its ability to get an application running quickly.
    • Deployment types with their advantages and disadvantages
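
A minimal Lambda handler sketch tying the notes above together: configuration comes from environment variables and /tmp is the only writable scratch space; TABLE_NAME is a hypothetical variable set on the function:

```python
import json
import os

def handler(event, context):
    table = os.environ.get("TABLE_NAME", "dev-table")   # env-var configuration
    with open("/tmp/last_event.json", "w") as f:        # ephemeral scratch storage
        json.dump(event, f)
    return {"statusCode": 200, "body": f"stored event; table={table}"}
```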

Databases

Storage

  • Simple Storage Service – S3
    • S3 storage classes with lifecycle policies
      • Understand the differences between S3 Standard vs S3 Standard-IA vs S3 One Zone-IA in terms of cost, availability, and resilience
    • S3 Data Protection
      • S3 Client-side encryption encrypts data before storing it in S3
      • S3 encryption in transit can be enforced with S3 bucket policies using the aws:SecureTransport condition.
      • S3 encryption at rest can be enforced with S3 bucket policies using the x-amz-server-side-encryption condition.
    • S3 features including
      • S3 provides cost-effective static website hosting; however, it does not support HTTPS endpoints. Integrate with CloudFront for HTTPS, caching, performance, and low-latency access.
      • S3 versioning provides protection against accidental overwrites and deletions. Used with MFA Delete feature.
      • S3 Pre-Signed URLs for both upload and download provide time-limited access without needing AWS credentials (see the sketch after this list).
      • S3 CORS allows cross-domain calls
      • S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket.
      • S3 Event Notifications to trigger events on various S3 events like objects added or deleted. Supports SQS, SNS, and Lambda functions.
      • Integrates with Amazon Macie to detect PII data
      • Replication, both same-region and cross-region, requires versioning to be enabled on the source and destination buckets.
      • Integrates with Athena to analyze data in S3 using standard SQL.
  • Instance Store
    • is physically attached to the EC2 instance and provides the lowest latency and highest IOPS
  • Elastic Block Storage – EBS
    • EBS volume types and their use cases in terms of IOPS and throughput. SSD for IOPS and HDD for throughput
  • Elastic File System – EFS
    • simple, fully managed, scalable, serverless, and cost-optimized file storage for use with AWS Cloud and on-premises resources.
    • provides shared volume across multiple EC2 instances, while EBS can be attached to a single instance within the same AZ or EBS Multi-Attach can be attached to multiple instances within the same AZ
    • can be mounted with Lambda functions
    • supports the NFS protocol, and is compatible with Linux-based AMIs
    • supports cross-region replication and storage classes for cost management.
  • Difference between EBS vs S3 vs EFS
  • Difference between EBS vs Instance Store
  • I would recommend referring to the Storage Options whitepaper; although a bit dated, 90% of it still holds true.
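
A minimal boto3 sketch of a pre-signed download URL; the bucket and key are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "reports/2023.pdf"},  # hypothetical
    ExpiresIn=3600,  # link is valid for one hour
)
print(url)  # use "put_object" instead to pre-sign an upload
```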

Security & Identity

  • Identity Access Management – IAM
    • IAM role
      • provides permissions that are not associated with a particular user, group, or service and are intended to be assumable by anyone who needs it.
      • can be used for EC2 application access and Cross-account access
    • IAM Best Practices
  • Cognito
    • provides authentication, authorization, and user management for the web and mobile apps.
    • User pools are user directories that provide sign-up and sign-in options for the app users.
    • Identity pools enable you to grant the users access to other AWS services.
  • Key Management Services – KMS encryption service
    • for key management and envelope encryption
    • provides encryption at rest and does not handle encryption in transit.
  • Amazon Certificate Manager – ACM
    • helps easily provision, manage, and deploy public and private SSL/TLS certificates for use with AWS services and internally connected resources.
  • AWS Secrets Manager
    • helps protect secrets needed to access applications, services, and IT resources.
    • supports automatic rotations of secrets
  • Secrets Manager vs Systems Manager Parameter Store for secrets management (see the sketch after this list)
    • Secrets Manager supports automatic credentials rotation and is integrated with Lambda and other services like RDS, and DynamoDB.
    • Systems Manager Parameter Store provides free standard parameters and is cost-effective as compared to Secrets Manager.
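
A minimal boto3 sketch contrasting the two stores; the parameter and secret names are hypothetical:

```python
import boto3

# Parameter Store: free standard tier, SecureString for encrypted values
ssm = boto3.client("ssm")
host = ssm.get_parameter(Name="/app/db/host", WithDecryption=True)
print(host["Parameter"]["Value"])

# Secrets Manager: paid, with built-in rotation via Lambda
secrets = boto3.client("secretsmanager")
creds = secrets.get_secret_value(SecretId="app/db/credentials")
print(creds["SecretString"])
```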

Front-end Web and Mobile

  • API Gateway
    • is a fully managed service that makes it easy for developers to publish, maintain, monitor, and secure APIs at any scale.
    • Powerful, flexible authentication mechanisms, such as AWS IAM policies, Lambda authorizer functions, and Amazon Cognito user pools.
    • supports Canary release deployments for safely rolling out changes (see the sketch after this list).
    • define usage plans to meter and restrict third-party developer access, and configure throttling and quota limits on a per-API-key basis
    • integrates with AWS X-Ray for understanding and triaging performance latencies.
    • API Gateway CORS allows cross-domain calls
  • Amplify
    • is a complete solution that lets frontend web and mobile developers easily build, ship, and host full-stack applications on AWS, with the flexibility to leverage the breadth of AWS services as use cases evolve.
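
A minimal boto3 sketch of an API Gateway canary deployment, sending roughly 10% of stage traffic to the new deployment; the API ID and stage are hypothetical, and the fields follow the REST API create_deployment call as I understand it:

```python
import boto3

apigw = boto3.client("apigateway")

apigw.create_deployment(
    restApiId="a1b2c3",   # hypothetical
    stageName="prod",
    canarySettings={"percentTraffic": 10.0},  # promote or roll back afterwards
)
```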

Management Tools

  • CloudWatch
    • monitoring to provide operational transparency
    • is extendable with custom metrics
    • does not capture memory metrics by default; they can be collected using the CloudWatch agent.
  • EventBridge
    • is a serverless event bus service that makes it easy to connect applications with data from a variety of sources.
    • enables building loosely coupled and distributed event-driven architectures (see the scheduled-rule sketch after this list).
  • CloudTrail
    • helps enable governance, compliance, and operational and risk auditing of the AWS account.
    • helps to get a history of AWS API calls and related events for the AWS account.
  • CloudFormation
    • easy way to create and manage a collection of related AWS resources, and provision and update them in an orderly and predictable fashion.
    • Supports Serverless Application Model – SAM for the deployment of serverless applications including Lambda.
    • CloudFormation StackSets extends the functionality of stacks by enabling you to create, update, or delete stacks across multiple accounts and Regions with a single operation.
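
A minimal boto3 sketch of an EventBridge scheduled rule targeting a Lambda function; the rule name and ARN are hypothetical, and the function additionally needs a resource-based permission for events.amazonaws.com:

```python
import boto3

events = boto3.client("events")

events.put_rule(
    Name="five-minute-tick",               # hypothetical
    ScheduleExpression="rate(5 minutes)",
    State="ENABLED",
)
events.put_targets(
    Rule="five-minute-tick",
    Targets=[{"Id": "1",
              "Arn": "arn:aws:lambda:us-east-1:123456789012:function:tick"}],
)
```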

Integration Tools

  • Simple Queue Service
    • as message queuing service and SNS as pub/sub notification service
    • as a decoupling service and provide resiliency
    • SQS features like visibility timeout, and long polling vs short polling (see the sketch after this list)
    • provide scaling for the Auto Scaling group based on the SQS size.
    • SQS Standard vs SQS FIFO difference
      • FIFO provides exactly-once processing but with lower throughput
  • Simple Notification Service – SNS
    • is a web service that coordinates and manages the delivery or sending of messages to subscribing endpoints or clients
    • Fanout pattern can be used to push messages to multiple subscribers.
  • Know AWS Developer tools
    • CodeCommit is a secure, scalable, fully-managed source control service that helps to host secure and highly scalable private Git repositories.
    • CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy.
    • CodeDeploy helps automate code deployments to any instance, including EC2 instances and instances running on-premises.
    • CodePipeline is a fully managed continuous delivery service that helps automate the release pipelines for fast and reliable application and infrastructure updates.
    • CodeArtifact is a fully managed artifact repository service that makes it easy for organizations of any size to securely store, publish, and share software packages used in their software development process.
  • X-Ray
    • helps developers analyze and debug production, distributed applications, e.g., those built using a microservices or serverless (Lambda) architecture
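
A minimal boto3 sketch of long polling: WaitTimeSeconds up to 20 keeps the call open until messages arrive, reducing empty responses; the queue URL is hypothetical:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"  # hypothetical

resp = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,   # long poll; 0 means short poll
)
for msg in resp.get("Messages", []):
    print(msg["Body"])
    # delete within the visibility timeout once processed successfully
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```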

Analytics

  • Redshift as a data warehouse used for business intelligence
  • Kinesis
    • for real-time data capture and analytics.
    • Integrates with Lambda functions to perform transformations
  • AWS Glue
    • fully-managed, ETL service that automates the time-consuming steps of data preparation for analytics

Networking

  • Does not cover much networking or designing networks, but be sure you understand VPC, Subnets, Routes, Security Groups, etc.

AWS Cloud Computing Whitepapers

On the Exam Day

  • Make sure you are relaxed and get a good night’s sleep. The exam is not tough if you are well-prepared.
  • If you are taking the AWS Online exam
    • Try to join at least 30 minutes before the actual time as I have had issues with both PSI and Pearson with long wait times.
    • The online verification process does take some time and usually, there are glitches.
    • Remember, you would not be allowed to take the exam if you are late by more than 30 minutes.
    • Make sure your desk is clear with no wristwatches or external monitors, keep your phone away, and ensure nobody can enter the room.

Finally, All the Best 🙂

AWS Auto Scaling Policies

AWS Auto Scaling Policies

Maintain a Steady Count of Instances

  • Auto Scaling ensures a steady minimum (or desired if specified) count of Instances will always be running.
  • If an instance is found unhealthy, Auto Scaling will terminate the Instance and launch a new one.
  • ASG determines the health state of each instance by periodically checking the results of EC2 instance status checks.
  • If the ASG is associated with an Elastic Load Balancer and configured to use the Elastic Load Balancing health check, Auto Scaling determines the health status of the instances by checking the results of both the EC2 instance status checks and the ELB instance health checks.
  • Auto Scaling marks an instance unhealthy and launches a replacement if
    • the instance is in a state other than running,
    • the system status is impaired, or
    • Elastic Load Balancing reports the instance state as OutOfService.
  • After an instance has been marked unhealthy as a result of an EC2 or ELB health check, it is almost immediately scheduled for replacement. It never automatically recovers its health.
  • For an unhealthy instance, the health status can be set back to healthy manually, but an error occurs if the instance is already terminating.
  • Because the interval between marking an instance unhealthy and its actual termination is so small, attempting to set an instance’s health status back to healthy is probably useful only for a suspended group.
  • When the instance is terminated, any associated Elastic IP addresses are disassociated and are not automatically associated with the new instance.
  • Elastic IP addresses must be associated with the new instance manually.
  • Similarly, when the instance is terminated, its attached EBS volumes are detached and must be attached to the new instance manually.

Manual Scaling

  • Manual scaling can be performed by
    • Changing the desired capacity limit of the ASG
    • Attaching/Detaching instances to the ASG
  • Attaching/Detaching an EC2 instance can be done only if
    • Instance is in the running state.
    • AMI used to launch the instance must still exist.
    • Instance is not a member of another ASG.
    • Instance is in the same Availability Zone as the ASG.
    • If the ASG is associated with a load balancer, the instance and the load balancer must both be in the same VPC.
  • Auto Scaling increases the desired capacity of the group by the number of instances being attached. But if the number of instances being attached plus the desired capacity exceeds the maximum size, the request fails.
  • When detaching instances, an option to decrement the desired capacity of the ASG by the number of instances being detached is provided. If you choose not to decrement the capacity, Auto Scaling launches new instances to replace the ones detached (see the sketch after this list).
  • If an instance is detached from an ASG that is also registered with a load balancer, the instance is deregistered from the load balancer. If connection draining is enabled for the load balancer, Auto Scaling waits for the in-flight requests to complete.
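
A minimal boto3 sketch of a manual detach, with the decrement flag that decides whether a replacement is launched; the instance ID and group name are hypothetical:

```python
import boto3

asg = boto3.client("autoscaling")

asg.detach_instances(
    InstanceIds=["i-0123456789abcdef0"],
    AutoScalingGroupName="web-asg",           # hypothetical
    ShouldDecrementDesiredCapacity=True,      # False => ASG launches a replacement
)
# attach_instances(...) does the reverse and increases the desired capacity.
```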

Scheduled Scaling

  • Scaling based on a schedule allows you to scale the application in response to predictable load changes, e.g., the last day of the month or the last day of a financial year.
  • Scheduled scaling requires the configuration of scheduled actions, which tell Auto Scaling to perform a scaling action at a specified time in the future, along with the new minimum, maximum, and desired sizes the group should have (see the sketch after this list).
  • Auto Scaling guarantees the order of execution for scheduled actions within the same group, but not for scheduled actions across groups.
  • Multiple scheduled actions can be specified but must have unique time values; actions with overlapping times are rejected.
  • Cooldown periods are not supported.
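
A minimal boto3 sketch of a pair of scheduled actions (scale out Thursday 8 AM, scale in Friday 6 PM, matching practice question 1 below); the group name, sizes, and UTC cron expressions are illustrative:

```python
import boto3

asg = boto3.client("autoscaling")

asg.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",            # hypothetical
    ScheduledActionName="thursday-scale-out",
    Recurrence="0 8 * * 4",                    # every Thursday 08:00 UTC
    MinSize=2, MaxSize=10, DesiredCapacity=6,
)
asg.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="friday-scale-in",
    Recurrence="0 18 * * 5",                   # every Friday 18:00 UTC
    MinSize=2, MaxSize=10, DesiredCapacity=2,
)
```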

Dynamic Scaling

  • Allows automatic scaling in response to changing demand, e.g., scale out when the CPU utilization of the instances goes above 70% and scale in when it goes below 30%.
  • ASG uses a combination of alarms & policies to determine when the conditions for scaling are met.
    • An alarm is an object that watches a single metric over a specified time period. When the value of the metric breaches the defined threshold for the specified number of periods, the alarm performs one or more actions (such as sending messages to Auto Scaling).
    • A policy is a set of instructions that tells Auto Scaling how to respond to alarm messages.
  • Dynamic scaling process works as below
    1. CloudWatch monitors the specified metrics for all the instances in the Auto Scaling Group.
    2. Changes are reflected in the metrics as the demand grows or shrinks
    3. When the change in the metrics breaches the threshold of the CloudWatch alarm, the CloudWatch alarm performs an action. Depending on the breach, the action is a message sent to either the scale-in policy or the scale-out policy
    4. After the Auto Scaling policy receives the message, Auto Scaling performs the scaling activity for the ASG.
    5. This process continues until you delete either the scaling policies or the ASG.
  • When a scaling policy is executed, if the capacity calculation produces a number outside of the minimum and maximum size range of the group, EC2 Auto Scaling ensures that the new capacity never goes outside of the minimum and maximum size limits.
  • When the desired capacity reaches the maximum size limit, scaling out stops. If demand drops and capacity decreases, Auto Scaling can scale out again.

Dynamic Scaling Policy Types

Target tracking scaling

  • Increase or decrease the current capacity of the group based on a target value for a specific metric (see the sketch below).

Auto Scaling Target Tracking Scaling
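
A minimal boto3 sketch of a target tracking policy that keeps average CPU near 60%; Auto Scaling creates and manages the CloudWatch alarms itself. The group name is hypothetical:

```python
import boto3

asg = boto3.client("autoscaling")

asg.put_scaling_policy(
    AutoScalingGroupName="web-asg",   # hypothetical
    PolicyName="keep-cpu-at-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 60.0,
    },
)
```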

Step scaling

  • Increase or decrease the current capacity of the group based on a set of scaling adjustments, known as step adjustments, that vary based on the size of the alarm breach.

Simple scaling

  • Increase or decrease the current capacity of the group based on a single scaling adjustment.

Multiple Policies

  • ASG can have more than one scaling policy attached at any given time.
  • Each ASG would have at least two policies: one to scale the architecture out and another to scale the architecture in.
  • If an ASG has multiple policies, there is always a chance that both policies can instruct the Auto Scaling to Scale Out or Scale In at the same time.
  • When these situations occur, Auto Scaling chooses the policy that has the greatest impact, i.e., the one that provides the largest capacity, for both scale-out and scale-in. For example, if two policies are triggered at the same time and Policy 1 instructs to scale out by 1 instance while Policy 2 instructs to scale out by 2 instances, Auto Scaling uses Policy 2 and scales out by 2 instances as it has the greater impact.

Predictive Scaling

  • Predictive scaling can be used to increase the number of EC2 instances in the ASG in advance of daily and weekly patterns in traffic flows.
  • Predictive scaling is well suited for situations where you have:
    • Cyclical traffic, such as high use of resources during regular business hours and low use of resources during evenings and weekends
    • Recurring on-and-off workload patterns, such as batch processing, testing, or periodic data analysis
    • Applications that take a long time to initialize, causing a noticeable latency impact on application performance during scale-out events
  • Predictive scaling provides proactive scaling that can help scale faster by launching capacity in advance of forecasted load, compared to using only dynamic scaling, which is reactive in nature.
  • Predictive scaling uses machine learning to predict capacity requirements based on historical data from CloudWatch. The machine learning algorithm consumes the available historical data and calculates the capacity that best fits the historical load pattern, and then continuously learns based on new data to make future forecasts more accurate.
  • Predictive scaling supports a forecast-only mode so that the forecast can be evaluated before predictive scaling is allowed to actively scale capacity (see the sketch after this list).
  • When you are ready to start scaling with predictive scaling, switch the policy from forecast only mode to forecast and scale mode.
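
A minimal boto3 sketch of a predictive scaling policy started in forecast-only mode; the group name is hypothetical and the configuration fields follow the put_scaling_policy API as I understand it:

```python
import boto3

asg = boto3.client("autoscaling")

asg.put_scaling_policy(
    AutoScalingGroupName="web-asg",   # hypothetical
    PolicyName="predictive-cpu",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [{
            "TargetValue": 50.0,
            "PredefinedMetricPairSpecification": {
                "PredefinedMetricType": "ASGCPUUtilization"},
        }],
        "Mode": "ForecastOnly",   # switch to "ForecastAndScale" when confident
    },
)
```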

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. A user has created a web application with Auto Scaling. The user is regularly monitoring the application and he observed that the traffic is highest on Thursday and Friday between 8 AM to 6 PM. What is the best solution to handle scaling in this case?
    1. Add a new instance manually by 8 AM Thursday and terminate the same by 6 PM Friday
    2. Schedule Auto Scaling to scale up by 8 AM Thursday and scale down after 6 PM on Friday
    3. Schedule a policy which may scale up every day at 8 AM and scales down by 6 PM
    4. Configure a batch process to add a instance by 8 AM and remove it by Friday 6 PM
  2. A customer has a website which shows all the deals available across the market. The site experiences a load of 5 large EC2 instances generally. However, a week before Thanksgiving vacation they encounter a load of almost 20 large instances. The load during that period varies over the day based on the office timings. Which of the below mentioned solutions is cost effective as well as help the website achieve better performance?
    1. Keep only 10 instances running and manually launch 10 instances every day during office hours.
    2. Setup to run 10 instances during the pre-vacation period and only scale up during the office time by launching 10 more instances using the AutoScaling schedule.
    3. During the pre-vacation period setup a scenario where the organization has 15 instances running and 5 instances to scale up and down using Auto Scaling based on the network I/O policy.
    4. During the pre-vacation period setup 20 instances to run continuously.
  3. A user has setup Auto Scaling with ELB on the EC2 instances. The user wants to configure that whenever the CPU utilization is below 10%, Auto Scaling should remove one instance. How can the user configure this?
    1. The user can get an email using SNS when the CPU utilization is less than 10%. The user can use the desired capacity of Auto Scaling to remove the instance
    2. Use CloudWatch to monitor the data and Auto Scaling to remove the instances using scheduled actions
    3. Configure CloudWatch to send a notification to Auto Scaling Launch configuration when the CPU utilization is less than 10% and configure the Auto Scaling policy to remove the instance
    4. Configure CloudWatch to send a notification to the Auto Scaling group when the CPU Utilization is less than 10% and configure the Auto Scaling policy to remove the instance

References

Auto_Scaling_Options