AWS Certified DevOps Engineer – Professional (DOP-C02) Exam Learning Path

AWS DevOps - Professional DOP-C02 Certificate

AWS Certified DevOps Engineer – Professional (DOP-C02) Exam Learning Path

  • AWS Certified DevOps Engineer – Professional (DOP-C02) exam, released in March 2023, is the updated version of the DevOps Engineer – Professional (DOP-C01) exam.
  • I recently took the latest version, and DOP-C02 is quite similar to DOP-C01, with the addition of newer services and features.

AWS Certified DevOps Engineer – Professional (DOP-C02) Exam Content

  • AWS Certified DevOps Engineer – Professional (DOP-C02) exam is intended for individuals who perform a DevOps engineer role and focuses on provisioning, operating, and managing distributed systems and services on AWS.
  • DOP-C02 basically validates the ability to
    • Implement and manage continuous delivery systems and methodologies on AWS
    • Implement and automate security controls, governance processes, and compliance validation
    • Define and deploy monitoring, metrics, and logging systems on AWS
    • Implement systems that are highly available, scalable, and self-healing on the AWS platform
    • Design, manage, and maintain tools to automate operational processes

Refer to AWS Certified DevOps Engineer – Professional Exam Guide

AWS DevOps - Professional DOP-C02 Exam Domains

AWS Certified DevOps Engineer – Professional (DOP-C02) Exam Resources

AWS Certified DevOps Engineer – Professional (DOP-C02) Exam Summary

  • Professional exams are tough, lengthy, and tiresome. Most of the questions and answer options are long and require a lot of reading, so be sure you are well-prepared and manage your time well.
  • Each solution involves multiple AWS services.
  • DOP-C02 exam has 75 questions to be solved in 170 minutes. Only 65 questions affect your score; the remaining 10 are unscored and used to evaluate questions for future use.
  • DOP-C02 exam includes two types of questions, multiple-choice and multiple-response.
  • DOP-C02 has a scaled score between 100 and 1,000. The scaled score needed to pass the exam is 750.
  • Each question mainly touches multiple AWS services.
  • Professional exams currently cost $300 + tax.
  • You can get an additional 30 minutes if English is your second language by requesting Exam Accommodations. It might not be needed for Associate exams but is helpful for Professional and Specialty ones.
  • As always, mark the questions for review, move on, and come back to them after you are done with the rest.
  • As always, having a rough architecture or mental picture of the setup helps you focus on the areas that matter. You can usually eliminate two answers for sure and then focus on the remaining two. Read those two closely to check where they differ; that helps you reach the right answer or at least gives you a 50% chance of getting it right.
  • AWS exams can be taken either at a testing center or online from home. I prefer to take them online as it provides a lot of flexibility. Just make sure you have a proper place to take the exam with no disturbance and nothing around you.
  • Also, if you are taking the AWS Online exam for the first time try to join at least 30 minutes before the actual time as I have had issues with both PSI and Pearson with long wait times.

AWS Certified DevOps Engineer – Professional (DOP-C02) Exam Topics

  • AWS Certified DevOps Engineer – Professional exam covers a lot of concepts and services related to Automation, Deployments, Disaster Recovery, HA, Monitoring, Logging, and Troubleshooting. It also covers security and compliance related topics.

Management & Governance tools

  • CloudFormation
    • provides an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.
    • Make sure you have gone through and executed a CloudFormation template to provision AWS resources.
    • CloudFormation Concepts cover
      • Templates act as a blueprint for provisioning of AWS resources
      • Stacks are collections of resources managed as a single unit; resources are created, updated, and deleted by creating, updating, and deleting stacks.
      • Change Sets present a summary or preview of the proposed changes that CloudFormation will make when a stack is updated (see the change set sketch after this list).
      • Nested stacks are stacks created as part of other stacks.
    • CloudFormation template anatomy consists of resources, parameters, outputs, and mappings.
    • CloudFormation supports multiple features
      • Drift detection enables you to detect whether a stack’s actual configuration differs, or has drifted, from its expected configuration.
      • Termination protection helps prevent a stack from being accidentally deleted.
      • Stack policy can prevent stack resources from being unintentionally updated or deleted during a stack update.
      • StackSets help create, update, or delete stacks across multiple accounts and Regions with a single operation.
      • Helper scripts combined with creation policies can make CloudFormation wait for completion signals before marking resources as complete.
      • Update policies support rolling and replacing updates with Auto Scaling groups.
      • Deletion policies help retain or back up resources during stack deletion.
      • Custom resources can be configured for use cases not natively supported, e.g. retrieving AMI IDs or interacting with external services.
    • Understand CloudFormation Best Practices esp. Nested Stacks and logical grouping
  • Elastic Beanstalk
    • helps to quickly deploy and manage applications in the AWS Cloud without having to worry about the infrastructure that runs those applications. 
    • Understand Elastic Beanstalk overall – Applications, Versions, and Environments
    • Deployment strategies with their advantages and disadvantages
  • OpsWorks
    • is a configuration management service that helps to configure and operate applications in a cloud enterprise by using Chef.
    • Understand OpsWorks overall – stacks, layers, recipes
    • Understand OpsWorks Lifecycle events esp. the Configure event and how it can be used.
    • Understand OpsWorks Deployment Strategies
    • Know OpsWorks auto-healing and how to be notified for it.
  • Understand CloudFormation vs Elastic Beanstalk vs OpsWorks
  • AWS Organizations
  • Systems Manager
    • Understand AWS Systems Manager and its various capabilities like Parameter Store, Session Manager, and Patch Manager.
    • Parameter Store provides secure, scalable, centralized, hierarchical storage for configuration data and secrets management. It does not support native secrets rotation; use Secrets Manager instead.
    • Session Manager provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys.
    • Patch Manager helps automate the process of patching managed instances with both security-related and other types of updates.
  • CloudWatch
    • supports monitoring, logging, and alerting.
    • CloudWatch logs can be used to monitor, store, and access log files from EC2 instances, CloudTrail, Route 53, and other sources. You can create metric filters over the logs.
    • CloudWatch Subscription Filters can be used to send logs to Kinesis Data Streams, Lambda, or Kinesis Data Firehose.
    • EventBridge (CloudWatch Events) is a serverless event bus service that makes it easy to connect applications with data from a variety of sources.
    • EventBridge or CloudWatch events can be used as a trigger for periodically scheduled events.
    • CloudWatch unified agent helps collect metrics and logs from EC2 instances and on-premises servers and push them to CloudWatch.
    • CloudWatch Synthetics helps create canaries, configurable scripts that run on a schedule, to monitor your endpoints and APIs
  • CloudTrail
    • for audit and governance
    • With Organizations, an organization trail can be configured to log CloudTrail events from all accounts to a central account.
  • Config is a fully managed service that provides AWS resource inventory, configuration history, and configuration change notifications to enable security, compliance, and governance.
    • supports managed as well as custom rules that can be evaluated on a periodic basis or when a configuration change occurs, and can trigger automatic remediation
    • Conformance pack is a collection of AWS Config rules and remediation actions that can be easily deployed as a single entity in an account and a Region or across an organization in AWS Organizations.
  • Control Tower
    • to set up, govern, and secure a multi-account environment
    • strongly recommended guardrails cover EBS encryption
  • Service Catalog
    • allows organizations to create and manage catalogs of IT services that are approved for use on AWS, letting end users provision them with only minimal permissions.
  • Trusted Advisor
    • helps with cost optimization and service limits in addition to security, performance, and fault tolerance.
  • AWS Health Dashboard is the single place to learn about the availability and operations of AWS services.
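
The change set workflow mentioned in the CloudFormation item above can be scripted. Below is a minimal boto3 sketch (stack name, template URL, and change set name are placeholders) that creates a change set, prints the proposed resource changes for review, and only then executes it:

```python
import boto3

cfn = boto3.client("cloudformation")

# Hypothetical stack and template names - replace with your own.
STACK_NAME = "demo-app-stack"
TEMPLATE_URL = "https://s3.amazonaws.com/my-bucket/demo-app.yaml"

# Create a change set to preview what CloudFormation would modify.
cfn.create_change_set(
    StackName=STACK_NAME,
    TemplateURL=TEMPLATE_URL,
    ChangeSetName="preview-update",
    ChangeSetType="UPDATE",          # use "CREATE" for a new stack
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

# Wait until the change set is ready, then inspect the proposed changes.
waiter = cfn.get_waiter("change_set_create_complete")
waiter.wait(ChangeSetName="preview-update", StackName=STACK_NAME)

changes = cfn.describe_change_set(ChangeSetName="preview-update", StackName=STACK_NAME)
for change in changes["Changes"]:
    rc = change["ResourceChange"]
    print(rc["Action"], rc["LogicalResourceId"], rc["ResourceType"])

# Apply the changes only after reviewing them.
cfn.execute_change_set(ChangeSetName="preview-update", StackName=STACK_NAME)
```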

Developer Tools

  • Know AWS Developer tools
  • CodeCommit is a secure, scalable, fully-managed source control service that helps to host secure and highly scalable private Git repositories.
    • can help handle deployments of code to different environments using the same repository and different branches.
  • CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy.
  • CodeDeploy helps automate code deployments to any instance, including EC2 instances and instances running on-premises, Lambda, and ECS.
  • CodePipeline is a fully managed continuous delivery service that helps automate the release pipelines for fast and reliable application and infrastructure updates.
    • CodePipeline pipeline structure (Hint: actions with the same runOrder in a stage run in parallel – see the sketch after this list)
    • Understand how to configure notifications on events and failures
    • CodePipeline supports Manual Approval
  • CodeArtifact is a fully managed artifact repository service that makes it easy for organizations of any size to securely store, publish, and share software packages used in their software development process.
  • CodeGuru provides intelligent recommendations to improve code quality and identify an application’s most expensive lines of code. Reviewer helps improve code quality and Profiler helps optimize performance for applications
  • EC2 Image Builder helps to automate the creation, management, and deployment of customized, secure, and up-to-date server images that are pre-installed and pre-configured with software and settings to meet specific IT standards.
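
To illustrate the runOrder hint above, here is a hedged boto3 sketch of a pipeline definition (role ARN, artifact bucket, repository, and CodeBuild project names are all placeholders) where two build actions share runOrder 1 and therefore run in parallel within the Build stage:

```python
import boto3

codepipeline = boto3.client("codepipeline")

# Hypothetical pipeline definition - role ARN, bucket, and repo names are placeholders.
pipeline = {
    "name": "demo-pipeline",
    "roleArn": "arn:aws:iam::123456789012:role/CodePipelineServiceRole",
    "artifactStore": {"type": "S3", "location": "demo-pipeline-artifacts"},
    "stages": [
        {
            "name": "Source",
            "actions": [{
                "name": "CheckoutSource",
                "actionTypeId": {"category": "Source", "owner": "AWS",
                                 "provider": "CodeCommit", "version": "1"},
                "configuration": {"RepositoryName": "demo-repo", "BranchName": "main"},
                "outputArtifacts": [{"name": "SourceOutput"}],
                "runOrder": 1,
            }],
        },
        {
            "name": "Build",
            "actions": [
                # Actions with the same runOrder value run in parallel.
                {"name": "BuildApp",
                 "actionTypeId": {"category": "Build", "owner": "AWS",
                                  "provider": "CodeBuild", "version": "1"},
                 "configuration": {"ProjectName": "build-app"},
                 "inputArtifacts": [{"name": "SourceOutput"}],
                 "runOrder": 1},
                {"name": "BuildDocs",
                 "actionTypeId": {"category": "Build", "owner": "AWS",
                                  "provider": "CodeBuild", "version": "1"},
                 "configuration": {"ProjectName": "build-docs"},
                 "inputArtifacts": [{"name": "SourceOutput"}],
                 "runOrder": 1},
            ],
        },
    ],
}

codepipeline.create_pipeline(pipeline=pipeline)
```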

Disaster Recovery

  • Disaster recovery is mainly covered as a part of resilient cloud solutions.
  • Although the Disaster Recovery whitepaper is outdated, make sure you understand the differences and implementation of each strategy, esp. pilot light and warm standby, w.r.t. RTO and RPO.
  • Compute
    • Make components available in an alternate region.
    • Backup and Restore using either snapshots or AMIs that can be restored.
    • Keep minimal low-scale capacity running that can be scaled out once the failover happens.
    • Use fully running compute in an active-active configuration with health checks.
    • Use CloudFormation to create and scale infrastructure as needed.
  • Storage
    • S3 and EFS support cross-region replication
    • DynamoDB supports Global tables for multi-master, active-active inter-region storage needs.
    • Aurora Global Database provides cross-region read replicas and failover capabilities.
    • RDS supports cross-region read replicas which can be promoted to master in case of a disaster; the failover can be automated using Route 53, CloudWatch, and Lambda functions.
  • Network
    • Route 53 failover routing with health checks to failover across regions.
    • CloudFront Origin Groups support primary and secondary endpoints with failover.
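
As an example of the Route 53 failover routing mentioned above, here is a minimal boto3 sketch (hosted zone ID, domain, health check ID, and IP addresses are placeholders) that creates a PRIMARY record tied to a health check and a SECONDARY record that Route 53 serves only when the primary is unhealthy:

```python
import boto3

route53 = boto3.client("route53")

# Placeholder hosted zone, domain, and health check IDs.
HOSTED_ZONE_ID = "Z1234567890ABC"
DOMAIN = "app.example.com"

# PRIMARY is returned while its health check passes;
# SECONDARY is returned only when the primary is unhealthy.
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [
            {"Action": "UPSERT",
             "ResourceRecordSet": {
                 "Name": DOMAIN, "Type": "A", "TTL": 60,
                 "SetIdentifier": "primary-us-east-1",
                 "Failover": "PRIMARY",
                 "HealthCheckId": "11111111-2222-3333-4444-555555555555",
                 "ResourceRecords": [{"Value": "203.0.113.10"}]}},
            {"Action": "UPSERT",
             "ResourceRecordSet": {
                 "Name": DOMAIN, "Type": "A", "TTL": 60,
                 "SetIdentifier": "secondary-us-west-2",
                 "Failover": "SECONDARY",
                 "ResourceRecords": [{"Value": "203.0.113.20"}]}},
        ]
    },
)
```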

Networking & Content Delivery

  • Networking is covered very lightly.
  • VPC – Virtual Private Cloud
    • Security Groups, NACLs
      • NACLs are stateless and need to open ephemeral ports for response traffic.
    • VPC Gateway Endpoints to provide access to S3 and DynamoDB
    • VPC Interface Endpoints or PrivateLink provide access to a variety of services like SQS, Kinesis, or Private APIs exposed through NLB.
    • VPC Peering to enable communication between VPCs within the same or different regions.
    • VPC Peering does not support overlapping CIDRs, while PrivateLink does, as only the endpoint is exposed.
    • VPC Flow Logs to track network traffic and can be published to CloudWatch Logs, S3, or Kinesis Data Firehose.
    • NAT Gateway provides managed NAT service that provides better availability, higher bandwidth, and requires less administrative effort.
  • Route 53
    • Routing Policies
      • focus on Weighted, Latency, and failover routing policies
      • failover routing provides active-passive configuration for disaster recovery while the others are active-active configurations.
  • CloudFront
    • fully managed, fast CDN service that speeds up the distribution of static, dynamic web or streaming content to end-users.
  • Load Balancer – ELB, ALB and NLB
    • ELB with Auto Scaling to provide scalable and highly available applications
    • Understand ALB vs NLB and their use cases.
    • Access logging needs to be explicitly enabled and the logs are delivered only to S3.
  • Direct Connect & VPN
    • provide on-premises to AWS connectivity
    • Understand Direct Connect vs VPN
    • VPN can provide a cost-effective, quick failover for Direct Connect.
    • VPN over Direct Connect provides a secure dedicated connection and requires a public virtual interface.

Security, Identity & Compliance

  • AWS Identity and Access Management
  • AWS WAF
    • protects from common attack techniques like SQL injection and XSS. Conditions can be based on IP addresses, HTTP headers, HTTP body, and URI strings.
    • integrates with CloudFront, ALB, and API Gateway.
  • AWS KMS – Key Management Service
    • managed encryption service that allows the creation and control of encryption keys to enable data encryption.
  • Secrets Manager
    • helps protect secrets needed to access applications, services, and IT resources.
  • AWS GuardDuty
    • is a threat detection service that continuously monitors the AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation.
  • AWS Security Hub is a cloud security posture management service that performs security best practice checks, aggregates alerts and enables automated remediation.
  • Firewall Manager helps centrally configure and manage firewall rules across the accounts and applications in AWS Organizations which includes a variety of protections, including WAF, Shield Advanced, VPC security groups, Network Firewall, and Route 53 Resolver DNS Firewall.

Storage

Database

Compute

  • EC2
  • Auto Scaling provides the ability to ensure a correct number of EC2 instances are always running to handle the load of the application
    • Auto Scaling Lifecycle events enable performing custom actions by pausing instances as an ASG launches or terminates them.
    • Blue/green deployments with Auto Scaling – With new launch configurations, new auto-scaling groups, or CloudFormation update policies.
  • Lambda
    • offers Serverless computing 
    • supports reserved concurrency limits to throttle a function and reduce the impact on downstream resources
    • Lambda aliases support weighted (canary) deployments (see the sketch after this list)
    • Reserved Concurrency guarantees the maximum number of concurrent instances for the function
    • Provisioned Concurrency
      • provides greater control over the performance of serverless applications and helps keep functions initialized and hyper-ready to respond in double-digit milliseconds.
      • supports Application Auto Scaling.
  • Step Functions helps developers use AWS services to build distributed applications, automate processes, orchestrate microservices, and create data and machine learning (ML) pipelines.
  • ECS – Elastic Container Service
    • container management service that supports Docker containers
    • supports two launch types
      • EC2 and
      • Fargate which provides the serverless capability
  • ECR provides a fully managed, secure, scalable, reliable container image registry service. It supports lifecycle policies for images.
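
The Lambda concurrency and canary points above can be configured with a few boto3 calls. A minimal sketch, assuming a hypothetical function named demo-function with a live alias:

```python
import boto3

lambda_client = boto3.client("lambda")
FUNCTION = "demo-function"   # hypothetical function name

# Reserved concurrency: caps the function's concurrent executions
# (and carves that capacity out of the account-level pool).
lambda_client.put_function_concurrency(
    FunctionName=FUNCTION,
    ReservedConcurrentExecutions=50,
)

# Provisioned concurrency: keeps initialized execution environments warm
# for a published version or alias (not $LATEST).
lambda_client.put_provisioned_concurrency_config(
    FunctionName=FUNCTION,
    Qualifier="live",                     # alias name
    ProvisionedConcurrentExecutions=10,
)

# Canary deployment via a weighted alias: send 10% of traffic to version 2.
lambda_client.update_alias(
    FunctionName=FUNCTION,
    Name="live",
    FunctionVersion="1",
    RoutingConfig={"AdditionalVersionWeights": {"2": 0.1}},
)
```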

Integration Tools

  • SQS in terms of loose coupling and scaling.
    • Difference between SQS Standard and FIFO esp. with throughput and order
    • SQS supports dead letter queues and redrive policy which specifies the source queue, the dead-letter queue, and the conditions under which SQS moves messages from the former to the latter if the consumer of the source queue fails to process a message a specified number of times.
  • CloudWatch integration with SNS and Lambda for notifications.
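
A minimal boto3 sketch of the SQS redrive policy described above (queue names are hypothetical): the source queue moves a message to the dead-letter queue after five failed receives.

```python
import json

import boto3

sqs = boto3.client("sqs")

# Dead-letter queue must be created first (same type as the source queue).
dlq_url = sqs.create_queue(QueueName="orders-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Source queue with a redrive policy: after 5 failed receives,
# SQS moves the message to the dead-letter queue.
sqs.create_queue(
    QueueName="orders",
    Attributes={
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": dlq_arn,
            "maxReceiveCount": "5",
        })
    },
)
```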

Analytics

Whitepapers

AWS Certified DevOps Engineer – Professional (DOP-C02) Exam Day

  • Make sure you are relaxed and get a good night’s sleep. The exam is not tough if you are well-prepared.
  • If you are taking the AWS Online exam
    • Try to join at least 30 minutes before the actual time as I have had issues with both PSI and Pearson with long wait times.
    • The online verification process does take some time and usually, there are glitches.
    • Remember, you would not be allowed to take the exam if you are late by more than 30 minutes.
    • Make sure your desk is clear, with no watches or external monitors, keep your phone away, and ensure nobody can enter the room.

Finally, All the Best 🙂

Kubernetes Resources

Namespaces

  • Namespaces provide a mechanism for isolating groups of resources within a single cluster.
  • Namespace-based scoping is applicable only for namespaced objects (e.g. Deployments, Services, etc) and not for cluster-wide objects (e.g. StorageClass, Nodes, PersistentVolumes, etc).
  • Names of resources need to be unique within a namespace, but not across namespaces.
  • Kubernetes starts with four initial namespaces:
    • default – default namespace for objects with no other namespace.
    • kube-system – namespace for objects created by the Kubernetes system.
    • kube-public – namespace is created automatically and is readable by all users (including those not authenticated).
    • kube-node-lease – namespace holds Lease objects associated with each node. Node leases allow the kubelet to send heartbeats so that the control plane can detect node failure.
  • Resource Quotas can be defined for each namespace to limit the resources consumed.
  • Resources within the namespaces can refer to each other with their service names.
  • Resources across namespaces can be reached using the full DNS name <service_name>.<namespace_name>.svc.cluster.local.
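
A small sketch using the official Kubernetes Python client (namespace name and quota values are arbitrary) that creates a namespace and attaches a Resource Quota to it:

```python
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config() inside a Pod
core = client.CoreV1Api()

# Create a "dev" namespace.
core.create_namespace(
    body=client.V1Namespace(metadata=client.V1ObjectMeta(name="dev"))
)

# Attach a ResourceQuota to cap what the namespace can consume.
quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="dev-quota", namespace="dev"),
    spec=client.V1ResourceQuotaSpec(
        hard={"pods": "10", "requests.cpu": "4", "requests.memory": "8Gi"}
    ),
)
core.create_namespaced_resource_quota(namespace="dev", body=quota)
```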

Practice Namespace Exercises

Pods

  • A Kubernetes pod is a group of containers and is the smallest unit that Kubernetes administers.
  • Each Pod has a single IP address that is shared by every container within the pod.
  • Pods are always co-located and co-scheduled and run in a shared context.
  • Containers in a pod share the same resources such as memory and storage.
  • Shared context allows the individual Linux containers inside a pod to be treated collectively as a single application as if all the containerized processes were running together on the same host in more traditional workloads.

Practice Pod Exercises

ReplicaSet

  • A ReplicaSet maintains a stable set of replica Pods running at any given time. It helps guarantee the availability of a specified number of identical Pods.
  • ReplicaSet includes the pod definition template, a selector to match the pods, and a number of replicas.
  • ReplicaSet then fulfills its purpose by creating and deleting Pods as needed to reach the desired replica number using the Pod template.
  • It is recommended to use Deployments instead of directly using ReplicaSets, as they help manage ReplicaSets and provide declarative updates to Pods.

Practice ReplicaSet Exercises

Deployment

  • Deployment provides declarative updates for Pods and ReplicaSets.
  • Deployments describe the number of desired identical pod replicas to run and the preferred update strategy used when updating the deployment.
  • A Deployment runs multiple replicas of your application and automatically replaces any instances that fail or become unresponsive.
  • Deployments represent a set of multiple, identical Pods with no unique identities.
  • Deployments are well-suited for stateless applications that use ReadOnlyMany or ReadWriteMany volumes mounted on multiple replicas but are not well-suited for workloads that use ReadWriteOnce volumes. Use StatefulSets instead.
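
A minimal Deployment created with the Kubernetes Python client (name, labels, and image are placeholders); the label selector must match the Pod template labels, and the Deployment manages the underlying ReplicaSet:

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# A Deployment with 3 identical replicas of an nginx container.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(
                    name="web",
                    image="nginx:1.25",
                    ports=[client.V1ContainerPort(container_port=80)],
                )]
            ),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)
```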

Deploy Container Resources

Practice Deployment Exercises

Services

  • Service is an abstraction over the pods, and essentially, the only interface the various application consumers interact with.
  • The lifetime of an individual pod cannot be relied upon; everything from their IP addresses to their very existence is prone to change.
  • Kubernetes doesn’t treat its pods as unique, long-running instances; if a pod encounters an issue and dies, it’s Kubernetes’ job to replace it so that the application doesn’t experience any downtime.
  • As pods are replaced, their internal names and IPs might change.
  • A service exposes a single machine name or IP address mapped to pods whose underlying names and numbers are unreliable.
  • A service ensures that, to the outside network, everything appears to be unchanged.

Practice Services Exercises

Ingress

  • Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.
  • Traffic routing is controlled by rules defined on the Ingress resource.
  • An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL/TLS and offer name-based virtual hosting
  • An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
  • An Ingress with no rules sends all traffic to a single default backend.

Practice Ingress Exercises

DaemonSet

  • A DaemonSet ensures that all (or some) Nodes run a copy of a Pod.
  • DaemonSet ensures pods are added to the newly created nodes and garbage collected as nodes are removed.
  • Some typical uses of a DaemonSet are:
    • running a cluster storage daemon on every node
    • running a logs collection daemon on every node
    • running a node monitoring daemon on every node

Refer DaemonSet Exercises

StatefulSet

StatefulSet Architecture

  • StatefulSet is ideal for stateful applications using ReadWriteOnce volumes.
  • StatefulSets are designed to deploy stateful applications and clustered applications that save data to persistent storage, such as persistent disks.
  • StatefulSets represent a set of Pods with unique, persistent identities and stable hostnames that Kubernetes maintains regardless of where they are scheduled.
  • State information and other resilient data for any given StatefulSet Pod are maintained in persistent disk storage associated with the StatefulSet.
  • StatefulSets use an ordinal index for the identity and ordering of their Pods. By default, StatefulSet Pods are deployed in sequential order and are terminated in reverse ordinal order.
  • StatefulSets are suitable for deploying Kafka, MySQL, Redis, ZooKeeper, and other applications needing unique, persistent identities and stable hostnames.

ConfigMaps

  • ConfigMap helps to store non-confidential data in key-value pairs.
  • Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.
  • ConfigMap helps decouple environment-specific configuration from the container images so that the applications are easily portable.
  • ConfigMap does not provide secrecy or encryption. If the data you want to store are confidential, use a Secret rather than a ConfigMap, or use additional (third party) tools to keep your data private.
  • A ConfigMap is not designed to hold large chunks of data and cannot exceed 1 MiB.
  • ConfigMap can be configured on a container inside a Pod as
    • Inside a container command and args
    • Environment variables for a container
    • Add a file in read-only volume, for the application to read
    • Write code to run inside the Pod that uses the Kubernetes API to read a ConfigMap
  • ConfigMap can be configured to be immutable as it helps
    • protect from accidental (or unwanted) updates that could cause application outages
    • improve the performance of the cluster by significantly reducing the load on kube-apiserver, by closing watches for ConfigMaps marked as immutable.
  • Once a ConfigMap is marked as immutable, it is not possible to revert this change nor to mutate the contents of the data or the binaryData field. The ConfigMap needs to be deleted and recreated.
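
A short sketch with the Kubernetes Python client (assuming a recent client version where V1ConfigMap exposes the immutable field; names and keys are arbitrary) that creates an immutable ConfigMap:

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Non-confidential configuration as key-value pairs; marked immutable so it
# cannot be changed afterwards (only deleted and recreated).
config_map = client.V1ConfigMap(
    metadata=client.V1ObjectMeta(name="app-config", namespace="default"),
    data={"LOG_LEVEL": "info", "FEATURE_FLAG": "true"},
    immutable=True,
)
core.create_namespaced_config_map(namespace="default", body=config_map)
```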

Practice ConfigMaps Exercises

Secrets

  • Secret provides a container for sensitive data such as a password without putting the information in a Pod specification or in a container image.
  • Secrets are similar to ConfigMaps but are specifically intended to hold confidential data.
  • Secrets are not really encrypted but only base64 encoded.
  • Secrets are, by default, stored unencrypted in the API server’s underlying data store (etcd). Anyone with API access can retrieve or modify a Secret, and so can anyone with access to etcd. Additionally, anyone who is authorized to create a Pod in a namespace can use that access to read any Secret in that namespace; this includes indirect access such as the ability to create a Deployment.
  • To safeguard secrets, take at least the following steps:
    • Enable Encryption at Rest for Secrets.
    • Enable or configure RBAC rules that restrict reading data in Secrets.

Practice Secrets Exercises

Jobs & Cron Jobs

  • Job creates one or more Pods and will continue to retry execution of the Pods until a specified number of them successfully terminate.
  • As pods successfully complete, the Job tracks the successful completions.
  • When a specified number of successful completions is reached, the task (ie, Job) is complete.
  • Deleting a Job will clean up the Pods it created. Suspending a Job will delete its active Pods until the Job is resumed again.
  • A Job can run multiple Pods in parallel using the parallelism field.
  • A CronJob creates Jobs on a repeating schedule.
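
A sketch of a parallel Job using the Kubernetes Python client (names, image, and counts are arbitrary): six successful completions are required and up to three Pods run at a time.

```python
from kubernetes import client, config

config.load_kube_config()
batch = client.BatchV1Api()

# A Job that needs 6 successful completions, running up to 3 Pods in parallel.
job = client.V1Job(
    metadata=client.V1ObjectMeta(name="report-job"),
    spec=client.V1JobSpec(
        completions=6,
        parallelism=3,
        backoff_limit=4,                     # retries before the Job is marked failed
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[client.V1Container(
                    name="report",
                    image="busybox:1.36",
                    command=["sh", "-c", "echo generating report && sleep 5"],
                )],
            )
        ),
    ),
)
batch.create_namespaced_job(namespace="default", body=job)
```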

Practice Jobs Exercises

Volumes

Kubernetes Volumes

  • Container on-disk files are ephemeral and lost if the container crashes.
  • Kubernetes supports Persistent volumes that exist beyond the lifetime of a pod. When a pod ceases to exist, Kubernetes destroys ephemeral volumes; however, Kubernetes does not destroy persistent volumes.
  • Persistent volumes are supported using two API resources:
    • PersistentVolume (PV)
      • is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes.
      • is a cluster-level resource and not bound to a namespace
      • are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV.
    • PersistentVolumeClaim (PVC)
      • is a request for storage by a user.
      • is similar to a Pod.
      • Pods consume node resources and PVCs consume PV resources.
      • Pods can request specific levels of resources (CPU and Memory).
      • Claims can request specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany, or ReadWriteMany, see AccessModes).
  • Persistent Volumes can be provisioned
    • Statically – the cluster administrator creates the PVs, which are available for use by cluster users.
    • Dynamically – using StorageClasses, where the cluster may dynamically provision a volume specifically for the PVC.
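
A PVC sketch with the Kubernetes Python client (claim and StorageClass names are placeholders; depending on the client version the resources field may be typed as V1ResourceRequirements or V1VolumeResourceRequirements): it requests 5Gi of ReadWriteOnce storage that a StorageClass can provision dynamically.

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# A PVC requesting 5Gi of ReadWriteOnce storage; with a StorageClass set,
# the cluster can dynamically provision a matching PersistentVolume.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="data-claim", namespace="default"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="standard",       # hypothetical StorageClass name
        resources=client.V1ResourceRequirements(requests={"storage": "5Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```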

Practice Volumes Exercises

Labels & Annotations

  • Labels and Annotations attach metadata to objects in Kubernetes.
  • Labels
    • are key/value pairs that can be attached to Kubernetes objects such as Pods and ReplicaSets.
    • can be arbitrary and are useful for attaching identifying information to Kubernetes objects.
    • provide the foundation for grouping objects and can be used to organize and to select subsets of objects.
    • are used in conjunction with selectors to identify groups of related resources.
  • Annotations
    • provide a storage mechanism that resembles labels
    • are key/value pairs designed to hold non-identifying information that can be leveraged by tools and libraries.

Practice Labels & Annotations Exercises

Nodes

  • A Kubernetes node manages and runs pods; it’s the machine (whether virtualized or physical) that performs the given work.
  • Just as pods collect individual containers that operate together, a node collects entire pods that function together.
  • When you’re operating at scale, you want to be able to hand work over to a node whose pods are free to take it.

Practice Nodes Exercises

AWS S3 Storage Classes

S3 Storage Classes Performance

AWS S3 Storage Classes

  • AWS S3 offers a range of S3 Storage Classes to match the use case scenario and performance access requirements.
  • S3 storage classes are designed to sustain the concurrent loss of data in one or two facilities.
  • S3 storage classes allow lifecycle management for automatic transition of objects for cost savings.
  • Most S3 storage classes are designed for the same 11 9’s durability and support SSL encryption of data in transit and encryption of data at rest; they differ in availability, access latency, and cost.
  • S3 also regularly verifies the integrity of the data using checksums and provides the auto-healing capability.
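
A hedged boto3 sketch of the lifecycle management mentioned above (bucket name and prefix are placeholders): objects under logs/ transition to Standard-IA after 30 days, to Glacier after 90 days, and expire after a year.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket; transitions objects under logs/ to cheaper tiers over
# time and expires them after a year.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-demo-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
        }]
    },
)
```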

S3 Storage Classes Comparison

S3 Storage Classes Performance

S3 Standard

  • STANDARD is the default storage class, if none specified during upload
  • Low latency and high throughput performance
  • Designed for 99.999999999% i.e. 11 9’s Durability of objects across AZs
  • Designed for 99.99% availability over a given year
  • Resilient against events that impact an entire Availability Zone and is designed to sustain the loss of data in two facilities
  • Ideal for performance-sensitive use cases and frequently accessed data
  • S3 Standard is appropriate for a wide variety of use cases, including cloud applications, dynamic websites, content distribution, mobile and gaming applications, and big data analytics.

S3 Intelligent Tiering (S3 Intelligent-Tiering)

  • S3 Intelligent Tiering storage class is designed to optimize storage costs by automatically moving data to the most cost-effective storage access tier, without performance impact or operational overhead.
  • Delivers automatic cost savings by moving data on a granular object level between two access tiers when access patterns change:
    • a frequent access tier optimized for frequently accessed data and
    • a lower-cost infrequent access tier optimized for infrequently accessed data.
  • Ideal to optimize storage costs automatically for long-lived data when access patterns are unknown or unpredictable.
  • For a small monthly monitoring and automation fee per object, S3 monitors access patterns of the objects and moves objects that have not been accessed for 30 consecutive days to the infrequent access tier.
  • There are no separate retrieval fees when using the Intelligent Tiering storage class. If an object in the infrequent access tier is accessed, it is automatically moved back to the frequent access tier.
  • No additional fees apply when objects are moved between access tiers
  • Suitable for objects greater than 128 KB (smaller objects are charged for 128 KB only) kept for at least 30 days (charged for a minimum of 30 days)
  • Same low latency and high throughput performance of S3 Standard
  • Designed for 99.999999999% i.e. 11 9’s Durability of objects across AZs
  • Designed for 99.9% availability over a given year

S3 Standard-Infrequent Access (S3 Standard-IA)

  • S3 Standard-Infrequent Access storage class is optimized for long-lived and less frequently accessed data, e.g. backups and older data where access is limited, but the use case still demands high performance.
  • Ideal for use for the primary or only copy of data that can’t be recreated.
  • Data stored redundantly across multiple geographically separated AZs and are resilient to the loss of an Availability Zone.
  • offers greater availability and resiliency than the ONEZONE_IA class.
  • Objects are available for real-time access.
  • Suitable for larger objects greater than 128 KB (smaller objects are charged for 128 KB only) kept for at least 30 days (charged for minimum 30 days)
  • Same low latency and high throughput performance of Standard
  • Designed for 99.999999999% i.e. 11 9’s Durability of objects across AZs
  • Designed for 99.9% availability over a given year
  • S3 charges a retrieval fee for these objects, so they are most suitable for infrequently accessed data.

S3 One Zone-Infrequent Access (S3 One Zone-IA)

  • S3 One Zone-Infrequent Access storage classes are designed for long-lived and infrequently accessed data, but available for millisecond access (similar to the STANDARD and STANDARD_IA storage class).
  • Ideal when the data can be recreated if the AZ fails, and for object replicas when setting cross-region replication (CRR).
  • Objects are available for real-time access.
  • Suitable for objects greater than 128 KB (smaller objects are charged for 128 KB only) kept for at least 30 days (charged for a minimum of 30 days)
  • Stores the object data in only one AZ, which makes it less expensive than Standard-Infrequent Access
  • Data is not resilient to the physical loss of the AZ resulting from disasters, such as earthquakes and floods.
  • One Zone-Infrequent Access storage class is as durable as Standard-Infrequent Access, but it is less available and less resilient.
  • Designed for 99.999999999% i.e. 11 9’s Durability of objects in a single AZ
  • Designed for 99.5% availability over a given year
  • S3 charges a retrieval fee for these objects, so they are most suitable for infrequently accessed data.

Reduced Redundancy Storage – RRS

  • NOTE – AWS recommends not to use this storage class. The STANDARD storage class is more cost-effective now.
  • Reduced Redundancy Storage (RRS) storage class is designed for non-critical, reproducible data stored at lower levels of redundancy than the STANDARD storage class, which reduces storage costs
  • Designed for durability of 99.99% of objects
  • Designed for 99.99% availability over a given year
  • Lower level of redundancy results in less durability and availability
  • RRS stores objects on multiple devices across multiple facilities, providing 400 times the durability of a typical disk drive.
  • RRS does not replicate objects as many times as S3 standard storage and is designed to sustain the loss of data in a single facility.
  • If an RRS object is lost, S3 returns a 405 error on requests made to that object
  • S3 can send an event notification, configured on the bucket, when it detects that an RRS object is lost, which can be used to alert a user or start a workflow to replace the lost object.

S3 Glacier Instant Retrieval

  • Use for archiving data that is rarely accessed and requires milliseconds retrieval.
  • Storage class has a minimum storage duration period of 90 days
  • Designed for 99.999999999% i.e. 11 9’s Durability of objects across AZs
  • Designed for 99.9% availability

S3 Glacier Flexible Retrieval – S3 Glacier

  • S3 GLACIER storage class is suitable for low-cost data archiving where data access is infrequent and retrieval time of minutes to hours is acceptable.
  • Storage class has a minimum storage duration period of 90 days
  • Provides configurable retrieval times, from minutes to hours
    • Expedited retrieval: 1-5 mins
    • Standard retrieval: 3-5 hours
    • Bulk retrieval: 5-12 hours
  • GLACIER storage class uses the very low-cost Glacier storage service, but the objects in this storage class are still managed through S3
  • For accessing GLACIER objects,
    • the object must be restored, which can take anywhere between minutes and hours (see the restore sketch after this list)
    • objects are only available for the time period (the number of days) specified during the restoration request
    • object’s storage class remains GLACIER
    • charges are levied for both the archive (GLACIER rate) and the copy restored temporarily
  • Vault Lock feature enforces compliance via a lockable policy.
  • Offers the same durability and resiliency as the STANDARD storage class
  • Designed for 99.999999999% i.e. 11 9’s Durability of objects across AZs
  • Designed for 99.99% availability
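
A boto3 sketch of the restore flow described above (bucket and key are placeholders): request a temporary copy for 7 days with the Standard retrieval tier, then check the restore status.

```python
import boto3

s3 = boto3.client("s3")

# Restore a GLACIER object for 7 days using the Standard retrieval tier
# (Expedited and Bulk are the other options).
s3.restore_object(
    Bucket="my-demo-bucket",
    Key="archive/2020-report.csv",
    RestoreRequest={
        "Days": 7,
        "GlacierJobParameters": {"Tier": "Standard"},
    },
)

# HEAD the object to check restore status; the temporary copy appears
# alongside the archived object once the restore completes.
head = s3.head_object(Bucket="my-demo-bucket", Key="archive/2020-report.csv")
print(head.get("Restore"), head.get("StorageClass"))
```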

S3 Glacier Deep Archive

  • Glacier Deep Archive storage class provides the lowest-cost data archiving where data access is infrequent and retrieval time of hours is acceptable.
  • Has a minimum storage duration period of 180 days and can be accessed at a default retrieval time of 12 hours.
  • Supports long-term retention and digital preservation for data that may be accessed once or twice a year
  • Designed for 99.999999999% i.e. 11 9’s Durability of objects across AZs
  • Designed for 99.99% availability over a given year
  • DEEP_ARCHIVE retrieval costs can be reduced by using bulk retrieval, which returns data within 48 hours.
  • Ideal alternative to magnetic tape libraries

S3 Analytics – S3 Storage Classes Analysis

  • S3 Analytics – Storage Class Analysis helps analyze storage access patterns to decide when to transition the right data to the right storage class.
  • S3 Analytics feature observes data access patterns to help determine when to transition less frequently accessed STANDARD storage to the STANDARD_IA (IA, for infrequent access) storage class.
  • Storage Class Analysis can be configured to analyze all the objects in a bucket or filters to group objects.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion, and correction.
  1. What does RRS stand for when talking about S3?
    1. Redundancy Removal System
    2. Relational Rights Storage
    3. Regional Rights Standard
    4. Reduced Redundancy Storage
  2. What is the durability of S3 RRS?
    1. 99.99%
    2. 99.95%
    3. 99.995%
    4. 99.999999999%
  3. What is the Reduced Redundancy option in Amazon S3?
    1. Less redundancy for a lower cost
    2. It doesn’t exist in Amazon S3, but in Amazon EBS.
    3. It allows you to destroy any copy of your files outside a specific jurisdiction.
    4. It doesn’t exist at all
  4. An application is generating a log file every 5 minutes. The log file is not critical but may be required only for verification in case of some major issue. The file should be accessible over the internet whenever required. Which of the below mentioned options is a best possible storage solution for it?
    1. AWS S3
    2. AWS Glacier
    3. AWS RDS
    4. AWS S3 RRS (Reduced Redundancy Storage (RRS) is an Amazon S3 storage option that enables customers to store noncritical, reproducible data at lower levels of redundancy than Amazon S3’s standard storage. RRS is designed to sustain the loss of data in a single facility.)
  5. A user has moved an object to Glacier using the life cycle rules. The user requests to restore the archive after 6 months. When the restore request is completed the user accesses that archive. Which of the below mentioned statements is not true in this condition?
    1. The archive will be available as an object for the duration specified by the user during the restoration request
    2. The restored object’s storage class will be RRS (After the object is restored, the storage class still remains GLACIER.)
    3. The user can modify the restoration period only by issuing a new restore request with the updated period
    4. The user needs to pay storage for both RRS (restored) and Glacier (Archive) Rates
  6. Your department creates regular analytics reports from your company’s log files. All log data is collected in Amazon S3 and processed by daily Amazon Elastic Map Reduce (EMR) jobs that generate daily PDF reports and aggregated tables in CSV format for an Amazon Redshift data warehouse. Your CFO requests that you optimize the cost structure for this system. Which of the following alternatives will lower costs without compromising average performance of the system or data integrity for the raw data? [PROFESSIONAL]
    1. Use reduced redundancy storage (RRS) for PDF and CSV data in Amazon S3. Add Spot instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift. (Spot instances impacts performance)
    2. Use reduced redundancy storage (RRS) for all data in S3. Use a combination of Spot instances and Reserved Instances for Amazon EMR jobs. Use Reserved instances for Amazon Redshift (A combination of Spot and Reserved Instances guarantees performance and helps reduce cost. Also, RRS would reduce cost while still guaranteeing data integrity, which is different from data durability)
    3. Use reduced redundancy storage (RRS) for all data in Amazon S3. Add Spot Instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift (Spot instances impacts performance)
    4. Use reduced redundancy storage (RRS) for PDF and CSV data in S3. Add Spot Instances to EMR jobs. Use Spot Instances for Amazon Redshift. (Spot instances impacts performance)
  7. Which of the below mentioned options can be a good use case for storing content in AWS RRS?
    1. Storing mission critical data Files
    2. Storing infrequently used log files
    3. Storing a video file which is not reproducible
    4. Storing image thumbnails
  8. A newspaper organization has an on-premises application which allows the public to search its back catalogue and retrieve individual newspaper pages via a website written in Java. They have scanned the old newspapers into JPEGs (approx. 17TB) and used Optical Character Recognition (OCR) to populate a commercial search product. The hosting platform and software is now end of life and the organization wants to migrate its archive to AWS and produce a cost efficient architecture and still be designed for availability and durability. Which is the most appropriate? [PROFESSIONAL]
    1. Use S3 with reduced redundancy to store and serve the scanned files, install the commercial search application on EC2 Instances and configure with auto-scaling and an Elastic Load Balancer. (RRS impacts durability and commercial search would add to cost)
    2. Model the environment using CloudFormation. Use an EC2 instance running Apache webserver and an open source search application, stripe multiple standard EBS volumes together to store the JPEGs and search index. (Using EBS is not cost effective for storing files)
    3. Use S3 with standard redundancy to store and serve the scanned files, use CloudSearch for query processing, and use Elastic Beanstalk to host the website across multiple availability zones. (Standard S3 and Elastic Beanstalk provides availability and durability, Standard S3 and CloudSearch provides cost effective storage and search)
    4. Use a single-AZ RDS MySQL instance to store the search index and the JPEG images use an EC2 instance to serve the website and translate user queries into SQL. (RDS is not ideal and cost effective to store files, Single AZ impacts availability)
    5. Use a CloudFront download distribution to serve the JPEGs to the end users and Install the current commercial search product, along with a Java Container for the website on EC2 instances and use Route53 with DNS round-robin. (CloudFront needs a source and using commercial search product is not cost effective)
  9. A research scientist is planning for the one-time launch of an Elastic MapReduce cluster and is encouraged by her manager to minimize the costs. The cluster is designed to ingest 200TB of genomics data with a total of 100 Amazon EC2 instances and is expected to run for around four hours. The resulting data set must be stored temporarily until archived into an Amazon RDS Oracle instance. Which option will help save the most money while meeting requirements? [PROFESSIONAL]
    1. Store ingest and output files in Amazon S3. Deploy on-demand for the master and core nodes and spot for the task nodes.
    2. Optimize by deploying a combination of on-demand, RI and spot-pricing models for the master, core and task nodes. Store ingest and output files in Amazon S3 with a lifecycle policy that archives them to Amazon Glacier. (Master and Core must be RI or On Demand. Cannot be Spot)
    3. Store the ingest files in Amazon S3 RRS and store the output files in S3. Deploy Reserved Instances for the master and core nodes and on-demand for the task nodes. (Need better durability for ingest file. Spot instances can be used for task nodes for cost saving.)
    4. Deploy on-demand master, core and task nodes and store ingest and output files in Amazon S3 RRS (Input must be in S3 standard)

AWS Compute Optimizer

AWS Compute Optimizer

  • AWS Compute Optimizer helps analyze the configuration and utilization metrics of the AWS resources.
  • reports whether the resources are optimal, and generates optimization recommendations to reduce the cost and improve the performance of the workloads.
  • delivers intuitive and easily actionable resource recommendations to help quickly identify optimal AWS resources for the workloads without requiring specialized expertise or investing substantial time and money.
  • provides a global, cross-account view of all resources
  • analyzes the specifications and the utilization metrics of the resources from CloudWatch for the last 14 days.
  • provides graphs showing recent utilization metric history data, as well as projected utilization for recommendations, which can be used to evaluate which recommendation provides the best price-performance trade-off.
  • Analysis and visualization of the usage patterns can help decide when to move or resize the running resources, and still meet your performance and capacity requirements.
  • generates recommendations for the following resources: EC2 instances, Auto Scaling groups, EBS volumes, Lambda functions, and ECS services on Fargate.
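
A small boto3 sketch that lists EC2 instance recommendations; for each instance it prints the finding and the top recommended instance type (response keys follow the Compute Optimizer API's camelCase naming):

```python
import boto3

co = boto3.client("compute-optimizer")

# Requires Compute Optimizer to be opted in for the account.
recs = co.get_ec2_instance_recommendations()
for rec in recs["instanceRecommendations"]:
    options = rec.get("recommendationOptions", [])
    top_option = options[0]["instanceType"] if options else "n/a"
    print(rec["instanceArn"], rec["finding"], "->", top_option)
```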

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion, and correction.
  1. A company must assess the business’s EC2 instances and Elastic Block Store (EBS) volumes to determine how effectively the business is using resources. The company has not detected a pattern in how these EC2 instances are used by the apps that access the databases. Which option best fits these criteria in terms of cost-effectiveness?
    1. Use AWS Systems Manager OpsCenter.
    2. Use Amazon CloudWatch for detailed monitoring.
    3. Use AWS Compute Optimizer.
    4. Sign up for the AWS Enterprise Support plan. Turn on AWS Trusted Advisor.

References

AWS_Compute_Optimizer

AWS Auto Scaling Launch Template vs Launch Configuration

Auto Scaling Launch Template vs Launch Configuration

Launch Configuration

  • Launch configuration is an instance configuration template that an Auto Scaling Group uses to launch EC2 instances.
  • Launch configuration is similar to EC2 configuration and involves the selection of the Amazon Machine Image (AMI), block devices, key pair, instance type, security groups, user data, EC2 instance monitoring, instance profile, kernel, ramdisk, the instance tenancy, whether the instance has a public IP address, and is EBS-optimized.
  • Launch configuration can be associated with multiple ASGs
  • Launch configuration can’t be modified after creation and needs to be created new if any modification is required.
  • Basic or detailed monitoring for the instances in the ASG can be enabled when a launch configuration is created.
  • By default, basic monitoring is enabled when you create the launch configuration using the AWS Management Console, and detailed monitoring is enabled when you create the launch configuration using the AWS CLI or an API
  • AWS recommends using Launch Template instead.

Launch Template

  • A Launch Template is similar to a launch configuration, with additional features, and is recommended by AWS.
  • Launch Template allows multiple versions of a template to be defined.
  • With versioning, a subset of the full set of parameters can be created and then reused to create other templates or template versions; e.g. a default template that defines common configuration parameters can be created, and the other parameters can be specified as part of another version of the same template.
  • Launch Template allows the selection of both Spot and On-Demand Instances or multiple instance types.
  • Launch templates support EC2 Dedicated Hosts. Dedicated Hosts are physical servers with EC2 instance capacity that are dedicated to your use.
  • Launch templates provide the following features
    • Support for multiple instance types and purchase options in a single ASG.
    • Launching Spot Instances with the capacity-optimized allocation strategy.
    • Support for launching instances into existing Capacity Reservations through an ASG.
    • Support for unlimited mode for burstable performance instances.
    • Support for Dedicated Hosts.
    • Combining CPU architectures such as Intel, AMD, and ARM (Graviton2)
    • Improved governance through IAM controls and versioning.
    • Automating instance deployment with Instance Refresh.
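
A boto3 sketch of launch template versioning (AMI, security group, and template names are placeholders): version 1 holds the base configuration and version 2 overrides only the instance type; an ASG can then reference "$Latest", "$Default", or a specific version number.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical AMI and security group IDs. Version 1 holds the common configuration.
ec2.create_launch_template(
    LaunchTemplateName="web-template",
    VersionDescription="base configuration",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",
        "InstanceType": "t3.micro",
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)

# A new version overrides only what changes; everything else is inherited
# from the source version.
ec2.create_launch_template_version(
    LaunchTemplateName="web-template",
    SourceVersion="1",
    VersionDescription="larger instance type",
    LaunchTemplateData={"InstanceType": "t3.large"},
)
```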

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion, and correction.
  1. A company is launching a new workload. The workload will run on Amazon EC2 instances in an Amazon EC2 Auto Scaling group. The company needs to maintain different versions of the EC2 configurations. The company also needs the Auto Scaling group to automatically scale to maintain CPU utilization of 60%. How can a SysOps administrator meet these requirements?
    1. Configure the Auto Scaling group to use a launch configuration with a target tracking scaling policy.
    2. Configure the Auto Scaling group to use a launch configuration with a simple scaling policy.
    3. Configure the Auto Scaling group to use a launch template with a target tracking scaling policy.
    4. Configure the Auto Scaling group to use a launch template with a simple scaling policy.

References

AWS_Launch_Template

Breaking into Data Analytics: Tips and Strategies for Aspiring Data Analysts

Breaking into Data Analytics: Tips and Strategies for Aspiring Data Analysts

Data analytics is analyzing and interpreting data to draw meaningful insights and conclusions. In today’s data-driven world, data analytics has become crucial for businesses to make informed decisions and gain a competitive edge. It uses statistical and computational techniques to analyze large datasets, identify patterns, and make predictions.

Data analytics is essential because it enables organizations to identify trends, make accurate forecasts, and gain insights into customer behavior. Businesses can make data-driven decisions, improve efficiency, and increase profitability by leveraging data analytics.

Anyone interested in working with data can benefit from data analytics. Whether you’re a recent graduate, a mid-career professional, or an executive, data analytics skills can help you progress your career and achieve your goals.

What is Data Analytics and Why is it Important?

Data analytics is the process of analyzing and interpreting data to extract meaningful insights and information. It involves using various techniques and tools to examine large datasets, identify patterns, and draw conclusions. It has become increasingly important in today’s business landscape, enabling organizations to make informed decisions based on data-driven insights.

Data analytics is crucial for businesses because it helps them to identify trends, make accurate forecasts, and gain insights into customer behavior. With the help of data analytics, organizations can improve their operations, optimize their resources, and increase profitability. It can also help businesses identify improvement areas, streamline their processes, and stay ahead of the competition.

Data analytics is a growing field with a high demand for skilled professionals. There are various career opportunities in data analytics, including data analyst, business analyst, data scientist, data engineer, and more. These roles require a mix of technical and soft skills, such as data analysis, programming, communication, problem-solving, and critical thinking.

Essential Skills and Knowledge for Aspiring Data Analysts

To become a successful data analyst, there are a variety of technical and non-technical skills that you need to possess. Technical skills include knowledge of programming languages, databases, data visualization tools, and statistical analysis. Non-technical skills include communication, problem-solving, and critical thinking.

It’s also important to have domain knowledge in the industry you’re working in. For example, it’s important to understand healthcare terminology and regulations if you’re analyzing data for a healthcare organization. This will enable you to ask the right questions and draw meaningful insights from the data.

Many resources are available for acquiring the necessary skills and knowledge for data analytics. Online courses, boot camps, and degree programs are all viable options. Additionally, many free resources are available, such as YouTube tutorials and open-source software.

Tips and Strategies for Breaking into Data Analytics

Breaking into the field of data analytics can be challenging, but with the proper strategies and mindset, you can achieve your goals. Here are some tips and techniques to help you break into data analytics:

1. Identify your career goals and paths

Before starting your journey in data analytics, you must identify your career goals and the path you want to take. Do you want to become a data analyst, data scientist, or data engineer? Understanding your goals will help you focus your efforts and choose the right resources and tools.

2. Build a strong foundation in statistics and programming

You must have a solid foundation in statistics and programming to succeed in data analytics. Familiarize yourself with programming languages like Python and R, and learn statistical analysis techniques like regression analysis and hypothesis testing.

3. Gain experience through internships and projects

Internships and projects are excellent ways to gain practical experience in data analytics. Seek internships in data-driven organizations and participate in data analytics projects on platforms like Kaggle.

4. Network and build professional relationships

Networking is essential in any field, and data analytics is no exception. Attend industry events, join online communities, and connect with other professionals in the field. Building relationships with others can lead to job opportunities and valuable insights.

5. Create a strong portfolio and resume

Your portfolio and resume should showcase your skills, knowledge, and experience in data analytics. Include projects you’ve worked on, data visualizations you’ve created, and any relevant coursework or certifications.

By following these tips and strategies, you can position yourself for success in the field of data analytics. You can break into this exciting and growing field with determination, hard work, and a willingness to learn.

Data Science and Data Analytics Courses for Aspiring Data Analysts

Taking data science and data analytics courses can be an excellent way to gain the necessary skills and knowledge to break into the field of data analytics. Here are some pivotal points to consider when exploring data science and data analytics courses:

Overview of data science and data analytics courses

Data science and data analytics courses provide training in statistical analysis, data visualization, programming, and other relevant topics. They can be taken online or in person and vary in length and depth.

Benefits of taking data science and data analytics courses

Data science and data analytics courses can provide a comprehensive education in the field, help you gain practical skills, and provide networking opportunities. They can also help demonstrate your dedication and expertise to potential employers.

Types of courses available for aspiring data analysts

Various types of data science and data analytics courses are available, including certificate programs, boot camps, online courses, and degree programs. Each has its own strengths and weaknesses and can be tailored to fit different skill levels. Two highly recommended programs are Great Learning’s Data Science Courses and Data Analytics Courses, which provide in-depth knowledge of concepts and hands-on experience in solving real-world problems.

Comparison of different courses available

Consider factors like cost, length, content, and instructor experience when choosing a course. Research reviews and ratings from previous students to get an idea of the quality of the course.

Recommended courses for different skill levels

For beginners, introductory courses in Python and statistics can be helpful. For intermediate learners, courses on machine learning, data visualization, and databases can be useful. Advanced learners may benefit from big data, data engineering, and data science research courses.

Wrapping Up

Data analytics is a rapidly evolving field and a rewarding career choice for those with the right skills and experience. With the right strategies, aspiring data analysts can break in and position themselves for success. By mastering the essential skills and industry language, planning their entry carefully, and leveraging professional contacts, ambitious analysts can take the first steps toward their career goals and begin making an impact in the data analytics industry.

AWS Certified Developer – Associate DVA-C02 Exam Learning Path

AWS Certified Developer - Associate Certification

AWS Certified Developer – Associate DVA-C02 Exam Learning Path

  • AWS Certified Developer – Associate DVA-C02 exam is the latest AWS exam released on 27th February 2023 and has replaced the previous AWS Developer – Associate DVA-C01 certification exam.
  • I passed the AWS Developer – Associate DVA-C02 exam with a score of 835/1000.

AWS Certified Developer – Associate DVA-C02 Exam Content

  • DVA-C02 validates a candidate’s ability to demonstrate proficiency in developing, testing, deploying, and debugging AWS cloud-based applications.
  • DVA-C02 also validates a candidate’s ability to complete the following tasks:
    • Develop and optimize applications on AWS
    • Package and deploy by using continuous integration and continuous delivery (CI/CD) workflows
    • Secure application code and data
    • Identify and resolve application issues

Refer to the AWS Certified Developer – Associate Exam Blueprint

AWS Certified Developer - Associate Domains

AWS Certified Developer – Associate DVA-C02 Summary

  • DVA-C02 exam consists of 65 questions in 130 minutes, and the time is more than sufficient if you are well-prepared.
  • DVA-C02 exam includes two types of questions, multiple-choice and multiple-response.
  • DVA-C02 has a scaled score between 100 and 1,000. The scaled score needed to pass the exam is 720.
  • Associate exams currently cost $ 150 + tax.
  • You can get an additional 30 minutes if English is your second language by requesting Exam Accommodations. It might not be needed for Associate exams but is helpful for Professional and Specialty ones.
  • AWS exams can be taken either remotely or online, I prefer to take them online as it provides a lot of flexibility. Just make sure you have a proper place to take the exam with no disturbance and nothing around you.
  • Also, if you are taking the AWS Online exam for the first time try to join at least 30 minutes before the actual time as I have had issues with both PSI and Pearson with long wait times.

AWS Certified Developer – Associate DVA-C02 Exam Resources

AWS Certified Developer – Associate DVA-C02 Exam Topics

  • AWS DVA-C02 exam concepts cover solutions that align with the AWS Well-Architected Framework, spanning the scalable, highly available, cost-effective, performant, and resilient pillars.
  • AWS Certified Developer – Associate DVA-C02 exam covers a lot of the newer AWS services like Amplify and X-Ray, while focusing mainly on core services like Lambda, DynamoDB, Elastic Beanstalk, S3, and EC2.
  • AWS Certified Developer – Associate DVA-C02 exam is similar to DVA-C01 with more focus on hands-on development and deployment concepts rather than just the architectural concepts.
  • If you have been preparing for DVA-C01, DVA-C02 is pretty much the same except for the addition of some new services covering Amplify, X-Ray, etc.

Compute

  • Elastic Compute Cloud – EC2
  • Auto Scaling and ELB
    • Auto Scaling provides the ability to ensure a correct number of EC2 instances are always running to handle the load of the application.
    • Elastic Load Balancer allows incoming traffic to be distributed automatically across multiple healthy EC2 instances.
    • Auto Scaling & ELB work together to provide High Availability and Scalability.
    • Span both ELB and Auto Scaling across multiple AZs to provide High Availability.
    • Neither spans across regions; use Route 53 or Global Accelerator to route traffic across regions.
  • Lambda and serverless architecture, its features, and use cases.
    • Lambda integrated with API Gateway to provide a serverless, highly scalable, cost-effective architecture.
    • Lambda execution role needs the required permissions to integrate with other AWS services.
    • Environment variables to keep functions configurable.
    • Lambda Layers provide a convenient way to package libraries and other dependencies that you can use with your Lambda functions.
    • Function versions can be used to manage the deployment of the functions.
    • Function Aliases are mutable pointers to function versions and help manage releases (see the sketch after this list).
    • provides /tmp ephemeral scratch storage.
    • Integrates with X-Ray for distributed tracing.
    • Use RDS proxy for connection pooling.
  • Elastic Container Service – ECS with its ability to deploy containers and microservices architecture.
    • ECS role for tasks can be provided through taskRoleArn
    • ALB provides dynamic port mapping to allow multiple tasks of the same type to run on the same node.
  • Elastic Kubernetes Service – EKS
    • managed Kubernetes service to run Kubernetes in the AWS cloud and on-premises data centers
    • ideal for migration of an existing workload on Kubernetes
  • Elastic Beanstalk
    • at a high level, what it provides, and its ability to get an application running quickly.
    • Deployment types with their advantages and disadvantages
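
A minimal boto3 sketch, assuming an existing Lambda function named my-function and valid AWS credentials, of how versions and a mutable alias can be used to manage deployments:

```python
# Minimal boto3 sketch: publish a Lambda version and manage a mutable alias.
# Assumes an existing function named "my-function" and valid AWS credentials.
import boto3

lambda_client = boto3.client("lambda")

# Publish an immutable version from the current $LATEST code and configuration.
version = lambda_client.publish_version(
    FunctionName="my-function",
    Description="release candidate",
)["Version"]

# Point the mutable "live" alias at the new version (create it if missing).
try:
    lambda_client.update_alias(
        FunctionName="my-function", Name="live", FunctionVersion=version
    )
except lambda_client.exceptions.ResourceNotFoundException:
    lambda_client.create_alias(
        FunctionName="my-function", Name="live", FunctionVersion=version
    )

# Invoke through the alias so releases can be shifted without changing callers.
response = lambda_client.invoke(
    FunctionName="my-function:live", Payload=b'{"ping": true}'
)
print(response["StatusCode"])
```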

Databases

Storage

  • Simple Storage Service – S3
    • S3 storage classes with lifecycle policies
      • Understand the difference between S3 Standard vs S3 Standard-IA vs S3 One Zone-IA in terms of cost and durability
    • S3 Data Protection
      • S3 Client-side encryption encrypts data before storing it in S3
      • S3 encryption in transit can be enforced with S3 bucket policies using the aws:SecureTransport condition.
      • S3 encryption at rest can be enforced with S3 bucket policies using the x-amz-server-side-encryption condition key.
    • S3 features including
      • S3 provides cost-effective static website hosting. However, it does not support HTTPS. It can be integrated with CloudFront for HTTPS, caching, performance, and low-latency access.
      • S3 versioning provides protection against accidental overwrites and deletions. Used with MFA Delete feature.
      • S3 Pre-Signed URLs for both upload and download provide access without needing AWS credentials (see the sketch after this list).
      • S3 CORS allows cross-domain calls
      • S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket.
      • S3 Event Notifications to trigger events on various S3 events like objects added or deleted. Supports SQS, SNS, and Lambda functions.
      • Integrates with Amazon Macie to detect PII data
      • Replication supports same-region and cross-region replication and requires versioning to be enabled on both the source and destination buckets.
      • Integrates with Athena to analyze data in S3 using standard SQL.
  • Instance Store
    • is physically attached to the EC2 instance and provides the lowest latency and highest IOPS
  • Elastic Block Storage – EBS
    • EBS volume types and their use cases in terms of IOPS and throughput. SSD for IOPS and HDD for throughput
  • Elastic File System – EFS
    • simple, fully managed, scalable, serverless, and cost-optimized file storage for use with AWS Cloud and on-premises resources.
    • provides a shared volume across multiple EC2 instances, while an EBS volume can be attached to a single instance within the same AZ (or to multiple instances within the same AZ using EBS Multi-Attach)
    • can be mounted with Lambda functions
    • supports the NFS protocol, and is compatible with Linux-based AMIs
    • supports cross-region replication and storage classes for cost management.
  • Difference between EBS vs S3 vs EFS
  • Difference between EBS vs Instance Store
  • Would recommend referring to the Storage Options whitepaper; although a bit dated, most of it still holds true.
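
A minimal boto3 sketch of a couple of the S3 features above, generating a pre-signed download URL and enabling versioning; the bucket and key names are placeholders.

```python
# Minimal boto3 sketch of two S3 features mentioned above: pre-signed URLs and
# versioning. The bucket and key names are placeholders for illustration.
import boto3

s3 = boto3.client("s3")

# Generate a time-limited download URL that requires no AWS credentials to use.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-example-bucket", "Key": "reports/latest.csv"},
    ExpiresIn=3600,  # valid for 1 hour
)
print(url)

# Enable versioning to protect against accidental overwrites and deletions
# (also a prerequisite for replication).
s3.put_bucket_versioning(
    Bucket="my-example-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)
```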

Security & Identity

  • Identity Access Management – IAM
    • IAM role
      • provides permissions that are not associated with a particular user, group, or service and are intended to be assumable by anyone who needs them.
      • can be used for EC2 application access and Cross-account access
    • IAM Best Practices
  • Cognito
    • provides authentication, authorization, and user management for the web and mobile apps.
    • User pools are user directories that provide sign-up and sign-in options for the app users.
    • Identity pools enable you to grant the users access to other AWS services.
  • Key Management Services – KMS encryption service
    • for key management and envelope encryption
    • provides encryption at rest and does not handle encryption in transit.
  • AWS Certificate Manager – ACM
    • helps easily provision, manage, and deploy public and private SSL/TLS certificates for use with AWS services and internally connected resources.
  • AWS Secrets Manager
    • helps protect secrets needed to access applications, services, and IT resources.
    • supports automatic rotations of secrets
  • Secrets Manager vs Systems Manager Parameter Store for secrets management (see the sketch after this list)
    • Secrets Manager supports automatic credentials rotation and is integrated with Lambda and other services like RDS and DynamoDB.
    • Systems Manager Parameter Store provides free standard parameters and is cost-effective as compared to Secrets Manager.
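
A minimal boto3 sketch comparing the two secret/config stores discussed above; the secret and parameter names are placeholders.

```python
# Minimal boto3 sketch comparing the two secret/config stores discussed above.
# The secret and parameter names are placeholders for illustration.
import boto3

# Secrets Manager: supports automatic rotation; retrieval is a single call.
secrets = boto3.client("secretsmanager")
secret_value = secrets.get_secret_value(SecretId="prod/app/db-credentials")
print(secret_value["SecretString"])

# Systems Manager Parameter Store: free standard parameters; SecureString
# values are decrypted on read when WithDecryption is set.
ssm = boto3.client("ssm")
parameter = ssm.get_parameter(Name="/prod/app/api-key", WithDecryption=True)
print(parameter["Parameter"]["Value"])
```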

Front-end Web and Mobile

  • API Gateway
    • is a fully managed service that makes it easy for developers to publish, maintain, monitor, and secure APIs at any scale.
    • Powerful, flexible authentication mechanisms, such as AWS IAM policies, Lambda authorizer functions, and Amazon Cognito user pools.
    • supports Canary release deployments for safely rolling out changes.
    • define usage plans to meter and restrict third-party developer access, and configure throttling and quota limits on a per-API-key basis (see the sketch after this list)
    • integrates with AWS X-Ray for understanding and triaging performance latencies.
    • API Gateway CORS allows cross-domain calls
  • Amplify
    • is a complete solution that lets frontend web and mobile developers easily build, ship, and host full-stack applications on AWS, with the flexibility to leverage the breadth of AWS services as use cases evolve.
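
A minimal boto3 sketch, assuming an existing REST API and deployed stage, of creating a usage plan with throttling and a monthly quota and attaching an API key to it; the API ID, stage, and key names are placeholders.

```python
# Minimal boto3 sketch: create an API Gateway usage plan with throttling and a
# quota, then attach an API key to it. The API ID, stage, and key are placeholders.
import boto3

apigw = boto3.client("apigateway")

plan = apigw.create_usage_plan(
    name="bronze-tier",
    apiStages=[{"apiId": "abc123defg", "stage": "prod"}],
    throttle={"rateLimit": 10.0, "burstLimit": 20},   # requests per second / burst
    quota={"limit": 10000, "period": "MONTH"},        # requests per month
)

key = apigw.create_api_key(name="third-party-developer", enabled=True)

apigw.create_usage_plan_key(
    usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY"
)
```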

Management Tools

  • CloudWatch
    • monitoring to provide operational transparency
    • is extendable with custom metrics (see the sketch after this list)
    • does not capture memory metrics by default; these can be captured using the CloudWatch agent.
  • EventBridge
    • is a serverless event bus service that makes it easy to connect applications with data from a variety of sources.
    • enables building loosely coupled and distributed event-driven architectures.
  • CloudTrail
    • helps enable governance, compliance, and operational and risk auditing of the AWS account.
    • helps to get a history of AWS API calls and related events for the AWS account.
  • CloudFormation
    • easy way to create and manage a collection of related AWS resources, and provision and update them in an orderly and predictable fashion.
    • Supports Serverless Application Model – SAM for the deployment of serverless applications including Lambda.
    • CloudFormation StackSets extends the functionality of stacks by enabling you to create, update, or delete stacks across multiple accounts and Regions with a single operation.
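
A minimal boto3 sketch of publishing and reading back a custom CloudWatch metric, the kind of value the default metrics don't capture; the namespace and metric names are placeholders.

```python
# Minimal boto3 sketch: publish a custom CloudWatch metric and read it back.
# Namespace, metric, and dimension names are placeholders for illustration.
import datetime
import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish a single custom data point.
cloudwatch.put_metric_data(
    Namespace="MyApp",
    MetricData=[{
        "MetricName": "QueueBacklogPerWorker",
        "Value": 42.0,
        "Unit": "Count",
        "Dimensions": [{"Name": "Environment", "Value": "prod"}],
    }],
)

# Read the average over the last hour.
stats = cloudwatch.get_metric_statistics(
    Namespace="MyApp",
    MetricName="QueueBacklogPerWorker",
    Dimensions=[{"Name": "Environment", "Value": "prod"}],
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(hours=1),
    EndTime=datetime.datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
print(stats["Datapoints"])
```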

Integration Tools

  • Simple Queue Service – SQS
    • acts as a message queuing service, while SNS acts as a pub/sub notification service
    • acts as a decoupling service and provides resiliency
    • SQS features like visibility timeout, and long polling vs short polling
    • can drive scaling of an Auto Scaling group based on the queue size.
    • SQS Standard vs SQS FIFO difference
      • FIFO provides exactly-once processing but with lower throughput
  • Simple Notification Service – SNS
    • is a web service that coordinates and manages the delivery or sending of messages to subscribing endpoints or clients
    • Fanout pattern can be used to push messages to multiple subscribers (see the sketch after this list).
  • Know AWS Developer tools
    • CodeCommit is a secure, scalable, fully-managed source control service that helps to host secure and highly scalable private Git repositories.
    • CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy.
    • CodeDeploy helps automate code deployments to any instance, including EC2 instances and instances running on-premises.
    • CodePipeline is a fully managed continuous delivery service that helps automate the release pipelines for fast and reliable application and infrastructure updates.
    • CodeArtifact is a fully managed artifact repository service that makes it easy for organizations of any size to securely store, publish, and share software packages used in their software development process.
  • X-Ray
    • helps developers analyze and debug production, distributed applications, e.g., those built using a microservices or serverless architecture
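
A minimal boto3 sketch of the SNS-to-SQS fanout pattern mentioned above; resource names are placeholders, and the SQS queue policy that allows SNS to deliver messages is omitted for brevity.

```python
# Minimal boto3 sketch of the SNS -> SQS fanout pattern. Resource names are
# placeholders; the SQS queue policy that allows SNS to deliver messages is
# omitted here for brevity.
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

topic_arn = sns.create_topic(Name="order-events")["TopicArn"]
queue_url = sqs.create_queue(QueueName="order-events-analytics")["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Subscribe the queue to the topic; every published message fans out to all
# subscribed endpoints (more queues, Lambda functions, email, etc.).
sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

sns.publish(TopicArn=topic_arn, Message='{"orderId": "1234"}')
```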

Analytics

  • Redshift as a data warehousing and business intelligence tool
  • Kinesis
    • for real-time data capture and analytics.
    • Integrates with Lambda functions to perform transformations (see the sketch after this list)
  • AWS Glue
    • fully-managed, ETL service that automates the time-consuming steps of data preparation for analytics
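
A minimal boto3 sketch of writing a record to a Kinesis data stream and reading it back; the stream name is a placeholder and the stream must already exist.

```python
# Minimal boto3 sketch: write a record to a Kinesis data stream and read it
# back. The stream name is a placeholder and the stream must already exist.
import json
import boto3

kinesis = boto3.client("kinesis")

# Producers write records; the partition key determines the shard.
kinesis.put_record(
    StreamName="clickstream",
    Data=json.dumps({"userId": "u-1", "action": "click"}).encode(),
    PartitionKey="u-1",
)

# Consumers (or a Lambda trigger) read records shard by shard.
shard_id = kinesis.describe_stream(StreamName="clickstream")[
    "StreamDescription"]["Shards"][0]["ShardId"]
iterator = kinesis.get_shard_iterator(
    StreamName="clickstream", ShardId=shard_id, ShardIteratorType="TRIM_HORIZON"
)["ShardIterator"]
print(kinesis.get_records(ShardIterator=iterator)["Records"])
```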

Networking

  • Does not cover much networking or designing networks, but be sure you understand VPC, Subnets, Routes, Security Groups, etc.

AWS Cloud Computing Whitepapers

On the Exam Day

  • Make sure you are relaxed and get a good night’s sleep. The exam is not tough if you are well-prepared.
  • If you are taking the AWS Online exam
    • Try to join at least 30 minutes before the actual time as I have had issues with both PSI and Pearson with long wait times.
    • The online verification process does take some time and usually, there are glitches.
    • Remember, you would not be allowed to take the exam if you are late by more than 30 minutes.
    • Make sure your desk is clear with no watches or external monitors, keep your phone away, and ensure nobody can enter the room.

Finally, All the Best 🙂

AWS Auto Scaling Policies

AWS Auto Scaling Policies

Maintain a Steady Count of Instances

  • Auto Scaling ensures a steady minimum (or desired if specified) count of Instances will always be running.
  • If an instance is found unhealthy, Auto Scaling will terminate the Instance and launch a new one.
  • ASG determines the health state of each instance by periodically checking the results of EC2 instance status checks.
  • If the ASG is associated with an Elastic Load Balancer and configured to use the Elastic Load Balancing health check, Auto Scaling determines the health status of the instances by checking the results of both the EC2 instance status checks and the ELB instance health checks.
  • Auto Scaling marks an instance unhealthy and launches a replacement if
    • the instance is in a state other than running,
    • the system status is impaired, or
    • Elastic Load Balancing reports the instance state as OutOfService.
  • After an instance has been marked unhealthy as a result of an EC2 or ELB health check, it is almost immediately scheduled for replacement. It never automatically recovers its health.
  • For an unhealthy instance, the health status can be set back to healthy manually, but you will encounter an error if the instance is already terminating.
  • Because the interval between marking an instance unhealthy and its actual termination is so small, attempting to set an instance’s health status back to healthy is probably useful only for a suspended group.
  • When the instance is terminated, any associated Elastic IP addresses are disassociated and are not automatically associated with the new instance.
  • Elastic IP addresses must be associated with the new instance manually.
  • Similarly, when the instance is terminated, its attached EBS volumes are detached and must be attached to the new instance manually.

Manual Scaling

  • Manual scaling can be performed by
    • Changing the desired capacity limit of the ASG
    • Attaching/Detaching instances to the ASG
  • Attaching/Detaching an EC2 instance can be done only if
    • Instance is in the running state.
    • AMI used to launch the instance must still exist.
    • Instance is not a member of another ASG.
    • Instance is in the same Availability Zone as the ASG.
    • If the ASG is associated with a load balancer, the instance and the load balancer must both be in the same VPC.
  • Auto Scaling increases the desired capacity of the group by the number of instances being attached. But if the number of instances being attached plus the desired capacity exceeds the maximum size, the request fails.
  • When detaching instances, you have the option to decrement the desired capacity of the ASG by the number of instances being detached. If you choose not to decrement the capacity, Auto Scaling launches new instances to replace the ones that you detached.
  • If an instance is detached from an ASG that is also registered with a load balancer, the instance is deregistered from the load balancer. If connection draining is enabled for the load balancer, Auto Scaling waits for the in-flight requests to complete.

Scheduled Scaling

  • Scaling based on a schedule allows you to scale the application in response to predictable load changes, e.g., the last day of the month or the last day of a financial year.
  • Scheduled scaling requires the configuration of scheduled actions, which tell Auto Scaling to perform a scaling action at a specified time in the future, along with the start time at which the action should take effect and the new minimum, maximum, and desired size the group should have (see the sketch after this list).
  • Auto Scaling guarantees the order of execution for scheduled actions within the same group, but not for scheduled actions across groups.
  • Multiple scheduled actions can be specified, but each must have a unique time value; actions with overlapping times are rejected.
  • Cooldown periods are not supported.
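
A minimal boto3 sketch of the scheduled actions described above, assuming an existing ASG named web-asg: scale out every Thursday morning and back in on Friday evening using recurrence schedules.

```python
# Minimal boto3 sketch of scheduled scaling for a placeholder ASG named "web-asg".
# The cron-style Recurrence expressions are evaluated in UTC.
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out every Thursday at 08:00.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="thursday-scale-out",
    Recurrence="0 8 * * 4",    # Thursday 08:00
    MinSize=4,
    MaxSize=10,
    DesiredCapacity=6,
)

# Scale back in every Friday at 18:00.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="friday-scale-in",
    Recurrence="0 18 * * 5",   # Friday 18:00
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
)
```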

Dynamic Scaling

  • Allows automatic scaling in response to changing demand, e.g., scale out when CPU utilization of the instances goes above 70% and scale in when it goes below 30%.
  • ASG uses a combination of alarms & policies to determine when the conditions for scaling are met.
    • An alarm is an object that watches a single metric over a specified time period. When the value of the metric breaches the defined threshold for the specified number of periods, the alarm performs one or more actions (such as sending a message to Auto Scaling).
    • A policy is a set of instructions that tells Auto Scaling how to respond to alarm messages (see the sketch after this list).
  • Dynamic scaling process works as below
    1. CloudWatch monitors the specified metrics for all the instances in the Auto Scaling Group.
    2. Changes are reflected in the metrics as the demand grows or shrinks
    3. When the change in the metrics breaches the threshold of the CloudWatch alarm, the CloudWatch alarm performs an action. Depending on the breach, the action is a message sent to either the scale-in policy or the scale-out policy
    4. After the Auto Scaling policy receives the message, Auto Scaling performs the scaling activity for the ASG.
    5. This process continues until you delete either the scaling policies or the ASG.
  • When a scaling policy is executed, if the capacity calculation produces a number outside of the minimum and maximum size range of the group, EC2 Auto Scaling ensures that the new capacity never goes outside of the minimum and maximum size limits.
  • When the desired capacity reaches the maximum size limit, scaling out stops. If demand drops and capacity decreases, Auto Scaling can scale out again.
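
As a concrete illustration of the alarm-and-policy flow above, here is a minimal boto3 sketch for a placeholder ASG named web-asg: a simple scale-out policy plus a CloudWatch alarm that triggers it when average CPU exceeds 70%.

```python
# Minimal boto3 sketch of the alarm + policy flow described above, for a
# placeholder ASG named "web-asg": scale out by one instance when average
# CPU utilization stays above 70% for two 5-minute periods.
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-scale-out",
    PolicyType="SimpleScaling",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
    Cooldown=300,
)

cloudwatch.put_metric_alarm(
    AlarmName="web-asg-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],  # the alarm invokes the scaling policy
)
```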

Dynamic Scaling Policy Types

Target tracking scaling

  • Increase or decrease the current capacity of the group based on a target value for a specific metric.

Auto Scaling Target Tracking Scaling

Step scaling

  • Increase or decrease the current capacity of the group based on a set of scaling adjustments, known as step adjustments, that vary based on the size of the alarm breach.

Simple scaling

  • Increase or decrease the current capacity of the group based on a single scaling adjustment.

Multiple Policies

  • ASG can have more than one scaling policy attached at any given time.
  • Each ASG would have at least two policies: one to scale the architecture out and another to scale the architecture in.
  • If an ASG has multiple policies, there is always a chance that both policies can instruct the Auto Scaling to Scale Out or Scale In at the same time.
  • When these situations occur, Auto Scaling chooses the policy that has the greatest impact, i.e., the one that provides the largest capacity for both scale out and scale in. For example, if two policies are triggered at the same time and Policy 1 instructs to scale out by 1 instance while Policy 2 instructs to scale out by 2 instances, Auto Scaling uses Policy 2 and scales out by 2 instances as it has the greater impact.

Predictive Scaling

  • Predictive scaling can be used to increase the number of EC2 instances in the ASG in advance of daily and weekly patterns in traffic flows.
  • Predictive scaling is well suited for situations where you have:
    • Cyclical traffic, such as high use of resources during regular business hours and low use of resources during evenings and weekends
    • Recurring on-and-off workload patterns, such as batch processing, testing, or periodic data analysis
    • Applications that take a long time to initialize, causing a noticeable latency impact on application performance during scale-out events
  • Predictive scaling provides proactive scaling that can help scale faster by launching capacity in advance of forecasted load, compared to using only dynamic scaling, which is reactive in nature.
  • Predictive scaling uses machine learning to predict capacity requirements based on historical data from CloudWatch. The machine learning algorithm consumes the available historical data and calculates the capacity that best fits the historical load pattern, and then continuously learns based on new data to make future forecasts more accurate.
  • Predictive scaling supports forecast only mode so that you can evaluate the forecast before you allow predictive scaling to actively scale capacity.
  • When you are ready to start scaling with predictive scaling, switch the policy from forecast only mode to forecast and scale mode (see the sketch below).
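
A minimal boto3 sketch, for a placeholder ASG named web-asg, of a predictive scaling policy created in forecast-only mode as described above.

```python
# Minimal boto3 sketch of a predictive scaling policy for a placeholder ASG,
# started in forecast-only mode so the forecast can be evaluated before it is
# allowed to actually scale capacity.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="predictive-cpu",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [{
            "TargetValue": 50.0,
            "PredefinedMetricPairSpecification": {
                "PredefinedMetricType": "ASGCPUUtilization"
            },
        }],
        "Mode": "ForecastOnly",   # switch to "ForecastAndScale" when ready
    },
)
```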

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion, and correction.
  1. A user has created a web application with Auto Scaling. The user is regularly monitoring the application and he observed that the traffic is highest on Thursday and Friday between 8 AM to 6 PM. What is the best solution to handle scaling in this case?
    1. Add a new instance manually by 8 AM Thursday and terminate the same by 6 PM Friday
    2. Schedule Auto Scaling to scale up by 8 AM Thursday and scale down after 6 PM on Friday
    3. Schedule a policy which may scale up every day at 8 AM and scales down by 6 PM
    4. Configure a batch process to add a instance by 8 AM and remove it by Friday 6 PM
  2. A customer has a website which shows all the deals available across the market. The site experiences a load of 5 large EC2 instances generally. However, a week before Thanksgiving vacation they encounter a load of almost 20 large instances. The load during that period varies over the day based on the office timings. Which of the below mentioned solutions is cost effective as well as help the website achieve better performance?
    1. Keep only 10 instances running and manually launch 10 instances every day during office hours.
    2. Setup to run 10 instances during the pre-vacation period and only scale up during the office time by launching 10 more instances using the AutoScaling schedule.
    3. During the pre-vacation period setup a scenario where the organization has 15 instances running and 5 instances to scale up and down using Auto Scaling based on the network I/O policy.
    4. During the pre-vacation period setup 20 instances to run continuously.
  3. A user has setup Auto Scaling with ELB on the EC2 instances. The user wants to configure that whenever the CPU utilization is below 10%, Auto Scaling should remove one instance. How can the user configure this?
    1. The user can get an email using SNS when the CPU utilization is less than 10%. The user can use the desired capacity of Auto Scaling to remove the instance
    2. Use CloudWatch to monitor the data and Auto Scaling to remove the instances using scheduled actions
    3. Configure CloudWatch to send a notification to Auto Scaling Launch configuration when the CPU utilization is less than 10% and configure the Auto Scaling policy to remove the instance
    4. Configure CloudWatch to send a notification to the Auto Scaling group when the CPU Utilization is less than 10% and configure the Auto Scaling policy to remove the instance

References

Auto_Scaling_Options

Kubernetes Architecture

Kubernetes Architecture

  • A Kubernetes cluster consists of at least one control plane and one or more worker machines, called nodes.
  • Both the control planes and node instances can be physical devices, virtual machines, or instances in the cloud.
  • In managed Kubernetes environments like AWS EKS, GCP GKE, and Azure AKS, the control plane is managed by the cloud provider.

Kubernetes Architecture

Control Plane

  • The control plane is also known as a master node or head node.
  • The control plane manages the worker nodes and the Pods in the cluster.
  • In production environments, the control plane usually runs across multiple computers and a cluster usually runs multiple nodes, providing fault-tolerance and high availability.
  • It is not recommended to run user workloads on the master node.
  • The Control plane’s components make global decisions about the cluster, as well as detect and respond to cluster events.
  • The control plane receives input from a CLI or UI via an API.

API Server (kube-apiserver)

  • API server exposes a REST interface to the Kubernetes cluster. It is the front end for the Kubernetes control plane.
  • All operations against pods, services, and so forth, are executed programmatically by communicating with the endpoints provided by it.
  • It tracks the state of all cluster components and manages the interaction between them.
  • It is designed to scale horizontally.
  • It consumes YAML/JSON manifest files.
  • It validates and processes requests made via the API (see the sketch below).
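
A minimal sketch using the official Kubernetes Python client, assuming a local kubeconfig, of talking to the API server: everything, including listing nodes and pods, goes through its REST endpoints.

```python
# Minimal sketch using the official Kubernetes Python client: every operation
# below is a REST call to the kube-apiserver. Assumes a kubeconfig is available
# locally (e.g. ~/.kube/config).
from kubernetes import client, config

config.load_kube_config()          # authenticate against the API server
v1 = client.CoreV1Api()

# List the nodes registered with the cluster.
for node in v1.list_node().items:
    print("node:", node.metadata.name)

# List all pods across namespaces, with the node each was scheduled onto.
for pod in v1.list_pod_for_all_namespaces().items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name} -> {pod.spec.node_name}")
```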

etcd (key-value store)

  • Etcd is a consistent, distributed, and highly-available key-value store.
  • is stateful, persistent storage that stores all of Kubernetes cluster data (cluster state and config).
  • is the source of truth for the cluster.
  • can be part of the control plane, or, it can be configured externally.
  • ETCD benefits include
    • Fully replicated: Every node in an etcd cluster has access to the full data store.
    • Highly available: etcd is designed to have no single point of failure and gracefully tolerate hardware failures and network partitions.
    • Reliably consistent: Every data ‘read’ returns the latest data ‘write’ across the cluster.
    • Fast: etcd has been benchmarked at 10,000 writes per second.
    • Secure: etcd supports automatic Transport Layer Security (TLS) and optional secure socket layer (SSL) client certificate authentication.
    • Simple: Any application, from simple web apps to highly complex container orchestration engines such as Kubernetes, can read or write data to etcd using standard HTTP/JSON tools.

Scheduler (kube-scheduler)

  • The scheduler is responsible for assigning work to the various nodes. It keeps watch over the resource capacity and ensures that a worker node’s performance is within an appropriate threshold.
  • It schedules pods to worker nodes.
  • It watches api-server for newly created Pods with no assigned node, and selects a healthy node for them to run on.
  • If there are no suitable nodes, the pods are put in a pending state until such a healthy node appears.
  • It watches API Server for new work tasks.
  • Factors taken into account for scheduling decisions include:
    • Individual and collective resource requirements.
    • Hardware/software/policy constraints.
    • Affinity and anti-affinity specifications.
    • Data locality.
    • Inter-workload interference.
    • Deadlines and taints.

Controller Manager (kube-controller-manager)

  • Controller manager is responsible for making sure that the shared state of the cluster is operating as expected.
  • It watches the desired state of the objects it manages and watches their current state through the API server.
  • It takes corrective steps to make sure that the current state is the same as the desired state.
  • It is a controller of controllers.
  • It runs controller processes. Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.
  • Some types of controllers are:
    • Node controller: Responsible for noticing and responding when nodes go down.
    • Job controller: Watches for Job objects that represent one-off tasks, then creates Pods to run those tasks to completion.
    • Endpoints controller: Populates the Endpoints object (that is, joins Services & Pods).
    • Service Account & Token controllers: Create default accounts and API access tokens for new namespaces.

Cloud Controller Manager

  • The cloud controller manager integrates with the underlying cloud technologies in your cluster when the cluster is running in a cloud environment.
  • The cloud-controller-manager only runs controllers that are specific to your cloud provider.
  • Cloud controller lets you link your cluster into your cloud provider’s API, and separates out the components that interact with that cloud platform from components that only interact with your cluster.
  • The following controllers can have cloud provider dependencies:
    • Node controller: For checking the cloud provider to determine if a node has been deleted in the cloud after it stops responding.
    • Route controller: For setting up routes in the underlying cloud infrastructure.
    • Service controller: For creating, updating, and deleting cloud provider load balancers.

Data Plane Worker Node(s)

  • The data plane consists of the worker nodes, also known as compute nodes.
  • A virtual or physical machine that contains the services necessary to run containerized applications.
  • A Kubernetes cluster needs at least one worker node, but normally has many.
  • The worker node(s) host the Pods that are the components of the application workload.
  • Pods are scheduled and orchestrated to run on nodes.
  • Cluster can be scaled up and down by adding and removing nodes.
  • Node components run on every node, maintaining running pods and providing the Kubernetes runtime environment.

kubelet

  • A Kubelet tracks the state of a pod to ensure that all the containers are running and healthy
  • provides a heartbeat message every few seconds to the control plane.
  • runs as an agent on each node in the cluster.
  • acts as a conduit between the API server and the node.
  • instantiates and executes Pods.
  • watches API Server for work tasks.
  • gets instructions from the control plane and reports back to it.

kube-proxy

  • Kube proxy is a networking component that routes traffic coming into a node from the service to the correct containers.
  • is a network proxy that runs on each node in a cluster.
  • manages IP translation and routing.
  • maintains network rules on nodes. These network rules allow network communication to Pods from inside or outside of the cluster.
  • ensures each Pod gets a unique IP address.
  • makes it possible for all containers in a pod to share a single IP address.
  • facilitates Kubernetes networking services and load-balancing across all pods in a service.
  • It deals with individual host sub-netting and ensures that the services are available to external parties.

Container runtime

  • Container runtime is responsible for running containers (in Pods).
  • Kubernetes supports any implementation of the Container Runtime Interface (CRI) specification.
  • To run the containers, each worker node has a container runtime engine.
  • It pulls images from a container image registry and starts and stops containers.
  • Kubernetes supports several container runtimes, such as containerd and CRI-O.


Amazon SQS Features

Amazon SQS Features

  • Visibility timeout defines the period where SQS blocks the visibility of the message and prevents other consuming components from receiving and processing that message.
  • Dead-letter queues – DLQ helps source queues (Standard and FIFO) target messages that can’t be processed (consumed) successfully.
  • DLQ Redrive policy specifies the source queue, the dead-letter queue, and the conditions under which messages are moved from the former to the latter if the consumer of the source queue fails to process a message a specified number of times.
  • Short and Long polling control how the queues are polled, and long polling helps reduce empty responses.

Queue and Message Identifiers

Queue URLs

  • Queue is identified by a unique queue name within the same AWS account
  • Each queue is assigned a Queue URL identifier, e.g., http://sqs.us-east-1.amazonaws.com/123456789012/queue2
  • Queue URL is needed to perform any operation on the Queue.

Message ID

  • Message IDs are useful for identifying messages
  • Each message receives a system-assigned message ID that is returned with the SendMessage response.
  • To delete a message, the message’s receipt handle instead of the message ID is needed
  • Message ID can be up to 100 characters long

Receipt Handle

  • When a message is received from a queue, a receipt handle is returned with the message which is associated with the act of receiving the message rather than the message itself.
  • Receipt handle is required, not the message id, to delete a message or to change the message visibility.
  • If a message is received more than once, a different receipt handle is assigned each time it is received, and the latest one should always be used.

Message Deduplication ID

  • Message Deduplication ID is used for the deduplication of sent messages.
  • Message Deduplication ID is applicable for FIFO queues.
  • If a message with a particular message deduplication ID is sent successfully, any messages sent with the same message deduplication ID are accepted successfully but aren’t delivered during the 5-minute deduplication interval.

Message Group ID

  • Message Group ID specifies that a message belongs to a specific message group.
  • Message Group ID is applicable for FIFO queues.
  • Messages that belong to the same message group are always processed one by one, in a strict order relative to the message group.
  • However, messages that belong to different message groups might be processed out of order (see the sketch below).
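
A minimal boto3 sketch, using a placeholder FIFO queue, of sending messages with a message group ID (ordering within the group) and an explicit deduplication ID (5-minute deduplication window).

```python
# Minimal boto3 sketch: send messages to a placeholder FIFO queue with a
# message group ID (strict ordering per group) and an explicit deduplication ID
# (duplicates within the 5-minute window are accepted but not redelivered).
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(
    QueueName="orders.fifo",
    Attributes={"FifoQueue": "true"},
)["QueueUrl"]

for i in range(3):
    sqs.send_message(
        QueueUrl=queue_url,
        MessageBody=f'{{"orderId": "1234", "step": {i}}}',
        MessageGroupId="order-1234",          # processed in order within this group
        MessageDeduplicationId=f"order-1234-step-{i}",
    )
```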

Visibility timeout


  • SQS does not delete a message once it is received by a consumer because, with a distributed system, there is no guarantee that the consumer will actually receive the message (it’s possible the connection could break or the component could fail before receiving the message).
  • The consumer should explicitly delete the message from the Queue once it is received and successfully processed.
  • As the message is still available in the Queue, other consumers would be able to receive and process it again, and this needs to be prevented.
  • SQS handles the above behavior using Visibility timeout.
  • SQS blocks the visibility of the message for the Visibility timeout period, which is the time during which SQS prevents other consuming components from receiving and processing that message.
  • Consumer should delete the message within the Visibility timeout. If the consumer fails to delete the message before the visibility timeout expires, the message is visible again to other consumers.
  • Once visible, the message is available for other consumers to consume, which can lead to duplicate messages.
  • Visibility timeout considerations
    • Clock starts ticking once SQS returns the message
    • should be large enough to take into account the processing time for each message
    • default Visibility timeout for each Queue is 30 seconds and can be changed at the Queue level
    • when receiving messages, a special visibility timeout for the returned messages can be set without changing the overall queue timeout using the receipt handle
    • can be extended by the consumer, using ChangeMessageVisibility, if the consumer thinks it won’t be able to process the message within the current visibility timeout period. SQS restarts the timeout period using the new value (see the sketch after this list).
    • a message’s Visibility timeout extension applies only to that particular receipt of the message and does not affect the timeout for the queue or later receipts of the message
  • SQS has a 120,000 limit for the number of inflight messages per queue, i.e., messages received but not yet deleted; further receive requests return an error after the limit is reached.
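
A minimal boto3 sketch of the receive, optionally extend, then delete cycle described above; the queue URL is a placeholder.

```python
# Minimal boto3 sketch of the visibility-timeout workflow: receive a message,
# extend its visibility if processing needs more time, then delete it.
# The queue URL is a placeholder.
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/queue2"

messages = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=1, VisibilityTimeout=60
).get("Messages", [])

for message in messages:
    receipt_handle = message["ReceiptHandle"]

    # Processing is taking longer than expected: extend this receipt's
    # visibility timeout so other consumers don't see the message yet.
    sqs.change_message_visibility(
        QueueUrl=queue_url, ReceiptHandle=receipt_handle, VisibilityTimeout=300
    )

    # ... process the message ...

    # Delete within the visibility timeout, using the receipt handle
    # (not the message ID), so it is not delivered again.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=receipt_handle)
```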

Message Lifecycle


  1. Component 1 sends Message A to a queue, and the message is redundantly distributed across the SQS servers.
  2. When Component 2 is ready to process a message, it retrieves messages from the queue, and Message A is returned. While Message A is being processed, it remains in the queue but is not returned to subsequent receive requests for the duration of the visibility timeout.
  3. Component 2 deletes Message A from the queue to avoid the message being received and processed again once the visibility timeout expires.

SQS Dead Letter Queues – DLQ

  • SQS supports dead-letter queues (DLQ), which other queues (source queues – Standard and FIFO) can target for messages that can’t be processed (consumed) successfully.
  • Dead-letter queues are useful for debugging the application or messaging system because a DLQ helps isolate unconsumed messages to determine why their processing doesn’t succeed.
  • DLQ redrive policy
    • specifies the source queue, the dead-letter queue, and the conditions under which SQS moves messages from the former to the latter if the consumer of the source queue fails to process a message a specified number of times.
    • specifies which source queues can access the dead-letter queue.
    • also helps move the messages back to the source queue.
  • SQS does not create the dead-letter queue automatically. DLQ must first be created before being used.
  • DLQ for the source queue should be of the same type i.e. Dead-letter queue of a FIFO queue must also be a FIFO queue. Similarly, the dead-letter queue of a standard queue must also be a standard queue.
  • DLQ should be in the same account and region as the source queue (see the sketch below).
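
A minimal boto3 sketch of creating a dead-letter queue and attaching it to a source queue via a redrive policy; the queue names and maxReceiveCount are placeholders.

```python
# Minimal boto3 sketch: create a dead-letter queue and attach it to a source
# queue via a redrive policy, so messages that fail processing 5 times are
# moved to the DLQ. Queue names are placeholders.
import json
import boto3

sqs = boto3.client("sqs")

dlq_url = sqs.create_queue(QueueName="orders-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

sqs.create_queue(
    QueueName="orders",
    Attributes={
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": dlq_arn,
            "maxReceiveCount": "5",   # after 5 failed receives, move to the DLQ
        })
    },
)
```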

SQS Dead Letter Queue - Redrive Policy

SQS Delay Queues

  • Delay queues help postpone the delivery of new messages to consumers for a number of seconds
  • Messages sent to the delay queue remain invisible to consumers for the duration of the delay period.
  • Minimum delay is 0 seconds (default) and the Maximum is 15 minutes.
  • Delay queues are similar to visibility timeouts as both features make messages unavailable to consumers for a specific period of time.
  • The difference between the two is that, for delay queues, a message is hidden when it is first added to the queue, whereas for visibility timeouts a message is hidden only after it is consumed from the queue (see the sketch below).
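
A minimal boto3 sketch of a delay queue with a queue-level default delay and a per-message override; the queue name is a placeholder.

```python
# Minimal boto3 sketch: a delay queue hides every new message for 60 seconds,
# and an individual message can override this with its own DelaySeconds.
# The queue name is a placeholder.
import boto3

sqs = boto3.client("sqs")

queue_url = sqs.create_queue(
    QueueName="deferred-work",
    Attributes={"DelaySeconds": "60"},   # queue-level default delay
)["QueueUrl"]

# Per-message delay (0-900 seconds) overrides the queue default on standard queues.
sqs.send_message(QueueUrl=queue_url, MessageBody="run later", DelaySeconds=120)
```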

Short and Long polling

SQS provides short polling and long polling to receive messages from a queue.

Short Polling

  • ReceiveMessage request queries only a subset of the servers (based on a weighted random distribution) to find messages that are available to include in the response.
  • SQS sends the response right away, even if the query found no messages.
  • By default, queues use short polling.

Long Polling

  • ReceiveMessage request queries all of the servers for messages.
  • SQS sends a response after it collects at least one available message, up to the maximum number of messages specified in the request.
  • SQS sends an empty response only if the polling wait time expires.
  • A wait time greater than 0 seconds triggers long polling, with a maximum of 20 seconds.
  • Long polling helps
    • reduce the cost of using SQS by reducing the number of empty responses (when there are no messages available for a ReceiveMessage request)
    • reduce false empty responses (when messages are available but aren’t included in a response).
    • return messages as soon as they become available (see the sketch after this list).
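
A minimal boto3 sketch of enabling long polling, both queue-wide via ReceiveMessageWaitTimeSeconds and per request via WaitTimeSeconds; the queue URL is a placeholder.

```python
# Minimal boto3 sketch of long polling: enable it queue-wide via
# ReceiveMessageWaitTimeSeconds, or per request via WaitTimeSeconds (max 20s).
# The queue URL is a placeholder.
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/queue2"

# Queue-level long polling: every ReceiveMessage call waits up to 20 seconds.
sqs.set_queue_attributes(
    QueueUrl=queue_url, Attributes={"ReceiveMessageWaitTimeSeconds": "20"}
)

# Per-request long polling: overrides the queue setting for this call only.
response = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
)
print(len(response.get("Messages", [])))
```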

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion, and correction.
  1. How does Amazon SQS allow multiple readers to access the same message queue without losing messages or processing them many times?
    1. By identifying a user by his unique id
    2. By using unique cryptography
    3. Amazon SQS queue has a configurable visibility timeout
    4. Multiple readers can’t access the same message queue
  2. If a message is retrieved from a queue in Amazon SQS, how long is the message inaccessible to other users by default?
    1. 0 seconds
    2. 1 hour
    3. 1 day
    4. forever
    5. 30 seconds
  3. When a Simple Queue Service message triggers a task that takes 5 minutes to complete, which process below will result in successful processing of the message and remove it from the queue while minimizing the chances of duplicate processing?
    1. Retrieve the message with an increased visibility timeout, process the message, delete the message from the queue
    2. Retrieve the message with an increased visibility timeout, delete the message from the queue, process the message
    3. Retrieve the message with increased DelaySeconds, process the message, delete the message from the queue
    4. Retrieve the message with increased DelaySeconds, delete the message from the queue, process the message
  4. You need to process long-running jobs once and only once. How might you do this?
    1. Use an SNS queue and set the visibility timeout to long enough for jobs to process.
    2. Use an SQS queue and set the reprocessing timeout to long enough for jobs to process.
    3. Use an SQS queue and set the visibility timeout to long enough for jobs to process.
    4. Use an SNS queue and set the reprocessing timeout to long enough for jobs to process.
  5. You are getting a lot of empty receive requests when using Amazon SQS. This is making a lot of unnecessary network load on your instances. What can you do to reduce this load?
    1. Subscribe your queue to an SNS topic instead.
    2. Use as long of a poll as possible, instead of short polls. 
    3. Alter your visibility timeout to be shorter.
    4. Use sqsd on your EC2 instances.
  6. Company B provides an online image recognition service and utilizes SQS to decouple system components for scalability. The SQS consumers poll the imaging queue as often as possible to keep end-to-end throughput as high as possible. However, Company B is realizing that polling in tight loops is burning CPU cycles and increasing costs with empty responses. How can Company B reduce the number of empty responses?
    1. Set the imaging queue visibility Timeout attribute to 20 seconds
    2. Set the Imaging queue ReceiveMessageWaitTimeSeconds attribute to 20 seconds (Long polling. Refer link)
    3. Set the imaging queue MessageRetentionPeriod attribute to 20 seconds
    4. Set the DelaySeconds parameter of a message to 20 seconds