AWS Certified Solution Architect – Associate Exam Learning Path

The AWS Solution Architect – Associate exam basically validates the following two abilities:

  • Identify and gather requirements in order to define a solution to be built using architecture best practices.
  • Provide guidance on architectural best practices to developers and system administrators throughout the lifecycle of the project.

Refer to the AWS Solution Architect – Associate Exam Blueprint

AWS Solution Architect - Associate Exam Break up

AWS Cloud Computing Whitepapers

AWS Solution Architect – Associate Exam Contents

NOTE: Based on recent feedback from users, the AWS SA-A exam now includes questions on the newer Lambda, ALB (including ALB vs. Classic Load Balancer), ECS and API Gateway services

Domain 1.0: Designing highly available, cost-efficient, fault-tolerant, scalable systems

  1. Identify and recognize cloud architecture considerations, such as fundamental components and effective designs. Content may include the following:

Domain 2.0: Implementation/Deployment

  1. Identify the appropriate techniques and methods using Amazon EC2, Amazon S3, AWS Elastic Beanstalk, AWS CloudFormation, AWS OpsWorks, Amazon Virtual Private Cloud (VPC), and AWS Identity and Access Management (IAM) to code and implement a cloud solution.
    Content may include the following:

    1. Configure an Amazon Machine Image (AMI)
    2. Operate and extend service management in a hybrid IT architecture
    3. Configure services to support compliance requirements in the cloud
    4. Launch instances across the AWS global infrastructure
    5. Configure IAM policies and best practices

Domain 3.0: Data Security

  1. Recognize and implement secure practices for optimum cloud deployment and maintenance. Content may include the following:
  2. Recognize critical disaster recovery techniques and their implementation.
    Content may include the following:

Domain 4.0: Troubleshooting

  1. Content may include the following:

NOTE: I have just marked the topics in line with the AWS Exam Blueprint. Be sure to check the blueprint, as it is updated regularly, and go through the Whitepapers, FAQs and Re-Invent videos.

AWS Solution Architect – Associate Exam Resources

Braincert-AWS-Certified-SA-Associate-Practice-Exam

Udemy AWS Certified Solution Architect - Associate Practice Tests

  • Purchased the A Cloud Guru AWS Certified Solutions Architect – Associate course from Udemy (you should get it for $10-$15 on discount); it helps to get a clear picture of the format, topics and relevant sections
  • Opinion: the A Cloud Guru course is good by itself, but it is not sufficient to pass the exam; it might help you cover about 50-60% of the exam questions
  • You can also check the new course on Udemy, AWS Certified Solutions Architect Associate Exam, which covers the exam topics in detail with scenario-based practice questions and visual aids.
  • Signed up with AWS for the Free Tier account, which provides a lot of the services to be tried for free within certain limits, which are more than enough to get things going. Be sure to decommission anything beyond the free limits, preventing any surprises 🙂
  • Also used QwikLabs for all the introductory courses, which are free and allow you to try out the services multiple times (I think the max is 5, as I got warnings a couple of times)
  • Update: Qwiklabs seems to have reduced the free courses quite a lot and now provide targeted labs for AWS Certification exams which are charged
  • Read the FAQs at least for the important topics, as they cover important points and are good for a quick review
  • Did not purchase the AWS Practice exams, as the questions are available all around. But if you want to check the format, it might be useful.
  • You can also check practice tests

AWS Elasticsearch – Certification

AWS Elasticsearch

  • Amazon Elasticsearch Service is a managed service that makes it easy to deploy, operate, and scale Elasticsearch clusters in the AWS Cloud.
  • Elasticsearch is a popular open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and clickstream analytics
  • Elasticsearch provides
    • real-time, distributed search and analytics engine
    • ability to provision all the resources for an Elasticsearch cluster and launch the cluster
    • easy to use cluster scaling options
    • self-healing clusters, which automatically detect and replace failed Elasticsearch nodes, reducing the overhead associated with self-managed infrastructures
    • domain snapshots to back up and restore ES domains and replicate domains across AZs
    • data durability
    • enhanced security with IAM access control
    • node monitoring
    • multiple configurations of CPU, memory, and storage capacity, known as instance types
    • storage volumes for the data using EBS volumes
    • Multiple geographical locations for your resources, known as regions and Availability Zones
    • ability to span cluster nodes across two AZs in the same region, known as zone awareness, for high availability and redundancy
    • dedicated master nodes to improve cluster stability
    • data visualization using the Kibana tool
    • integration with CloudWatch for monitoring ES domain metrics
    • integration with CloudTrail for auditing configuration API calls to ES domains
    • integration with S3, Kinesis, and DynamoDB for loading streaming data
    • ability to handle structured and unstructured data
    • HTTP Rest APIs
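
To make the options above concrete, here is a minimal boto3 sketch that creates a domain with zone awareness, dedicated master nodes and EBS storage enabled; the domain name, instance types and counts are hypothetical values, not recommendations.

```python
import boto3

es = boto3.client('es', region_name='us-east-1')

response = es.create_elasticsearch_domain(
    DomainName='log-analytics',              # hypothetical domain name
    ElasticsearchVersion='6.3',
    ElasticsearchClusterConfig={
        'InstanceType': 'm4.large.elasticsearch',
        'InstanceCount': 2,
        'ZoneAwarenessEnabled': True,        # span data nodes across two AZs
        'DedicatedMasterEnabled': True,      # dedicated masters for cluster stability
        'DedicatedMasterType': 'm4.large.elasticsearch',
        'DedicatedMasterCount': 3,
    },
    EBSOptions={                             # EBS volumes for domain storage
        'EBSEnabled': True,
        'VolumeType': 'gp2',
        'VolumeSize': 20,
    },
)
print(response['DomainStatus']['ARN'])
```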

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day, and both the answers and questions might become outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. You need to perform ad-hoc analysis on log data, including searching quickly for specific error codes and reference numbers. Which should you evaluate first?
    1. AWS Elasticsearch Service (Elasticsearch Service (ES) is a managed service that makes it easy to deploy, operate, and scale Elasticsearch clusters in the AWS cloud. Elasticsearch is a popular open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and click stream analytics. Refer link)
    2. AWS RedShift
    3. AWS EMR
    4. AWS DynamoDB
  2. You are hired as the new head of operations for a SaaS company. Your CTO has asked you to make debugging any part of your entire operation simpler and as fast as possible. She complains that she has no idea what is going on in the complex, service-oriented architecture, because the developers just log to disk, and it’s very hard to find errors in logs on so many services. How can you best meet this requirement and satisfy your CTO?
    1. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Lambda. Use the Lambda to analyze logs as soon as they come in and flag issues.
    2. Begin using CloudWatch Logs on every service. Stream all Log Groups into S3 objects. Use AWS EMR cluster jobs to perform adhoc MapReduce analysis and write new queries when needed.
    3. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Kinesis. Use Apache Spark on AWS EMR to perform at-scale stream processing queries on the log chunks and flag issues.
    4. Begin using CloudWatch Logs on every service. Stream all Log Groups into an AWS Elasticsearch Service Domain running Kibana 4 and perform log analysis on a search cluster. (AWS Elasticsearch with Kibana stack is designed specifically for real-time, ad-hoc log analysis and aggregation)

AWS Certification Exam Resources, Courses, Quizzes

AWS Certification Exam Courses, Resources, Quizzes

  • Clearing the AWS certifications for Solution Architect, SysOps Associate and Solution Architect Professional has been a long journey of over a year now.
  • I still remember starting fresh on AWS with no knowledge; the plethora of resources, courses and documentation can be very confusing, overwhelming and tough to navigate.
  • So I have put together some resources, courses and deals which might help you get started at a reasonable cost

NOTE: These are my personal recommendations, tried & tested.

AWS documentation

  • Nothing can replace the fantastic AWS documentation that the team has put together and maintained
  • AWS documentation includes
    • AWS Developer, User guides
    • AWS FAQs – Very Important to get a quick summary for important questions targeted in the exams
    • AWS Re-Invent Videos – quick way to know details of the services
    • AWS Whitepapers – covers condensed knowledge of important topics and services

Online Courses

Udemy

  • For Associate, I started with the A Cloud Guru courses from Udemy, and they provide a nice overview of the exam topics
  • However, they are not sufficient to clear the exams
  • Udemy does not have the A Cloud Guru professional courses
  • They are listed at a very high price; however, wait for offers from Udemy and you can get the Associate ones for $10-$15
  • I will keep listing any Udemy offers below



A Cloud Guru

  • As mentioned above, Associate courses from A Cloud Guru are good to get started and can be purchased from Udemy
  • The A Cloud Guru forums have very nice discussions over the topics; I highly recommend going through them
  • I had purchased Solution Architect – Professional course from A Cloud Guru site directly
    • Personally, I find it very expensive, and it does not cover the topics in great detail

Linux Academy

  • I haven’t tried the Linux Academy courses for Associate, so if any of you have an opinion, let me know
  • I had purchased the Solution Architect – Professional course and found it detailed and exhaustive, with labs
  • Personally, I would recommend it over the A Cloud Guru one
  • You can try the Linux Academy Trial for 7 days and then pay $29 monthly, which gives you access to everything, but for a limited period

Free Linux Academy, PluralSight and Opsgility courses

  • I started preparing for Azure and was checking for resources, and stumbled upon a 3-month free subscription for Linux Academy, Pluralsight and Opsgility.
  • Follow the steps below
    • Navigate to Visual Studio Dev Essentials
    • Click on Join or Access Now
    • Sign up, as it’s free
    • Microsoft provides 3 months of access to the courses as part of their Education Program
    • Activate the code and you are good to go
    • Enjoy the same till it lasts

Free Subscription for Linux Academy, Opsgility, Pluralsight

Practice Quiz

  • Personally, I have not taken any practice test, either officially from AWS or from any other provider
  • However, there are a lot of sites, apart from my blog, which provide AWS questions & answers, but I have found them to provide incorrect answers, so always research from your side
  • I have got a lot of positive feedback from colleagues taking tests on Whizlabs. Currently they have tests for the Associate exams, but Professional ones are coming up as well.


Udemy AWS Certified Solution Architect - Associate Practice Tests

  • If there is any other online quiz which you found very useful, let me know and I can add the same

Feel free to provide any feedback or any other resources that you found very helpful, and help the community in return.

AWS Cloud Migration Services – Certification

AWS Cloud Migration Services

  • AWS Cloud Migration services help to address a lot of common use cases such as
    • cloud migration,
    • disaster recovery,
    • data center decommission, and
    • content distribution.
  • For migrating data from on-premises to AWS, the major aspects for consideration are
    • amount of data and network speed
    • data security in transit
    • existing application knowledge for recreation

NOTE: Topic mainly for Professional Exam Only

VPN

  • connection utilizes IPSec to establish encrypted network connectivity between on-premises network and VPC over the Internet.
  • connections can be configured in minutes and a good solution for an immediate need, have low to modest bandwidth requirements, and can tolerate the inherent variability in Internet-based connectivity.
  • still requires the Internet and needs to be configured using a Virtual Private Gateway (VGW) and a Customer Gateway (CGW)

AWS EC2 VM Import/Export

  • allows easy import of virtual machine images from an existing environment to EC2 instances and export back to the on-premises environment
  • allows leveraging of existing investments in the virtual machines, built to meet compliance requirements, configuration management and IT security by bringing those virtual machines into EC2 as ready-to-use instances
  • Common usages include
    • Migrate Existing Applications and Workloads to EC2, preserving the software and settings configured in the existing VMs
    • Copy Your VM Image Catalog to Amazon EC2
    • Create a Disaster Recovery Repository for your VM images
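
As a rough sketch of how an import might be driven programmatically with boto3, assuming the exported disk image has already been uploaded to S3 and the vmimport service role required by VM Import/Export is in place; the bucket and key names are hypothetical.

```python
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

task = ec2.import_image(
    Description='Legacy app server',
    DiskContainers=[{
        'Description': 'Web server disk',
        'Format': 'vmdk',                        # image exported from the existing environment
        'UserBucket': {
            'S3Bucket': 'my-vm-images',          # hypothetical bucket
            'S3Key': 'exports/webserver.vmdk',
        },
    }],
)

# The import runs asynchronously; poll the task until the AMI is ready
print(task['ImportTaskId'])
print(ec2.describe_import_image_tasks(ImportTaskIds=[task['ImportTaskId']]))
```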

AWS Direct Connect

  • provides a dedicated physical connection between the corporate network and AWS Direct Connect location with no data transfer over the Internet.
  • helps bypass Internet service providers (ISPs) in the network path
  • helps reduce network costs, increase bandwidth throughput, and provide a more consistent network experience than with Internet-based connection
  • takes time to setup and involves third parties
  • is not redundant by itself and would need another Direct Connect connection or a VPN connection as a backup
  •  Security
    • provides a dedicated physical connection without internet
    • for additional security, it can be used with a VPN

AWS Import/Export (upgraded to Snowball)

  • accelerates moving large amounts of data into and out of AWS using secure Snowball appliances
  • AWS transfers the data directly onto and off of the storage devices using Amazon’s high-speed internal network, bypassing the Internet
  • Data Migration
    • for significant data sizes, AWS Import/Export is faster than Internet transfer and more cost-effective than upgrading the connectivity
    • if loading the data over the Internet would take a week or more, AWS Import/Export should be considered
    • data from appliances can be imported to S3, Glacier and EBS volumes and exported from S3
    • not suitable for applications that cannot tolerate offline transfer time
  •  Security
    • Snowball uses an industry-standard Trusted Platform Module (TPM) that has a dedicated processor designed to detect any unauthorized modifications to the hardware, firmware, or software to physically secure the AWS Snowball device.

AWS Storage Gateway

  • connects an on-premises software appliance with cloud-based storage to provide seamless and secure integration between an organization’s on-premises IT environment and the AWS storage infrastructure
  • provides low-latency performance by maintaining frequently accessed data on-premises while securely storing all of the data encrypted in S3 or Glacier.
  • for disaster recovery scenarios, Storage Gateway, together with EC2, can serve as a cloud-hosted solution that mirrors the entire production environment
  • Data Migration
    • with gateway-cached volumes, S3 can be used to hold primary data while frequently accessed data is cached locally for faster access, reducing the need to scale the on-premises storage infrastructure
    • with gateway-stored volumes, entire data is stored locally while asynchronously backing up data to S3
    • with gateway-VTL, offline data archiving can be performed by presenting existing backup application with an iSCSI-based VTL consisting of a virtual media changer and virtual tape drives
  •  Security
    • Encrypts all data in transit to and from AWS by using SSL/TLS.
    • All data in AWS Storage Gateway is encrypted at rest using AES-256.
    • Authentication between the gateway and iSCSI initiators can be secured by using Challenge-Handshake Authentication Protocol (CHAP).

S3

  • Data Transfer
    • Files up to 5 GB can be transferred using a single operation
    • Multipart uploads can be used to upload files up to 5 TB and speed up data uploads by dividing the file into multiple parts (see the sketch after this list)
    • the transfer rate is still limited by the network speed
  •  Security
    • Data in transit can be secured by using SSL/TLS or client-side encryption.
    • Encrypt data at-rest by performing server-side encryption using Amazon S3-Managed Keys (SSE-S3), AWS Key Management Service (KMS)-Managed Keys (SSE-KMS), or Customer Provided Keys (SSE-C). Or by performing client-side encryption using AWS KMS–Managed Customer Master Key (CMK) or Client-Side Master Key.
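
A minimal boto3 sketch of a multipart upload with server-side encryption; the file, bucket and key names are hypothetical. The transfer manager switches to multipart automatically above the configured threshold and uploads parts in parallel.

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client('s3')

config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,   # use multipart above 100 MB
    multipart_chunksize=50 * 1024 * 1024,    # 50 MB parts
    max_concurrency=10,                      # parallel part uploads
)

s3.upload_file(
    Filename='backup.tar.gz',                # hypothetical local file
    Bucket='my-migration-bucket',            # hypothetical bucket
    Key='backups/backup.tar.gz',
    ExtraArgs={'ServerSideEncryption': 'AES256'},  # SSE-S3 encryption at rest
    Config=config,
)
```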

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day, and both the answers and questions might become outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. You must architect the migration of a web application to AWS. The application consists of Linux web servers running a custom web server. You are required to save the logs generated from the application to a durable location. What options could you select to migrate the application to AWS? (Choose 2)
    1. Create an AWS Elastic Beanstalk application using the custom web server platform. Specify the web server executable and the application project and source files. Enable log file rotation to Amazon Simple Storage Service (S3). (EB does not work with Custom server executable)
    2. Create a Dockerfile for the application. Create an AWS OpsWorks stack consisting of a custom layer. Create custom recipes to install Docker and to deploy your Docker container using the Dockerfile. Create custom recipes to install and configure the application to publish the logs to Amazon CloudWatch Logs (although this is an option, the last sentence mentions configuring the application to publish the logs to CloudWatch Logs, which would need changes to the application as it needs to use the SDK or CLI)
    3. Create a Dockerfile for the application. Create an AWS OpsWorks stack consisting of a Docker layer that uses the Dockerfile. Create custom recipes to install and configure Amazon Kinesis to publish the logs into Amazon CloudWatch. (Kinesis is not needed)
    4. Create a Dockerfile for the application. Create an AWS Elastic Beanstalk application using the Docker platform and the Dockerfile. Enable logging the Docker configuration to automatically publish the application logs. Enable log file rotation to Amazon S3. (Use Docker configuration with awslogs and EB with Docker)
    5. Use VM import/Export to import a virtual machine image of the server into AWS as an AMI. Create an Amazon Elastic Compute Cloud (EC2) instance from AMI, and install and configure the Amazon CloudWatch Logs agent. Create a new AMI from the instance. Create an AWS Elastic Beanstalk application using the AMI platform and the new AMI. (Use VM Import/Export to create AMI and CloudWatch logs agent to log)
  2. Your company hosts an on-premises legacy engineering application with 900GB of data shared via a central file server. The engineering data consists of thousands of individual files ranging in size from megabytes to multiple gigabytes. Engineers typically modify 5-10 percent of the files a day. Your CTO would like to migrate this application to AWS, but only if the application can be migrated over the weekend to minimize user downtime. You calculate that it will take a minimum of 48 hours to transfer 900GB of data using your company’s existing 45-Mbps Internet connection. After replicating the application’s environment in AWS, which option will allow you to move the application’s data to AWS without losing any data and within the given timeframe?
    1. Copy the data to Amazon S3 using multiple threads and multi-part upload for large files over the weekend, and work in parallel with your developers to reconfigure the replicated application environment to leverage Amazon S3 to serve the engineering files. (Still limited by 45 Mbps speed with minimum 48 hours when utilized to max)
    2. Sync the application data to Amazon S3 starting a week before the migration, on Friday morning perform a final sync, and copy the entire data set to your AWS file server after the sync completes. (Works best, as the data changes can be propagated over the week and are fractional, and the downtime would be known)
    3. Copy the application data to a 1-TB USB drive on Friday and immediately send overnight, with Saturday delivery, the USB drive to AWS Import/Export to be imported as an EBS volume, mount the resulting EBS volume to your AWS file server on Sunday. (Downtime is not known when the data upload would be done, although Amazon says the same day the package is received)
    4. Leverage the AWS Storage Gateway to create a Gateway-Stored volume. On Friday copy the application data to the Storage Gateway volume. After the data has been copied, perform a snapshot of the volume and restore the volume as an EBS volume to be attached to your AWS file server on Sunday. (Still uses the internet)
  3. You are tasked with moving a legacy application from a virtual machine running inside your datacenter to an Amazon VPC. Unfortunately this app requires access to a number of on-premises services and no one who configured the app still works for your company. Even worse there’s no documentation for it. What will allow the application running inside the VPC to reach back and access its internal dependencies without being reconfigured? (Choose 3 answers)
    1. An AWS Direct Connect link between the VPC and the network housing the internal services
    2. An Internet Gateway to allow a VPN connection. (A Virtual Private Gateway and a Customer Gateway are needed)
    3. An Elastic IP address on the VPC instance
    4. An IP address space that does not conflict with the one on-premises
    5. Entries in Amazon Route 53 that allow the Instance to resolve its dependencies’ IP addresses
    6. A VM Import of the current virtual machine


AWS Automated Backups – Certification

AWS Automated Backups

  • AWS allows automated backups for
    • RDS
    • ElastiCache – Redis only
    • Redshift
  • AWS does not perform automated backups for EC2 EBS volumes and needs to be manually scripted
  • AWS stores the backups and snapshots in S3

RDS Backups

  • RDS supports automated backups as well as manual snapshots
  • Automated Backups
    • enable point-in-time recovery of the DB Instance
    • perform a full daily backup and capture transaction logs (as updates to the DB instance are made)
    • are performed during the defined preferred backup window and are retained for a user-specified period of time called the retention period (default 1 day, with a max of 35 days)
    • When a point-in-time recovery is initiated, transaction logs are applied to the most appropriate daily backup in order to restore the DB instance to the specific requested time.
    • allows a point-in-time restore and an ability to specify any second during the retention period, up to the Latest Restorable Time
    • are deleted when the DB instance is deleted
  • Snapshots
    • are user-initiated and enable backing up the DB instance in a known state as frequently as needed; the instance can then be restored to that specific state at any time
    • can be created with the AWS Management Console or by using the CreateDBSnapshot API call.
    • are not deleted when the DB instance is deleted
  • Automated backups and snapshots can result in a performance hit, if Multi-AZ is not enabled
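
A minimal boto3 sketch of tuning the automated backup settings and performing a point-in-time restore; the instance identifiers and backup window are hypothetical.

```python
import boto3

rds = boto3.client('rds')

# Extend the automated backup retention period and set a preferred backup window
rds.modify_db_instance(
    DBInstanceIdentifier='mydb',            # hypothetical instance
    BackupRetentionPeriod=7,                # days, max 35
    PreferredBackupWindow='03:00-04:00',
    ApplyImmediately=True,
)

# A point-in-time restore always creates a new DB instance; here we
# restore to the latest restorable time
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier='mydb',
    TargetDBInstanceIdentifier='mydb-restored',
    UseLatestRestorableTime=True,
)
```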

ElastiCache Automated Backups

  • ElastiCache supports Automated backups for Redis cluster only
  • ElastiCache creates a backup of the cluster on a daily basis
  • Snapshots will degrade performance, so they should be performed during the least busy part of the day
  • Backups are performed during the Backup period and retained for backup retention limit defined, with a maximum of 35 days
  • ElastiCache also allows manual snapshots of the cluster

Redshift Automated Backups

  • Amazon Redshift enables automated backups, by default
  • Redshift replicates all the data within your data warehouse cluster when it is loaded and also continuously backs up the data to S3
  • Redshift retains backups for 1 day which can be extended to max 35 days
  • Redshift only backs up data that has changed, and backups are incremental, so most snapshots use up a small amount of storage
  • Redshift also allows manual snapshots of the data warehouse

EC2 EBS Backups

  • EBS does not provide automated backups
  • EBS snapshots can be created by using the AWS Management Console, the command line interface (CLI), or the APIs
  • Backups degrade performance
  • Stored on S3
  • EBS Snapshots are incremental and block-based, and they consume space only for changed data after the initial snapshot is created
  • Data can be restored from snapshots by creating a volume from the snapshot
  • EBS snapshots are region specific and can be copied between AWS regions
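
Since EBS backups need to be scripted, below is a minimal boto3 sketch that snapshots a volume and copies it to another region; the volume ID and regions are hypothetical, and scheduling/rotation would be handled outside the script (e.g. by cron).

```python
import boto3

source = boto3.client('ec2', region_name='us-east-1')

# Create the snapshot and wait until it completes
snap = source.create_snapshot(
    VolumeId='vol-0123456789abcdef0',        # hypothetical volume
    Description='Nightly backup',
)
source.get_waiter('snapshot_completed').wait(SnapshotIds=[snap['SnapshotId']])

# Snapshots are region specific, so the copy is issued from the destination region
dest = boto3.client('ec2', region_name='us-west-2')
copy = dest.copy_snapshot(
    SourceRegion='us-east-1',
    SourceSnapshotId=snap['SnapshotId'],
    Description='DR copy of nightly backup',
)
print(copy['SnapshotId'])
```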

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day, and both the answers and questions might become outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. Which two AWS services provide out-of-the-box user configurable automatic backup-as-a-service and backup rotation options? Choose 2 answers
    1. Amazon S3
    2. Amazon RDS
    3. Amazon EBS
    4. Amazon Redshift
  2. You have been asked to automate many routine systems administrator backup and recovery activities. Your current plan is to leverage AWS-managed solutions as much as possible and automate the rest with the AWS CLI and scripts. Which task would be best accomplished with a script?
    1. Creating daily EBS snapshots with a monthly rotation of snapshots
    2. Creating daily RDS snapshots with a monthly rotation of snapshots
    3. Automatically detect and stop unused or underutilized EC2 instances
    4. Automatically add Auto Scaled EC2 instances to an Amazon Elastic Load Balancer

AWS Billing and Cost Management – Certification

AWS Billing and Cost Management

  • AWS Billing and Cost Management is the service used to pay your AWS bill, monitor your usage, and budget your costs

Analyzing Costs with Graphs

  • AWS provides the Cost Explorer tool, which allows filtering graphs by API operation, Availability Zone, AWS service, custom cost allocation tag, EC2 instance type, purchase option, region, usage type, usage type group, or, if Consolidated Billing is used, by linked account
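
A minimal boto3 sketch of pulling costs grouped by service through the Cost Explorer API; the date range is hypothetical.

```python
import boto3

ce = boto3.client('ce')

result = ce.get_cost_and_usage(
    TimePeriod={'Start': '2018-06-01', 'End': '2018-07-01'},  # hypothetical month
    Granularity='MONTHLY',
    Metrics=['UnblendedCost'],
    GroupBy=[{'Type': 'DIMENSION', 'Key': 'SERVICE'}],        # cost per service
)

for group in result['ResultsByTime'][0]['Groups']:
    print(group['Keys'][0], group['Metrics']['UnblendedCost']['Amount'])
```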

Budgets

  • Budgets can be used to track AWS costs to see usage-to-date and current estimated charges from AWS
  • Budgets use the cost visualization provided by Cost Explorer to show the status of the budgets and to provide forecasts of your estimated costs.
  • Budgets can be used to create CloudWatch alarms that notify when you go over your budgeted amounts, or when the estimated costs exceed budgets
  • Notifications can be sent to an SNS topic and to email addresses associated with your budget notification

Cost Allocation Tags

  • Tags can be used to organize AWS resources, and cost allocation tags to track the AWS costs on a detailed level.
  • Upon cost allocation tags activation, AWS uses the cost allocation tags to organize the resource costs on the cost allocation report making it easier to categorize and track your AWS costs.
  • AWS provides two types of cost allocation tags:
    • an AWS-generated tag, which AWS defines, creates, and applies for you,
    • and user-defined tags, which you define, create, and apply
  • Both types of tags must be activated separately before they can appear in Cost Explorer or on a cost allocation report

Alerts on Cost Limits

  • CloudWatch can be used to create billing alerts when the AWS costs exceed specified thresholds
  • When the usage exceeds threshold amounts, AWS sends an email notification
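
A minimal boto3 sketch of such a billing alarm; note that billing metrics are published to CloudWatch in us-east-1 only and require "Receive Billing Alerts" to be enabled in the account first. The SNS topic ARN and threshold are hypothetical.

```python
import boto3

cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

cloudwatch.put_metric_alarm(
    AlarmName='monthly-bill-over-1000',
    Namespace='AWS/Billing',
    MetricName='EstimatedCharges',
    Dimensions=[{'Name': 'Currency', 'Value': 'USD'}],
    Statistic='Maximum',
    Period=21600,                  # estimated charges update roughly every 6 hours
    EvaluationPeriods=1,
    Threshold=1000.0,              # alert at $1000
    ComparisonOperator='GreaterThanOrEqualToThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:billing-alerts'],  # hypothetical topic
)
```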

Consolidated Billing

Refer to My Blog Post about Consolidated Billing

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day, and both the answers and questions might become outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. An organization has been using AWS for a few months. The finance team wants to visualize the pattern of AWS spending. Which of the below AWS tools will help with this requirement?
    • AWS Cost Manager
    • AWS Cost Explorer (Check Cost Explorer)
    • AWS CloudWatch
    • AWS Consolidated Billing (Will not help visualize)
  2. Your company wants to understand where cost is coming from in the company’s production AWS account. There are a number of applications and services running at any given time. Without expending too much initial development time, how best can you give the business a good understanding of which applications cost the most per month to operate?
    1. Create an automation script, which periodically creates AWS Support tickets requesting detailed intra-month information about your bill.
    2. Use custom CloudWatch Metrics in your system, and put a metric data point whenever cost is incurred.
    3. Use AWS Cost Allocation Tagging for all resources, which support it. Use the Cost Explorer to analyze costs throughout the month. (Refer link)
    4. Use the AWS Price API and constantly running resource inventory scripts to calculate total price based on multiplication of consumed resources over time.
  3. You need to know when you spend $1000 or more on AWS. What’s the easy way for you to see that notification?
    1. AWS CloudWatch Events tied to API calls, when certain thresholds are exceeded, publish to SNS.
    2. Scrape the billing page periodically and pump into Kinesis.
    3. AWS CloudWatch Metrics + Billing Alarm + Lambda event subscription. When a threshold is exceeded, email the manager.
    4. Scrape the billing page periodically and publish to SNS.
  4. A user is planning to use AWS services for his web application. If the user is trying to set up his own billing management system for AWS, how can he configure it?
    1. Set up programmatic billing access. Download and parse the bill as per the requirement
    2. It is not possible for the user to create his own billing management service with AWS
    3. Enable the AWS CloudWatch alarm which will provide APIs to download the alarm data
    4. Use AWS billing APIs to download the usage report of each service from the AWS billing console
  5. An organization is setting up programmatic billing access for their AWS account. Which of the below mentioned services is not required or enabled when the organization wants to use programmatic access?
    1. Programmatic access
    2. AWS bucket to hold the billing report
    3. AWS billing alerts
    4. Monthly Billing report
  6. A user has set up a billing alarm using CloudWatch for $200. The usage of AWS exceeded $200 after some days. The user wants to increase the limit from $200 to $400. What should the user do?
    1. Create a new alarm of $400 and link it with the first alarm
    2. It is not possible to modify the alarm once it has crossed the usage limit
    3. Update the alarm to set the limit at $400 instead of $200 (Refer link)
    4. Create a new alarm for the additional $200 amount
  7. A user is trying to configure the CloudWatch billing alarm. Which of the below mentioned steps should be performed by the user for the first time alarm creation in the AWS Account Management section?
    1. Enable Receiving Billing Reports
    2. Enable Receiving Billing Alerts
    3. Enable AWS billing utility
    4. Enable CloudWatch Billing Threshold

References

AWS_Billing_&_Cost_Management – User_Guide

AWS RDS Monitoring & Notification – Certification

AWS RDS Monitoring & Notification

  • RDS integrates with CloudWatch and provides metrics for monitoring
  • CloudWatch alarms can be created over a single metric that sends an SNS message when the alarm changes state
  • RDS also provides SNS notification whenever any RDS event occurs

CloudWatch RDS Monitoring

  • RDS DB instance can be monitored using CloudWatch, which collects and processes raw data from RDS into readable, near real-time metrics.
  • The statistics are recorded for a period of two weeks, so that you can access historical information and gain a better perspective on how the service is performing.
  • By default, RDS metric data is automatically sent to Amazon CloudWatch in 1-minute periods
  • CloudWatch RDS Metrics
    • BinLogDiskUsage – Amount of disk space occupied by binary logs on the master. Applies to MySQL read replicas.
    • CPUUtilization – Percentage of CPU utilization.
    • DatabaseConnections – Number of database connections in use.
    • DiskQueueDepth – The number of outstanding IOs (read/write requests) waiting to access the disk.
    • FreeableMemory – Amount of available random access memory.
    • FreeStorageSpace – Amount of available storage space.
    • ReplicaLag – Amount of time a Read Replica DB instance lags behind the source DB instance. Applies to MySQL, MariaDB, and PostgreSQL Read Replicas.
    • SwapUsage – Amount of swap space used on the DB instance.
    • ReadIOPS – Average number of disk read I/O operations per second.
    • WriteIOPS – Average number of disk write I/O operations per second.
    • ReadLatency – Average amount of time taken per disk read I/O operation.
    • WriteLatency – Average amount of time taken per disk write I/O operation.
    • ReadThroughput – Average number of bytes read from disk per second.
    • WriteThroughput – Average number of bytes written to disk per second.
    • NetworkReceiveThroughput – Incoming (Receive) network traffic on the DB instance, including both customer database traffic and Amazon RDS traffic used for monitoring and replication.
    • NetworkTransmitThroughput – Outgoing (Transmit) network traffic on the DB instance, including both customer database traffic and Amazon RDS traffic used for monitoring and replication.
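
As an illustration, a minimal boto3 sketch of a CloudWatch alarm on the ReplicaLag metric; the instance identifier, threshold and SNS topic ARN are hypothetical.

```python
import boto3

cloudwatch = boto3.client('cloudwatch')

cloudwatch.put_metric_alarm(
    AlarmName='mydb-replica-lag-high',
    Namespace='AWS/RDS',
    MetricName='ReplicaLag',
    Dimensions=[{'Name': 'DBInstanceIdentifier', 'Value': 'mydb-replica'}],
    Statistic='Average',
    Period=300,                    # 5-minute periods
    EvaluationPeriods=2,
    Threshold=300.0,               # alarm if the replica lags > 300 seconds
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:rds-alerts'],  # hypothetical topic
)
```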

RDS Event Notification

  • RDS uses the SNS to provide notification when an RDS event occurs
  • RDS groups the events into categories, which can be subscribed so that a notification is sent when an event in that category occurs.
  • Event categories for a DB instance, DB cluster, DB snapshot, DB cluster snapshot, DB security group or DB parameter group can be subscribed to
  • Event notifications are sent to the email addresses provided during subscription creation
  • Notifications can easily be turned off without deleting the subscription, by setting the Enabled radio button to No in the RDS console or by setting the Enabled parameter to false using the CLI or RDS API
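
A minimal boto3 sketch of creating such a subscription and later disabling it without deletion; the subscription name, event categories chosen, instance identifier and topic ARN are hypothetical.

```python
import boto3

rds = boto3.client('rds')

# Subscribe an SNS topic to selected event categories of a specific DB instance
rds.create_event_subscription(
    SubscriptionName='mydb-events',
    SnsTopicArn='arn:aws:sns:us-east-1:123456789012:rds-events',  # hypothetical topic
    SourceType='db-instance',
    EventCategories=['failover', 'failure', 'low storage'],
    SourceIds=['mydb'],
    Enabled=True,
)

# Turn notifications off later without deleting the subscription
rds.modify_event_subscription(SubscriptionName='mydb-events', Enabled=False)
```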

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day, and both the answers and questions might become outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. You run a web application with the following components Elastic Load Balancer (ELB), 3 Web/Application servers, 1 MySQL RDS database with read replicas, and Amazon Simple Storage Service (Amazon S3) for static content. Average response time for users is increasing slowly. What three CloudWatch RDS metrics will allow you to identify if the database is the bottleneck? Choose 3 answers
    1. The number of outstanding IOs waiting to access the disk
    2. The amount of write latency
    3. The amount of disk space occupied by binary logs on the master.
    4. The amount of time a Read Replica DB Instance lags behind the source DB Instance
    5. The average number of disk I/O operations per second.
  2. Typically, you want your application to check whether a request generated an error before you spend any time processing results. The easiest way to find out if an error occurred is to look for an __________ node in the response from the Amazon RDS API.
    1. Incorrect
    2. Error
    3. FALSE
  3. In the Amazon CloudWatch, which metric should I be checking to ensure that your DB Instance has enough free storage space?
    1. FreeStorage
    2. FreeStorageSpace
    3. FreeStorageVolume
    4. FreeDBStorageSpace
  4. A user is receiving a notification from the RDS DB whenever there is a change in the DB security group. The user wants to stop receiving these notifications for a month only; thus, he does not want to delete the notification. How can the user configure this?
    1. Change the Disable button for notification to “Yes” in the RDS console
    2. Set the send mail flag to false in the DB event notification console
    3. The only option is to delete the notification from the console
    4. Change the Enable button for notification to “No” in the RDS console
  5. A sys admin is planning to subscribe to the RDS event notifications. For which of the below mentioned source categories the subscription cannot be configured?
    1. DB security group
    2. DB snapshot
    3. DB options group
    4. DB parameter group
  6. A user is planning to setup notifications on the RDS DB for a snapshot. Which of the below mentioned event categories is not supported by RDS for this snapshot source type?
    1. Backup (Refer link)
    2. Creation
    3. Deletion
    4. Restoration
  7. A system admin is planning to setup event notifications on RDS. Which of the below mentioned services will help the admin setup notifications?
    1. AWS SES
    2. AWS Cloudtrail
    3. AWS CloudWatch
    4. AWS SNS
  8. A user has setup an RDS DB with Oracle. The user wants to get notifications when someone modifies the security group of that DB. How can the user configure that?
    1. It is not possible to get the notifications on a change in the security group
    2. Configure SNS to monitor security group changes
    3. Configure event notification on the DB security group
    4. Configure the CloudWatch alarm on the DB for a change in the security group
  9. It is advised that you watch the Amazon CloudWatch “_____” metric (available via the AWS Management Console or Amazon Cloud Watch APIs) carefully and recreate the Read Replica should it fall behind due to replication errors.
    1. Write Lag
    2. Read Replica
    3. Replica Lag
    4. Single Replica

AWS RDS Security – Certification

AWS RDS Security

  • AWS provides multiple features to provide RDS security
    • DB instance can be hosted in a VPC for the greatest possible network access control
    • IAM policies can be used to assign permissions that determine who is allowed to manage RDS resources
    • Security groups allow to control what IP addresses or EC2 instances can connect to the databases on a DB instance
    • Secure Socket Layer (SSL) connections with DB instances
    • RDS encryption to secure RDS instances and snapshots at rest.
    • Network encryption and transparent data encryption (TDE) with Oracle DB instances

RDS Authentication and Access Control

  • IAM can be used to control which RDS operations each individual user has permission to call

Encrypting RDS Resources

  • RDS encrypted instances use the industry standard AES-256 encryption algorithm to encrypt data on the server that hosts the RDS instance
  • RDS then handles authentication of access and decryption of this data with a minimal impact on performance, and with no need to modify your database client applications
  • Data at Rest Encryption
    • can be enabled on RDS instances to encrypt the underlying storage
    • encryption keys are managed by KMS
    • can be enabled only during instance creation
    • once enabled, the encryption keys cannot be changed
    • if the key is lost, the DB can only be restored from the backup
  • Once encryption is enabled for an RDS instance,
    • logs are encrypted
    • snapshots are encrypted
    • automated backups are encrypted
    • read replicas are encrypted
  • Cross-region replicas and snapshot copies do not work, since the key is only available in a single region
  • RDS DB Snapshot considerations
    • A DB snapshot encrypted using a KMS encryption key can be copied
    • Copying an encrypted DB snapshot, results in an encrypted copy of the DB snapshot
    • When copying, DB snapshot can either be encrypted with the same KMS encryption key as the original DB snapshot, or a different KMS encryption key to encrypt the copy of the DB snapshot.
    • An unencrypted DB snapshot can be copied to an encrypted snapshot, a quick way to add encryption to a previously unencrypted DB instance (see the sketch after this list)
    • Encrypted snapshot can be restored only to an encrypted DB instance
    • If a KMS encryption key is specified when restoring from an unencrypted DB cluster snapshot, the restored DB cluster is encrypted using the specified KMS encryption key
    • Copying an encrypted snapshot shared from another AWS account, requires access to the KMS encryption key used to encrypt the DB snapshot.
    • Because KMS encryption keys are specific to the region that they are created in, encrypted snapshot cannot be copied to another region
  • Transparent Data Encryption (TDE)
    • automatically encrypts the data before it is written to the underlying storage device and decrypts it when it is read from the storage device
    • is supported by Oracle and SQL Server
      • Oracle requires key storage outside of KMS and integrates with CloudHSM for this
      • SQL Server requires a key, which is managed by RDS
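
A minimal boto3 sketch of creating an encrypted instance, and of adding encryption to an existing unencrypted snapshot via copy; the identifiers, credentials and key alias are hypothetical.

```python
import boto3

rds = boto3.client('rds')

# Encryption at rest can only be enabled at instance creation time
rds.create_db_instance(
    DBInstanceIdentifier='mydb-encrypted',   # hypothetical identifier
    Engine='mysql',
    DBInstanceClass='db.m4.large',
    AllocatedStorage=100,
    MasterUsername='admin',
    MasterUserPassword='change-me-please',   # hypothetical credentials
    StorageEncrypted=True,
    KmsKeyId='alias/aws/rds',                # default RDS KMS key
)

# Encrypt an existing unencrypted instance indirectly: snapshot it, copy
# the snapshot with a KMS key, then restore from the encrypted copy
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier='mydb-snap',
    TargetDBSnapshotIdentifier='mydb-snap-encrypted',
    KmsKeyId='alias/aws/rds',
)
```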

SSL to Encrypt a Connection to a DB Instance

  • Encrypt connections using SSL for data in transit between the applications and the DB instance
  • Amazon RDS creates an SSL certificate and installs the certificate on the DB instance when RDS provisions the instance.
  • SSL certificates are signed by a certificate authority. The SSL certificate includes the DB instance endpoint as the Common Name (CN) to guard against spoofing attacks
  • While SSL offers security benefits, be aware that SSL encryption is a compute-intensive operation and will increase the latency of the database connection.
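
A minimal sketch of a client connection forcing SSL, using the third-party PyMySQL driver (an assumption for illustration, not something RDS mandates); the endpoint, credentials and CA bundle path are hypothetical. The CA bundle is downloaded from AWS and used to verify the server certificate.

```python
import pymysql

connection = pymysql.connect(
    host='mydb.abcdefgh1234.us-east-1.rds.amazonaws.com',  # hypothetical endpoint
    user='admin',
    password='change-me-please',                           # hypothetical credentials
    database='app',
    ssl={'ca': '/opt/certs/rds-combined-ca-bundle.pem'},   # forces an SSL connection
)

with connection.cursor() as cursor:
    cursor.execute("SHOW STATUS LIKE 'Ssl_cipher'")        # confirm SSL is in use
    print(cursor.fetchone())
```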

RDS Security Groups

  • Security groups control the access that traffic has in and out of a DB instance
  • VPC security groups act like a firewall controlling network access to your DB instance.
  • VPC security groups can be configured and associated with the DB instance to allow access from an IP address range, port, or EC2 security group
  • Database security groups default to a “deny all” access mode and customers must specifically authorize network ingress.

Master User Account Privileges

  • When you create a new DB instance, the default master user that is used gets certain privileges for that DB instance
  • Subsequently, other users with permissions can be created

Event Notification

  • Event notifications can be configured for important events that occur on the DB instance
  • Notifications can be received for a variety of important events that can occur on the RDS instance, such as whether the instance was shut down, a backup was started, a failover occurred, the security group was changed, or storage space is low

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day, and both the answers and questions might become outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. Can I encrypt connections between my application and my DB Instance using SSL?
    1. No
    2. Yes
    3. Only in VPC
    4. Only in certain regions
  2. Which of these configuration or deployment practices is a security risk for RDS?
    1. Storing SQL function code in plaintext
    2. Non-Multi-AZ RDS instance
    3. Having RDS and EC2 instances exist in the same subnet
    4. RDS in a public subnet (Making RDS accessible to the public internet in a public subnet poses a security risk, by making your database directly addressable and spammable. DB instances deployed within a VPC can be configured to be accessible from the Internet or from EC2 instances outside the VPC. If a VPC security group specifies a port access such as TCP port 22, you would not be able to access the DB instance because the firewall for the DB instance provides access only via the IP addresses specified by the DB security groups the instance is a member of and the port defined when the DB instance was created. Refer link)

References

AWS_RDS_User_Guide – Security

 

AWS Blue Green Deployment – Certification

AWS Blue Green Deployment

  • Blue/green deployments provide near zero-downtime release and rollback capabilities.
  • Blue/green deployment works by shifting traffic between two identical environments that are running different versions of the application
    • Blue environment represents the current application version serving production traffic.
    • In parallel, the green environment is staged running a different version of your application.
    • After the green environment is ready and tested, production traffic is redirected from blue to green.
    • If any problems are identified, you can roll back by reverting traffic back to the blue environment.

NOTE: Advanced Topic required for DevOps Professional Exam Only

AWS Services

Route 53

  • Route 53 is a highly available and scalable authoritative DNS service that routes user requests
  • Route 53 with its DNS service allows administrators to direct traffic by simply updating DNS records in the hosted zone
  • TTLs for resource records can be set shorter, which allows record changes to propagate to clients faster

Elastic Load Balancing

  • Elastic Load Balancing distributes incoming application traffic across EC2 instances
  • Elastic Load Balancing scales in response to incoming requests, performs health checking against Amazon EC2 resources, and naturally integrates with other AWS tools, such as Auto Scaling.
  • ELB also helps perform health checks of EC2 instances to route traffic only to the healthy instances

Auto Scaling

  • Auto Scaling allows different versions of launch configuration, which define templates used to launch EC2 instances, to be attached to an Auto Scaling group to enable blue/green deployment.
  • Auto Scaling’s termination policies and Standby state enable blue/green deployment
    • Termination policies in Auto Scaling groups determine which EC2 instances to remove during a scaling action
    • Auto Scaling also allows instances to be placed in Standby state, instead of termination, which helps with quick rollback when required
  • Auto Scaling with Elastic Load Balancing can be used to balance and scale the traffic

Elastic Beanstalk

  • Elastic Beanstalk makes it easy to run multiple versions of the application and provides capabilities to swap the environment URLs, facilitating blue/green deployment.
  • Elastic Beanstalk supports Auto Scaling and Elastic Load Balancing, both of which enable blue/green deployment

OpsWorks

  • OpsWorks has the concept of stacks, which are logical groupings of AWS resources that share a common purpose and should be logically managed together
  • Stacks are made of one or more layers, with each layer representing a set of EC2 instances that serve a particular purpose, such as serving applications or hosting a database server
  • OpsWorks simplifies cloning entire stacks when preparing for blue/green environments.

CloudFormation

  • CloudFormation helps describe the AWS resources through JSON formatted templates and provides automation capabilities for provisioning blue/green environments and facilitating updates to switch traffic, whether through Route 53 DNS, Elastic Load Balancing, etc
  • CloudFormation provides infrastructure as code strategy, where infrastructure is provisioned and managed using code and software development techniques, such as version control and continuous integration, in a manner similar to how application code is treated

CloudWatch

  • CloudWatch monitoring can provide early detection of application health in blue/green deployments

Deployment Techniques

DNS Routing using Route 53

  • Route 53 DNS service can help switch traffic from the blue environment to the green and vice versa, if rollback is necessary
  • Route 53 can help either switch the traffic completely or through a weighted distribution
  • Weighted distribution
    • helps distribute a percentage of traffic to the green environment, gradually updating the weights until the green environment carries the full production traffic
    • provides the ability to perform canary analysis, where a small percentage of production traffic is introduced to a new environment
    • helps manage cost by using auto scaling for instances to scale based on the actual demand
  • Route 53 can point to a public or Elastic IP address, an Elastic Load Balancer, an Elastic Beanstalk environment web tier, etc.
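
A minimal boto3 sketch of the weighted records described above, sending 90% of traffic to blue and 10% to green; the hosted zone ID, record name and ELB DNS names are hypothetical.

```python
import boto3

route53 = boto3.client('route53')

def weighted_record(identifier, weight, target):
    """Build a weighted CNAME record set for one environment."""
    return {
        'Action': 'UPSERT',
        'ResourceRecordSet': {
            'Name': 'www.example.com',       # hypothetical record
            'Type': 'CNAME',
            'SetIdentifier': identifier,
            'Weight': weight,
            'TTL': 60,                       # short TTL so weight changes propagate quickly
            'ResourceRecords': [{'Value': target}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId='Z1234567890ABC',           # hypothetical zone
    ChangeBatch={'Changes': [
        weighted_record('blue', 90, 'blue-elb-123.us-east-1.elb.amazonaws.com'),
        weighted_record('green', 10, 'green-elb-456.us-east-1.elb.amazonaws.com'),
    ]},
)
```

Shifting more traffic to green is then just a matter of re-issuing the call with updated weights, until green carries 100%.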

Auto Scaling Group Swap Behind Elastic Load Balancer

AWS Blue Green Deployment - Auto Scaling Group

  • Elastic Load Balancing with Auto Scaling to manage EC2 resources as per the demand can be used for Blue Green deployments
  • Multiple Auto Scaling groups can be attached to the Elastic Load Balancer
  • The green ASG can be attached to the existing ELB while the blue ASG is already attached to the ELB serving traffic
  • The ELB starts routing requests to the green group, as for an HTTP/S listener it uses a least outstanding requests routing algorithm
  • The green group capacity can be increased to process more traffic, while the blue group capacity can be reduced, either by terminating the instances or by putting the instances in a standby mode
  • Standby is a good option because, if a rollback to the blue environment is needed, the blue server instances can be put back in service and are ready to go
  • If there are no issues with the green group, the blue group can be decommissioned by adjusting the group size to zero
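
A minimal boto3 sketch of the swap: attach the green group to the ELB already serving the blue group, then move the blue instances to Standby for quick rollback; the group and ELB names are hypothetical.

```python
import boto3

autoscaling = boto3.client('autoscaling')

# Attach the green ASG to the production ELB (Classic Load Balancer shown)
autoscaling.attach_load_balancers(
    AutoScalingGroupName='green-asg',        # hypothetical group
    LoadBalancerNames=['prod-elb'],          # hypothetical ELB
)

# Once the green group is healthy and carrying traffic, put the blue
# instances in Standby instead of terminating them
blue = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=['blue-asg'])['AutoScalingGroups'][0]

autoscaling.enter_standby(
    AutoScalingGroupName='blue-asg',
    InstanceIds=[i['InstanceId'] for i in blue['Instances']],
    ShouldDecrementDesiredCapacity=True,
)
```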

Update Auto Scaling Group Launch Configurations

AWS Blue Green Deployment - Auto Scaling Launch

  • Auto Scaling groups have their own launch configurations which define template for EC2 instances to be launched
  • An Auto Scaling group can have only one launch configuration at a time, and it can’t be modified. If modification is needed, a new launch configuration must be created and attached to the existing Auto Scaling group
  • After a new launch configuration is in place, any new instances that are launched use the new launch configuration parameters, but existing instances are not affected.
  • When Auto Scaling removes instances (referred to as scaling in) from the group, the default termination policy is to remove instances with the oldest launch configuration
  • To deploy the new version of the application in the green environment, update the Auto Scaling group with the new launch configuration, and then scale the Auto Scaling group to twice its original size.
  • Then, shrink the Auto Scaling group back to the original size
  • To perform a rollback, update the Auto Scaling group with the old launch configuration. Then, do the preceding steps in reverse
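
A minimal boto3 sketch of this flow; the configuration names, AMI ID and capacities are hypothetical.

```python
import boto3

autoscaling = boto3.client('autoscaling')

# Launch configurations are immutable, so create a new one with the new
# AMI and point the group at it
autoscaling.create_launch_configuration(
    LaunchConfigurationName='app-v2',            # hypothetical name
    ImageId='ami-0abcdef1234567890',             # hypothetical AMI with the new version
    InstanceType='t2.large',
)
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName='app-asg',
    LaunchConfigurationName='app-v2',
)

# Double the group so new-version instances launch alongside the old ones;
# scale-in then removes instances with the oldest launch configuration
# first (the default termination policy)
autoscaling.set_desired_capacity(AutoScalingGroupName='app-asg', DesiredCapacity=8)
# ... verify the new version is healthy, then shrink back:
autoscaling.set_desired_capacity(AutoScalingGroupName='app-asg', DesiredCapacity=4)
```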

Elastic Beanstalk Application Environment Swap

AWS Blue Green Deployment - Elastic Beanstalk

  • Elastic Beanstalk’s multiple environments and environment URL swap features help enable blue/green deployment
  • Elastic Beanstalk can be used to host the blue environment exposed via URL to access the environment
  • Elastic Beanstalk provides several deployment policies, ranging from policies that perform an in-place update on existing instances, to immutable deployment using a set of new instances.
  • Elastic Beanstalk performs an in-place update when the application version is updated; however, the application may become unavailable to users for a short period of time
  • To avoid the downtime, a new version can be deployed to a separate Green environment with its own URL, launched with the existing environment’s configuration
  • Elastic Beanstalk’s Swap Environment URLs feature can be used to promote the green environment to serve production traffic
  • Elastic Beanstalk performs a DNS switch, which typically takes a few minutes
  • To perform a rollback, invoke Swap Environment URLs again
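
A minimal boto3 sketch of the URL swap; the environment names are hypothetical. Invoking the same call again performs the rollback.

```python
import boto3

eb = boto3.client('elasticbeanstalk')

# Swap the CNAMEs of the blue and green environments so the green
# environment starts serving production traffic
eb.swap_environment_cnames(
    SourceEnvironmentName='myapp-blue',      # hypothetical environments
    DestinationEnvironmentName='myapp-green',
)
```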

Clone a Stack in AWS OpsWorks and Update DNS

  • OpsWorks can be used to create
    • Blue environment stack with the current version of the application and serving production traffic
    • Green environment stack with the newer version of the application and is not receiving any traffic
  • To promote the green environment/stack into production, update the DNS records to point to the green environment/stack’s load balancer

AWS Blue Green deployment patterns

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day, and both the answers and questions might become outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. What is server immutability?
    1. Not updating a server after creation. (During the new release, a new set of EC2 instances are rolled out by terminating older instances and are disposable. EC2 instance usage is considered temporary or ephemeral in nature for the period of deployment until the current release is active)
    2. The ability to change server counts.
    3. Updating a server after creation.
    4. The inability to change server counts.
  2. You need to deploy a new application version to production. Because the deployment is high-risk, you need to roll the new version out to users over a number of hours, to make sure everything is working correctly. You need to be able to control the proportion of users seeing the new version of the application down to the percentage point. You use ELB and EC2 with Auto Scaling Groups and custom AMIs with your code pre-installed assigned to Launch Configurations. There are no database-level changes during your deployment. You have been told you cannot spend too much money, so you must not increase the number of EC2 instances much at all during the deployment, but you also need to be able to switch back to the original version of code quickly if something goes wrong. What is the best way to meet these requirements?
    1. Create a second ELB, Auto Scaling Launch Configuration, and Auto Scaling Group using the Launch Configuration. Create AMIs with all code pre-installed. Assign the new AMI to the second Auto Scaling Launch Configuration. Use Route53 Weighted Round Robin Records to adjust the proportion of traffic hitting the two ELBs. (Use Weighted Round Robin DNS Records and reverse proxies allow such fine-grained tuning of traffic splits. Blue-Green option does not meet the requirement that we mitigate costs and keep overall EC2 fleet size consistent, so we must select the 2 ELB and ASG option with WRR DNS tuning)
    2. Use the Blue-Green deployment method to enable the fastest possible rollback if needed. Create a full second stack of instances and cut the DNS over to the new stack of instances, and change the DNS back if a rollback is needed. (Full second stack is expensive)
    3. Create AMIs with all code pre-installed. Assign the new AMI to the Auto Scaling Launch Configuration, to replace the old one. Gradually terminate instances running the old code (launched with the old Launch Configuration) and allow the new AMIs to boot to adjust the traffic balance to the new code. On rollback, reverse the process by doing the same thing, but changing the AMI on the Launch Config back to the original code. (Cannot modify the existing launch config)
    4. Migrate to use AWS Elastic Beanstalk. Use the established and well-tested Rolling Deployment setting AWS provides on the new Application Environment, publishing a zip bundle of the new code and adjusting the wait period to spread the deployment over time. Re-deploy the old code bundle to rollback if needed.
  3. When thinking of AWS Elastic Beanstalk, the ‘Swap Environment URLs’ feature most directly aids in what?
    1. Immutable Rolling Deployments
    2. Mutable Rolling Deployments
    3. Canary Deployments
    4. Blue-Green Deployments (Complete switch from one environment to other)
  4. You were just hired as a DevOps Engineer for a startup. Your startup uses AWS for 100% of their infrastructure. They currently have no automation at all for deployment, and they have had many failures while trying to deploy to production. The company has told you deployment process risk mitigation is the most important thing now, and you have a lot of budget for tools and AWS resources. Their stack: a 2-tier API; data stored in DynamoDB or S3, depending on type; a compute layer of EC2 instances in Auto Scaling Groups; Route53 for DNS, pointing to an ELB that balances load across the EC2 instances. The scaling group properly varies between 4 and 12 EC2 servers. Which of the following approaches, given this company’s stack and their priorities, best meets the company’s needs?
    1. Model the stack in AWS Elastic Beanstalk as a single Application with multiple Environments. Use Elastic Beanstalk’s Rolling Deploy option to progressively roll out application code changes when promoting across environments. (Does not support DynamoDB; also, a Blue-Green deployment is needed for zero-downtime deployment, as cost is not a constraint)
    2. Model the stack in 3 CloudFormation templates: Data layer, compute layer, and networking layer. Write stack deployment and integration testing automation following Blue-Green methodologies.
    3. Model the stack in AWS OpsWorks as a single Stack, with 1 compute layer and its associated ELB. Use Chef and App Deployments to automate Rolling Deployment. (Does not support DynamoDB; also, a Blue-Green deployment is needed for zero-downtime deployment, as cost is not a constraint)
    4. Model the stack in 1 CloudFormation template, to ensure consistency and dependency graph resolution. Write deployment and integration testing automation following Rolling Deployment methodologies. (A Blue-Green deployment is needed for zero-downtime deployment, as cost is not a constraint)
  5. You are building out a layer in a software stack on AWS that needs to be able to scale out to react to increased demand as fast as possible. You are running the code on EC2 instances in an Auto Scaling Group behind an ELB. Which application code deployment method should you use?
    1. SSH into new instances as they come online, and deploy new code onto the system by pulling it from an S3 bucket, which is populated by code that you refresh from source control on new pushes. (is slow and manual)
    2. Bake an AMI when deploying new versions of code, and use that AMI for the Auto Scaling Launch Configuration. (Pre-baked AMIs help new instances start serving quickly)
    3. Create a Dockerfile when preparing to deploy a new version to production and publish it to S3. Use UserData in the Auto Scaling Launch configuration to pull down the Dockerfile from S3 and run it when new instances launch. (is slow)
    4. Create a new Auto Scaling Launch Configuration with UserData scripts configured to pull the latest code at all times. (is slow)
  6. Your company runs a complex customer relations management system that consists of around 10 different software components all backed by the same Amazon Relational Database Service (RDS) database. You adopted AWS OpsWorks to simplify management and deployment of that application and created an AWS OpsWorks stack with layers for each of the individual components. An internal security policy requires that all instances should run on the latest Amazon Linux AMI and that instances must be replaced within one month after the latest Amazon Linux AMI has been released. AMI replacements should be done without incurring application downtime or capacity problems. You decide to write a script to be run as soon as a new Amazon Linux AMI is released. Which solutions support the security policy and meet your requirements? Choose 2 answers
    1. Assign a custom recipe to each layer, which replaces the underlying AMI. Use AWS OpsWorks life-cycle events to incrementally execute this custom recipe and update the instances with the new AMI.
    2. Create a new stack and layers with identical configuration, add instances with the latest Amazon Linux AMI specified as a custom AMI to the new layer, switch DNS to the new stack, and tear down the old stack. (Blue-Green Deployment)
    3. Identify all Amazon Elastic Compute Cloud (EC2) instances of your AWS OpsWorks stack, stop each instance, replace the AMI ID property with the ID of the latest Amazon Linux AMI, and restart the instance. To avoid downtime, make sure not more than one instance is stopped at the same time.
    4. Specify the latest Amazon Linux AMI as a custom AMI at the stack level, terminate instances of the stack and let AWS OpsWorks launch new instances with the new AMI.
    5. Add new instances with the latest Amazon Linux AMI specified as a custom AMI to all AWS OpsWorks layers of your stack, and terminate the old ones.
  7. Your company runs an event management SaaS application that uses Amazon EC2, Auto Scaling, Elastic Load Balancing, and Amazon RDS. Your software is installed on instances at first boot, using a tool such as Puppet or Chef, which you also use to deploy small software updates multiple times per week. After a major overhaul of your software, you roll out version 2.0, a new, much larger version of the software, to your running instances. Some of the instances are terminated during the update process. What actions could you take to prevent instances from being terminated in the future? (Choose two)
    1. Use the zero downtime feature of Elastic Beanstalk to deploy new software releases to your existing instances. (No such feature exists; you can perform an environment URL swap instead)
    2. Use AWS CodeDeploy. Create an application and a deployment targeting the Auto Scaling group. Use CodeDeploy to deploy and update the application in the future. (Refer link)
    3. Run “aws autoscaling suspend-processes” before updating your application. (Refer link)
    4. Use the AWS Console to enable termination protection for the current instances. (Termination protection does not work with Auto Scaling)
    5. Run “aws autoscaling detach-load-balancers” before updating your application. (Does not prevent Auto Scaling from terminating the instances)
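
For question 2 above, a minimal sketch of tuning a traffic split with Route53 weighted records via boto3; the hosted zone ID, record name, and ELB DNS names are hypothetical:

```python
# A minimal sketch of shifting traffic between two ELBs with Route53
# weighted records. Zone ID, record name, and ELB DNS names are
# hypothetical; both records are assumed to be managed as weighted sets.
import boto3

route53 = boto3.client("route53")

def set_weights(blue_weight, green_weight):
    """Adjust the percentage traffic split between the blue and green ELBs."""
    changes = []
    for set_id, dns, weight in [
        ("blue", "blue-elb-1234.us-east-1.elb.amazonaws.com", blue_weight),
        ("green", "green-elb-5678.us-east-1.elb.amazonaws.com", green_weight),
    ]:
        changes.append({
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com.",
                "Type": "CNAME",
                "SetIdentifier": set_id,   # distinguishes the weighted records
                "Weight": weight,          # traffic share = weight / total weight
                "TTL": 60,
                "ResourceRecords": [{"Value": dns}],
            },
        })
    route53.change_resource_record_sets(
        HostedZoneId="Z3EXAMPLE",
        ChangeBatch={"Changes": changes},
    )

# Start with 1% of traffic on green, then ramp up over a number of hours.
set_weights(blue_weight=99, green_weight=1)
```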

References

AWS Blue/Green Deployment Whitepaper

AWS DynamoDB Secondary Indexes – Certification

AWS DynamoDB Secondary Indexes

  • DynamoDB provides fast access to items in a table by specifying primary key values
  • Secondary indexes on a table allow efficient access to data with attributes other than the primary key
  • Secondary index
    • is a data structure that contains a subset of attributes from a table
    • is associated with exactly one table, from which it obtains its data
    • requires an alternate key, made up of the index partition key and sort key
    • additionally can define projected attributes which are copied from the base table into the index along with the primary key attributes
    • is automatically maintained by DynamoDB
    • is updated to reflect any addition, modification, or deletion of items in the base table
  • DynamoDB supports two types of secondary indexes
    • Global secondary index – an index with a partition key and a sort key that can be different from those on the base table
    • Local secondary index – an index that has the same partition key as the base table, but a different sort key
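
A minimal sketch of defining both index types at table creation with boto3; the table, attribute, and index names are hypothetical (an LSI can only be defined at table creation, as covered below):

```python
# A minimal sketch: create a table with one GSI and one LSI. All table,
# attribute, and index names are hypothetical.
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

dynamodb.create_table(
    TableName="GameScores",
    AttributeDefinitions=[
        {"AttributeName": "UserId", "AttributeType": "S"},
        {"AttributeName": "GameTitle", "AttributeType": "S"},
        {"AttributeName": "TopScore", "AttributeType": "N"},
    ],
    KeySchema=[  # base table: partition key + sort key
        {"AttributeName": "UserId", "KeyType": "HASH"},
        {"AttributeName": "GameTitle", "KeyType": "RANGE"},
    ],
    GlobalSecondaryIndexes=[{  # partition key differs from the table's
        "IndexName": "GameTitleIndex",
        "KeySchema": [
            {"AttributeName": "GameTitle", "KeyType": "HASH"},
            {"AttributeName": "TopScore", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "KEYS_ONLY"},
        # A GSI manages throughput independently of the table.
        "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    }],
    LocalSecondaryIndexes=[{  # same partition key, different sort key
        "IndexName": "TopScoreIndex",
        "KeySchema": [
            {"AttributeName": "UserId", "KeyType": "HASH"},
            {"AttributeName": "TopScore", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "ALL"},
        # No separate throughput: an LSI shares the table's capacity.
    }],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
```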

Global Secondary Indexes

  • DynamoDB creates and maintains indexes for the primary key attributes for efficient access of data in the table, which allows applications to quickly retrieve data by specifying primary key values.
  • Global Secondary Indexes (GSI) are indexes that contain partition or composite partition-and-sort keys that can be different from the keys in the table on which the index is based.
  • Global secondary index is considered “global” because queries on the index can span all items in a table, across all partitions.
  • Multiple secondary indexes can be created on a table, and queries issued against these indexes.
  • Applications benefit from having one or more secondary keys available to allow efficient access to data with attributes other than the primary key.
  • GSIs support non-unique attributes, which increases query flexibility by enabling queries against any non-key attribute in the table
  • GSIs support eventual consistency only; DynamoDB asynchronously applies item additions, updates, and deletes to a GSI when the corresponding changes are made to the table
  • Data in a secondary index consists of the GSI alternate key, the table’s primary key, and the attributes that are projected, or copied, from the table into the index.
  • Attributes that are part of an item in a table, but are not part of the GSI key, the table’s primary key, or the projected attributes, are not returned when querying the GSI index
  • GSIs manage throughput independently of the table they are based on; the provisioned throughput for the table and for each associated GSI needs to be specified at creation time
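
A minimal sketch of querying the hypothetical GameTitleIndex GSI from the table sketch above:

```python
# A minimal sketch: query a GSI. Only projected attributes (here, the keys)
# are returned, and GSI reads are eventually consistent only.
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

resp = dynamodb.query(
    TableName="GameScores",
    IndexName="GameTitleIndex",
    KeyConditionExpression="GameTitle = :title AND TopScore > :score",
    ExpressionAttributeValues={
        ":title": {"S": "Galaxy Invaders"},
        ":score": {"N": "1000"},
    },
    # ConsistentRead=True would be rejected here: a GSI supports only
    # eventually consistent reads.
)
for item in resp["Items"]:
    print(item)
```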

Local Secondary Indexes

  • Local secondary indexes are indexes that have the same partition key as the table, but a different sort key.
  • A local secondary index is “local” because every partition of the index is scoped to a table partition with the same partition key.
  • LSI allows search using a secondary index in place of the sort key, thus expanding the number of attributes that can be used for queries which can be conducted efficiently
  • LSI are updated automatically when the primary index is updated and reads support both strong and eventually consistent options
  • LSIs can only be queried via the Query API
  • LSIs cannot be added to existing tables at this time
  • LSIs cannot be modified once created at this time
  • LSIs cannot be removed from a table once created at this time
  • LSIs consume provisioned throughput capacity as part of the table with which they are associated
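
A minimal sketch of querying the hypothetical TopScoreIndex LSI from the table sketch above, using a strongly consistent read:

```python
# A minimal sketch: query an LSI. Unlike a GSI, an LSI query can request a
# strongly consistent read because it shares the table's partitions.
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

resp = dynamodb.query(
    TableName="GameScores",
    IndexName="TopScoreIndex",
    KeyConditionExpression="UserId = :uid AND TopScore >= :score",
    ExpressionAttributeValues={
        ":uid": {"S": "user-101"},
        ":score": {"N": "500"},
    },
    ConsistentRead=True,  # supported for LSIs, not for GSIs
)
print(resp["Items"])
```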

DynamoDB Secondary Indexes - GSI vs LSI

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. In DynamoDB, a secondary index is a data structure that contains a subset of attributes from a table, along with an alternate key to support ____ operations.
    1. None of the above
    2. Both
    3. Query (a secondary index contains a subset of attributes along with an alternate key to support Query operations)
    4. Scan
  2. In regard to DynamoDB, what is the Global secondary index?
    1. An index with a hash and range key that can be different from those on the table (as per the GSI definition above)
    2. An index that has the same range key as the table, but a different hash key
    3. An index that has the same hash key and range key as the table
    4. An index that has the same hash key as the table, but a different range key
  3. In regard to DynamoDB, can I modify the index once it is created?
    1. Yes, if it is a primary hash key index
    2. Yes, if it is a Global secondary index
    3. No
    4. Yes, if it is a local secondary index
  4. When thinking of DynamoDB, what are true of Global Secondary Key properties?
    1. The partition key and sort key can be different from the table. (as per the GSI definition above)
    2. Only the partition key can be different from the table.
    3. Either the partition key or the sort key can be different from the table, but not both.
    4. Only the sort key can be different from the table.

References