AWS Elastic Beanstalk vs OpsWorks vs CloudFormation – Certification

AWS Elastic Beanstalk vs OpsWorks vs CloudFormation

AWS offers multiple options for provisioning IT infrastructure and for application deployment and management, ranging from the convenience and ease of setup of higher-level services to low-level granular control
Deployment and Management - Elastic Beanstalk vs OpsWorks vs CloudFormation

AWS Elastic Beanstalk

  • AWS Elastic Beanstalk is a higher-level service that lets you quickly deploy, with minimum management effort, web or worker environments using EC2, Docker on ECS, Elastic Load Balancing, Auto Scaling, RDS, CloudWatch, etc.
  • Elastic Beanstalk is the fastest and simplest way to get an application up and running on AWS, and is perfect for developers who want to deploy code without worrying about the underlying infrastructure
  • Elastic Beanstalk provides an environment to easily deploy and run applications in the cloud. It is integrated with developer tools and provides a one-stop experience for application lifecycle management
  • Elastic Beanstalk requires minimal configuration points and will help deploy, monitor and handle the elasticity/scalability of the application
  • A user doesn’t need to do much more than write application code and define some configuration on Elastic Beanstalk

AWS OpsWorks

  • AWS OpsWorks is an application management service that simplifies software configuration, application deployment, scaling, and monitoring
  • OpsWorks is recommended if you want to manage your infrastructure with a configuration management system such as Chef.
  • OpsWorks enables writing custom Chef recipes, provides self-healing, and works with layers
  • Although OpsWorks is a deployment management service that helps deploy applications with Chef recipes, it is not primarily meant to manage the scaling of the application out of the box, which needs to be handled explicitly

AWS CloudFormation

  • AWS CloudFormation enables modeling, provisioning and version-controlling of a wide range of AWS resources ranging from a single EC2 instance to a complex multi-tier, multi-region application
  • CloudFormation is a low level service and provides granular control to provision and manage stacks of AWS resources based on templates
  • CloudFormation templates enable version control of the infrastructure and make deployment of environments easy and repeatable
  • CloudFormation supports infrastructure needs of many different types of applications such as existing enterprise applications, legacy applications, applications built using a variety of AWS resources and container-based solutions (including those built using AWS Elastic Beanstalk).
  • CloudFormation is not just an application deployment tool but can provision any kind of AWS resource
  • CloudFormation is designed to complement both Elastic Beanstalk and OpsWorks
  • CloudFormation with Elastic Beanstalk
    • CloudFormation supports Elastic Beanstalk application environments as one of the AWS resource types.
    • This allows you, for example, to create and manage an AWS Elastic Beanstalk–hosted application along with an RDS database to store the application data (see the sketch after this list). In addition to RDS instances, any other supported AWS resource can be added to the group as well.
  • CloudFormation with OpsWorks
    • CloudFormation also supports OpsWorks and OpsWorks components (stacks, layers, instances, and applications) can be modeled inside CloudFormation templates, and provisioned as CloudFormation stacks.
    • This enables you to document, version control, and share your OpsWorks configuration.
    • A unified CloudFormation template or separate CloudFormation templates can be created to provision OpsWorks components and other related AWS resources such as a VPC and an Elastic Load Balancer
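
For illustration, here is a minimal template sketch of the CloudFormation-with-Elastic-Beanstalk combination above, managing an Elastic Beanstalk environment together with an RDS database as a single stack. The logical names, solution stack name, and sizing values are assumptions, not a definitive implementation:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Sketch - Elastic Beanstalk environment plus an RDS database in one stack
Parameters:
  DBPassword:
    Type: String
    NoEcho: true                # keeps the password out of console/API output
Resources:
  SampleApplication:            # logical names here are hypothetical
    Type: AWS::ElasticBeanstalk::Application
  SampleEnvironment:
    Type: AWS::ElasticBeanstalk::Environment
    Properties:
      ApplicationName: !Ref SampleApplication
      # solution stack name is an assumption; look up a currently supported one
      SolutionStackName: 64bit Amazon Linux 2 v3.5.9 running Python 3.8
  SampleDatabase:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: mysql
      DBInstanceClass: db.t3.micro
      AllocatedStorage: '20'
      MasterUsername: admin
      MasterUserPassword: !Ref DBPassword
```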

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might soon be outdated, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Your team is excited about the use of AWS because now they have access to programmable infrastructure. You have been asked to manage your AWS infrastructure in a manner similar to the way you might manage application code. You want to be able to deploy exact copies of different versions of your infrastructure, stage changes into different environments, revert back to previous versions, and identify what versions are running at any particular time (development, test, QA, production). Which approach addresses this requirement?
    1. Use cost allocation reports and AWS Opsworks to deploy and manage your infrastructure.
    2. Use AWS CloudWatch metrics and alerts along with resource tagging to deploy and manage your infrastructure.
    3. Use AWS Elastic Beanstalk and a version control system like GIT to deploy and manage your infrastructure.
    4. Use AWS CloudFormation and a version control system like GIT to deploy and manage your infrastructure.
  2. An organization is planning to use AWS for their production roll out. The organization wants to implement automation for deployment such that it will automatically create a LAMP stack, download the latest PHP installable from S3 and setup the ELB. Which of the below mentioned AWS services meets the requirement for making an orderly deployment of the software?
    1. AWS Elastic Beanstalk
    2. AWS CloudFront
    3. AWS CloudFormation
    4. AWS DevOps
  3. You are working with a customer who is using Chef configuration management in their data center. Which service is designed to let the customer leverage existing Chef recipes in AWS?
    1. Amazon Simple Workflow Service
    2. AWS Elastic Beanstalk
    3. AWS CloudFormation
    4. AWS OpsWorks

AWS High Availability & Fault Tolerance Architecture – Certification

AWS High Availability & Fault Tolerance Architecture

  • Amazon Web Services provides services and infrastructure to build reliable, fault-tolerant, and highly available systems in the cloud.
  • Fault tolerance is the ability of a system to remain in operation even if some of the components used to build the system fail.
  • Most of the higher-level services, such as S3, SimpleDB, SQS, and ELB, have been built with fault tolerance and high availability in mind.
  • Services that provide basic infrastructure, such as EC2 and EBS, provide specific features, such as availability zones, elastic IP addresses, and snapshots, that a fault-tolerant and highly available system must take advantage of and use correctly.

AWS High Availability and Fault Tolerance

NOTE: Topic mainly for Professional Exam Only

Regions & Availability Zones

  • Amazon Web Services are available in geographic Regions and with multiple Availability zones (AZs) within a region, which provide easy access to redundant deployment locations.
  • AZs are distinct geographical locations that are engineered to be insulated from failures in other AZs.
  • Regions and AZs help achieve greater fault tolerance by distributing the application geographically and help build multi-site solutions.
  • AZs provide inexpensive, low latency network connectivity to other Availability Zones in the same Region
  • By placing EC2 instances in multiple AZs, an application can be protected from failure at a single data center
  • It is important to run independent application stacks in more than one AZ, either in the same region or in another region, so that if one zone fails, the application in the other zone can continue to run.

Amazon Machine Image – AMIs

  • EC2 is a web service within Amazon Web Services that provides computing resources.
  • Amazon Machine Image (AMI) provides a Template that can be used to define the service instances.
  • A template contains a software configuration (OS, application server, and applications) and is applied to an instance type
  • An AMI can either contain all the software, applications, and code bundled, or can be configured with a bootstrap script that installs them on startup.
  • A single AMI can be used to create server resources of different instance types, to start new instances, or to replace failed instances

Auto Scaling

  • Auto Scaling helps to automatically scale EC2 capacity up or down based on defined rules.
  • Auto Scaling also enables addition of more instances in response to an increasing load; and when those instances are no longer needed, they will be automatically terminated.
  • Auto Scaling enables terminating server instances at will, knowing that replacement instances will be automatically launched.
  • Auto Scaling can work across multiple AZs within an AWS Region

Elastic Load Balancing – ELB

  • Elastic Load Balancing is an effective way to increase the availability of a system; it distributes incoming application traffic across several EC2 instances
  • With ELB, a DNS host name is created and any requests sent to this host name are delegated to a pool of EC2 instances
  • ELB supports health checks on hosts, distribution of traffic to EC2 instances across multiple availability zones, and dynamic addition and removal of EC2 hosts from the load-balancing rotation
  • Elastic Load Balancing detects unhealthy instances within its pool of EC2 instances and automatically reroutes traffic to healthy instances, until the unhealthy instances are restored (seamlessly, using Auto Scaling).
  • Auto Scaling and Elastic Load Balancing are an ideal combination – while ELB gives a single DNS name for addressing, Auto Scaling ensures there is always the right number of healthy EC2 instances to accept requests.
  • ELB can be used to balance across instances in multiple AZs of a region.
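
A minimal sketch of the ELB and Auto Scaling combination described above, spreading instances across the region's AZs (a Classic Load Balancer is assumed, and the AMI ID, logical names, and sizes are placeholders):

```yaml
Resources:
  WebLoadBalancer:                      # hypothetical logical names throughout
    Type: AWS::ElasticLoadBalancing::LoadBalancer
    Properties:
      AvailabilityZones: !GetAZs ''     # balance across all AZs of the region
      Listeners:
        - LoadBalancerPort: '80'
          InstancePort: '80'
          Protocol: HTTP
      HealthCheck:                      # unhealthy instances are taken out of rotation
        Target: HTTP:80/
        Interval: '30'
        Timeout: '5'
        HealthyThreshold: '2'
        UnhealthyThreshold: '3'
  WebLaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: ami-12345678             # placeholder AMI ID
      InstanceType: t3.micro
  WebServerGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      AvailabilityZones: !GetAZs ''     # instances distributed across AZs
      LaunchConfigurationName: !Ref WebLaunchConfig
      MinSize: '2'
      MaxSize: '6'
      LoadBalancerNames:
        - !Ref WebLoadBalancer          # replacement instances register automatically
```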

Elastic IPs – EIPs

  • Elastic IP addresses are public static IP addresses that can be mapped programmatically between instances within a region.
  • EIPs are associated with the AWS account, not with a specific instance or the lifetime of an instance.
  • Elastic IP addresses can be used for instances and services that require consistent endpoints, such as, master databases, central file servers, and EC2-hosted load balancers
  • Elastic IP addresses can be used to work around host or availability zone failures by quickly remapping the address to another running instance or a replacement instance that was just started.

Reserved Instance

  • Reserved Instances help reserve and guarantee that computing capacity is always available when needed, at a lower cost.

Elastic Block Store – EBS

  • Elastic Block Store (EBS) offers persistent off-instance storage volumes that persist independently from the life of an instance and are about an order of magnitude more durable than on-instance storage.
  • EBS volumes store data redundantly and are automatically replicated within a single availability zone.
  • EBS helps in failover scenarios where if an EC2 instance fails and needs to be replaced, the EBS volume can be attached to the new EC2 instance
  • Valuable data should never be stored only on instance (ephemeral) storage without proper backups, replication, or the ability to re-create the data.

EBS Snapshots

  • EBS volumes are highly reliable, but to further mitigate the possibility of a failure and increase durability, point-in-time snapshots can be created that store the volume data in S3, which is then replicated to multiple AZs.
  • Snapshots can be used to create new EBS volumes, which are an exact replica of the original volume at the time the snapshot was taken
  • Snapshots provide an effective way to deal with disk failures or other host-level issues, as well as with problems affecting an AZ.
  • Snapshots are incremental and back up only changes since the previous snapshot, so it is advisable to hold on to recent snapshots
  • Snapshots are tied to the region, while EBS volumes are tied to a single AZ

Relational Database Service – RDS

  • RDS makes it easy to run relational databases in the cloud
  • RDS Multi-AZ deployments provision a synchronous standby replica of the database in a different AZ, which helps increase database availability and protects the database against unplanned outages
  • In a failover scenario, the standby is seamlessly promoted to primary and handles the database operations.
  • Automated backups, enabled by default, provide point-in-time recovery for the database instance.
  • RDS backs up the database and transaction logs and stores both for a user-specified retention period.
  • In addition to the automated backups, manual RDS backups can also be performed, which are retained until explicitly deleted.
  • Backups help recover from higher-level faults such as unintentional data modification, whether by operator error or by bugs in the application.
  • RDS Read Replicas provide read-only replicas of the database and the ability to scale out beyond the capacity of a single database deployment for read-heavy workloads
  • RDS Read Replicas are a scalability solution, not a high-availability solution
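
A minimal sketch of a Multi-AZ RDS instance with a read replica; the names, engine, and sizing are assumptions (note that a read replica requires automated backups to be enabled on the source):

```yaml
Parameters:
  DBPassword:
    Type: String
    NoEcho: true                        # keeps the password out of console output
Resources:
  PrimaryDB:                            # hypothetical logical names
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: mysql
      DBInstanceClass: db.t3.small
      AllocatedStorage: '20'
      MasterUsername: admin
      MasterUserPassword: !Ref DBPassword
      MultiAZ: true                     # synchronous standby in another AZ (HA)
      BackupRetentionPeriod: 7          # automated backups, needed for replicas/PITR
  ReadReplica:
    Type: AWS::RDS::DBInstance
    Properties:
      SourceDBInstanceIdentifier: !Ref PrimaryDB   # scalability, not HA
      DBInstanceClass: db.t3.small
```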

Simple Storage Service – S3

  • S3 provides highly durable, fault-tolerant and redundant object store
  • S3 stores objects redundantly on multiple devices across multiple facilities in an S3 Region
  • S3 is a great storage solution for somewhat static or slow-changing objects, such as images, videos, and other static media.
  • S3 also supports edge caching and streaming of these assets by interacting with the Amazon CloudFront service.

Simple Queue Service – SQS

  • Simple Queue Service (SQS) is a highly reliable distributed messaging system that can serve as the backbone of a fault-tolerant application
  • SQS is engineered to provide “at least once” delivery of all messages
  • Messages sent to a queue are retained for up to four days by default (configurable up to 14 days) or until they are read and deleted by the application
  • Messages can be polled by multiple workers and processed, while SQS ensures that a message is processed by only one worker at a time using a configurable time interval called the visibility timeout
  • If the number of messages in a queue starts to grow, or if the average time to process a message becomes too high, workers can be scaled up by simply adding additional EC2 instances.
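
A minimal sketch of an SQS queue with the retention and visibility timeout settings described above (the queue name and values are assumptions):

```yaml
Resources:
  WorkQueue:                            # hypothetical logical name
    Type: AWS::SQS::Queue
    Properties:
      MessageRetentionPeriod: 1209600   # 14 days in seconds (default is 4 days, 345600)
      VisibilityTimeout: 120            # seconds a message stays hidden while one worker processes it
```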

Route 53

  • Amazon Route 53 is a highly available and scalable DNS web service.
  • Queries for the domain are automatically routed to the nearest DNS server and thus are answered with the best possible performance.
  • Route 53 resolves requests for your domain name (for example, www.example.com) to your Elastic Load Balancer, including for the zone apex record (example.com).
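
A minimal sketch of a Route 53 alias record pointing the zone apex at an ELB; the hosted zone and the referenced Classic Load Balancer resource are assumptions:

```yaml
Resources:
  ApexAliasRecord:                      # hypothetical; assumes a WebLoadBalancer resource
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: example.com.      # note the trailing dot
      Name: example.com.                # zone apex: an alias works here, a CNAME would not
      Type: A
      AliasTarget:
        DNSName: !GetAtt WebLoadBalancer.DNSName
        HostedZoneId: !GetAtt WebLoadBalancer.CanonicalHostedZoneNameID   # Classic ELB attribute
```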

CloudFront

  • CloudFront can be used to deliver websites, including dynamic, static, and streaming content, using a global network of edge locations.
  • Requests for content are automatically routed to the nearest edge location, so content is delivered with the best possible performance.
  • CloudFront is optimized to work with other Amazon Web Services, like S3 and EC2
  • CloudFront also works seamlessly with any non-AWS origin server, which stores the original, definitive versions of the files.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might soon be outdated, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. You are moving an existing traditional system to AWS, and during the migration discover that there is a master server which is a single point of failure. Having examined the implementation of the master server you realize there is not enough time during migration to re-engineer it to be highly available, though you do discover that it stores its state in a local MySQL database. In order to minimize downtime, you select RDS to replace the local database and configure the master to use it. What steps would best allow you to create a self-healing architecture? [PROFESSIONAL]
    1. Migrate the local database into a Multi-AZ RDS database. Place the master node into a multi-AZ Auto Scaling group with a minimum of one and maximum of one with health checks.
    2. Replicate the local database into an RDS read replica. Place the master node into a Cross-Zone ELB with a minimum of one and maximum of one with health checks. (A read replica does not provide HA or write capability; an ELB has no min/max-of-1 feature, and cross-zone load balancing just distributes load equally across instances)
    3. Migrate the local database into a Multi-AZ RDS database. Place the master node into a Cross-Zone ELB with a minimum of one and maximum of one with health checks. (An ELB has no min/max-of-1 feature, and cross-zone load balancing just distributes load equally across instances)
    4. Replicate the local database into an RDS read replica. Place the master node into a multi-AZ Auto Scaling group with a minimum of one and maximum of one with health checks. (A read replica does not provide HA or write capability)
  2. You are designing Internet connectivity for your VPC. The Web servers must be available on the Internet. The application must have a highly available architecture. Which alternatives should you consider? (Choose 2 answers)
    1. Configure a NAT instance in your VPC. Create a default route via the NAT instance and associate it with all subnets. Configure a DNS A record that points to the NAT instance public IP address (NAT is for internet connectivity for instances in private subnet)
    2. Configure a CloudFront distribution and configure the origin to point to the private IP addresses of your Web servers. Configure a Route53 CNAME record to your CloudFront distribution.
    3. Place all your web servers behind ELB. Configure a Route53 CNAME to point to the ELB DNS name.
    4. Assign EIPs to all web servers. Configure a Route53 record set with all EIPs. With health checks and DNS failover.
  3. When deploying a highly available 2-tier web application on AWS, which combination of AWS services meets the requirements? 1. AWS Direct Connect 2. Amazon Route 53 3. AWS Storage Gateway 4. Elastic Load Balancing 5. Amazon EC2 6. Auto Scaling 7. Amazon VPC 8. AWS CloudTrail [PROFESSIONAL]
    1. 2,4,5 and 6
    2. 3,4,5 and 8
    3. 1 through 8
    4. 1,3,5 and 7
    5. 1,2,5 and 6
  4. Company A has hired you to assist with the migration of an interactive website that allows registered users to rate local restaurants. Updates to the ratings are displayed on the home page, and ratings are updated in real time. Although the website is not very popular today, the company anticipates that it will grow rapidly over the next few weeks. They want the site to be highly available. The current architecture consists of a single Windows Server 2008 R2 web server and a MySQL database running on Linux. Both reside inside an on-premises hypervisor. What would be the most efficient way to transfer the application to AWS, ensuring performance and high availability? [PROFESSIONAL]
    1. Export web files to an Amazon S3 bucket in us-west-1. Run the website directly out of Amazon S3. Launch a multi-AZ MySQL Amazon RDS instance in us-west-1a. Import the data into Amazon RDS from the latest MySQL backup. Use Route 53 and create an alias record pointing to the elastic load balancer. (It’s an interactive website; although it could be implemented using the JavaScript SDK, this is a migration and the application would need changes. Also, there is no ELB to point to if hosted on S3)
    2. Launch two Windows Server 2008 R2 instances in us-west-1b and two in us-west-1a. Copy the web files from the on-premises web server to each Amazon EC2 web server, using Amazon S3 as the repository. Launch a multi-AZ MySQL Amazon RDS instance in us-west-2a. Import the data into Amazon RDS from the latest MySQL backup. Create an elastic load balancer to front your web servers. Use Route 53 and create an alias record pointing to the elastic load balancer. (Although the RDS instance is in a different region, which will impact performance, this is the only option that works.)
    3. Use AWS VM Import/Export to create an Amazon Elastic Compute Cloud (EC2) Amazon Machine Image (AMI) of the web server. Configure Auto Scaling to launch two web servers in us-west-1a and two in us-west-1b. Launch a Multi-AZ MySQL Amazon Relational Database Service (RDS) instance in us-west-1b. Import the data into Amazon RDS from the latest MySQL backup. Use Amazon Route 53 to create a hosted zone and point an A record to the elastic load balancer. (does not create a load balancer)
    4. Use AWS VM Import/Export to create an Amazon EC2 AMI of the web server. Configure auto-scaling to launch two web servers in us-west-1a and two in us-west-1b. Launch a multi-AZ MySQL Amazon RDS instance in us-west-1a. Import the data into Amazon RDS from the latest MySQL backup. Create an elastic load balancer to front your web servers. Use Amazon Route 53 and create an A record pointing to the elastic load balancer. (Need to create a aliased record without which the Route 53 pointing to ELB would not work)
  5. Your company runs a customer facing event registration site. This site is built with a 3-tier architecture with web and application tier servers and a MySQL database. The application requires 6 web tier servers and 6 application tier servers for normal operation, but can run on a minimum of 65% server capacity and a single MySQL database. When deploying this application in a region with three availability zones (AZs) which architecture provides high availability? [PROFESSIONAL]
    1. A web tier deployed across 2 AZs with 3 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer), and an application tier deployed across 2 AZs with 3 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB. and one RDS (Relational Database Service) instance deployed with read replicas in the other AZ.
    2. A web tier deployed across 3 AZs with 2 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer) and an application tier deployed across 3 AZs with 2 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB and one RDS (Relational Database Service) Instance deployed with read replicas in the two other AZs.
    3. A web tier deployed across 2 AZs with 3 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer) and an application tier deployed across 2 AZs with 3 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB and a Multi-AZ RDS (Relational Database Service) deployment.
    4. A web tier deployed across 3 AZs with 2 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer), and an application tier deployed across 3 AZs with 2 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB, and a Multi-AZ RDS (Relational Database Service) deployment.
  6. For a 3-tier, customer facing, inclement weather site utilizing a MySQL database running in a Region which has two AZs which architecture provides fault tolerance within the region for the application that minimally requires 6 web tier servers and 6 application tier servers running in the web and application tiers and one MySQL database? [PROFESSIONAL]
    1. A web tier deployed across 2 AZs with 6 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer), and an application tier deployed across 2 AZs with 6 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB. and a Multi-AZ RDS (Relational Database Service) deployment. (As it needs Fault Tolerance with minimal 6 servers always available)
    2. A web tier deployed across 2 AZs with 3 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer) and an application tier deployed across 2 AZs with 3 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB and a Multi-AZ RDS (Relational Database Service) deployment.
    3. A web tier deployed across 2 AZs with 3 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer) and an application tier deployed across 2 AZs with 6 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB and one RDS (Relational Database Service) Instance deployed with read replicas in the other AZs.
    4. A web tier deployed in 1 AZ with 6 EC2 (Elastic Compute Cloud) instances inside an Auto Scaling Group behind an ELB (elastic load balancer), and an application tier deployed in the same AZ with 6 EC2 instances inside an Auto Scaling group behind an ELB, and a Multi-AZ RDS (Relational Database Service) deployment, with 6 stopped web tier EC2 instances and 6 stopped application tier EC2 instances all in the other AZ ready to be started if any of the running instances in the first AZ fails.
  7. You are designing a system which needs, at minimum, 8 m4.large instances operating to service traffic. When designing a system for high availability in the us-east-1 region, which has 6 Availability Zones, your company needs to be able to handle the death of a full availability zone. How should you distribute the servers, to save as much cost as possible, assuming all of the EC2 nodes are properly linked to an ELB? Your VPC account can utilize us-east-1’s AZs a through f, inclusive.
    1. 3 servers in each of AZ’s a through d, inclusive.
    2. 8 servers in each of AZ’s a and b.
    3. 2 servers in each of AZ’s a through e, inclusive. (You need to design for N+1 redundancy on Availability Zones. ZONE_COUNT = (REQUIRED_INSTANCES / INSTANCE_COUNT_PER_ZONE) + 1. To minimize cost, spread the instances across as many zones as you can. By using a through e, you are allocating 5 zones. Using 2 instances each, you have 10 total instances. If a single zone fails, you have 4 zones left, with 2 instances each, for a total of 8 instances. By spreading out as much as possible, you have increased cost by only 25% and significantly de-risked an availability zone failure. Refer link)
    4. 4 servers in each of AZ’s a through c, inclusive.
  8. You need your API backed by DynamoDB to stay online during a total regional AWS failure. You can tolerate a couple minutes of lag or slowness during a large failure event, but the system should recover with normal operation after those few minutes. What is a good approach? [PROFESSIONAL]
    1. Set up DynamoDB cross-region replication in a master-standby configuration, with a single standby in another region. Create an Auto Scaling Group behind an ELB in each of the two regions DynamoDB is running in. Add a Route53 Latency DNS Record with DNS Failover, using the ELBs in the two regions as the resource records. (Use DynamoDB cross-regional replication version with two ELBs and ASGs with Route53 Failover and Latency DNS. Refer link)
    2. Set up a DynamoDB Multi-Region table. Create an Auto Scaling Group behind an ELB in each of the two regions DynamoDB is running in. Add a Route53 Latency DNS Record with DNS Failover, using the ELBs in the two regions as the resource records. (There was no such thing as a DynamoDB Multi-Region table; however, global tables have since been introduced.)
    3. Set up a DynamoDB Multi-Region table. Create a cross-region ELB pointing to a cross-region Auto Scaling Group, and direct a Route53 Latency DNS Record with DNS Failover to the cross-region ELB. (No such thing as Cross Region ELB or cross-region ASG)
    4. Set up DynamoDB cross-region replication in a master-standby configuration, with a single standby in another region. Create a cross-region ELB pointing to a cross-region Auto Scaling Group, and direct a Route53 Latency DNS Record with DNS Failover to the cross-region ELB. (No such thing as a cross-region ELB or a cross-region Auto Scaling group)
  9. You are putting together a WordPress site for a local charity and you are using a combination of Route53, Elastic Load Balancers, EC2 & RDS. You launch your EC2 instance, download WordPress and setup the configuration files connection string so that it can communicate to RDS. When you browse to your URL however, nothing happens. Which of the following could NOT be the cause of this.
    1. You have forgotten to open port 80/443 on your security group in which the EC2 instance is placed.
    2. Your elastic load balancer has a health check, which is checking a webpage that does not exist; therefore your EC2 instance is not in service.
    3. You have not configured an ALIAS for your A record to point to your elastic load balancer
    4. You have locked port 22 down to your specific IP address therefore users cannot access your site using HTTP/HTTPS
  10. A development team that is currently doing a nightly six-hour build, which is lengthening over time, on-premises with a large and mostly underutilized server would like to transition to a continuous integration model of development on AWS with multiple builds triggered within the same day. However, they are concerned about cost, security, and how to integrate with existing on-premises applications such as their LDAP and email servers, which cannot move off-premises. The development environment needs a source code repository; a project management system with a MySQL database; resources for performing the builds; and a storage location for QA to pick up builds from. What AWS services combination would you recommend to meet the development team’s requirements? [PROFESSIONAL]
    1. A Bastion host Amazon EC2 instance running a VPN server for access from on-premises, Amazon EC2 for the source code repository with attached Amazon EBS volumes, Amazon EC2 and Amazon RDS MySQL for the project management system, EIPs for the source code repository and project management system, Amazon SQS for a build queue, an Auto Scaling group of Amazon EC2 instances for performing builds, and Amazon Simple Email Service for sending the build output. (A bastion host is not for VPN connectivity; also, SES should not be used)
    2. An AWS Storage Gateway for connecting on-premises software applications with cloud-based storage securely, Amazon EC2 for the source code repository with attached Amazon EBS volumes, Amazon EC2 and Amazon RDS MySQL for the project management system, EIPs for the source code repository and project management system, Amazon Simple Notification Service for a notification-initiated build, an Auto Scaling group of Amazon EC2 instances for performing builds, and Amazon S3 for the build output. (Storage Gateway does not provide secure connectivity; a VPN is still needed. SNS alone cannot handle builds)
    3. An AWS Storage Gateway for connecting on-premises software applications with cloud-based storage securely, Amazon EC2 for the source code repository with attached Amazon EBS volumes, Amazon EC2 and Amazon RDS MySQL for the project management system, EIPs for the source code repository and project management system, Amazon SQS for a build queue, an Amazon Elastic MapReduce (EMR) cluster of Amazon EC2 instances for performing builds, and Amazon CloudFront for the build output. (Storage Gateway does not provide secure connectivity; a VPN is still needed. EMR is not ideal for performing builds, which need normal EC2 instances)
    4. A VPC with a VPN Gateway back to their on-premises servers, Amazon EC2 for the source-code repository with attached Amazon EBS volumes, Amazon EC2 and Amazon RDS MySQL for the project management system, EIPs for the source code repository and project management system, SQS for a build queue, An Auto Scaling group of EC2 instances for performing builds and S3 for the build output. (VPN gateway is required for secure connectivity. SQS for build queue and EC2 for builds)

AWS CloudFormation

AWS CloudFormation

  • AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, and to provision and update them in an orderly and predictable fashion
  • CloudFormation consists of
    • Template
      • is an architectural diagram
      • a JSON or YAML-format, text-based file that describes all the AWS resources you need to deploy to run your application
    • Stack
      • is the end result of that diagram, which is actually provisioned
      • is the set of AWS resources that are created and managed as a single unit when CloudFormation instantiates a template.
  • CloudFormation template can be used to set up the resources consistently and repeatedly over and over across multiple regions
  • Resources can be updated, deleted, and modified in a controlled and predictable way, in effect applying version control to the infrastructure as done for software code
  • An AWS CloudFormation template consists of the following elements:
    • List of AWS resources and their configuration values
    • An optional template file format version number
    • An optional list of template parameters (input values supplied at stack creation time)
    • An optional list of output values, like a public IP address, using the Fn::GetAtt function
    • An optional list of data tables used to look up static configuration values, e.g. AMI names per AZ
  • CloudFormation supports Chef & Puppet integration to deploy and configure right down to the application layer
  • CloudFormation provides a set of application bootstrapping scripts that enable you to install packages, files, and services on the EC2 instances by simply describing them in the CloudFormation template
  • By default, automatic rollback on error feature is enabled, which will cause all the AWS resources that CloudFormation created successfully for a stack up to the point where an error occurred to be deleted.
  • In case of automatic rollback, charges still apply for the resources for the time they were up and running
  • CloudFormation provides a WaitCondition resource that acts as a barrier, blocking the creation of other resources until a completion signal is received from an external source, e.g. an application or management system (see the sketch after this list)
  • CloudFormation allows deletion policies to be defined for resources in the template, e.g. resources can be retained or snapshotted before deletion, which is useful for preserving S3 buckets when the stack is deleted
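
A minimal sketch of the WaitCondition and deletion-policy behaviors described above; the AMI ID and logical names are placeholders, and cfn-signal is assumed to be available on the instance (it ships with the CloudFormation helper scripts on Amazon Linux):

```yaml
Resources:
  AppWaitHandle:
    Type: AWS::CloudFormation::WaitConditionHandle
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-12345678             # placeholder AMI ID
      InstanceType: t3.micro
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          # ... install and configure the application here ...
          # then signal success/failure to the presigned wait-condition URL
          /opt/aws/bin/cfn-signal -e $? "${AppWaitHandle}"
  AppWaitCondition:                     # blocks dependent resources until signaled
    Type: AWS::CloudFormation::WaitCondition
    DependsOn: AppInstance
    Properties:
      Handle: !Ref AppWaitHandle
      Timeout: '600'                    # fail (and roll back) if no signal in 10 minutes
  DataBucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain              # keep the bucket when the stack is deleted
```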

Required Mainly for Developer, SysOps Associate & DevOps Professional Exam

AWS CloudFormation Concepts

With AWS CloudFormation, you work with templates and stacks.

Templates

  • act as blueprints for building AWS resources.
  • is a JSON or YAML formatted text file, saved with any extension, such as .json, .yaml, .template, or .txt.
  • have additional capabilities to build complex sets of resources and reuse those templates in multiple contexts, e.g. using input parameters to create generic and reusable templates
  • The name used for a resource within the template is a logical name; when CloudFormation creates the resource, it generates a physical name based on the combination of the logical name, the stack name, and a unique ID

Stacks

  • Stacks manage related resources as a single unit.
  • Collection of resources can be created, updated, and deleted by creating, updating, and deleting stacks.
  • All the resources in a stack are defined by the stack’s AWS CloudFormation template
  • CloudFormation makes underlying service calls to AWS to provision and configure the resources in the stack and can perform only actions that the users have permission to do.

Change Sets

  • Change sets present a summary or preview of the proposed changes that CloudFormation will make when a stack is updated
  • Change Sets help check how the changes might impact running resources, especially critical resources, before implementing them
  • CloudFormation makes the changes to the stack only when the change set is executed, allowing you to decide whether to proceed with the proposed changes or explore other changes by creating another change set
  • Change sets don’t indicate whether CloudFormation will successfully update a stack, e.g. if account limits are hit or the user does not have permission


Nested Stacks

  • Nested stacks are stacks created as part of other stacks.
  • A nested stack can be created within another stack by using the AWS::CloudFormation::Stack resource.
  • Nested stacks can be used to define common, repeated patterns and components and create dedicated templates which then can be called from other stacks.
  • Nested stacks can themselves contain other nested stacks, resulting in a hierarchy of stacks. The root stack is the top-level stack to which all the nested stacks ultimately belong.
  • In addition, each nested stack has an immediate parent stack. For the first level of nested stacks, the root stack is also the parent stack.
  • Certain stack operations, such as stack updates, should be initiated from the root stack rather than performed directly on nested stacks themselves.
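
A minimal sketch of nested stacks; the template URLs, parameters, and output names are hypothetical and assume the child templates exist in S3:

```yaml
Resources:
  NetworkStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-templates/network.yaml   # assumed location
      Parameters:
        VpcCidr: 10.0.0.0/16
  AppStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-templates/app.yaml
      Parameters:
        # pass an output of the network stack into the application stack
        SubnetId: !GetAtt NetworkStack.Outputs.PublicSubnetId
```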

Drift Detection

  • Drift detection enables you to detect whether a stack’s actual configuration differs, or has drifted, from its expected configuration.
  • Drift detection helps identify stack resources to which configuration changes have been made outside of CloudFormation management
  • Drift detection can detect drift on an entire stack or individual resources
  • Corrective action can be taken to make sure the stack resources are again in sync with the definitions in the stack template, such as updating the drifted resources directly so that they agree with their template definition
  • Resolving drift helps to ensure configuration consistency and successful stack operations.
  • CloudFormation detects drift on those AWS resources that support drift detection. Resources that don’t support drift detection are assigned a drift status of NOT_CHECKED.
  • Drift detection can be performed on stacks with the following statuses: CREATE_COMPLETE, UPDATE_COMPLETE, UPDATE_ROLLBACK_COMPLETE, and UPDATE_ROLLBACK_FAILED.
  • CloudFormation does not detect drift on any nested stacks that belong to that stack. Instead, you can initiate a drift detection operation directly on the nested stack.

CloudFormation Template Anatomy

  • Resources (required)
    • Specifies the stack resources and their properties, such as an EC2 instance or an S3 bucket that would be created
    • Resources can be referred to in the Resources and Outputs sections
  • Parameters (optional)
    • Pass values to the template at runtime (during stack creation or update)
    • Parameters can be referred from the Resources and Outputs sections
    • Can be referenced using Ref or the !Ref short form
  • Mappings (optional)
    • A mapping of keys and associated values that you can use to specify conditional parameter values, similar to a lookup table.
    • Can be referred using Fn::FindInMap or !FindInMap
  • Outputs (optional)
    • Describes the values that are returned whenever you view your stack’s properties.
  • Format Version (optional)
    • AWS CloudFormation template version that the template conforms to.
  • Description (optional)
    • A text string that describes the template. This section must always follow the template format version section.
  • Metadata (optional)
    • Objects that provide additional information about the template.
  • Rules (optional)
    • Validates a parameter or a combination of parameters passed to a template during stack creation or stack update.
  • Conditions (optional)
    • Conditions that control whether certain resources are created or whether certain resource properties are assigned a value during stack creation or update.
  • Transform (optional)
    • For serverless applications (also referred to as Lambda-based applications), specifies the version of the AWS Serverless Application Model (AWS SAM) to use.
    • When you specify a transform, you can use AWS SAM syntax to declare resources in the template. The model defines the syntax that you can use and how it’s processed.

CloudFormation Template Sample
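
A minimal illustrative template showing the main sections described above; all AMI IDs, names, and values are placeholders:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Sketch of the main template sections
Parameters:
  InstanceType:
    Type: String
    Default: t3.micro                   # input supplied at stack creation/update
Mappings:
  RegionAMI:                            # lookup table, e.g. AMI ID per region
    us-east-1:
      HVM64: ami-11111111               # placeholder AMI IDs
    eu-west-1:
      HVM64: ami-22222222
Resources:
  WebServer:                            # logical name; the physical name is generated
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceType   # Parameters are referenced with Ref
      ImageId: !FindInMap [RegionAMI, !Ref 'AWS::Region', HVM64]   # Mappings via FindInMap
Outputs:
  PublicIp:
    Description: Public IP of the web server
    Value: !GetAtt WebServer.PublicIp   # attribute returned via Fn::GetAtt
```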

CloudFormation Access Control

  • IAM
    • IAM can be used with CloudFormation to control whether users can view stack templates, create stacks, or delete stacks
    • IAM permissions need to be granted to the user for the AWS services and resources that are provisioned when the stack is created
    • Before a stack is created, AWS CloudFormation validates the template to check for IAM resources that it might create
  • Service Role
    • A service role is an AWS IAM role that allows AWS CloudFormation to make calls to resources in a stack on the user’s behalf
    • By default, AWS CloudFormation uses a temporary session that it generates from the user credentials for stack operations.
    • For a service role, AWS CloudFormation uses the role’s credentials.
    • When a service role is specified, AWS CloudFormation always uses that role for all operations that are performed on that stack.

Template Resource Attributes

  • CreationPolicy Attribute
    • is invoked during the associated resource creation
    • can be associated with a resource to prevent its status from reaching create complete until AWS CloudFormation receives a specified number of success signals or the timeout period is exceeded
    • helps to wait on resource configuration actions before stack creation proceeds, e.g. software installation on an EC2 instance
  • DeletionPolicy Attribute
    • preserves or (in some cases) backs up a resource when its stack is deleted
    • By default, if a resource has no DeletionPolicy attribute, AWS CloudFormation deletes the resource
    • DeletionPolicy options (see the sketch after this list):
      • Delete, the default, where the resource is deleted with the stack
      • Retain, to prevent deletion and keep the resource when its stack is deleted
      • Snapshot, to create a snapshot before deleting the resource, if the snapshot capability is supported, e.g. RDS, EC2 volumes, etc.
  • DependsOn Attribute
    • helps specify that the creation of a specific resource follows another
    • the resource is created only after the creation of the resource specified in the DependsOn attribute
  • Metadata Attribute
    • enables association of structured data with a resource
  • UpdatePolicy Attribute
    • Defines how AWS CloudFormation handles updates to the resources
    • For AWS::AutoScaling::AutoScalingGroup resources, CloudFormation invokes one of three update policies depending on the type of change or whether a scheduled action is associated with the Auto Scaling group.
      • The AutoScalingReplacingUpdate and AutoScalingRollingUpdate policies apply only when you do one or more of the following:
        • Change the Auto Scaling group’s AWS::AutoScaling::LaunchConfiguration
        • Change the Auto Scaling group’s VPCZoneIdentifier property
        • Change the Auto Scaling group’s LaunchTemplate property
        • Update an Auto Scaling group that contains instances that don’t match the current LaunchConfiguration.
      • The AutoScalingScheduledAction policy applies when you update a stack that includes an Auto Scaling group with an associated scheduled action.
    • For AWS::Lambda::Alias resources, CloudFormation performs a CodeDeploy deployment when the version changes on the alias.
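
A minimal sketch combining the CreationPolicy, DeletionPolicy, and DependsOn attributes described above; the AMI ID, logical names, and DBPassword parameter are assumptions:

```yaml
Parameters:
  DBPassword:
    Type: String
    NoEcho: true
Resources:
  AppDatabase:
    Type: AWS::RDS::DBInstance
    DeletionPolicy: Snapshot            # take a final snapshot instead of deleting
    Properties:
      Engine: mysql
      DBInstanceClass: db.t3.micro
      AllocatedStorage: '20'
      MasterUsername: admin
      MasterUserPassword: !Ref DBPassword
  AppServer:
    Type: AWS::EC2::Instance
    DependsOn: AppDatabase              # create the database before the instance
    CreationPolicy:
      ResourceSignal:
        Count: 1
        Timeout: PT15M                  # wait up to 15 minutes for one success signal
    Properties:
      ImageId: ami-12345678             # placeholder AMI ID
      InstanceType: t3.micro
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          # ... software installation and configuration ...
          /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource AppServer --region ${AWS::Region}
```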

CloudFormation Termination Protection

  • Termination protection helps prevent a stack from being accidentally deleted.
  • Termination protection on stacks is disabled by default.
  • Termination protection can be enabled at stack creation
  • Termination protection can be set on a stack with any status except DELETE_IN_PROGRESS or DELETE_COMPLETE
  • Enabling or disabling termination protection on a stack sets it for any nested stacks belonging to that stack as well. You can’t enable or disable termination protection directly on a nested stack.
  • If a user attempts to directly delete a nested stack belonging to a stack that has termination protection enabled, the operation fails and the nested stack remains unchanged.
  • If a user performs a stack update that would delete the nested stack, AWS CloudFormation deletes the nested stack accordingly.

CloudFormation Stack Policy

  • Stack policy can prevent stack resources from being unintentionally updated or deleted during a stack update.
  • By default, all update actions are allowed on all resources and anyone with stack update permissions can update all of the resources in the stack.
  • During an update, some resources might require an interruption or be completely replaced, resulting in new physical IDs or completely new storage, and hence need to be protected
  • A stack policy is a JSON document that defines the update actions that can be performed on designated resources.
  • After you set a stack policy, all of the resources in the stack are protected by default.
  • Updates on specific resources can be allowed by using an explicit Allow statement for those resources in the stack policy.
  • Only one stack policy can be defined per stack, but multiple resources can be protected within a single policy.
  • A stack policy applies to all CloudFormation users who attempt to update the stack. You can’t associate different stack policies with different users
  • A stack policy applies only during stack updates. It doesn’t provide access controls like an IAM policy.
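
A sample stack policy that allows all update actions except on a hypothetical ProductionDatabase resource (stack policies are JSON documents):

```json
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "Update:*",
      "Principal": "*",
      "Resource": "*"
    },
    {
      "Effect": "Deny",
      "Action": "Update:*",
      "Principal": "*",
      "Resource": "LogicalResourceId/ProductionDatabase"
    }
  ]
}
```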

CloudFormation StackSets

  • CloudFormation StackSets extends the functionality of stacks by enabling you to create, update, or delete stacks across multiple accounts and Regions with a single operation.
  • Using an administrator account, an AWS CloudFormation template can be defined, managed, and used as the basis for provisioning stacks into selected target accounts across specified AWS Regions.


CloudFormation Helper Scripts

Refer Blog Post @ CloudFormation Helper Scripts

AWS CloudFormation Best Practices

Refer Blog Post @ CloudFormation Best Practices

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might soon be outdated, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. What does Amazon CloudFormation provide?
    1. The ability to setup Autoscaling for Amazon EC2 instances.
    2. A templated resource creation for Amazon Web Services.
    3. A template to map network resources for Amazon Web Services
    4. None of these
  2. A user is planning to use AWS CloudFormation for his automatic deployment requirements. Which of the below mentioned components are required as a part of the template?
    1. Parameters
    2. Outputs
    3. Template version
    4. Resources
  3. In regard to AWS CloudFormation, what is a stack?
    1. Set of AWS templates that are created and managed as a template
    2. Set of AWS resources that are created and managed as a template
    3. Set of AWS resources that are created and managed as a single unit
    4. Set of AWS templates that are created and managed as a single unit
  4. A large enterprise wants to adopt CloudFormation to automate administrative tasks and implement the security principles of least privilege and separation of duties. They have identified the following roles with the corresponding tasks in the company: (i) network administrators: create, modify and delete VPCs, subnets, NACLs, routing tables, and security groups (ii) application operators: deploy complete application stacks (ELB, Auto Scaling groups, RDS), whereas all resources must be deployed in the VPCs managed by the network administrators (iii) both groups must maintain their own CloudFormation templates and should be able to create, update and delete only their own CloudFormation stacks. The company has followed your advice to create two IAM groups, one for applications and one for networks. Both IAM groups are attached to IAM policies that grant rights to perform the necessary tasks of each group as well as the creation, update and deletion of CloudFormation stacks. Given the setup and requirements, which statements represent valid design considerations? Choose 2 answers [PROFESSIONAL]
    1. Network stack updates will fail upon attempts to delete a subnet with EC2 instances (Subnets cannot be deleted with instances in them)
    2. Unless resource level permissions are used on the CloudFormation: DeleteStack action, network administrators could tear down application stacks (Network administrators themselves need permission to delete resources within the application stack & CloudFormation makes calls to create, modify, and delete those resources on their behalf)
    3. The application stack cannot be deleted before all network stacks are deleted (Application stack can be deleted before network stack)
    4. Restricting the launch of EC2 instances into VPCs requires resource level permissions in the IAM policy of the application group (IAM permissions need to be given explicitly to launch instances )
    5. Nesting network stacks within application stacks simplifies management and debugging, but requires resource level permissions in the IAM policy of the network group (Although stacks can be nested, Network group will need to have all the application group permissions)
  5. Your team is excited about the use of AWS because now they have access to programmable infrastructure. You have been asked to manage your AWS infrastructure in a manner similar to the way you might manage application code. You want to be able to deploy exact copies of different versions of your infrastructure, stage changes into different environments, revert back to previous versions, and identify what versions are running at any particular time (development, test, QA, production). Which approach addresses this requirement?
    1. Use cost allocation reports and AWS Opsworks to deploy and manage your infrastructure.
    2. Use AWS CloudWatch metrics and alerts along with resource tagging to deploy and manage your infrastructure.
    3. Use AWS Beanstalk and a version control system like GIT to deploy and manage your infrastructure.
    4. Use AWS CloudFormation and a version control system like GIT to deploy and manage your infrastructure.
  6. A user is using CloudFormation to launch an EC2 instance and then configure an application after the instance is launched. The user wants the stack creation of ELB and Auto Scaling to wait until the EC2 instance is launched and configured properly. How can the user configure this?
    1. It is not possible that the stack creation will wait until one service is created and launched
    2. The user can use the HoldCondition resource to wait for the creation of the other dependent resources
    3. The user can use the DependentCondition resource to hold the creation of the other dependent resources
    4. The user can use the WaitCondition resource to hold the creation of the other dependent resources
  7. A user has created a CloudFormation stack. The stack creates AWS services, such as EC2 instances, ELB, AutoScaling, and RDS. While creating the stack it created EC2, ELB and AutoScaling but failed to create RDS. What will CloudFormation do in this scenario?
    1. CloudFormation can never throw an error after launching a few services since it verifies all the steps before launching
    2. It will warn the user about the error and ask the user to manually create RDS
    3. Rollback all the changes and terminate all the created services
    4. It will wait for the user’s input about the error and correct the mistake after the input
  8. A user is planning to use AWS CloudFormation. Which of the below mentioned functionalities does not help him to correctly understand CloudFormation?
    1. CloudFormation follows the DevOps model for the creation of Dev & Test
    2. AWS CloudFormation does not charge the user for its service but only charges for the AWS resources created with it
    3. CloudFormation works with a wide variety of AWS services, such as EC2, EBS, VPC, IAM, S3, RDS, ELB, etc
    4. CloudFormation provides a set of application bootstrapping scripts which enables the user to install Software
  9. A customer is using AWS for Dev and Test. The customer wants to setup the Dev environment with CloudFormation. Which of the below mentioned steps are not required while using CloudFormation?
    1. Create a stack
    2. Configure a service
    3. Create and upload the template
    4. Provide the parameters configured as part of the template
  10. A marketing research company has developed a tracking system that collects user behavior during web marketing campaigns on behalf of their customers all over the world. The tracking system consists of an auto-scaled group of Amazon Elastic Compute Cloud (EC2) instances behind an elastic load balancer (ELB), and the collected data is stored in Amazon DynamoDB. After the campaign is terminated, the tracking system is torn down and the data is moved to Amazon Redshift, where it is aggregated, analyzed and used to generate detailed reports. The company wants to be able to instantiate new tracking systems in any region without any manual intervention and therefore adopted AWS CloudFormation. What needs to be done to make sure that the AWS CloudFormation template works in every AWS region? Choose 2 answers [PROFESSIONAL]
    1. IAM users with the right to start AWS CloudFormation stacks must be defined for every target region. (IAM users are global)
    2. The names of the Amazon DynamoDB tables must be different in every target region. (DynamoDB names should be unique only within a region)
    3. Use the built-in function of AWS CloudFormation to set the AvailabilityZone attribute of the ELB resource.
    4. Avoid using DeletionPolicies for EBS snapshots. (Don’t want the data to be retained)
    5. Use the built-in Mappings and FindInMap functions of AWS CloudFormation to refer to the AMI ID set in the ImageId attribute of the Auto Scaling::LaunchConfiguration resource.
  11. A gaming company adopted AWS CloudFormation to automate load-testing of their games. They have created an AWS CloudFormation template for each gaming environment and one for the load-testing stack. The load-testing stack creates an Amazon Relational Database Service (RDS) Postgres database and two web servers running on Amazon Elastic Compute Cloud (EC2) that send HTTP requests, measure response times, and write the results into the database. A test run usually takes between 15 and 30 minutes. Once the tests are done, the AWS CloudFormation stacks are torn down immediately. The test results written to the Amazon RDS database must remain accessible for visualization and analysis. Select possible solutions that allow access to the test results after the AWS CloudFormation load-testing stack is deleted. Choose 2 answers. [PROFESSIONAL]
    1. Define a deletion policy of type Retain for the Amazon RDS resource to assure that the RDS database is not deleted with the AWS CloudFormation stack.
    2. Define a deletion policy of type Snapshot for the Amazon RDS resource to assure that the RDS database can be restored after the AWS CloudFormation stack is deleted.
    3. Define automated backups with a backup retention period of 30 days for the Amazon RDS database and perform point-in-time recovery of the database after the AWS CloudFormation stack is deleted. (As the environment is required only for a limited time and automated backups do not outlive the deleted instance, they will not serve the purpose)
    4. Define an Amazon RDS Read-Replica in the load-testing AWS CloudFormation stack and define a dependency relation between master and replica via the DependsOn attribute. (read replica not needed and will be deleted when the stack is deleted)
    5. Define an update policy to prevent deletion of the Amazon RDS database after the AWS CloudFormation stack is deleted. (UpdatePolicy does not apply to RDS)
  12. When working with AWS CloudFormation templates, what is the maximum number of stacks that you can create?
    1. 5000
    2. 500
    3. 2000 (Refer link – the limit changes over time, so check the documentation for the latest value)
    4. 100
  13. What happens, by default, when one of the resources in a CloudFormation stack cannot be created?
    1. Previously created resources are kept but the stack creation terminates
    2. Previously created resources are deleted and the stack creation terminates
    3. Stack creation continues, and the final results indicate which steps failed
    4. CloudFormation templates are parsed in advance so stack creation is guaranteed to succeed.
  14. You need to deploy an AWS stack in a repeatable manner across multiple environments. You have selected CloudFormation as the right tool to accomplish this, but have found that there is a resource type you need to create and model, but is unsupported by CloudFormation. How should you overcome this challenge? [PROFESSIONAL]
    1. Use a CloudFormation Custom Resource Template by selecting an API call to proxy for create, update, and delete actions. CloudFormation will use the AWS SDK, CLI, or API method of your choosing as the state transition function for the resource type you are modeling.
    2. Submit a ticket to the AWS Forums. AWS extends CloudFormation Resource Types by releasing tooling to the AWS Labs organization on GitHub. Their response time is usually 1 day, and they complete requests within a week or two.
    3. Instead of depending on CloudFormation, use Chef, Puppet, or Ansible to author Heat templates, which are declarative stack resource definitions that operate over the OpenStack hypervisor and cloud environment.
    4. Create a CloudFormation Custom Resource Type by implementing create, update, and delete functionality, either by subscribing a Custom Resource Provider to an SNS topic, or by implementing the logic in AWS Lambda. (Refer link)
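
A minimal sketch of the Lambda-backed approach from the correct option: the function receives Create/Update/Delete events from CloudFormation and must report the outcome to the pre-signed ResponseURL. The provisioning call itself is a placeholder, and the resource name is illustrative.

    import json, urllib.request

    def handler(event, context):
        # event['RequestType'] is 'Create', 'Update', or 'Delete'
        status = 'SUCCESS'
        physical_id = event.get('PhysicalResourceId', 'demo-custom-resource')
        try:
            if event['RequestType'] == 'Create':
                pass  # placeholder: call the API unsupported by CloudFormation here
        except Exception:
            status = 'FAILED'
        body = json.dumps({
            'Status': status,
            'Reason': 'See CloudWatch Logs',
            'PhysicalResourceId': physical_id,
            'StackId': event['StackId'],
            'RequestId': event['RequestId'],
            'LogicalResourceId': event['LogicalResourceId'],
        })
        # CloudFormation waits for this PUT before marking the resource complete
        req = urllib.request.Request(event['ResponseURL'], data=body.encode(),
                                     method='PUT', headers={'Content-Type': ''})
        urllib.request.urlopen(req)
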
  15. What is a circular dependency in AWS CloudFormation?
    1. When a Template references an earlier version of itself.
    2. When Nested Stacks depend on each other.
    3. When Resources form a DependsOn loop. (Refer link; to resolve a dependency error, add a DependsOn attribute to resources that depend on other resources in the template. In some cases, e.g. an EIP and a VPC with an IGW where the EIP depends on the IGW, the dependency must be declared explicitly for the resources to be created in the correct order)
    4. When a Template references a region, which references the original Template.
  16. You need to run a very large batch data processing job one time per day. The source data exists entirely in S3, and the output of the processing job should also be written to S3 when finished. If you need to version control this processing job and all setup and teardown logic for the system, what approach should you use?
    1. Model an AWS EMR job in AWS Elastic Beanstalk. (cannot directly model EMR Clusters)
    2. Model an AWS EMR job in AWS CloudFormation. (EMR cluster can be modeled using CloudFormation. Refer link)
    3. Model an AWS EMR job in AWS OpsWorks. (cannot directly model EMR Clusters)
    4. Model an AWS EMR job in AWS CLI Composer. (does not exist)
  17. Your company needs to automate 3 layers of a large cloud deployment. You want to be able to track this deployment’s evolution as it changes over time, and carefully control any alterations. What is a good way to automate a stack to meet these requirements? [PROFESSIONAL]
    1. Use OpsWorks Stacks with three layers to model the layering in your stack.
    2. Use CloudFormation Nested Stack Templates, with three child stacks to represent the three logical layers of your cloud. (CloudFormation allows source controlled, declarative templates as the basis for stack automation and Nested Stacks help achieve clean separation of layers while simultaneously providing a method to control all layers at once when needed)
    3. Use AWS Config to declare a configuration set that AWS should roll out to your cloud.
    4. Use Elastic Beanstalk Linked Applications, passing the important DNS entries between layers using the metadata interface.
  18. You have been asked to de-risk deployments at your company. Specifically, the CEO is concerned about outages that occur because of accidental inconsistencies between Staging and Production, which sometimes cause unexpected behaviors in Production even when Staging tests pass. You already use Docker to get high consistency between Staging and Production for the application environment on your EC2 instances. How do you further de-risk the rest of the execution environment, since in AWS, there are many service components you may use beyond EC2 virtual machines? [PROFESSIONAL]
    1. Develop models of your entire cloud system in CloudFormation. Use this model in Staging and Production to achieve greater parity. (Only CloudFormation’s JSON Templates allow declarative version control of repeatedly deployable models of entire AWS clouds. Refer link)
    2. Use AWS Config to force the Staging and Production stacks to have configuration parity. Any differences will be detected for you so you are aware of risks.
    3. Use AMIs to ensure the whole machine, including the kernel of the virtual machines, is consistent, since Docker uses Linux Container (LXC) technology, and we need to make sure the container environment is consistent.
    4. Use AWS ECS and Docker clustering. This will make sure that the AMIs and machine sizes are the same across both environments.
  19. Which code snippet below returns the URL of a load-balanced website created in CloudFormation with an AWS::ElasticLoadBalancing::LoadBalancer resource named “ElasticLoadBalancer”? [Developer]
    1. "Fn::Join": ["", ["http://", {"Fn::GetAtt": ["ElasticLoadBalancer", "DNSName"]}]] (Refer link)
    2. "Fn::Join": ["", ["http://", {"Fn::GetAtt": ["ElasticLoadBalancer", "Url"]}]]
    3. "Fn::Join": ["", ["http://", {"Ref": "ElasticLoadBalancerUrl"}]]
    4. "Fn::Join": ["", ["http://", {"Ref": "ElasticLoadBalancerDNSName"}]]
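
For context, a hedged sketch of how the winning expression typically sits in a template's Outputs section, written here as a Python dict ("SiteURL" is an illustrative output name; json.dumps yields the JSON fragment):

    import json

    outputs = {
        "SiteURL": {
            "Description": "URL of the load-balanced website",
            "Value": {"Fn::Join": ["", ["http://",
                {"Fn::GetAtt": ["ElasticLoadBalancer", "DNSName"]}]]},
        }
    }
    print(json.dumps({"Outputs": outputs}, indent=2))
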
  20. For AWS CloudFormation, which stack state refuses UpdateStack calls? [Developer]
    1. <code>UPDATE_ROLLBACK_FAILED</code> (Refer link)
    2. <code>UPDATE_ROLLBACK_COMPLETE</code>
    3. <code>UPDATE_COMPLETE</code>
    4. <code>CREATE_COMPLETE</code>
  21. Which of these is not a Pseudo Parameter in AWS CloudFormation? [Developer]
    1. AWS::StackName
    2. AWS::AccountId
    3. AWS::StackArn (Refer link – the actual pseudo parameter is AWS::StackId)
    4. AWS::NotificationARNs
  22. Which of these is not an intrinsic function in AWS CloudFormation? [Developer]
    1. Fn::SplitValue (Refer link)
    2. Fn::FindInMap
    3. Fn::Select
    4. Fn::GetAZs
  23. Which of these is not a CloudFormation Helper Script? [Developer]
    1. cfn-signal
    2. cfn-hup
    3. cfn-request (Refer link)
    4. cfn-get-metadata
  24. What method should I use to author automation if I want to wait for a CloudFormation stack to finish completing in a script? [Developer]
    1. Event subscription using SQS.
    2. Event subscription using SNS.
    3. Poll using <code>ListStacks</code> / <code>list-stacks</code>. (Only polling will make a script wait to complete. ListStacks / list-stacks is a real method. Refer link)
    4. Poll using <code>GetStackStatus</code> / <code>get-stack-status</code>. (GetStackStatus / get-stack-status does not exist)
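
A minimal polling sketch with boto3, assuming a stack named "my-stack" is being created; the built-in waiter polls DescribeStacks until the stack reaches a terminal state:

    import boto3

    cfn = boto3.client('cloudformation')
    # blocks the script until CREATE_COMPLETE (raises on failure states)
    cfn.get_waiter('stack_create_complete').wait(StackName='my-stack')
    print('stack is ready')
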
  25. Which status represents a failure state in AWS CloudFormation? [Developer]
    1. <code>UPDATE_COMPLETE_CLEANUP_IN_PROGRESS</code> (UPDATE_COMPLETE_CLEANUP_IN_PROGRESS means an update was successful, and CloudFormation is deleting any replaced, no longer used resources)
    2. <code>DELETE_COMPLETE_WITH_ARTIFACTS</code> (DELETE_COMPLETE_WITH_ARTIFACTS does not exist)
    3. <code>ROLLBACK_IN_PROGRESS</code> (ROLLBACK_IN_PROGRESS means a stack operation failed and the stack is in the process of trying to return to the valid, pre-operation state. Refer link)
    4. <code>ROLLBACK_FAILED</code> (ROLLBACK_FAILED is not a CloudFormation state but UPDATE_ROLLBACK_FAILED is)
  26. Which of these is not an intrinsic function in AWS CloudFormation? [Developer]
    1. Fn::Equals
    2. Fn::If
    3. Fn::Not
    4. Fn::Parse (Intrinsic functions include Fn::Base64, Fn::And, Fn::Equals, Fn::If, Fn::Not, Fn::Or, Fn::FindInMap, Fn::GetAtt, Fn::GetAZs, Fn::Join, Fn::Select. Refer link)
  27. You need to create a Route53 record automatically in CloudFormation when not running in production during all launches of a Template. How should you implement this? [Developer]
    1. Use a <code>Parameter</code> for <code>environment</code>, and add a <code>Condition</code> on the Route53 <code>Resource</code> in the template to create the record only when <code>environment</code> is not <code>production</code>. (Best way to do this is with one template and a Condition on the resource; Route53 does not allow null string record values. Refer link)
    2. Create two templates, one with the Route53 record value and one with a null value for the record. Use the one without it when deploying to production.
    3. Use a <code>Parameter</code> for <code>environment</code>, and add a <code>Condition</code> on the Route53 <code>Resource</code> in the template to create the record with a null string when <code>environment</code> is <code>production</code>.
    4. Create two templates, one with the Route53 record and one without it. Use the one without it when deploying to production.
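
A hedged sketch of the Condition approach from the correct option, with the template expressed as a Python dict. The "Environment" parameter, "CreateDnsRecord" condition, and record values are illustrative, and an ElasticLoadBalancer resource is assumed to be defined elsewhere in the template:

    import json

    template = {
        "Parameters": {
            "Environment": {"Type": "String",
                            "AllowedValues": ["production", "staging", "dev"]}
        },
        "Conditions": {
            # true whenever we are NOT deploying to production
            "CreateDnsRecord": {"Fn::Not": [
                {"Fn::Equals": [{"Ref": "Environment"}, "production"]}]}
        },
        "Resources": {
            "AppDnsRecord": {
                "Type": "AWS::Route53::RecordSet",
                "Condition": "CreateDnsRecord",  # created only outside production
                "Properties": {
                    "HostedZoneName": "example.com.",
                    "Name": "app.example.com.",
                    "Type": "CNAME",
                    "TTL": "300",
                    "ResourceRecords": [
                        {"Fn::GetAtt": ["ElasticLoadBalancer", "DNSName"]}],
                },
            }
        },
    }
    print(json.dumps(template, indent=2))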

References

AWS Intrusion Detection & Prevention System IDS/IPS

AWS Intrusion Detection & Prevention System IDS/IPS

  • An Intrusion Prevention System IPS
    • is an appliance that monitors and analyzes network traffic to detect malicious patterns and potentially harmful packets and prevent vulnerability exploits
    • Most IPS offer firewall, unified threat management and routing capabilities
  • An Intrusion Detection System IDS is
    • an appliance or capability that continuously monitors the environment
    • sends alerts when it detects malicious activity, policy violations or network & system attack from someone attempting to break into or compromise the system
    • produces reports for analysis.

Approaches for AWS IDS/IPS

Network Tap or SPAN

  • Traditional approach involves using a network Test Access Point (TAP) or Switch Port Analyzer (SPAN) to access & monitor all network traffic
  • Connection between the AWS Internet Gateway (IGW) and the Elastic Load Balancer would be an ideal place to capture all network traffic
  • However, there is no place to plug this in between IGW and ELB as there are no SPAN ports, network taps, or a concept of Layer 2 bridging

Packet Sniffing

  • It is not possible for a virtual instance running in promiscuous mode to receive or sniff traffic that is intended for a different virtual instance.
  • While interfaces can be placed into promiscuous mode, the hypervisor will not deliver any traffic to an instance that is not addressed to it.
  • Even two virtual instances that are owned by the same customer located on the same physical host cannot listen to each other’s traffic
  • So, promiscuous mode is not allowed

Host Based Firewall – Forward Deployed IDS

  • Deploy a network-based IDS on every instance you deploy; the IDS workload scales with your infrastructure
  • Host-based security software works well with highly distributed and scalable application architectures because network packet inspection is distributed across the entire software fleet
  • However, a CPU-intensive process is deployed onto every single machine.

Host Based Firewall – Traffic Replication

  • An Agent is deployed on every instance to capture & replicate traffic for centralized analysis
  • Actual workload of network traffic analysis is not performed on the instance but on a separate server
  • Traffic capture and replication is still CPU-intensive (particularly on Windows machines).
  • It significantly increases the internal network traffic in the environment as every inbound packet is duplicated in the transfer from the instance that captures the traffic to the instance that analyzes the traffic

AWS IDS IPS Solution 1

In-Line Firewall – Inbound IDS Tier

  • Add another tier to the application architecture where a load balancer sends all inbound traffic to a tier of instances that performs the network analysis, e.g. a third-party solution such as Fortinet FortiGate
  • The IDS workload is now isolated to a horizontally scalable tier in the architecture; however, you have to maintain and manage another mission-critical elastic tier

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A web company is looking to implement an intrusion detection and prevention system into their deployed VPC. This platform should have the ability to scale to thousands of instances running inside of the VPC. How should they architect their solution to achieve these goals?
    1. Configure an instance with monitoring software and the elastic network interface (ENI) set to promiscuous mode packet sniffing to see all traffic across the VPC. (a virtual instance cannot run in promiscuous mode to receive or “sniff” traffic)
    2. Create a second VPC and route all traffic from the primary application VPC through the second VPC where the scalable virtualized IDS/IPS platform resides.
    3. Configure servers running in the VPC using the host-based ‘route’ commands to send all traffic through the platform to a scalable virtualized IDS/IPS (host-based routing is not allowed)
    4. Configure each host with an agent that collects all network traffic and sends that traffic to the IDS/IPS platform for inspection.
  2. You are designing an intrusion detection prevention (IDS/IPS) solution for a customer web application in a single VPC. You are considering the options for implementing IDS/IPS protection for traffic coming from the Internet. Which of the following options would you consider? (Choose 2 answers)
    1. Implement IDS/IPS agents on each instance running in the VPC
    2. Configure an instance in each subnet to switch its network interface card to promiscuous mode and analyze network traffic. (a virtual instance cannot run in promiscuous mode to receive or “sniff” traffic)
    3. Implement Elastic Load Balancing with SSL listeners In front of the web applications (ELB with SSL does not serve as IDS/IPS)
    4. Implement a reverse proxy layer in front of web servers and configure IDS/IPS agents on each reverse proxy server

References

AWS OpsWorks – Certification

AWS OpsWorks

  • AWS OpsWorks is a configuration management service that helps to configure and operate applications in a cloud enterprise by using Chef
  • OpsWorks Stacks and AWS OpsWorks for Chef Automate allow using Chef cookbooks and solutions for configuration management

OpsWorks Stacks

AWS OpsWorks Stacks

  • OpsWorks Stacks provides a simple and flexible way to create and manage stacks, groups of AWS resources like load balancers, web, application and database servers, and the applications deployed on them
  • OpsWorks Stacks helps deploy and monitor applications in the stacks.
  • Unlike OpsWorks for Chef Automate, OpsWorks Stacks does not require or create Chef servers, and performs some of the work of a Chef server itself
  • OpsWorks Stacks monitors instance health, and provisions new instances, when necessary, by using Auto Healing and Auto Scaling
  • OpsWorks Stacks integrates with IAM to control how users can interact with stacks, what stacks can do on the user’s behalf, what AWS resources an app can access, etc.
  • OpsWorks Stacks integrates with CloudWatch and CloudTrail to enable monitoring and logging
  • OpsWorks Stacks can be accessed globally and can be used to create and manage instances globally

Stacks

  • Stack is the core AWS OpsWorks Stacks component.
  • Stack is a container for AWS resources, like EC2 and RDS instances, that have a common purpose and should be logically managed together
  • Stack helps manage the resources as a group and also defines some default configuration settings, such as the instances’ OS and AWS region
  • Stacks can also be run in VPC to be isolated from direct user interaction
  • Separate Stacks can be created for different environments like Dev, QA etc

Layers

  • Stacks help manage cloud resources in specialized groups called layers.
  • A layer represents a set of EC2 instances that serve a particular purpose, such as serving applications or hosting a database server.
  • Layers depend on Chef recipes to handle tasks such as installing packages on instances, deploying apps, and running scripts
  • Custom recipes and related files are packaged in one or more cookbooks and stored in a cookbook repository such as S3 or Git

Recipes and LifeCycle Events

  • Layers depend on Chef recipes to handle tasks such as installing packages on instances, deploying apps, running scripts, and so on.
  • OpsWorks Stacks runs the recipes for each layer, even if the instance belongs to multiple layers, e.g. an instance hosting both the application and the MySQL server
  • One of the AWS OpsWorks Stacks features is a set of lifecycle events – Setup, Configure, Deploy, Undeploy, and Shutdown – which automatically run a specified set of recipes at the appropriate time on each instance (see the sketch after this list for triggering one via the API)
    • Setup
      • Once a new instance has booted, OpsWorks triggers the Setup event, which runs recipes to set up the instance according to the layer configuration, e.g. installing Apache and PHP packages
      • Once setup is complete, AWS OpsWorks triggers a Deploy event, which runs recipes to deploy your application to the new instance.
    • Configure
      • Whenever an instance enters or leaves the online state, AWS OpsWorks triggers a Configure event on all instances in the stack.
      • Event runs each layer’s configure recipes to update the configuration to reflect the current set of online instances, e.g. the HAProxy layer’s Configure recipes can modify the load balancer configuration to reflect any added or removed application server instances.
    • Deploy
      • OpsWorks triggers a Deploy event when the Deploy command is executed, to deploy the application to a set of application servers.
      • Event runs recipes on the application servers to deploy application and any related files from its repository to the layer’s instances.
    • Undeploy
      • OpsWorks triggers an Undeploy event when an app is deleted or the Undeploy command is executed to remove an app from a set of application servers.
      • Event runs recipes to remove all application versions and perform any additional cleanup tasks.
    • Shutdown
      • OpsWorks triggers a Shutdown event when an instance is being shut down, but before the underlying EC2 instance is actually terminated.
      • Event runs recipes to perform cleanup tasks such as shutting down services.
      • OpsWorks allows Shutdown recipes a configurable amount of time to perform their tasks, and then terminates the instance.
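
As an illustration of lifecycle events in practice, a hedged boto3 sketch that triggers the Deploy event on a stack's online instances (the stack and app IDs are placeholders):

    import boto3

    opsworks = boto3.client('opsworks', region_name='us-east-1')
    # runs each layer's Deploy recipes for the given app
    result = opsworks.create_deployment(
        StackId='11111111-2222-3333-4444-555555555555',
        AppId='66666666-7777-8888-9999-000000000000',
        Command={'Name': 'deploy'},
    )
    print(result['DeploymentId'])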

Instance

  • An instance represents a single computing resource, e.g. an EC2 instance, and defines the resource’s basic configuration, such as OS and size
  • OpsWorks Stacks creates instances and adds them to a layer.
  • When the instance is started, OpsWorks Stacks launches an EC2 instance using the configuration settings specified by the instance and its layer.
  • After the EC2 instance has finished booting, OpsWorks Stacks installs an agent that handles communication between the instance and the service and runs the appropriate recipes in response to lifecycle events
  • OpsWorks Stacks supports instance auto-healing, whereby if an agent stops communicating with the service, OpsWorks Stacks automatically stops and restarts the instance
  • OpsWorks Stacks supports the following instance types
    • 24/7 instances – launched and stopped manually
    • Time based instances – run on scheduled time
    • Load based instances – automatically started and stopped based on configurable load metrics
  • Linux-based computing resources created outside of OpsWorks Stacks, e.g. via the console or CLI, can be added, incorporated and controlled through OpsWorks

Apps

  • An AWS OpsWorks Stacks app represents code that you want to run on an application server; the code resides in an app repository such as S3
  • App contains the information required to deploy the code to the appropriate application server instances.
  • When you deploy an app, AWS OpsWorks Stacks triggers a Deploy event, which runs the Deploy recipes on the stack’s instances.
  • OpsWorks supports the ability to deploy multiple apps per stack and per layer

OpsWorks Deployment Strategies

Refer to OpsWorks Deployment Strategies blog post for details

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. You are working with a customer who is using Chef configuration management in their data center. Which service is designed to let the customer leverage existing Chef recipes in AWS?
    1. Amazon Simple Workflow Service
    2. AWS Elastic Beanstalk
    3. AWS CloudFormation
    4. AWS OpsWorks
  2. Your mission is to create a lights-out datacenter environment, and you plan to use AWS OpsWorks to accomplish this. First you created a stack and added an App Server layer with an instance running in it. Next you added an application to the instance, and now you need to deploy a MySQL RDS database instance. Which of the following answers accurately describe how to add a backend database server to an OpsWorks stack? Choose 3 answers
    1. Add a new database layer and then add recipes to the deploy actions of the database and App Server layers. (Refer link)
    2. Use OpsWorks’ “Clone Stack” feature to create a second RDS stack in another Availability Zone for redundancy in the event of a failure in the Primary AZ. To switch to the secondary RDS instance, set the [:database] attributes to values that are appropriate for your server which you can do by using custom JSON.
    3. The variables that characterize the RDS database connection—host, user, and so on—are set using the corresponding values from the deploy JSON’s [:deploy][:app_name][:database] attributes. (Refer link)
    4. Cookbook attributes are stored in a repository, so OpsWorks requires that the “password”: “your_password” attribute for the RDS instance must be encrypted using at least a 256-bit key.
    5. Set up the connection between the app server and the RDS layer by using a custom recipe. The recipe configures the app server as required, typically by creating a configuration file. The recipe gets the connection data such as the host and database name from a set of attributes in the stack configuration and deployment JSON that AWS OpsWorks installs on every instance. (Refer link)
  3. You are tasked with the migration of a highly trafficked node.js application to AWS. In order to comply with organizational standards, Chef recipes must be used to configure the application servers that host this application and to support application lifecycle events. Which deployment option meets these requirements while minimizing administrative burden?
    1. Create a new stack within Opsworks add the appropriate layers to the stack and deploy the application
    2. Create a new application within Elastic Beanstalk and deploy this application to a new environment (need to comply with chef recipes)
    3. Launch a Node JS server from a community AMI and manually deploy the application to the launched EC2 instance
    4. Launch and configure Chef Server on an EC2 instance and leverage the AWS CLI to launch application servers and configure those instances using Chef.
  4. A web-startup runs its very successful social news application on Amazon EC2 with an Elastic Load Balancer, an Auto-Scaling group of Java/Tomcat application-servers, and DynamoDB as data store. The main web application best runs on m2.xlarge instances since it is highly memory-bound. Each new deployment requires semi-automated creation and testing of a new AMI for the application servers, which takes quite a while and is therefore only done once per week. Recently, a new chat feature has been implemented in node.js and waits to be integrated in the architecture. First tests show that the new component is CPU bound. Because the company has some experience with using Chef, they decided to streamline the deployment process and use AWS OpsWorks as an application life cycle tool to simplify management of the application and reduce the deployment cycles. What configuration in AWS OpsWorks is necessary to integrate the new chat module in the most cost-efficient and flexible way?
    1. Create one AWS Ops Works stack, create one AWS Ops Works layer, create one custom recipe
    2. Create one AWS Ops Works stack, create two AWS Ops Works layers, create one custom recipe (Single environment stack; two layers, one for the Java application and one for the node.js application, using built-in recipes; one custom recipe only for the DynamoDB connectivity, as the rest of the configuration is handled by the built-in recipes. Refer link)
    3. Create two AWS Ops Works stacks, create two AWS Ops Works layers, create one custom recipe
    4. Create two AWS Ops Works stacks, create two AWS Ops Works layers, create two custom recipes
  5. Your company runs a complex customer relations management system that consists of around 10 different software components all backed by the same Amazon Relational Database Service (RDS) database. You adopted AWS OpsWorks to simplify management and deployment of that application and created an AWS OpsWorks stack with layers for each of the individual components. An internal security policy requires that all instances should run on the latest Amazon Linux AMI and that instances must be replaced within one month after the latest Amazon Linux AMI has been released. AMI replacements should be done without incurring application downtime or capacity problems. You decide to write a script to be run as soon as a new Amazon Linux AMI is released. Which solutions support the security policy and meet your requirements? Choose 2 answers
    1. Assign a custom recipe to each layer, which replaces the underlying AMI. Use AWS OpsWorks life-cycle events to incrementally execute this custom recipe and update the instances with the new AMI.
    2. Create a new stack and layers with identical configuration, add instances with the latest Amazon Linux AMI specified as a custom AMI to the new layer, switch DNS to the new stack, and tear down the old stack. (Blue-Green Deployment)
    3. Identify all Amazon Elastic Compute Cloud (EC2) instances of your AWS OpsWorks stack, stop each instance, replace the AMI ID property with the ID of the latest Amazon Linux AMI ID, and restart the instance. To avoid downtime, make sure not more than one instance is stopped at the same time.
    4. Specify the latest Amazon Linux AMI as a custom AMI at the stack level, terminate instances of the stack and let AWS OpsWorks launch new instances with the new AMI. (Will lead to downtime)
    5. Add new instances with the latest Amazon Linux AMI specified as a custom AMI to all AWS OpsWorks layers of your stack, and terminate the old ones.
  6. When thinking of AWS OpsWorks, which of the following is not an instance type you can allocate in a stack layer?
    1. 24/7 instances (24/7 instances are supported and started manually and run until you stop them)
    2. Spot instances (Does not support spot instance directly but can be used with auto scaling Refer link)
    3. Time-based instances (Time-based instances are run by AWS OpsWorks on a specified daily and weekly schedule)
    4. Load-based instances (Load-based instances are automatically started and stopped by AWS OpsWorks, based on specified load metrics, such as CPU utilization)
  7. Which of the following tools does not directly support AWS OpsWorks, for monitoring your stacks?
    1. AWS Config (Refer link)
    2. Amazon CloudWatch Metrics (AWS OpsWorks uses CloudWatch to provide thirteen custom metrics with detailed monitoring for each instance in the stack)
    3. AWS CloudTrail (AWS OpsWorks integrates with CloudTrail to log every AWS OpsWorks API call and store the data in an S3 bucket)
    4. Amazon CloudWatch Logs (You can use Amazon CloudWatch Logs to monitor your stack’s system, application, and custom logs.)
  8. When thinking of AWS OpsWorks, which of the following is true?
    1. Stacks have many layers, layers have many instances.
    2. Instances have many stacks, stacks have many layers.
    3. Layers have many stacks, stacks have many instances.
    4. Layers have many instances, instances have many stacks.

References

AWS Elastic Beanstalk

AWS Elastic Beanstalk

  • AWS Elastic Beanstalk helps to quickly deploy and manage applications in the AWS Cloud without having to worry about the infrastructure that runs those applications.
  • Elastic Beanstalk reduces management complexity without restricting choice or control.
  • Elastic Beanstalk enables automated infrastructure management and code deployment for applications simply by uploading the code, and includes
    • Application platform management
    • Capacity provisioning
    • Load Balancing
    • Auto Scaling
    • Code deployment
    • Health Monitoring
  • Once an application is uploaded, Elastic Beanstalk automatically launches an environment, creates and configures the AWS resources needed to run the code. After the environment is launched, it can be managed and used to deploy new application versions
  • AWS resources launched by Elastic Beanstalk are fully accessible i.e. EC2 instances can be SSHed into
  • Elastic Beanstalk provides developers and systems administrators an easy, fast way to deploy and manage the applications without having to worry about AWS infrastructure.
  • CloudFormation, using templates, is a better option if the internal AWS resources to be used are known and fine-grained control is needed

Elastic Beanstalk Components

  • Application
    • An Elastic Beanstalk application is a logical collection of Elastic Beanstalk components, including environments, versions, and environment configurations.
  • Application Version
    • An application version refers to a specific, labeled iteration of deployable code for a web application
    • Applications can have many versions and each application version is unique and points to an S3 object
    • Multiple versions can be deployed for an application, for testing differences and to help roll back to any version in case of issues (see the deployment sketch after this list)
  • Environment
    • An environment is a version that is deployed onto AWS resources
    • An environment runs a single application version at a time, but same application version can be deployed across multiple environments
    • When an environment is created, Elastic Beanstalk provisions the resources needed to run the application version you specified.
  • Environment Configuration
    • An environment configuration identifies a collection of parameters and settings that define how an environment and its associated resources behave
    • When an environment’s configuration settings are updated, Elastic Beanstalk automatically applies the changes to existing resources or deletes and deploys new resources, depending upon the change
  • Configuration Template
    • A configuration template is a starting point for creating unique environment configurations
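
A minimal sketch of these components in action with boto3: registering a new application version from an S3 bundle and pointing an existing environment at it (all names and the bucket/key are placeholders):

    import boto3

    eb = boto3.client('elasticbeanstalk')
    # an application version is a labeled iteration pointing to an S3 object
    eb.create_application_version(
        ApplicationName='my-app',
        VersionLabel='v42',
        SourceBundle={'S3Bucket': 'my-bucket', 'S3Key': 'my-app-v42.zip'},
    )
    # an environment runs a single application version at a time
    eb.update_environment(EnvironmentName='my-app-prod', VersionLabel='v42')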

Elastic Beanstalk Architecture

Elastic Beanstalk Environment Tiers

  • Elastic Beanstalk environment requires an environment tier, platform, and environment type
  • Environment tier determines whether Elastic Beanstalk provisions resources to support a web application that handles HTTP(S) requests or an application that handles background-processing tasks
  • Web Environment Tier
    • An environment tier whose web application processes web requests is known as a web server tier.
    • AWS resources created for a web environment tier include an Elastic Load Balancer, an Auto Scaling group, and one or more EC2 instances
    • Every environment has a CNAME that points to the ELB, aliased in Route 53 to the ELB URL
    • Each EC2 server instance that runs the application uses a container type, which defines the infrastructure topology and software stack
    • A software component called the host manager (HM) runs on each EC2 server instance and is responsible for
      • Deploying the application
      • Aggregating events and metrics for retrieval via the console, the API, or the command line
      • Generating instance-level events
      • Monitoring the application log files for critical errors
      • Monitoring the application server
      • Patching instance components
      • Rotating your application’s log files and publishing them to S3
  • Worker Environment Tier 
    • An environment tier whose web application runs background jobs is known as a worker tier
    • AWS resources created for a worker environment tier include an Auto Scaling group, one or more EC2 instances, and an IAM role.
    • For the worker environment tier, Elastic Beanstalk also creates and provisions an SQS queue, if one doesn’t exist
    • When a worker environment tier is launched, Elastic Beanstalk installs the necessary support files for the programming language of choice and a daemon on each EC2 instance in the Auto Scaling group reading from the same SQS queue
    • The daemon is responsible for pulling requests from the SQS queue and then sending the data, via a local HTTP POST, to the web application running in the worker environment tier that will process those messages (see the sketch after this list)
    • Elastic Beanstalk worker environments support SQS dead letter queues which can be used to store messages that could not be successfully processed. Dead letter queue provides the ability to sideline, isolate and analyze the unsuccessfully processed messages
  • One environment cannot support two different environment tiers because each requires its own set of resources; a worker environment tier and a web server environment tier each require an Auto Scaling group, but Elastic Beanstalk supports only one Auto Scaling group per environment.
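
A minimal sketch of a worker-tier endpoint, assuming the default configuration where the SQS daemon POSTs each message body to the application over local HTTP (Flask and the handler below are illustrative, not part of Elastic Beanstalk itself):

    from flask import Flask, request

    app = Flask(__name__)

    @app.route('/', methods=['POST'])
    def handle_job():
        job = request.get_data(as_text=True)  # the SQS message body
        print('processing', job)              # placeholder for real work
        # HTTP 200 tells the daemon the job succeeded, so it deletes the
        # message; other statuses leave it in the queue (or dead letter queue)
        return '', 200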

Elastic Beanstalk with Other AWS Services

  • Elastic Beanstalk supports VPC and launches AWS resources, such as instances, into the VPC
  • Elastic Beanstalk supports IAM and helps you securely control access to your AWS resources.
  • CloudFront can be used to distribute the content in S3 after an Elastic Beanstalk application is created and deployed
  • CloudTrail
    • Elastic Beanstalk is integrated with CloudTrail, a service that captures all of the Elastic Beanstalk API calls and delivers the log files to an S3 bucket that you specify.
    • CloudTrail captures API calls from the Elastic Beanstalk console or from your code to the Elastic Beanstalk APIs and helps to determine the request made to Elastic Beanstalk, the source IP address from which the request was made, who made the request, when it was made, etc.
  • RDS
    • Elastic Beanstalk provides support for running RDS instances in the Elastic Beanstalk environment which is ideal for development and testing but not for production.
    • For a production environment, it is not recommended because it ties the lifecycle of the database instance to the lifecycle of the application’s environment. So if the Elastic Beanstalk environment is deleted, the RDS instance is deleted as well
    • It is recommended to launch a database instance outside of the environment and configure the application to connect to it outside of the functionality provided by Elastic Beanstalk.
    • Using a database instance external to your environment requires additional security group and connection string configuration, but it also lets the application connect to the database from multiple environments, use database types not supported with integrated databases, perform blue/green deployments, and tear down your environment without affecting the database instance.
  • S3
    • Elastic Beanstalk creates an S3 bucket named elasticbeanstalk-region-account-id for each region in which environments are created.
    • Elastic Beanstalk uses the bucket to store application versions, logs, and other supporting files.
    • It applies a bucket policy to buckets it creates to allow environments to write to the bucket and prevent accidental deletion

Elastic Beanstalk Deployment Strategies

Refer to post AWS Elastic Beanstalk Deployment Strategies

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. An organization is planning to use AWS for their production roll out. The organization wants to implement automation for deployment such that it will automatically create a LAMP stack, download the latest PHP installable from S3 and setup the ELB. Which of the below mentioned AWS services meets the requirement for making an orderly deployment of the software?
    1. AWS Elastic Beanstalk
    2. AWS CloudFront
    3. AWS CloudFormation
    4. AWS DevOps
  2. What does Amazon Elastic Beanstalk provide?
    1. A scalable storage appliance on top of Amazon Web Services.
    2. An application container on top of Amazon Web Services
    3. A service by this name doesn’t exist.
    4. A scalable cluster of EC2 instances
  3. You want to have multiple versions of your application running at the same time, with all versions launched via AWS Elastic Beanstalk. Is this possible?
    1. However if you have 2 AWS accounts this can be done
    2. AWS Elastic Beanstalk is not designed to support multiple running environments
    3. AWS Elastic Beanstalk is designed to support a number of multiple running environments
    4. However AWS Elastic Beanstalk is designed to support only 2 multiple running environments
  4. A .NET application that you manage is running in Elastic Beanstalk. Your developers tell you they will need access to application log files to debug issues that arise. The infrastructure will scale up and down. How can you ensure the developers will be able to access only the log files?
    1. Access the log files directly from Elastic Beanstalk
    2. Enable log file rotation to S3 within the Elastic Beanstalk configuration
    3. Ask your developers to enable log file rotation in the applications web.config file
    4. Connect to each Instance launched by Elastic Beanstalk and create a Windows Scheduled task to rotate the log files to S3
  5. Your team has a tomcat-based Java application you need to deploy into development, test and production environments. After some research, you opt to use Elastic Beanstalk due to its tight integration with your developer tools and RDS due to its ease of management. Your QA team lead points out that you need to roll a sanitized set of production data into your environment on a nightly basis. Similarly, other software teams in your org want access to that same restored data via their EC2 instances in your VPC. The optimal setup for persistence and security that meets the above requirements would be the following. [PROFESSIONAL]
    1. Create your RDS instance as part of your Elastic Beanstalk definition and alter its security group to allow access to it from hosts in your application subnets. (Not optimal for persistence as the RDS is associated with the Elastic Beanstalk lifecycle and would not live independently)
    2. Create your RDS instance separately and add its IP address to your application’s DB connection strings in your code. Alter its security group to allow access to it from hosts within your VPC’s IP address block. (RDS is connected using DNS endpoint only)
    3. Create your RDS instance separately and pass its DNS name to your app’s DB connection string as an environment variable. Create a security group for client machines and add it as a valid source for DB traffic to the security group of the RDS instance itself. (Security group allows instances to access the RDS with new instances launched without any changes)
    4. Create your RDS instance separately and pass its DNS name to your DB connection string as an environment variable. Alter its security group to allow access to it from hosts in your application subnets. (Not optimal for security adding individual hosts)
  6. You must architect the migration of a web application to AWS. The application consists of Linux web servers running a custom web server. You are required to save the logs generated from the application to a durable location. What options could you select to migrate the application to AWS? (Choose 2) [PROFESSIONAL]
    1. Create an AWS Elastic Beanstalk application using the custom web server platform. Specify the web server executable and the application project and source files. Enable log file rotation to Amazon Simple Storage Service (S3). (EB does not work with Custom server executable)
    2. Create Dockerfile for the application. Create an AWS OpsWorks stack consisting of a custom layer. Create custom recipes to install Docker and to deploy your Docker container using the Dockerfile. Create custom recipes to install and configure the application to publish the logs to Amazon CloudWatch Logs (although this is an option, configuring the application itself to publish the logs would need changes to the application to use the SDK or CLI)
    3. Create Dockerfile for the application. Create an AWS OpsWorks stack consisting of a Docker layer that uses the Dockerfile. Create custom recipes to install and configure Amazon Kinesis to publish the logs into Amazon CloudWatch. (Kinesis not needed)
    4. Create a Dockerfile for the application. Create an AWS Elastic Beanstalk application using the Docker platform and the Dockerfile. Enable logging the Docker configuration to automatically publish the application logs. Enable log file rotation to Amazon S3. (Use Docker configuration with awslogs and EB with Docker)
    5. Use VM import/Export to import a virtual machine image of the server into AWS as an AMI. Create an Amazon Elastic Compute Cloud (EC2) instance from AMI, and install and configure the Amazon CloudWatch Logs agent. Create a new AMI from the instance. Create an AWS Elastic Beanstalk application using the AMI platform and the new AMI. (Use VM Import/Export to create AMI and CloudWatch logs agent to log)
  7. Which of the following groups is AWS Elastic Beanstalk best suited for?
    1. Those who want to deploy and manage their applications within minutes in the AWS cloud.
    2. Those who want to privately store and manage Git repositories in the AWS cloud.
    3. Those who want to automate the deployment of applications to instances and to update the applications as required.
    4. Those who want to model, visualize, and automate the steps required to release software.
  8. When thinking of AWS Elastic Beanstalk’s model, which is true?
    1. Applications have many deployments, deployments have many environments.
    2. Environments have many applications, applications have many deployments.
    3. Applications have many environments, environments have many deployments. (Applications group logical services. Environments belong to Applications, and typically represent different deployment levels (dev, stage, prod, and so forth). Deployments belong to environments, and are pushes of bundles of code for the environments to run.)
    4. Deployments have many environments, environments have many applications.
  9. If you’re trying to configure an AWS Elastic Beanstalk worker tier for easy debugging if there are problems finishing queue jobs, what should you configure?
    1. Configure Rolling Deployments.
    2. Configure Enhanced Health Reporting
    3. Configure Blue-Green Deployments.
    4. Configure a Dead Letter Queue (Elastic Beanstalk worker environments support SQS dead letter queues, where the worker can send messages that could not be successfully processed. Dead letter queue provides the ability to sideline, isolate and analyze the unsuccessfully processed messages. Refer link)
  10. When thinking of AWS Elastic Beanstalk, which statement is true?
    1. Worker tiers pull jobs from SNS.
    2. Worker tiers pull jobs from HTTP.
    3. Worker tiers pull jobs from JSON.
    4. Worker tiers pull jobs from SQS. (Elastic Beanstalk installs a daemon on each EC2 instance in the Auto Scaling group to process SQS messages in the worker environment. Refer link)
  11. You are building a Ruby on Rails application for internal, non-production use, which uses MySQL as a database. You want developers without very much AWS experience to be able to deploy new code with a single command line push. You also want to set this up as simply as possible. Which tool is ideal for this setup?
    1. AWS CloudFormation
    2. AWS OpsWorks
    3. AWS ELB + EC2 with CLI Push
    4. AWS Elastic Beanstalk
  12. What AWS products and features can be deployed by Elastic Beanstalk? Choose 3 answers.
    1. Auto scaling groups
    2. Route 53 hosted zones
    3. Elastic Load Balancers
    4. RDS Instances
    5. Elastic IP addresses
    6. SQS Queues
  13. AWS Elastic Beanstalk stores your application files and optionally server log files in ____.
    1. Amazon Storage Gateway
    2. Amazon Glacier
    3. Amazon EC2
    4. Amazon S3
  14. When you use the AWS Elastic Beanstalk console to deploy a new application ____.
    1. Need to upload each file separately
    2. Need to create each file and path
    3. Need to upload a source bundle
    4. Need to create each file

References

AWS Elastic Beanstalk Developer Guide

AWS Simple Email Service – SES – Certification

AWS Simple Email Service – SES

  • SES is a managed service and ideal for sending bulk emails at scale
  • SES acts as an outbound email server and eliminates the need to support own software or applications to do the heavy lifting of email transport
  • An existing email server can also be configured to send outgoing emails through SES with no changes to any settings in the email clients
  • Maximum message size including attachments is 10 MB per message (after base64 encoding).

SES Characteristics

  • Compatible with SMTP
  • Applications can send email using a single API call in many supported languages, including Java, .NET, PHP, Perl, Ruby, and via HTTPS (see the sketch after this list)
  • Optimized for the highest levels of uptime and availability, and scales as per demand
  • Provides a sandbox environment for testing
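
A hedged sketch of that single API call with boto3 (addresses are placeholders; in the sandbox both the sender and the recipient must be verified):

    import boto3

    ses = boto3.client('ses', region_name='us-east-1')
    ses.send_email(
        Source='sender@example.com',
        Destination={'ToAddresses': ['recipient@example.com']},
        Message={
            'Subject': {'Data': 'Order status'},
            'Body': {'Text': {'Data': 'Your order has shipped.'}},
        },
    )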

Sending Limits

  • Production SES has a set of sending limits which include
    • Sending Quota – max number of emails in 24-hour period
    • Maximum Send Rate – max number of emails per second
  • SES automatically adjusts the limits upward as long as emails are of high quality and they are sent in a controlled manner, as any spike in the volume of email sent might be considered spam.
  • Limits can also be raised by submitting a Quota increase request

SES Best Practices

  • Send high-quality and real production content that your recipients want
  • Only send to those who have signed up for the mail
  • Unsubscribe recipients who have not interacted with the business recently
  • Have low bounce and complaint rates and remove bounced or complained addresses, using SNS to monitor bounces and complaints, treating them as opt-out
  • Monitor the sending activity

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. What does Amazon SES stand for?
    1. Simple Elastic Server
    2. Simple Email Service
    3. Software Email Solution
    4. Software Enabled Server
  2. Your startup wants to implement an order fulfillment process for selling a personalized gadget that needs an average of 3-4 days to produce, with some orders taking up to 6 months. You expect 10 orders per day on your first day, 1000 orders per day after 6 months and 10,000 orders after 12 months. Orders coming in are checked for consistency, then dispatched to your manufacturing plant for production, quality control, packaging, shipment and payment processing. If the product does not meet the quality standards at any stage of the process, employees may force the process to repeat a step. Customers are notified via email about order status and any critical issues with their orders, such as payment failure. Your base architecture includes AWS Elastic Beanstalk for your website with an RDS MySQL instance for customer data and orders. How can you implement the order fulfillment process while making sure that the emails are delivered reliably? [PROFESSIONAL]
    1. Add a business process management application to your Elastic Beanstalk app servers and re-use the RDS database for tracking order status. Use one of the Elastic Beanstalk instances to send emails to customers.
    2. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use the decider instance to send emails to customers.
    3. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use SES to send emails to customers.
    4. Use an SQS queue to manage all process tasks. Use an Auto Scaling group of EC2 instances that poll the tasks and execute them. Use SES to send emails to customers.

AWS EBS Snapshot

EBS Snapshot

  • EBS provides the ability to create snapshots (backups) of any EBS volume and write a copy of the data in the volume to S3, where it is stored redundantly in multiple Availability Zones
  • Snapshots can be used to create new volumes, increase the size of the volumes or replicate data across Availability Zones
  • Snapshots are incremental backups and store only the data that was changed from the time the last snapshot was taken.
  • Snapshot size can be smaller than the volume size, as the data is compressed before being saved to S3
  • Even though snapshots are saved incrementally, the snapshot deletion process is designed so that you need to retain only the most recent snapshot in order to restore the volume.
  • EBS Snapshots can be used to migrate or create EBS Volumes in different AZs or regions.

Multi-Volume Snapshots

  • Snapshots can be used to create a backup of critical workloads, such as a large database or a file system that spans across multiple EBS volumes.
  • Multi-volume snapshots help take exact point-in-time, data coordinated, and crash-consistent snapshots across multiple EBS volumes attached to an EC2 instance.
  • It is no longer required to stop the instance or to coordinate between volumes to ensure crash consistency because snapshots are automatically taken across multiple EBS volumes.

EBS Snapshot creation

  • Snapshots can be created from EBS volumes periodically and are point-in-time snapshots.
  • Snapshots are incremental and only store the blocks on the device that changed since the last snapshot was taken
  • Snapshots occur asynchronously; the point-in-time snapshot is created immediately while it takes time to upload the modified blocks to S3. While it is completing, an in-progress snapshot is not affected by ongoing reads and writes to the volume.
  • Snapshots can be taken from in-use volumes. However, snapshots will only capture the data that was written to the EBS volumes at the time the snapshot command is issued, excluding the data cached by any applications or the OS
  • Recommended ways to create a Snapshot from an EBS volume are
    • Pause all file writes to the volume
    • Unmount the Volume -> Take Snapshot -> Remount the Volume
    • Stop the instance – Take Snapshot (for root EBS volumes)
  • EBS volume created based on a snapshot
    • begins as an exact replica of the original volume that was used to create the snapshot.
    • replicated volume loads data in the background so that it can be used immediately.
    • If data that hasn’t been loaded yet is accessed, the volume immediately downloads the requested data from S3 and then continues loading the rest of the volume’s data in the background.
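
A minimal sketch of taking a snapshot and waiting for it to complete with boto3 (the volume ID is a placeholder; the waiter polls DescribeSnapshots):

    import boto3

    ec2 = boto3.client('ec2')
    snap = ec2.create_snapshot(VolumeId='vol-0123456789abcdef0',
                               Description='nightly backup')
    # the point-in-time copy is taken immediately; this waits for the
    # asynchronous upload of the changed blocks to S3 to finish
    ec2.get_waiter('snapshot_completed').wait(SnapshotIds=[snap['SnapshotId']])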

EBS Snapshot Deletion

  • When a snapshot is deleted only the data exclusive to that snapshot is removed.
  • Deleting previous snapshots of a volume does not affect the ability to restore volumes from later snapshots of that volume.
  • Active snapshots contain all of the information needed to restore your data (from the time the snapshot was taken) to a new EBS volume.
  • Even though snapshots are saved incrementally, the snapshot deletion process is designed so that you need to retain only the most recent snapshot in order to restore the volume.
  • Snapshot of the root device of an EBS volume used by a registered AMI can’t be deleted. AMI needs to be deregistered to be able to delete the snapshot.

EBS Snapshot Copy

  • Snapshots are constrained to the region in which they are created and can be used to launch EBS volumes within the same region only
  • Snapshots can be copied across regions to make it easier to leverage multiple regions for geographical expansion, data center migration, and disaster recovery
  • Snapshots are copied with S3 server-side encryption (256-bit Advanced Encryption Standard) to encrypt the data and the snapshot copy receives a snapshot ID that’s different from the original snapshot’s ID.
  • User-defined tags are not copied from the source to the new snapshot.
  • First Snapshot copy to another region is always a full copy, while the rest are always incremental.
  • When a snapshot is copied,
    • it can be encrypted if currently unencrypted or
    • can be encrypted using a different encryption key. Changing the encryption status of a snapshot or using a non-default EBS CMK during a copy operation always results in a full copy (not incremental)
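
A hedged sketch of a cross-region copy that also changes the encryption status (IDs, regions, and the key alias are placeholders); as noted above, this forces a full rather than incremental copy:

    import boto3

    # copy_snapshot is called in the destination region
    ec2 = boto3.client('ec2', region_name='eu-west-1')
    copy = ec2.copy_snapshot(
        SourceRegion='us-east-1',
        SourceSnapshotId='snap-0123456789abcdef0',
        Encrypted=True,               # encrypt the copy on the fly
        KmsKeyId='alias/my-ebs-key',  # non-default CMK
    )
    print(copy['SnapshotId'])         # new ID, different from the source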

EBS Snapshot Sharing

  • Snapshots can be shared by making them public or with specific AWS accounts by modifying the access permissions of the snapshots
  • Encrypted snapshots cannot be made publicly available.
  • Only unencrypted snapshots can be made public
  • An encrypted snapshot can be shared with specific AWS accounts only if it is encrypted with a custom CMK; the custom CMK used to encrypt it must also be shared with those accounts
  • Cross-account permissions may be applied to a custom key either when it is created or at a later time.
  • Users, with access to snapshots, can copy the snapshot and create their own EBS volumes based on the snapshot while the original snapshot remains unaffected
  • AWS prevents you from sharing snapshots that were encrypted with the default CMK
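
A minimal sketch of sharing an unencrypted snapshot with a specific account (the snapshot ID and account ID are placeholders):

    import boto3

    ec2 = boto3.client('ec2')
    ec2.modify_snapshot_attribute(
        SnapshotId='snap-0123456789abcdef0',
        Attribute='createVolumePermission',
        OperationType='add',
        UserIds=['123456789012'],  # or GroupNames=['all'] to make it public
    )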

EBS Snapshot Encryption

  • EBS snapshots fully support EBS encryption.
  • Snapshots of encrypted volumes are automatically encrypted
  • Volumes created from encrypted snapshots are automatically encrypted
  • All data in flight between the instance and the volume is encrypted
  • Volumes created from an unencrypted snapshot that you own or have access to can be encrypted on the fly.
  • An unencrypted snapshot can be encrypted during the copy process
  • Encrypted snapshots that you own or have access to can be encrypted with a different key during the copy process.
  • First snapshot of an encrypted volume that has been created from an unencrypted snapshot is always a full snapshot.
  • First snapshot of a re-encrypted volume, which has a different CMK compared to the source snapshot, is always a full snapshot.
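
A sketch of encrypting “on the fly”, assuming boto3 with placeholder IDs: an encrypted volume is created directly from an unencrypted snapshot:

```python
# Hypothetical sketch: create an encrypted volume from an unencrypted snapshot.
import boto3

ec2 = boto3.client("ec2")

volume = ec2.create_volume(
    SnapshotId="snap-0123456789abcdef0",  # placeholder unencrypted snapshot
    AvailabilityZone="us-east-1a",
    Encrypted=True,                       # the volume is encrypted even though
    KmsKeyId="alias/my-ebs-key",          # the snapshot is not (placeholder key)
)
```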

EBS Snapshot Lifecycle Automation

  • Amazon Data Lifecycle Manager can be used to automate the creation, retention, and deletion of snapshots taken to back up the EBS volumes (a minimal policy sketch follows this list).
  • Automating snapshot management helps you to:
    • Protect valuable data by enforcing a regular backup schedule.
    • Retain backups as required by auditors or internal compliance.
    • Reduce storage costs by deleting outdated backups.
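
As a minimal policy sketch (assuming boto3 and a pre-existing IAM role for Data Lifecycle Manager; the role ARN and tag values are placeholders), a policy that snapshots tagged volumes every 12 hours and keeps the 5 most recent snapshots could look like this:

```python
# Hypothetical sketch: an Amazon Data Lifecycle Manager snapshot policy.
import boto3

dlm = boto3.client("dlm")

dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole",
    Description="Snapshot volumes tagged Backup=true",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "true"}],  # placeholder tag
        "Schedules": [{
            "Name": "Every12Hours",
            "CreateRule": {"Interval": 12, "IntervalUnit": "HOURS",
                           "Times": ["09:00"]},
            "RetainRule": {"Count": 5},  # outdated backups are deleted
        }],
    },
)
```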

AWS Certification Exam Practice Questions

    • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
    • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
    • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
    • Open to further feedback, discussion and correction.
  1. An existing application stores sensitive information on a non-boot Amazon EBS data volume attached to an Amazon Elastic Compute Cloud instance. Which of the following approaches would protect the sensitive data on an Amazon EBS volume?
    1. Upload your customer keys to AWS CloudHSM. Associate the Amazon EBS volume with AWS CloudHSM. Remount the Amazon EBS volume.
    2. Create and mount a new, encrypted Amazon EBS volume. Move the data to the new volume. Delete the old Amazon EBS volume.
    3. Unmount the EBS volume. Toggle the encryption attribute to True. Re-mount the Amazon EBS volume.
    4. Snapshot the current Amazon EBS volume. Restore the snapshot to a new, encrypted Amazon EBS volume. Mount the Amazon EBS volume (need to create a snapshot, create an encrypted copy of the snapshot, and then create an EBS volume from the encrypted snapshot and mount it)
  2. Is it possible to access your EBS snapshots?
    1. Yes, through the Amazon S3 APIs.
    2. Yes, through the Amazon EC2 APIs
    3. No, EBS snapshots cannot be accessed; they can only be used to create a new EBS volume.
    4. EBS doesn’t provide snapshots.
  3. Which of the following approaches provides the lowest cost for Amazon Elastic Block Store snapshots while giving you the ability to fully restore data?
    1. Maintain two snapshots: the original snapshot and the latest incremental snapshot
    2. Maintain a volume snapshot; subsequent snapshots will overwrite one another
    3. Maintain a single snapshot; the latest snapshot is both incremental and complete
    4. Maintain the most current snapshot, archive the original and incremental to Amazon Glacier.
  4. Which procedure for backing up a relational database on EC2 that is using a set of RAIDed EBS volumes for storage minimizes the time during which the database cannot be written to and results in a consistent backup?
    1. Detach EBS volumes, 2. Start EBS snapshot of volumes, 3. Re-attach EBS volumes
    2. Stop the EC2 Instance. 2. Snapshot the EBS volumes
    3. Suspend disk I/O, 2. Create an image of the EC2 Instance, 3. Resume disk I/O
    4. Suspend disk I/O, 2. Start EBS snapshot of volumes, 3. Resume disk I/O
    5. Suspend disk I/O, 2. Start EBS snapshot of volumes, 3. Wait for snapshots to complete, 4. Resume disk I/O
  5. How can an EBS volume that is currently attached to an EC2 instance be migrated from one Availability Zone to another?
    1. Detach the volume and attach it to another EC2 instance in the other AZ.
    2. Simply create a new volume in the other AZ and specify the original volume as the source.
    3. Create a snapshot of the volume, and create a new volume from the snapshot in the other AZ
    4. Detach the volume, then use the ec2-migrate-volume command to move it to another AZ.
  6. How are the EBS snapshots saved on Amazon S3?
    1. Exponentially
    2. Incrementally
    3. EBS snapshots are not stored in Amazon S3
    4. Decrementally
  7. EBS Snapshots occur _____
    1. Asynchronously
    2. Synchronously
    3. Weekly
  8. What will be the status of the snapshot until the snapshot is complete?
    1. Running
    2. Working
    3. Progressing
    4. Pending
  9. Before I delete an EBS volume, what can I do if I want to recreate the volume later?
    1. Create a copy of the EBS volume (not a snapshot)
    2. Create and Store a snapshot of the volume
    3. Download the content to an EC2 instance
    4. Back up the data in to a physical disk
  10. Which of the following are true regarding encrypted Amazon Elastic Block Store (EBS) volumes? Choose 2 answers
    1. Supported on all Amazon EBS volume types
    2. Snapshots are automatically encrypted
    3. Available to all instance types
    4. Existing volumes can be encrypted
    5. Shared volumes can be encrypted
  11. Amazon EBS snapshots have which of the following two characteristics? Choose 2 answers
    1. EBS snapshots only save incremental changes from snapshot to snapshot
    2. EBS snapshots can be created in real-time without stopping an EC2 instance (the snapshot can be taken in real time; however, it may not be consistent, and the recommended way is to stop the instance or freeze the I/O)
    3. EBS snapshots can only be restored to an EBS volume of the same size or smaller (an EBS volume restored from a snapshot needs to be of the same size or larger)
    4. EBS snapshots can only be restored and mounted to an instance in the same Availability Zone as the original EBS volume (snapshots are specific to a region, can be used to create a volume in any AZ, and do not depend on the original EBS volume’s AZ)
  12. A user is planning to schedule a backup for an EBS volume. The user wants security of the snapshot data. How can the user achieve data encryption with a snapshot?
    1. Use encrypted EBS volumes so that the snapshot will be encrypted by AWS
    2. While creating a snapshot select the snapshot with encryption
    3. By default the snapshot is encrypted by AWS
    4. Enable server side encryption for the snapshot using S3
  13. A sys admin is trying to understand EBS snapshots. Which of the below mentioned statements will not be useful to the admin to understand the concepts about a snapshot?
    1. Snapshot is synchronous
    2. It is recommended to stop the instance before taking a snapshot for consistent data
    3. Snapshot is incremental
    4. Snapshot captures the data that has been written to the hard disk when the snapshot command was executed
  14. When creation of an EBS snapshot is initiated but not completed, the EBS volume
    1. Cannot be detached or attached to an EC2 instance until the snapshot completes
    2. Can be used in read-only mode while the snapshot is in progress
    3. Can be used while the snapshot is in progress
    4. Cannot be used until the snapshot completes
  15. You have a server with a 500GB Amazon EBS data volume. The volume is 80% full. You need to back up the volume at regular intervals and be able to re-create the volume in a new Availability Zone in the shortest time possible. All applications using the volume can be paused for a period of a few minutes with no discernible user impact. Which of the following backup methods will best fulfill your requirements?
    1. Take periodic snapshots of the EBS volume
    2. Use a third-party Incremental backup application to back up to Amazon Glacier
    3. Periodically back up all data to a single compressed archive and archive to Amazon S3 using a parallelized multi-part upload
    4. Create another EBS volume in the second Availability Zone, attach it to the Amazon EC2 instance, and use a disk manager to mirror the two disks
  16. A user is creating a snapshot of an EBS volume. Which of the below statements is incorrect in relation to the creation of an EBS snapshot?
    1. It’s incremental
    2. It can be used to launch a new instance
    3. It is stored in the same AZ as the volume (stored in the same region)
    4. It is a point in time backup of the EBS volume
  17. A user has created a snapshot of an EBS volume. Which of the below mentioned usage cases is not possible with respect to a snapshot?
    1. Mirroring the volume from one AZ to another AZ
    2. Launch an instance
    3. Decrease the volume size
    4. Increase the size of the volume
  18. What is true of the way that encryption works with EBS?
    1. Snapshotting an encrypted volume makes an encrypted snapshot; restoring an encrypted snapshot creates an encrypted volume when specified / requested.
    2. Snapshotting an encrypted volume makes an encrypted snapshot when specified / requested; restoring an encrypted snapshot creates an encrypted volume when specified / requested.
    3. Snapshotting an encrypted volume makes an encrypted snapshot; restoring an encrypted snapshot always creates an encrypted volume.
    4. Snapshotting an encrypted volume makes an encrypted snapshot when specified / requested; restoring an encrypted snapshot always creates an encrypted volume.
  19. Why are more frequent snapshots of EBS Volumes faster?
    1. Blocks in EBS Volumes are allocated lazily, since while logically separated from other EBS Volumes, Volumes often share the same physical hardware. Snapshotting the first time forces full block range allocation, so the second snapshot doesn’t need to perform the allocation phase and is faster.
    2. The snapshots are incremental so that only the blocks on the device that have changed after your last snapshot are saved in the new snapshot.
    3. AWS provisions more disk throughput for burst capacity during snapshots if the drive has been pre-warmed by snapshotting and reading all blocks.
    4. The drive is pre-warmed, so block access is more rapid for volumes when every block on the device has already been read at least one time.
  20. Which is not a restriction on AWS EBS Snapshots?
    1. Snapshots which are shared cannot be used as a basis for other snapshots (snapshots shared with other users are usable in full by the recipient, including but not limited to the ability to base modified volumes and snapshots on them)
    2. You cannot share a snapshot containing an AWS Access Key ID or AWS Secret Access Key
    3. You cannot share encrypted snapshots (NOTE: this has since been partially updated; an encrypted snapshot using a custom CMK can now be shared with other accounts)
    4. Snapshot restorations are restricted to the region in which the snapshots are created
  21. There is a very serious outage at AWS. EC2 is not affected, but your EC2 instance deployment scripts stopped working in the region with the outage. What might be the issue?
    1. The AWS Console is down, so your CLI commands do not work.
    2. S3 is unavailable, so you can’t create EBS volumes from a snapshot you use to deploy new volumes. (EBS volume snapshots are stored in S3. If S3 is unavailable, snapshots are unavailable)
    3. AWS turns off the DeployCode API call when there are major outages, to protect from system floods.
    4. None of the other answers make sense. If EC2 is not affected, it must be some other issue.

AWS EC2 VM Import/Export

EC2 VM Import/Export

  • EC2 VM Import/Export enables importing virtual machine (VM) images from an existing virtualization environment to EC2, and then exporting them back to the on-premises environment
  • EC2 VM Import/Export enables
    • migrating applications and workloads to EC2,
    • copying a VM image catalog to EC2,
    • creating a repository of VM images for backup and disaster recovery, and
    • leveraging previous investments in building VMs by migrating the VMs to EC2.
  • Supported file formats are: VMware ESX VMDK images, Citrix Xen VHD images, Microsoft Hyper-V VHD images, and RAW images
  • For VMware vSphere, AWS Connector for vCenter can be used to export a VM from VMware and import it into Amazon EC2
  • For Microsoft Systems Center, AWS Systems Manager for Microsoft SCVMM can be used to import Windows VMs from SCVMM to EC2

EC2 VM Import/Export features

  • ability to import a VM from a virtualization environment to EC2 as an Amazon Machine Image (AMI), which can be used to launch an EC2 instance
  • ability to import a VM from a virtualization environment to EC2 as an EC2 instance, which is initially in a stopped state; an AMI can be created from it
  • ability to export a VM that was previously imported from the virtualization environment
  • ability to import disks as EBS snapshots (an import sketch follows this list)
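
A sketch of the import-as-AMI path, assuming boto3, a VM image already uploaded to S3, and the vmimport service role already configured; the bucket, key, and description are placeholders:

```python
# Hypothetical sketch: import a VM image from S3 as an AMI via VM Import/Export.
import boto3

ec2 = boto3.client("ec2")

task = ec2.import_image(
    Description="Imported legacy VM",           # placeholder description
    DiskContainers=[{
        "Format": "vmdk",                       # VHD and RAW are also supported
        "UserBucket": {
            "S3Bucket": "my-vm-images",         # placeholder bucket
            "S3Key": "exports/legacy-app.vmdk", # placeholder key
        },
    }],
)
# The import runs asynchronously; poll with describe_import_image_tasks.
print(task["ImportTaskId"])
```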

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. You are responsible for a legacy web application whose server environment is approaching end of life. You would like to migrate this application to AWS as quickly as possible, since the application environment currently has the following limitations: The VM’s single 10GB VMDK is almost full. The virtual network interface still uses the 10Mbps driver, which leaves your 100Mbps WAN connection completely underutilized. It is currently running on a highly customized Windows VM within a VMware environment: You do not have the installation media. This is a mission critical application with an RTO (Recovery Time Objective) of 8 hours. RPO (Recovery Point Objective) of 1 hour. How could you best migrate this application to AWS while meeting your business continuity requirements?
    1. Use the EC2 VM Import Connector for vCenter to import the VM into EC2
    2. Use Import/Export to import the VM as an EBS snapshot and attach to EC2. (Import/Export is used to transfer large amounts of data)
    3. Use S3 to create a backup of the VM and restore the data into EC2.
    4. Use the ec2-bundle-instance API to Import an Image of the VM into EC2 (only bundles a Windows instance store-backed instance)
  2. You are tasked with moving a legacy application from a virtual machine running inside your datacenter to an Amazon VPC. Unfortunately this app requires access to a number of on-premises services and no one who configured the app still works for your company. Even worse there’s no documentation for it. What will allow the application running inside the VPC to reach back and access its internal dependencies without being reconfigured? (Choose 3 answers)
    1. An AWS Direct Connect link between the VPC and the network housing the internal services (VPN or a DX for communication)
    2. An Internet Gateway to allow a VPN connection. (Virtual and Customer gateway is needed)
    3. An Elastic IP address on the VPC instance (an EIP is not needed, as instances in private subnets can also interact with the on-premises network)
    4. An IP address space that does not conflict with the one on-premises (the IP address space must not conflict)
    5. Entries in Amazon Route 53 that allow the Instance to resolve its dependencies’ IP addresses (Route 53 is not required)
    6. A VM Import of the current virtual machine (VM Import can copy the VM to AWS; since there is no documentation, it can’t be configured from scratch)

AWS EBS Performance – Certification

AWS EBS Performance Tips

EBS performance depends on several factors, including I/O characteristics and the configuration of instances and volumes, and can be improved using PIOPS, EBS-optimized instances, pre-warming, and RAIDed configurations

EBS-Optimized or 10 Gigabit Network Instances

  • An EBS-Optimized instance uses an optimized configuration stack and provides additional, dedicated capacity for EBS I/O.
  • Optimization provides the best performance for the EBS volumes by minimizing contention between EBS I/O and other traffic from an instance.
  • EBS-Optimized instances deliver dedicated throughput to EBS, with options between 500 Mbps and 4,000 Mbps, depending on the instance type used
  • Not all instance types support EBS-Optimization
  • Some instance types enable EBS-optimization by default, while for others it can be enabled explicitly (see the sketch after this list).
  • When EBS-optimization is enabled for an instance that is not EBS-optimized by default, an additional low hourly fee is charged for the dedicated capacity.
  • When attached to an EBS–optimized instance,
    • General Purpose (SSD) volumes are designed to deliver within 10% of their baseline and burst performance 99.9% of the time in a given year
    • Provisioned IOPS (SSD) volumes are designed to deliver within 10% of their provisioned performance 99.9 percent of the time in a given year.
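
A sketch of enabling EBS-optimization explicitly at launch, assuming boto3 (the AMI ID and instance type are placeholders and the type must support EBS-optimization):

```python
# Hypothetical sketch: launch an instance with dedicated EC2-to-EBS throughput.
import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="m4.large",          # placeholder EBS-optimizable type
    MinCount=1,
    MaxCount=1,
    EbsOptimized=True,                # enable the optimized EBS stack
)
```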

EBS Volume Initialization – Pre-warming

  • New EBS volumes receive their maximum performance the moment that they are available and DO NOT require initialization (pre-warming).
  • Previously, EBS volumes needed pre-warming before use to reach maximum performance; a volume was pre-warmed by writing zeroes over the entire volume (for new volumes) or by reading the entire volume (for volumes restored from snapshots).
  • Storage blocks on volumes that were restored from snapshots must still be initialized (pulled down from S3 and written to the volume) before a block can be accessed.
  • This preliminary action takes time and can cause a significant increase in the latency of an I/O operation the first time each block is accessed (an initialization sketch follows this list).
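
A sketch of initializing a restored volume by reading every block once (the device path is a placeholder and the script must run as root on the instance); this is the same idea as the commonly used dd or fio read pass:

```python
# Hypothetical sketch: touch every block of a restored volume up front so the
# first-access latency is paid during initialization, not during production I/O.
CHUNK = 1024 * 1024  # read 1 MiB at a time

with open("/dev/xvdf", "rb") as dev:  # placeholder device; requires root
    while dev.read(CHUNK):
        pass  # reading a block forces it to be pulled down from S3
```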

RAID Configuration

  • EBS volumes can be striped if a single EBS volume does not meet the performance requirements and more is needed.
  • Striping volumes allows pushing tens of thousands of IOPS.
  • EBS volumes are already replicated across multiple servers in an AZ for availability and durability, so AWS generally recommend striping for performance rather than durability.
  • For greater I/O performance than can be achieved with a single volume, RAID 0 can stripe multiple volumes together (see the sketch after this list); for on-instance redundancy, RAID 1 can mirror two volumes together.
  • RAID 0 allows I/O distribution across all volumes in a stripe, allowing straight gains with each addition.
  • RAID 1 can be used for durability to mirror volumes, but in this case, it requires more EC2 to EBS bandwidth as the data is written to multiple volumes simultaneously and should be used with EBS–optimization.
  • EBS volume data is replicated across multiple servers in an AZ to prevent the loss of data from the failure of any single component
  • AWS doesn’t recommend RAID 5 and 6 because the parity write operations of these modes consume the IOPS available for the volumes and can result in 20-30% fewer usable IOPS than a RAID 0.
  • A 2-volume RAID 0 config can outperform a 4-volume RAID 6 that costs twice as much.
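
A sketch of a two-volume RAID 0 stripe on a Linux instance, assuming mdadm is installed; the device names are placeholders:

```python
# Hypothetical sketch: stripe two attached EBS volumes into one RAID 0 device.
import subprocess

subprocess.run(
    ["mdadm", "--create", "/dev/md0",
     "--level=0",                # RAID 0: stripe for performance, no redundancy
     "--raid-devices=2",
     "/dev/xvdf", "/dev/xvdg"],  # placeholder EBS device names
    check=True,
)
# /dev/md0 can then be formatted and mounted like any other block device.
```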

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. A user is trying to pre-warm a blank EBS volume attached to a Linux instance. Which of the below mentioned steps should be performed by the user?
    1. There is no need to pre-warm an EBS volume (with the latest update, no pre-warming is needed)
    2. Contact AWS support to pre-warm (this used to be the case earlier, but pre-warming is no longer necessary)
    3. Unmount the volume before pre-warming
    4. Format the device
  2. A user has created an EBS volume of 10 GB and attached it to a running instance. The user is trying to access EBS for first time. Which of the below mentioned options is the correct statement with respect to a first time EBS access?
    1. The volume will show a size of 8 GB
    2. The volume will show a loss of the IOPS performance the first time (previously, new volumes needed to be wiped clean; however, pre-warming is not needed any more)
    3. The volume will be blank
    4. If the EBS is mounted it will ask the user to create a file system
  3. You are running a database on an EC2 instance, with the data stored on Elastic Block Store (EBS) for persistence. At times throughout the day, you are seeing large variance in the response times of the database queries. Looking into the instance with the iostat command, you see a lot of wait time on the disk volume that the database’s data is stored on. What two ways can you improve the performance of the database’s storage while maintaining the current persistence of the data? Choose 2 answers
    1. Move to an SSD backed instance
    2. Move the database to an EBS-Optimized Instance
    3. Use Provisioned IOPs EBS
    4. Use the ephemeral storage on an m2.4xlarge instance instead
  4. You have launched an EC2 instance with four (4) 500 GB EBS Provisioned IOPS volumes attached. The EC2 instance is EBS-optimized and supports 500 Mbps throughput between EC2 and EBS. The four EBS volumes are configured as a single RAID 0 device, and each Provisioned IOPS volume is provisioned with 4,000 IOPS (4,000 16KB reads or writes), for a total of 16,000 random IOPS on the instance. The EC2 instance initially delivers the expected 16,000 IOPS random read and write performance. Sometime later, in order to increase the total random I/O performance of the instance, you add an additional two 500 GB EBS Provisioned IOPS volumes to the RAID. Each volume is provisioned to 4,000 IOPS like the original four, for a total of 24,000 IOPS on the EC2 instance. Monitoring shows that the EC2 instance CPU utilization increased from 50% to 70%, but the total random IOPS measured at the instance level does not increase at all. What is the problem and a valid solution?
    1. Larger storage volumes support higher Provisioned IOPS rates: increase the provisioned volume storage of each of the 6 EBS volumes to 1TB.
    2. EBS-Optimized throughput limits the total IOPS that can be utilized; use an EBS-Optimized instance that provides larger throughput. (EC2 instance types have a limit on max throughput, and a larger instance type would be required to support 24,000 IOPS)
    3. Small block sizes cause performance degradation, limiting the I/O throughput; configure the instance device driver and file system to use 64KB blocks to increase throughput.
    4. RAID 0 only scales linearly to about 4 devices; use RAID 0 with 4 EBS Provisioned IOPS volumes, but increase each Provisioned IOPS EBS volume to 6,000 IOPS.
    5. The standard EBS instance root volume limits the total IOPS rate; change the instance root volume to also be a 500GB 4,000 Provisioned IOPS volume
  5. A user has deployed an application on an EBS backed EC2 instance. For better performance of the application, it requires dedicated EC2-to-EBS traffic. How can the user achieve this?
    1. Launch the EC2 instance as EBS provisioned with PIOPS EBS
    2. Launch the EC2 instance as EBS enhanced with PIOPS EBS
    3. Launch the EC2 instance as EBS dedicated with PIOPS EBS
    4. Launch the EC2 instance as EBS optimized with PIOPS EBS