AWS Auto Scaling

Auto Scaling Overview

  • Auto Scaling ensures that the correct number of EC2 instances is always running to handle the load of the application.
  • Auto Scaling helps
    • achieve better fault tolerance, better availability, and cost management.
    • specify scaling policies that can be used to launch and terminate EC2 instances to handle any increase or decrease in demand.
  • Auto Scaling attempts to distribute instances evenly between the AZs that are enabled for the Auto Scaling group.
  • Auto Scaling does this by attempting to launch new instances in the AZ with the fewest instances. If the attempt fails, it attempts to launch the instances in another AZ until it succeeds.

Auto Scaling Components


Auto Scaling Groups – ASG

  • Auto Scaling groups are the core of Auto Scaling and contain a collection of EC2 instances that share similar characteristics and are treated as a logical grouping for the purposes of automatic scaling and management.
  • ASG requires
    • Launch configuration OR Launch Template
      • determines the EC2 instance configuration to use when launching instances
    • Minimum & Maximum capacity
      • determine the boundaries within which the group can scale when a scaling policy is applied.
      • The number of instances cannot go below the minimum or above the maximum.
    • Desired capacity
      • to determine the number of instances the ASG must maintain at all times. If missing, it equals the minimum size. 
      • Desired capacity is different from minimum capacity.
      • An Auto Scaling group’s desired capacity is the default number of instances that should be running. A group’s minimum capacity is the fewest number of instances the group can have running
    • Availability Zones or Subnets in which the instances will be launched.
    • Metrics & Health Checks
      • metrics to determine when it should launch or terminate instances and health checks to determine if the instance is healthy or not
  • ASG starts by launching a desired capacity of instances and maintains this number by performing periodic health checks.
  • If an instance becomes unhealthy, the ASG terminates it and launches a new instance to replace it.
  • ASG can also use scaling policies to increase or decrease the number of instances automatically to meet changing demands
  • An ASG can contain EC2 instances in one or more AZs within the same region.
  • ASGs cannot span multiple regions.
  • ASG can launch On-Demand Instances, Spot Instances, or both when configured to use a launch template.
  • To merge separate single-zone ASGs into a single ASG spanning multiple AZs, update one of the single-zone groups to span multiple AZs, and then delete the other groups. This works for groups with or without a load balancer, as long as the new multi-zone group is in one of the same AZs as the original single-zone groups.
  • ASG can be associated with a single launch configuration or template
  • As the Launch Configuration can’t be modified once created, the only way to update the Launch Configuration for an ASG is to create a new one and associate it with the ASG.
  • When the launch configuration for the ASG is changed, any new instances launched use the new configuration parameters, but the existing instances are not affected.
  • An ASG can be deleted from the CLI only if it has no running instances; otherwise, the minimum and desired capacity must first be set to 0. This is handled automatically when deleting an ASG from the AWS Management Console (see the CLI sketch below).
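
As a rough illustration of the pieces above, a minimal AWS CLI sketch is shown below; the group name, launch template name, and subnet IDs are hypothetical placeholders, so adjust them to your environment.

    # Create an ASG from a launch template, spanning two AZs via their subnets
    aws autoscaling create-auto-scaling-group \
        --auto-scaling-group-name my-asg \
        --launch-template LaunchTemplateName=my-template,Version='$Latest' \
        --min-size 2 --max-size 6 --desired-capacity 2 \
        --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222"

    # To delete the ASG from the CLI, scale it to zero first (the console does this for you)
    aws autoscaling update-auto-scaling-group --auto-scaling-group-name my-asg \
        --min-size 0 --desired-capacity 0
    aws autoscaling delete-auto-scaling-group --auto-scaling-group-name my-asg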

Launch Configuration

  • Launch configuration is an instance configuration template that an ASG uses to launch EC2 instances.
  • Launch configuration is similar to EC2 configuration and involves the selection of the Amazon Machine Image (AMI), block devices, key pair, instance type, security groups, user data, EC2 instance monitoring, instance profile, kernel, ramdisk, the instance tenancy, whether the instance has a public IP address, and is EBS-optimized.
  • Launch configuration can be associated with multiple ASGs
  • Launch configuration can’t be modified after creation and needs to be created new if any modification is required.
  • Basic or detailed monitoring for the instances in the ASG can be enabled when a launch configuration is created.
  • By default, basic monitoring is enabled when you create the launch configuration using the AWS Management Console, and detailed monitoring is enabled when you create the launch configuration using the AWS CLI or an API
  • AWS recommends using Launch Template instead.
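
Since the CLI and API default to detailed monitoring, a launch configuration created from the CLI needs monitoring disabled explicitly if basic monitoring is desired. A minimal sketch, where the AMI ID and names are placeholders:

    # Create a launch configuration with basic (5-minute) monitoring
    aws autoscaling create-launch-configuration \
        --launch-configuration-name my-lc-v2 \
        --image-id ami-0123456789abcdef0 \
        --instance-type t3.micro \
        --instance-monitoring Enabled=false

    # Launch configurations are immutable, so point the ASG at the new one
    aws autoscaling update-auto-scaling-group \
        --auto-scaling-group-name my-asg \
        --launch-configuration-name my-lc-v2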

Launch Template

  • A Launch Template is similar to a launch configuration, with additional features, and is recommended by AWS.
  • Launch Template allows multiple versions of a template to be defined.
  • With versioning, a subset of the full set of parameters can be created and then reused to create other templates or template versions; e.g., a default template that defines common configuration parameters can be created, and the remaining parameters can be specified as part of another version of the same template.
  • Launch Template allows the selection of both Spot and On-Demand Instances or multiple instance types.
  • Launch templates support EC2 Dedicated Hosts. Dedicated Hosts are physical servers with EC2 instance capacity that are dedicated to your use.
  • Launch templates provide the following features
    • Support for multiple instance types and purchase options in a single ASG.
    • Launching Spot Instances with the capacity-optimized allocation strategy.
    • Support for launching instances into existing Capacity Reservations through an ASG.
    • Support for unlimited mode for burstable performance instances.
    • Support for Dedicated Hosts.
    • Combining CPU architectures such as Intel, AMD, and ARM (Graviton2)
    • Improved governance through IAM controls and versioning.
    • Automating instance deployment with Instance Refresh.
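
A hedged sketch of template versioning with the AWS CLI follows; the template name, AMI ID, and instance types are illustrative assumptions.

    # Version 1: common configuration shared by all versions
    aws ec2 create-launch-template \
        --launch-template-name my-template \
        --launch-template-data '{"ImageId":"ami-0123456789abcdef0","InstanceType":"t3.micro"}'

    # Version 2: inherits version 1 and overrides only the instance type
    aws ec2 create-launch-template-version \
        --launch-template-name my-template \
        --source-version 1 \
        --launch-template-data '{"InstanceType":"t3.small"}'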

Auto Scaling Launch Configuration vs Launch Template

[Figure: Launch Template vs Launch Configuration comparison]

Auto Scaling Policies

Refer blog post @ Auto Scaling Policies

Auto Scaling Cooldown Period

  • The Auto Scaling cooldown period is a configurable setting for the ASG that ensures Auto Scaling doesn’t launch or terminate additional instances before the previous scaling activity takes effect, giving newly launched instances time to start handling traffic and reduce the load.
  • When the ASG dynamically scales using a simple scaling policy and launches an instance, Auto Scaling suspends scaling activities until the cooldown period (default 300 seconds) completes.
  • Example Use Case
    • You configure a scale out alarm to increase the capacity, if the CPU utilization increases more than 80%
    • A CPU spike occurs and causes the alarm to be triggered, Auto Scaling launches a new instance
    • However, it would take time for the newly launched instance to be configured, instantiated, and started, let’s say 5 mins
    • Without a cooldown period, if another CPU spike occurs, Auto Scaling would launch new instances again, and this would continue for 5 mins until the previously launched instance is up and running and has started handling traffic
    • With a cooldown period, Auto Scaling would suspend the activity for the specified time period enabling the newly launched instance to start handling traffic and reduce the load.
    • After the cooldown period, Auto Scaling resumes acting on the alarms
  • When manually scaling the ASG, the default is not to wait for the cooldown period but can be overridden to honour the cooldown period.
  • Note that if an instance becomes unhealthy, Auto Scaling does not wait for the cooldown period to complete before replacing the unhealthy instance.
  • Cooldown periods are automatically applied to dynamic scaling activities for simple scaling policies and are not supported for step scaling policies.
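
A minimal CLI sketch of where cooldowns are configured; the group and policy names are placeholders and the values are arbitrary examples.

    # Default cooldown (seconds) applied to the group's simple scaling activities
    aws autoscaling update-auto-scaling-group \
        --auto-scaling-group-name my-asg \
        --default-cooldown 300

    # A simple scaling policy can override the default with its own cooldown
    aws autoscaling put-scaling-policy \
        --auto-scaling-group-name my-asg \
        --policy-name add-one-on-high-cpu \
        --adjustment-type ChangeInCapacity \
        --scaling-adjustment 1 \
        --cooldown 600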

Auto Scaling Termination Policy

  • Termination policy helps Auto Scaling decide which instances it should terminate first when Auto Scaling automatically scales in.
  • Auto Scaling specifies a default termination policy and also provides the ability to create a customized one.

Default Termination Policy

The default termination policy helps ensure that the network architecture spans AZs evenly. Instances are selected for termination as follows:

  1. Selection of Availability Zone
    • selects the AZ, in multiple AZs environments, with the most instances and at least one instance that is not protected from scale in.
    • selects the AZ with instances that use the oldest launch configuration, if there is more than one AZ with the same number of instances
  2. Selection of an Instance within the Availability Zone
    • terminates the unprotected instance using the oldest launch configuration if one exists.
    • if multiple unprotected instances use the oldest launch configuration, terminates the one closest to the next billing hour. This helps maximize the use of EC2 instances that have an hourly charge while minimizing the number of hours billed for EC2 usage.
    • terminates instances at random, if more than one unprotected instance is closest to the next billing hour.

Customized Termination Policy

  1. Auto Scaling first assesses the AZs for any imbalance. If an AZ has more instances than the other AZs that are used by the group, then it applies the specified termination policy on the instances from the imbalanced AZ
  2. If the Availability Zones used by the group are balanced, then Auto Scaling applies the specified termination policy.
  3. The following customized termination policies are supported:
    1. OldestInstance – terminates the oldest instance in the group and can be useful to upgrade to new instance types
    2. NewestInstance – terminates the newest instance in the group and can be useful when testing a new launch configuration
    3. OldestLaunchConfiguration – terminates instances that have the oldest launch configuration
    4. OldestLaunchTemplate – terminates instances that have the oldest launch template
    5. ClosestToNextInstanceHour – terminates instances that are closest to the next billing hour and helps to maximize the use of your instances and manage costs.
    6. Default – terminates as per the default termination policy
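
Termination policies are applied in the order listed; a hedged CLI sketch (the group name is a placeholder):

    # Prefer replacing instances on the oldest template first, then the oldest instances
    aws autoscaling update-auto-scaling-group \
        --auto-scaling-group-name my-asg \
        --termination-policies "OldestLaunchTemplate" "OldestInstance"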

Instance Refresh

  • Instance refresh can be used to update the instances in the ASG instead of manually replacing instances a few at a time.
  • An instance refresh can be helpful when you have a new AMI or a new user data script.
  • Instance refresh also helps configure the minimum healthy percentage, instance warmup, and checkpoints.
  • To use an instance refresh
    • Create a new launch template that specifies the new AMI or user data script.
    • Start an instance refresh to begin updating the instances in the group immediately.
    • EC2 Auto Scaling starts performing a rolling replacement of the instances.
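
A minimal sketch of starting a refresh from the CLI; the group name and preference values are illustrative assumptions.

    # Begin a rolling replacement of the instances, keeping at least 90% of capacity
    # healthy and warming new instances up for 300 seconds
    aws autoscaling start-instance-refresh \
        --auto-scaling-group-name my-asg \
        --preferences '{"MinHealthyPercentage":90,"InstanceWarmup":300}'

    # Check progress of the refresh
    aws autoscaling describe-instance-refreshes --auto-scaling-group-name my-asg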

Instance Protection

  • Instance protection controls whether Auto Scaling can terminate a particular instance or not.
  • Instance protection can be enabled on an ASG or on individual instances, at any time.
  • Instances launched within an ASG with Instance protection enabled would inherit the property.
  • Instance protection starts as soon as the instance is InService and if the Instance is detached, it loses its Instance protection
  • If all instances in an ASG are protected from termination during scale in and a scale-in event occurs, it can’t terminate any instance and will decrement the desired capacity.
  • Instance protection does not protect instances in the following cases
    • Manual termination through the EC2 console, the terminate-instances command, or the TerminateInstances API.
    • Replacement if the instance fails health checks.
    • Spot Instance interruptions.
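
A hedged CLI sketch of both levels of protection; the group name and instance ID are placeholders.

    # Protect all newly launched instances at the group level
    aws autoscaling update-auto-scaling-group \
        --auto-scaling-group-name my-asg \
        --new-instances-protected-from-scale-in

    # Protect (or later unprotect) a specific running instance
    aws autoscaling set-instance-protection \
        --auto-scaling-group-name my-asg \
        --instance-ids i-0123456789abcdef0 \
        --protected-from-scale-in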

Standby State

Auto Scaling allows putting InService instances into the Standby state, during which the instance is still a part of the ASG but does not serve any requests. This can be used to troubleshoot or update an instance and then return it to service.

  • An instance put into the Standby state remains in Standby until it is explicitly moved out of it.
  • When an instance enters Standby, Auto Scaling by default decrements the desired capacity for the group, preventing it from launching a replacement instance. If no decrement is selected, it launches a new instance.
  • When the instance is in the standby state, the instance can be updated or used for troubleshooting.
  • If a load balancer is associated with Auto Scaling, the instance is automatically deregistered when the instance is in Standby state and registered again when the instance exits the Standby state
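
A minimal CLI sketch of the Standby workflow; the group name and instance ID below are placeholders.

    # Take an instance out of service for troubleshooting, reducing desired capacity by one
    aws autoscaling enter-standby \
        --auto-scaling-group-name my-asg \
        --instance-ids i-0123456789abcdef0 \
        --should-decrement-desired-capacity

    # Return the instance to service once done
    aws autoscaling exit-standby \
        --auto-scaling-group-name my-asg \
        --instance-ids i-0123456789abcdef0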

Suspension

  • Auto Scaling processes can be suspended and then resumed. This can be very useful to investigate a configuration problem or debug an issue with the application, without triggering the Auto Scaling process.
  • Auto Scaling also performs Administrative Suspension where it would suspend processes for ASGs if the ASG has been trying to launch instances for over 24 hours but has not succeeded in launching any instances.
  • Auto Scaling processes include
    • Launch – Adds a new EC2 instance to the group, increasing its capacity.
    • Terminate – Removes an EC2 instance from the group, decreasing its capacity.
    • HealthCheck – Checks the health of the instances.
    • ReplaceUnhealthy – Terminates instances that are marked as unhealthy and subsequently creates new instances to replace them.
    • AlarmNotification – Accepts notifications from CloudWatch alarms that are associated with the group. If suspended, Auto Scaling does not automatically execute policies that would be triggered by an alarm
    • ScheduledActions – Performs scheduled actions that you create.
    • AddToLoadBalancer – Adds instances to the load balancer when they are launched.
    • InstanceRefresh – Terminates and replaces instances using the instance refresh feature.
    • AZRebalance – Balances the number of EC2 instances in the group across the Availability Zones in the region.
      • If an AZ either is removed from the ASG or becomes unhealthy or unavailable, Auto Scaling launches new instances in an unaffected AZ before terminating the unhealthy or unavailable instances
      • When the unhealthy AZ returns to a healthy state, Auto Scaling automatically redistributes the instances evenly across the Availability Zones for the group.
      • Note that if you suspend AZRebalance and a scale out or scale in event occurs, Auto Scaling still tries to balance the Availability Zones, e.g. during scale out, it launches the instance in the Availability Zone with the fewest instances.
      • If you suspend Launch, AZRebalance neither launches new instances nor terminates existing instances. This is because AZRebalance terminates instances only after launching the replacement instances.
      • If you suspend Terminate, the ASG can grow up to 10% larger than its maximum size, because Auto Scaling allows this temporarily during rebalancing activities. If it cannot terminate instances, your ASG could remain above its maximum size until the Terminate process is resumed
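
A hedged CLI sketch of suspending and resuming processes while debugging; the group name is a placeholder.

    # Stop alarm-driven scaling and AZ rebalancing while investigating an issue
    aws autoscaling suspend-processes \
        --auto-scaling-group-name my-asg \
        --scaling-processes AlarmNotification AZRebalance

    # Resume them once the investigation is complete
    aws autoscaling resume-processes \
        --auto-scaling-group-name my-asg \
        --scaling-processes AlarmNotification AZRebalance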

Auto Scaling Lifecycle

Refer to blog post @ Auto Scaling Lifecycle

Autoscaling & ELB

Refer to blog post @ Autoscaling & ELB

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A user is trying to setup a scheduled scaling activity using Auto Scaling. The user wants to setup the recurring schedule. Which of the below mentioned parameters is not required in this case?
    1. Maximum size
    2. Auto Scaling group name
    3. End time
    4. Recurrence value
  2. A user has configured Auto Scaling with 3 instances. The user had created a new AMI after updating one of the instances. If the user wants to terminate two specific instances to ensure that Auto Scaling launches instances with the new launch configuration, which command should he run?
    1. as-delete-instance-in-auto-scaling-group <Instance ID> –no-decrement-desired-capacity
    2. as-terminate-instance-in-auto-scaling-group <Instance ID> –update-desired-capacity
    3. as-terminate-instance-in-auto-scaling-group <Instance ID> –decrement-desired-capacity
    4. as-terminate-instance-in-auto-scaling-group <Instance ID> –no-decrement-desired-capacity
  3. A user is planning to scale up an application by 8 AM and scale down by 7 PM daily using Auto Scaling. What should the user do in this case?
    1. Setup the scaling policy to scale up and down based on the CloudWatch alarms
    2. User should increase the desired capacity at 8 AM and decrease it by 7 PM manually
    3. User should setup a batch process which launches the EC2 instance at a specific time
    4. Setup scheduled actions to scale up or down at a specific time
  4. An organization has setup Auto Scaling with ELB. Due to some manual error, one of the instances got rebooted. Thus, it failed the Auto Scaling health check. Auto Scaling has marked it for replacement. How can the system admin ensure that the instance does not get terminated?
    1. Update the Auto Scaling group to ignore the instance reboot event
    2. It is not possible to change the status once it is marked for replacement
    3. Manually add that instance to the Auto Scaling group after reboot to avoid replacement
    4. Change the health of the instance to healthy using the Auto Scaling commands
  5. A user has configured Auto Scaling with the minimum capacity as 2 and the desired capacity as 2. The user is trying to terminate one of the existing instances with the command: as-terminate-instance-in-auto-scaling-group <Instance ID> –decrement-desired-capacity. What will Auto Scaling do in this scenario?
    1. Terminates the instance and does not launch a new instance
    2. Terminates the instance and updates the desired capacity to 1
    3. Terminates the instance and updates the desired capacity & minimum size to 1
    4. Throws an error
  6. An organization has configured Auto Scaling for hosting their application. The system admin wants to understand the Auto Scaling health check process. If the instance is unhealthy, Auto Scaling launches an instance and terminates the unhealthy instance. What is the order of execution?
    1. Auto Scaling launches a new instance first and then terminates the unhealthy instance
    2. Auto Scaling performs the launch and terminate processes in a random order
    3. Auto Scaling launches and terminates the instances simultaneously
    4. Auto Scaling terminates the instance first and then launches a new instance
  7. A user has configured ELB with Auto Scaling. The user suspended the Auto Scaling terminate process only for a while. What will happen to the availability zone rebalancing process (AZRebalance) during this period?
    1. Auto Scaling will not launch or terminate any instances
    2. Auto Scaling will allow the instances to grow more than the maximum size
    3. Auto Scaling will keep launching instances till the maximum instance size
    4. It is not possible to suspend the terminate process while keeping the launch active
  8. An organization has configured Auto Scaling with ELB. There is a memory issue in the application which is causing CPU utilization to go above 90%. The higher CPU usage triggers an event for Auto Scaling as per the scaling policy. If the user wants to find the root cause inside the application without triggering a scaling activity, how can he achieve this?
    1. Stop the scaling process until research is completed
    2. It is not possible to find the root cause from that instance without triggering scaling
    3. Delete Auto Scaling until research is completed
    4. Suspend the scaling process until research is completed
  9. A user has configured ELB with Auto Scaling. The user suspended the Auto Scaling Alarm Notification (which notifies Auto Scaling for CloudWatch alarms) process for a while. What will Auto Scaling do during this period?
    1. AWS will not receive the alarms from CloudWatch
    2. AWS will receive the alarms but will not execute the Auto Scaling policy
    3. Auto Scaling will execute the policy but it will not launch the instances until the process is resumed
    4. It is not possible to suspend the AlarmNotification process
  10. An organization has configured two single-AZ Auto Scaling groups in separate Availability Zones. The user wants to merge the groups such that one group spans across multiple zones. How can the user configure this?
    1. Run the command as-join-auto-scaling-group to join the two groups
    2. Run the command as-update-auto-scaling-group to configure one group to span across zones and delete the other group
    3. Run the command as-copy-auto-scaling-group to join the two groups
    4. Run the command as-merge-auto-scaling-group to merge the groups
  11. An organization has configured Auto Scaling with ELB. One of the instance health checks returns the status as Impaired to Auto Scaling. What will Auto Scaling do in this scenario?
    1. Perform a health check until cool down before declaring that the instance has failed
    2. Terminate the instance and launch a new instance
    3. Notify the user using SNS for the failed state
    4. Notify ELB to stop sending traffic to the impaired instance
  12. A user has setup an Auto Scaling group. The group has failed to launch a single instance for more than 24 hours. What will happen to Auto Scaling in this condition?
    1. Auto Scaling will keep trying to launch the instance for 72 hours
    2. Auto Scaling will suspend the scaling process
    3. Auto Scaling will start an instance in a separate region
    4. The Auto Scaling group will be terminated automatically
  13. A user is planning to setup infrastructure on AWS for the Christmas sales. The user is planning to use Auto Scaling based on the schedule for proactive scaling. What advise would you give to the user?
    1. It is good to schedule now because if the user forgets later on it will not scale up
    2. The scaling should be setup only one week before Christmas
    3. Wait till end of November before scheduling the activity
    4. It is not advisable to use scheduled based scaling
  14. A user is trying to setup a recurring Auto Scaling process. The user has setup one process to scale up every day at 8 AM and scale down at 7 PM. The user is trying to setup another recurring process which scales up on the 1st of every month at 8 AM and scales down the same day at 7 PM. What will Auto Scaling do in this scenario?
    1. Auto Scaling will execute both processes but will add just one instance on the 1st
    2. Auto Scaling will add two instances on the 1st of the month
    3. Auto Scaling will schedule both the processes but execute only one process randomly
    4. Auto Scaling will throw an error since there is a conflict in the schedule of two separate Auto Scaling Processes
  15. A sys admin is trying to understand the Auto Scaling activities. Which of the below mentioned processes is not performed by Auto Scaling?
    1. Reboot Instance
    2. Schedule Actions
    3. Replace Unhealthy
    4. Availability Zone Re-Balancing
  16. You have started a new job and are reviewing your company’s infrastructure on AWS. You notice one web application where they have an Elastic Load Balancer in front of web instances in an Auto Scaling Group. When you check the metrics for the ELB in CloudWatch you see four healthy instances in Availability Zone (AZ) A and zero in AZ B. There are zero unhealthy instances. What do you need to fix to balance the instances across AZs?
    1. Set the ELB to only be attached to another AZ
    2. Make sure Auto Scaling is configured to launch in both AZs
    3. Make sure your AMI is available in both AZs
    4. Make sure the maximum size of the Auto Scaling Group is greater than 4
  17. You have been asked to leverage Amazon VPC EC2 and SQS to implement an application that submits and receives millions of messages per second to a message queue. You want to ensure your application has sufficient bandwidth between your EC2 instances and SQS. Which option will provide the most scalable solution for communicating between the application and SQS?
    1. Ensure the application instances are properly configured with an Elastic Load Balancer
    2. Ensure the application instances are launched in private subnets with the EBS-optimized option enabled
    3. Ensure the application instances are launched in public subnets with the associate-public-IP-address=true option enabled
    4. Launch application instances in private subnets with an Auto Scaling group and Auto Scaling triggers configured to watch the SQS queue size
  18. You have decided to change the Instance type for instances running in your application tier that are using Auto Scaling. In which area below would you change the instance type definition?
    1. Auto Scaling launch configuration
    2. Auto Scaling group
    3. Auto Scaling policy
    4. Auto Scaling tags
  19. A user is trying to delete an Auto Scaling group from CLI. Which of the below mentioned steps are to be performed by the user?
    1. Terminate the instances with the ec2-terminate-instance command
    2. Terminate the Auto Scaling instances with the as-terminate-instance command
    3. Set the minimum size and desired capacity to 0
    4. There is no need to change the capacity. Run the as-delete-group command and it will reset all values to 0
  20. A user has created a web application with Auto Scaling. The user is regularly monitoring the application and he observed that the traffic is highest on Thursday and Friday between 8 AM to 6 PM. What is the best solution to handle scaling in this case?
    1. Add a new instance manually by 8 AM Thursday and terminate the same by 6 PM Friday
    2. Schedule Auto Scaling to scale up by 8 AM Thursday and scale down after 6 PM on Friday
    3. Schedule a policy which may scale up every day at 8 AM and scales down by 6 PM
    4. Configure a batch process to add an instance by 8 AM and remove it by Friday 6 PM
  21. A user has configured the Auto Scaling group with the minimum capacity as 3 and the maximum capacity as 5. When the user configures the AS group, how many instances will Auto Scaling launch?
    1. 3
    2. 0
    3. 5
    4. 2
  22. A sys admin is maintaining an application on AWS. The application is installed on EC2 and the user has configured ELB and Auto Scaling. Considering future load increase, the user is planning to launch new servers proactively so that they get registered with ELB. How can the user add these instances with Auto Scaling?
    1. Increase the desired capacity of the Auto Scaling group
    2. Increase the maximum limit of the Auto Scaling group
    3. Launch an instance manually and register it with ELB on the fly
    4. Decrease the minimum limit of the Auto Scaling group
  23. In reviewing the auto scaling events for your application you notice that your application is scaling up and down multiple times in the same hour. What design choice could you make to optimize for the cost while preserving elasticity? Choose 2 answers.
    1. Modify the Amazon CloudWatch alarm period that triggers your auto scaling scale down policy.
    2. Modify the Auto scaling group termination policy to terminate the oldest instance first.
    3. Modify the Auto scaling policy to use scheduled scaling actions.
    4. Modify the Auto scaling group cool down timers.
    5. Modify the Auto scaling group termination policy to terminate newest instance first.
  24. You have a business critical two tier web app currently deployed in two availability zones in a single region, using Elastic Load Balancing and Auto Scaling. The app depends on synchronous replication (very low latency connectivity) at the database layer. The application needs to remain fully available even if one application Availability Zone goes off-line, and Auto scaling cannot launch new instances in the remaining Availability Zones. How can the current architecture be enhanced to ensure this? [PROFESSIONAL]
    1. Deploy in two regions using Weighted Round Robin (WRR), with Auto Scaling minimums set for 100% peak load per region.
    2. Deploy in three AZs, with Auto Scaling minimum set to handle 50% peak load per zone.
    3. Deploy in three AZs, with Auto Scaling minimum set to handle 33% peak load per zone. (Loss of one AZ will handle only 66% if the autoscaling also fails)
    4. Deploy in two regions using Weighted Round Robin (WRR), with Auto Scaling minimums set for 50% peak load per region.
  25. A user has created a launch configuration for Auto Scaling where CloudWatch detailed monitoring is disabled. The user wants to now enable detailed monitoring. How can the user achieve this?
    1. Update the Launch config with CLI to set InstanceMonitoringDisabled = false
    2. The user should change the Auto Scaling group from the AWS console to enable detailed monitoring
    3. Update the Launch config with CLI to set InstanceMonitoring.Enabled = true
    4. Create a new Launch Config with detail monitoring enabled and update the Auto Scaling group
  26. A user has created an Auto Scaling group with default configurations from CLI. The user wants to setup the CloudWatch alarm on the EC2 instances, which are launched by the Auto Scaling group. The user has setup an alarm to monitor the CPU utilization every minute. Which of the below mentioned statements is true?
    1. It will fetch the data at every minute but the four data points [corresponding to 4 minutes] will not have value since the EC2 basic monitoring metrics are collected every five minutes
    2. It will fetch the data at every minute as detailed monitoring on EC2 will be enabled by the default launch configuration of Auto Scaling
    3. The alarm creation will fail since the user has not enabled detailed monitoring on the EC2 instances
    4. The user has to first enable detailed monitoring on the EC2 instances to support alarm monitoring at every minute
  27. A customer has a website which shows all the deals available across the market. The site experiences a load of 5 large EC2 instances generally. However, a week before Thanksgiving vacation they encounter a load of almost 20 large instances. The load during that period varies over the day based on the office timings. Which of the below mentioned solutions is cost effective as well as help the website achieve better performance?
    1. Keep only 10 instances running and manually launch 10 instances every day during office hours.
    2. Setup to run 10 instances during the pre-vacation period and only scale up during the office time by launching 10 more instances using the AutoScaling schedule.
    3. During the pre-vacation period setup a scenario where the organization has 15 instances running and 5 instances to scale up and down using Auto Scaling based on the network I/O policy.
    4. During the pre-vacation period setup 20 instances to run continuously.
  28. When Auto Scaling is launching a new instance based on condition, which of the below mentioned policies will it follow?
    1. Based on the criteria defined with cross zone Load balancing
    2. Launch an instance which has the highest load distribution
    3. Launch an instance in the AZ with the fewest instances
    4. Launch an instance in the AZ which has the highest instances
  29. The user has created multiple AutoScaling groups. The user is trying to create a new AS group but it fails. How can the user know that he has reached the AS group limit specified by AutoScaling in that region?
    1. Run the command: as-describe-account-limits
    2. Run the command: as-describe-group-limits
    3. Run the command: as-max-account-limits
    4. Run the command: as-list-account-limits
  30. A user is trying to save some cost on the AWS services. Which of the below mentioned options will not help him save cost?
    1. Delete the unutilized EBS volumes once the instance is terminated
    2. Delete the Auto Scaling launch configuration after the instances are terminated (Auto Scaling Launch config does not cost anything)
    3. Release the elastic IP if not required once the instance is terminated
    4. Delete the AWS ELB after the instances are terminated
  31. To scale up the AWS resources using manual Auto Scaling, which of the below mentioned parameters should the user change?
    1. Maximum capacity
    2. Desired capacity
    3. Preferred capacity
    4. Current capacity
  32. For AWS Auto Scaling, what is the first transition state an existing instance enters after leaving steady state in Standby mode?
    1. Detaching
    2. Terminating:Wait
    3. Pending (You can put any instance that is in an InService state into a Standby state. This enables you to remove the instance from service, troubleshoot or make changes to it, and then put it back into service. Instances in a Standby state continue to be managed by the Auto Scaling group. However, they are not an active part of your application until you put them back into service. Refer link)
    4. EnteringStandby
  33. For AWS Auto Scaling, what is the first transition state an instance enters after leaving steady state when scaling in due to health check failure or decreased load?
    1. Terminating (When Auto Scaling responds to a scale in event, it terminates one or more instances. These instances are detached from the Auto Scaling group and enter the Terminating state. Refer link)
    2. Detaching
    3. Terminating:Wait
    4. EnteringStandby
  34. A user has setup Auto Scaling with ELB on the EC2 instances. The user wants to configure that whenever the CPU utilization is below 10%, Auto Scaling should remove one instance. How can the user configure this?
    1. The user can get an email using SNS when the CPU utilization is less than 10%. The user can use the desired capacity of Auto Scaling to remove the instance
    2. Use CloudWatch to monitor the data and Auto Scaling to remove the instances using scheduled actions
    3. Configure CloudWatch to send a notification to Auto Scaling Launch configuration when the CPU utilization is less than 10% and configure the Auto Scaling policy to remove the instance
    4. Configure CloudWatch to send a notification to the Auto Scaling group when the CPU Utilization is less than 10% and configure the Auto Scaling policy to remove the instance
  35. A user has enabled detailed CloudWatch metric monitoring on an Auto Scaling group. Which of the below mentioned metrics will help the user identify the total number of instances in an Auto Scaling group including pending, terminating and running instances?
    1. GroupTotalInstances (Refer link)
    2. GroupSumInstances
    3. It is not possible to get a count of all the three metrics together. The user has to find the individual number of running, terminating and pending instances and sum it
    4. GroupInstancesCount
  36. Your startup wants to implement an order fulfillment process for selling a personalized gadget that needs an average of 3-4 days to produce, with some orders taking up to 6 months. You expect 10 orders per day on your first day, 1000 orders per day after 6 months, and 10,000 orders after 12 months. Orders coming in are checked for consistency, then dispatched to your manufacturing plant for production, quality control, packaging, shipment, and payment processing. If the product does not meet the quality standards at any stage of the process, employees may force the process to repeat a step. Customers are notified via email about order status and any critical issues with their orders, such as payment failure. Your base architecture includes AWS Elastic Beanstalk for your website with an RDS MySQL instance for customer data and orders. How can you implement the order fulfillment process while making sure that the emails are delivered reliably? [PROFESSIONAL]
    1. Add a business process management application to your Elastic Beanstalk app servers and re-use the RDS database for tracking order status. Use one of the Elastic Beanstalk instances to send emails to customers.
    2. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1 Use the decider instance to send emails to customers.
    3. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1 use SES to send emails to customers.
    4. Use an SQS queue to manage all process tasks Use an Auto Scaling group of EC2 Instances that poll the tasks and execute them. Use SES to send emails to customers.

References

AWS Auto Scaling Developer Guide

 

Comments

    1. Hi Sir,
      It's regarding question 23: can you please explain the possible modifications that the CloudWatch alarm and cooldown timers can make in this case that reduce cost and preserve elasticity?

      1. As the scaling activity is happening quite frequently, the reason would either be that the configured alarms are causing Auto Scaling to scale up and down too fast, or that the cooldown timers are too short, so that the scaling activity is triggered before the new instance gets a chance to handle traffic.
        Option B is wrong as terminating the oldest instance would help save cost but would not prevent the Auto Scaling scale up/down cycle.
        Option C is wrong as scheduled scaling only helps when the pattern is known.
        Option E is wrong as terminating the newest instance would increase cost and also would not prevent the Auto Scaling scale up/down cycle.

  1. Question(22)
    Can you clarify why we increase desired and not maximum? It's a future load, so I thought that maximum would be better to manage the load without paying for unused resources.

    As far as I understand, desired is the number which Auto Scaling will reach even if it is not needed.

    1. Key point in the question is proactively, so you should set desired to build up the instances beforehand.

    1. Thanks, there is a correction to the statement.
      An Auto Scaling group can be deleted from the CLI only if it has no running instances; otherwise, the minimum and desired capacity need to be set to 0 first. This is handled automatically when deleting an ASG from the AWS Management Console.

  2. Question 17 I think the answer is B. The question asks how to ensure scalability between the application and SQS, so private subnet and EBS optimized makes sense. Your answer, D, answers a different question around how to scale based on queue size.

    1. Scalability between SQS and Application can be improved by the Application being able to handle millions of messages.
      Having instances in private subnet with EBS optimized would not improve the handling capacity of the EC2 instances.
      EBS–optimized instance provides additional, dedicated capacity for Amazon EBS I/O but not with SQS.

      1. EBS optimized takes EBS disk traffic off the main network interface, leaving more bandwidth for SQS. The question asks for the best bandwidth to both sending and receiving – option D is just for receiving and doesn’t help sending at all. However of course SQS bandwidth scales linearly with the number of instances, but again D is all about receive and doesn’t mention sending.

        I checked the cloud guru forums, and it’s 3 votes for B, 4 for D, but the moderator voted for B. So it’s one of those tricky questions.

        I guess the question is which answer is least wrong, as none are quite right. The best answer is probably EBS optimized to help provide bandwidth for sending along with scaling the processing instances based on queue size, so a combination of B and D.

        Questions like this are why 2 minutes per question is tricky.

        1. Agreed, it's always tricky for these kinds of open-ended questions, where it's open to interpretation.
          The Professional exam is quite time-intensive for sure, so unless very well prepared it can get very tough.

  3. Hi Jayendra,

    Thanks for publishing such a useful blog.

    Could you please explain the answer to question #1? Why is the answer A and not C?

    As per the AWS documentation, even End time and Recurrence value are not required.

    Thanks in advance

    1. Hi Priti,

      The question is not about optional vs. required, but about which parameters are valid, which is causing the confusion.
      Refer to Auto Scaling AWS documentation.

  4. For Q13 :
    I can see that you can create a scheduled event way into the future .. why does the recommendation have to be to wait till the end of November? Why not do it now?
    Any thoughts from you ?

      1. Hi Jayendra,

        I tried this just now. I am able to set the start time to be 1st Dec 2017. Today is 9th Oct 2017. Can you please throw some light on this: “This date and time can be up to one month in the future.” ? At the link mentioned by you, the aforementioned quoted statement is mentioned in the output of the describe-scheduled-actions command. Is it the case of document defect where the doc has not been updated when previously such a limitation was there?

        Thanks!
        Mahesh

      2. Hello Mr. Jay,
        is the answer still valid?
        Q.13
        A. Wait till end of November before scheduling the activity

        1. Seems to be still valid. Refer to the AWS documentation.
          StartTime -> (timestamp)
          The date and time that the action is scheduled to begin. This date and time can be up to one month in the future.

  5. Q14 :
    I am able to set up a scheduled event daily at 9 AM starting from Aug 1 – Aug 31,
    and also a scheduled event weekly on Monday starting from Aug 1 – Aug 31,
    with different min, max, and desired capacities.
    In this case there will be a conflict every Monday, but I am allowed to do it.
    From my observation it complained only when the start date and time of the two scheduled events are the same. It did not seem to mind the overlap that happens during the time period they were set up for. Any thoughts from you?

    1. Please ignore this .. as long as the times conflict in the period, AWS does not allow saving.

  6. You have a great blog and a great set of questions, probably better than any other paid content. Thank you for such an effort. I should have thanked you in the first post itself.
    Q19:

    I have never used the AWS CLI / API, so maybe I am missing something very fundamental here. To delete an ASG, why not use the delete auto scaling group command instead of setting the desired capacity to zero?
    I see that there is a command for that:
    http://docs.aws.amazon.com/cli/latest/reference/autoscaling/delete-auto-scaling-group.html

    1. As all the instances need to be terminated beforehand and the other options are not valid. To terminate all instances before deleting the Auto Scaling group, call update-auto-scaling-group and set the minimum size and desired capacity of the Auto Scaling group to zero.

  7. Could you please share your opinion about the practice questions on Braincert.com?

    They seem more architect level than associate. Kindly share your opinion on this.

    Do we really need in-depth knowledge, as sampled in those questions, to complete the associate exam?

    1. Practice questions on Braincert are Associate level, professional level practice exams are much more complex, longer in prose and almost always involve multiple AWS services in the solution.

  8. If my database server crossed the threshold value (e.g. CPU reaches 92% but my threshold value is 80%), how can I troubleshoot the issue?

    1. You can look into the monitoring stats and also check the logs for slow, long-running queries.
      You can configure alerts to notify you when it reaches 70% or 80%, so that you can check at that time.

  9. Which AWS services are self-scalable by default?

    I know Lambda is self-scalable (you do not have to scale your Lambda functions – AWS Lambda scales them automatically on your behalf).

    Can you provide any such AWS services that are auto scalable and don’t require any configuration?

    The other service I am thinking of is SQS (Amazon SQS can scale transparently to handle the load without any provisioning instructions from you).

    1. ELB, DynamoDB are auto scalable. ELB scales automatically as per the demand. DynamoDB with auto scaling will scale automatically. API Gateway scales automatically. Most of the Custom AWS managed services (Not RDS as there is underlying DB) should scale automatically.

  10. Hi Jayendra,

    Please could you advice for the below question.

    An Auto-Scaling group spans 3 AZs and currently has 4 running EC2 instances. When
    Auto Scaling needs to terminate an EC2 instance by default, AutoScaling will:
    Choose 2 answers
    A. Allow at least five minutes for Windows/Linux shutdown scripts to complete, before
    terminating the instance.
    B. Terminate the instance with the least active network connections. If multiple instances
    meet this criterion, one will be randomly selected.
    C. Send an SNS notification, if configured to do so.
    D. Terminate an instance in the AZ which currently has 2 running EC2 instances.
    E. Randomly select one of the 3 AZs, and then terminate an instance in that AZ.

    1. The correct answer should be C & D, as Auto Scaling would select the AZ with the most instances as per the default termination policy and send a notification through SNS.

  11. Please add this important info:

    If you suspend AddToLoadBalancer, Auto Scaling launches the instances but does not add them to the load balancer or target group. If you resume the AddToLoadBalancer process, Auto Scaling resumes adding instances to the load balancer or target group when they are launched. However, Auto Scaling does not add the instances that were launched while this process was suspended. You must register those instances manually.

  12. Hi Jayendra,

    Please could you advice for the below question.

    You have set up an Auto Scaling group. The cool down period for the Auto Scaling group is 7 minutes. The first
    instance is launched after 3 minutes, while the second instance is launched after 4 minutes. How many
    minutes after the first instance is launched will Auto Scaling accept another scaling activity request?
    A. 11 minutes
    B. 7 minutes
    C. 10 minutes
    D. 14 minutes

    Thanks!

    1. Hi Edivando, it should be 11 minutes, as the cooldown takes effect from the last instance launch time.

      Refer AWS documentation

      With multiple instances, the cooldown period (either the default cooldown or the scaling-specific cooldown) takes effect starting when the last instance launches.

  13. Thanks for the great blog. I have a query on below statement..
    Auto Scaling increases the desired capacity of the group by the number of instances being attached. But if the number of instances being attached plus the desired capacity exceeds the maximum size of the group, the request fails.

    Does the above hold true when we trigger using the SDK? Because, using the AWS console, if I set Min = 2 and Max = 12 and the scale-out policy is defined to increase 100% when CPU > 60, then when the first policy triggers it's 2×2, the second policy 4×4, and when the policy triggers the third time it will add 4, evenly balancing between AZs. Please let me know if my understanding is correct.

  14. I believe the right answer to Q18 will be Auto Scaling group. When you select the AS group, it enables you to drill down to instance details, stop the instance, and change the instance type.

    Q18- You have decided to change the Instance type for instances running in your application tier that are using Auto Scaling. In which area below would you change the instance type definition?
    Auto Scaling launch configuration
    Auto Scaling group
    Auto Scaling policy
    Auto Scaling tags

    1. If you have not created a Launch Configuration, then yes, you can do it from the Auto Scaling group.

  15. Hi Jayendra,
    For question 32, could you please advise if the answer should be “Entering Standby” rather than Pending? The question emphasizes “after leaving steady state, what is the first state”.

    For AWS Auto Scaling, what is the first transition state an existing instance enters after leaving steady state in Standby mode?
    Detaching
    Terminating:Wait
    Pending (You can put any instance that is in an InService state into a Standby state. This enables you to remove the instance from service, troubleshoot or make changes to it, and then put it back into service. Instances in a Standby state continue to be managed by the Auto Scaling group. However, they are not an active part of your application until you put them back into service. Refer link)
    EnteringStandby

  16. Q27: I believe C is also suitable, considering 5 instances run regularly. So launch 15 instances with scale up or down based on the network activity. Can you please justify if this is not the case? Thanks

    1. Option C keeps 15 instances always running and 5 for scale up/down, hence it's not cost effective.

  17. Please could you advice for the below question.

    An application is running on Amazon EC2 instances behind an Application Load Balancer. The
    Instances run in an auto scaling group across multiple Availability Zones. Four instances are
    required to handle a predictable traffic load. The Solutions Architect wants to ensure that
    the operation is fault-tolerant up to the loss of one Availability Zone.

    Which is the MOST cost-efficient way to meet these requirements?

    Deploy two instances in each of three Availability Zones.
    Deploy two instances in each of two Availability Zones.
    Deploy four instances in each of two Availability Zones.
    Deploy one instance in each of three Availability Zones.

    1. The cost-effective option is to deploy two instances in each of three Availability Zones, as it would need only 6 instances, and even if an AZ goes down you would still have 4 instances running.

  18. Your advice on below question please

    Two Auto Scaling applications, Application A and Application B, currently run within a shared set of subnets. A solutions architect wants to make sure that Application A can make requests to Application B, but Application B should be denied from making requests to Application A.

    Which is the SIMPLEST solution to achieve this policy?
    1) Using security groups that reference the security groups of the other application.
    2) Using security groups that reference the application servers’ IP addresses.
    3) Using Network Access Control Lists to allow/deny traffic based on application IP address.
    4) Migrating the applications to separate subnets from each other.

    1. Option 1 should work fine, using security groups for the instances launched by Auto Scaling.

      1. Won't option 1 allow Application B to talk back to Application A?

        So using the IP addresses specific to the servers would make App B unable to talk to App A, isn't it?

  19. Q#4: As per your statement below, it will error out if you try to change the health of the instance using the CLI. Isn't it so?

    For an unhealthy instance, the instance’s health check can be changed back to healthy manually but you will get an error if the instance is already terminating. Because the interval between marking an instance unhealthy and its actual termination is so small, attempting to set an instance’s health status back to healthy is probably useful only for a suspended group.

  20. Q#6: I think the answer should be Option A.

    Auto Scaling launches a new instance first and then terminates the unhealthy instance.

    The scaling process launches new instances in an unaffected Availability Zone before terminating the unhealthy or unavailable instances. AZRebalance terminates instances only after launching the replacement instances.

    Could you please enlighten me how Option D would be the right answer.

    1. Hi Anil,

      AZRebalance happens when a scale-in event happens. For a health check failure, it would terminate the instance and launch a new one.

      Refer AWS documentation – https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-maintain-instance-levels.html#replace-unhealthy-instance

      To maintain the same number of instances, Auto Scaling performs a periodic health check on running instances within an Auto Scaling group. When it finds that an instance is unhealthy, it terminates that instance and launches a new one.

  21. Question 33:
    > For AWS Auto Scaling, what is the first transition state an instance enters after leaving steady state when scaling in due to health check failure or decreased load?

    I cannot find any information in AWS docs that would confirm your answer “Terminating” is correct here – as per the asg lifecycle I would say that the correct answer should be “Pending” – I failed to find any evidence of going directly to Terminating upon leaving the Standby state when scale-in is active – according to my best understanding it first has to enter Pending state, then InService and then eventually Terminating state.
    If it indeed can go directly to “Terminating” state, please provide a link for docs that show that – otherwise I suggest double checking if Terminating is the right answer for this question.
