AWS EC2 Instance Lifecycle

EC2 Instance Lifecycle Overview

  • The EC2 instance lifecycle describes the states an instance transitions through, from the moment it is launched to its termination

EC2 Instance Lifecycle

Instance Launch

  •  Pending
    • When the instance is first launched, it enters the pending state
  • Running
    • After the instance is launched and ready, it enters the running state
    • Charges are incurred for each second the instance is running, with a one-minute minimum, even if the instance remains idle
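  • For reference, a minimal boto3 (Python) sketch of launching an instance and waiting for it to reach the running state; the region, AMI ID, and instance type below are placeholder values:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Launch a single instance; ImageId and InstanceType are hypothetical values
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]

# The instance starts in the pending state; wait until it is running,
# at which point per-second billing (one-minute minimum) starts
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
print(instance_id, "is running")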

Instance Start & Stop (EBS-backed instances only)

  • Only an EBS-backed instance can be stopped and started.
  • An instance store-backed instance cannot be stopped or started.
  • An instance can be stopped and started if it fails a status check or is not running as expected
  • Stop
    • After the instance is stopped, it enters the stopping state and then the stopped state.
    • Charges are incurred only for the EBS storage, not for instance usage or data transfer.
    • While the instance is stopped, its root volume can be treated like any other volume and modified, e.g. to repair file system problems or update software; instance attributes such as the instance type, user data, and EBS optimization can also be changed.
    • The root volume can be detached from the stopped instance, attached to a running instance, modified, detached from the running instance, and then reattached to the stopped instance. It must be reattached using the storage device name that is specified as the root device in the block device mapping for the instance.
  • Start
    • When the instance is started, it enters the pending state and then the running state
    • An instance, when stopped and started, is launched on a new host
    • Any data on instance store volumes (not the root volume) is lost, while data on EBS volumes persists
  • EC2 instance retains its private IP address as well as the Elastic IP address.
  • If the instance has an IPv6 address, it retains its IPv6 address.
  • However, a public IP address, if assigned instead of an Elastic IP address, is released when the instance is stopped
  • For each transition from stopped to running, per-second charges are incurred while the instance is running, with a one-minute minimum each time the instance is started
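  • A minimal boto3 sketch of the stop/start flow described above for an EBS-backed instance, using a hypothetical instance ID:

import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # hypothetical instance ID

# Stop the instance: it moves to the stopping and then the stopped state
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# While stopped, instance attributes such as the instance type can be changed
ec2.modify_instance_attribute(InstanceId=instance_id,
                              InstanceType={"Value": "t3.small"})

# Start the instance again: it moves to pending and then running, on a new host
ec2.start_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])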

Instance Hibernate

  • Instance hibernation signals the operating system to perform hibernation (suspend-to-disk), which saves the contents from the instance memory (RAM) to the EBS root volume
  • Instance’s EBS root volume and any attached EBS data volumes are persisted, including the saved contents of the RAM.
  • Any EC2 instance store volumes remain attached to the instance, but the data on the instance store volumes is lost.
  • After the instance is hibernated, it enters the stopping state and then the stopped state.
  • When the instance is started again after hibernation
    • It enters the pending state and the instance is moved to a new host computer (though in some cases, it remains on the current host).
    • EBS root volume is restored to its previous state
    • RAM contents are reloaded
    • Processes that were previously running on the instance are resumed
    • Previously attached data volumes are reattached and the instance retains its instance ID
    • Instance retains private IPv4 addresses and any IPv6 addresses
    • Instance retains its Elastic IP address
    • The instance releases its public IPv4 address and gets a new one when started
  • Hibernation prerequisites
    • Supported instance families – C3, C4, C5, M3, M4, M5, R3, R4, R5, & T2
    • Instance RAM size – must be less than 150 GB.
    • Instance size – not supported for bare metal instances.
    • AMI – must be an HVM AMI that supports hibernation
    • Root volume type – must be EBS volume and not instance store
    • EBS root volume size – must be large enough to store the RAM contents
    • EBS root volume MUST be encrypted to ensure the protection of sensitive content that is in memory at the time of hibernation
    • Enable hibernation at launch, as changing it is not supported on an existing instance
    • Purchasing options – Only On-Demand Instances and Reserved Instances supported
  • Limitations or Unsupported Actions
    • Changing the instance type or size of a hibernated instance
    • Creating snapshots or AMIs from hibernated instances or instances for which hibernation is enabled
    • Data on any instance store volumes is lost
    • An instance that has more than 150 GB of RAM cannot be hibernated.
    • An instance that is in an Auto Scaling group or used by ECS cannot be hibernated. If an instance in an Auto Scaling group is hibernated, the EC2 Auto Scaling service marks the stopped instance as unhealthy, and may terminate it and launch a replacement instance.
    • An instance cannot be hibernated for more than 60 days.
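  • A boto3 sketch of the hibernation flow, assuming a hibernation-capable HVM AMI with an encrypted EBS root volume; the AMI ID, instance type, and device name are placeholder values:

import boto3

ec2 = boto3.client("ec2")

# Hibernation must be enabled at launch; it cannot be enabled on an existing instance
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # hypothetical, hibernation-capable AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    HibernationOptions={"Configured": True},
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvda",          # assumed root device name
        # Root volume must be encrypted and large enough to hold the RAM contents
        "Ebs": {"Encrypted": True, "VolumeSize": 30},
    }],
)
instance_id = resp["Instances"][0]["InstanceId"]

# Later: hibernate instead of a plain stop
ec2.stop_instances(InstanceIds=[instance_id], Hibernate=True)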

Instance Reboot

  • Both EBS-backed and Instance store-backed instances can be rebooted
  • The instance remains on the same host computer and maintains its public DNS name and private IP address
  • Data on the EBS and instance store volumes is also retained
  • AWS recommends rebooting the instance through the EC2 console, CLI, or API rather than running the operating system reboot command from within the instance, as EC2 performs a hard reboot if the instance does not cleanly shut down within four minutes and also creates an API record in CloudTrail, if enabled.
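  • Rebooting through the EC2 API is a single call, sketched below with a hypothetical instance ID:

import boto3

ec2 = boto3.client("ec2")

# Requests a reboot; EC2 performs a hard reboot if the OS does not shut down
# cleanly within four minutes, and the call is recorded in CloudTrail if enabled
ec2.reboot_instances(InstanceIds=["i-0123456789abcdef0"])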

Instance Retirement

  • An instance is scheduled to be retired when AWS detects an irreparable failure of the underlying hardware hosting the instance.
  • When an instance reaches its scheduled retirement date, it is stopped or terminated by AWS.
  • If the instance root device is an EBS volume, the instance is stopped and can be started again at any time.
  • If the instance root device is an instance store volume, the instance is terminated, and cannot be used again.
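  • Scheduled retirement appears as a scheduled event on the instance status; a minimal boto3 sketch to list such events, using a hypothetical instance ID:

import boto3

ec2 = boto3.client("ec2")

status = ec2.describe_instance_status(
    InstanceIds=["i-0123456789abcdef0"],
    IncludeAllInstances=True,
)
for s in status["InstanceStatuses"]:
    for event in s.get("Events", []):
        # e.g. Code "instance-retirement" or "instance-stop" with a NotBefore date
        print(s["InstanceId"], event["Code"], event.get("NotBefore"))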

Instance Termination

  • When an instance is terminated, it enters the shutting-down state and then the terminated state
  • After an instance is terminated, it can't be connected to and no charges are incurred for it
  • Instance Shutdown behavior
    • Each EBS-backed instance supports the InstanceInitiatedShutdownBehavior attribute, which determines whether the instance is stopped or terminated when a shutdown command (e.g. shutdown, halt, or poweroff on Linux) is initiated from within the instance (see the sketch after this list)
    • The default behavior is for the instance to be stopped.
    • A shutdown command for an Instance store-backed instance will always terminate the instance
  • Termination protection
    • Termination protection (DisableApiTermination attribute) can be enabled on the instance to prevent it from being accidentally terminated
    • The DisableApiTermination attribute can be set from the Console, CLI, or API.
    • The instance can be terminated only after the DisableApiTermination attribute is disabled.
    • Termination protection does not work for instances when
      • the instance is part of an Auto Scaling group
      • the instance is launched as a Spot instance
      • termination is initiated by shutdown from within the instance
  • Data persistence
    • Each EBS volume has a DeleteOnTermination attribute, which determines whether the volume is persisted or deleted when the instance it is attached to is terminated
    • Data on instance store volumes does not persist
    • The default is to delete the root device volume and preserve any other attached EBS volumes, i.e.
      • EBS root volumes have the DeleteOnTermination flag set to true and are deleted by default
      • Additional attached EBS volumes have the DeleteOnTermination flag set to false and are not deleted, just detached from the instance
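  • A boto3 sketch of the three attributes discussed above (termination protection, shutdown behavior, and DeleteOnTermination), using a hypothetical instance ID and device name:

import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # hypothetical instance ID

# Enable termination protection (DisableApiTermination)
ec2.modify_instance_attribute(InstanceId=instance_id,
                              DisableApiTermination={"Value": True})

# Make an OS-initiated shutdown stop the instance instead of terminating it
ec2.modify_instance_attribute(InstanceId=instance_id,
                              InstanceInitiatedShutdownBehavior={"Value": "stop"})

# Preserve the root EBS volume when the instance is terminated
ec2.modify_instance_attribute(InstanceId=instance_id,
                              BlockDeviceMappings=[{
                                  "DeviceName": "/dev/xvda",  # assumed root device name
                                  "Ebs": {"DeleteOnTermination": False},
                              }])

# terminate_instances would now fail until DisableApiTermination is set back to False
# ec2.terminate_instances(InstanceIds=[instance_id])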

EC2 Instance Lifecycle States and Billing
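
  • In brief, instance usage is billed only in the running state; the pending, stopping, shutting-down, stopped, and terminated states incur no instance usage charges, with two caveats: a stopping instance that is preparing to hibernate is still billed, and a stopped instance continues to incur EBS storage (and Elastic IP) charges.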

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated
  • Open to further feedback, discussion and correction.
  1. What does Amazon EC2 provide?
    1. Virtual servers in the Cloud
    2. A platform to run code (Java, PHP, Python), paying on an hourly basis.
    3. Computer Clusters in the Cloud.
    4. Physical servers, remotely managed by the customer.
  2. A user has enabled termination protection on an EC2 instance. The user has also set Instance initiated shutdown behavior to terminate. When the user shuts down the instance from the OS, what will happen?
    1. The OS will shutdown but the instance will not be terminated due to protection
    2. It will terminate the instance
    3. It will not allow the user to shutdown the instance from the OS
    4. It is not possible to set the termination protection when an Instance initiated shutdown is set to Terminate
  3. A user has launched an EC2 instance and deployed a production application in it. The user wants to prohibit any mistakes from the production team to avoid accidental termination. How can the user achieve this?
    1. The user can the set DisableApiTermination attribute to avoid accidental termination
    2. It is not possible to avoid accidental termination
    3. The user can set the Deletion termination flag to avoid accidental termination
    4. The user can set the InstanceInitiatedShutdownBehavior flag to avoid accidental termination
  4. You have been doing a lot of testing of your VPC Network by deliberately failing EC2 instances to test whether instances are failing over properly. Your customer, who will be paying the AWS bill for all this, asks you if he is being charged for all these instances. You try to explain to him how the billing works on EC2 instances to the best of your knowledge. What would be an appropriate response to give to the customer in regards to this?
    1. Billing commences when Amazon EC2 AMI instance is completely up and billing ends as soon as the instance starts to shutdown.
    2. Billing commences when Amazon EC2 initiates the boot sequence of an AMI instance and billing ends when the instance shuts down.
    3. Billing only commences only after 1 hour of uptime and billing ends when the instance terminates.
    4. Billing commences when Amazon EC2 initiates the boot sequence of an AMI instance and billing ends as soon as the instance starts to shutdown.

AWS S3 Object Lifecycle Management

S3 Object Lifecycle Management

  • S3 Object lifecycle can be managed by using a lifecycle configuration, which defines how S3 manages objects during their lifetime.
  • Lifecycle configuration simplifies object lifecycle management, e.g. moving less frequently accessed objects, backing up or archiving data for several years, or permanently deleting objects
  • S3 controls all transitions automatically
  • Lifecycle Management rules applied to a bucket are applicable to all the existing objects in the bucket as well as the ones added later
  • S3 Object lifecycle management allows 2 types of behavior
    • Transition in which the storage class for the objects changes
    • Expiration where the objects expire and are permanently deleted
  • Lifecycle Management can be configured with Versioning, which allows storage of one current object version and zero or more non-current object versions
  • Lifecycle management applies to both non-versioned and versioning-enabled buckets
  • For non-versioned buckets
    • The transition period is measured from the object’s creation date
  • For versioned buckets
    • The transition period for the current object version is measured from the object creation date
    • The transition period for a noncurrent object version is measured from the date the object became noncurrent
    • S3 uses the number of days since its successor was created as the number of days an object is noncurrent.
  • S3 calculates the time by adding the number of days specified in the rule to the object creation time and rounding the result up to midnight UTC of the next day, e.g. if an object was created on 15/1/2016 at 10:30 AM UTC and the transition rule specifies 3 days, the result is 18/1/2016 10:30 AM UTC, rounded up to the next midnight, 19/1/2016 00:00 UTC.
  • Lifecycle configuration on MFA-enabled buckets is not supported.
  • 1000 lifecycle rules can be configured per bucket
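  • As an illustration, a minimal boto3 sketch of a lifecycle configuration with transition and expiration rules; the bucket name and key prefix are placeholder values:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",                # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},  # hypothetical key prefix
            # Transition actions change the storage class
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            # Expiration action permanently deletes the objects
            "Expiration": {"Days": 365},
        }],
    },
)

# Review the rules currently applied to the bucket
print(s3.get_bucket_lifecycle_configuration(Bucket="example-bucket")["Rules"])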

S3 Object Lifecycle Management Rules

Lifecycle Transitions Constraints

  1. STANDARD -> (128 KB & 30 days) -> STANDARD-IA or One Zone-IA or S3 Intelligent-Tiering
    • Larger Objects – Only objects with a size more than 128 KB can be transitioned, as cost benefits for transitioning to STANDARD-IA or One Zone-IA can be realized only for larger objects
    • Smaller Objects < 128 KB – S3 does not transition objects that are smaller than 128 KB
    • Minimum 30 days – Objects must be stored for at least 30 days in the current storage class before being transitioned to the STANDARD-IA or One Zone-IA, as younger objects are accessed more frequently or deleted sooner than is suitable for STANDARD-IA or One Zone-IA
  2. GLACIER -> (90 days) -> Permanent Deletion OR GLACIER Deep Archive -> (180 days) -> Permanent Deletion
    • Deleting data that is archived to Glacier is free if the objects deleted are archived for three months or longer.
    • S3 charges a prorated early deletion fee if the object is deleted or overwritten within three months of archiving it.
  3. Archival of objects to Glacier by using object lifecycle management is performed asynchronously and there may be a delay between the transition date in the lifecycle configuration rule and the date of the physical transition. However, AWS charges Glacier prices based on the transition date specified in the rule
  4. For a versioning-enabled bucket
    • Transition and Expiration actions apply to current versions.
    • NoncurrentVersionTransition and NoncurrentVersionExpiration actions apply to noncurrent versions and work similarly to the non-versioned objects except the time period is from the time the objects became noncurrent
  5. Expiration Rules
    • For Non Versioned bucket
      • Object is permanently deleted
    • For Versioned bucket
      • Expiration is applicable to the Current object only and does not impact any of the non-current objects
      • S3 will insert a Delete Marker object with a unique id and the previous current object becomes a non-current version
      • S3 will not take any action if the Current object is a Delete Marker
      • If the bucket has a single object which is the Delete Marker (referred to as expired object delete marker), S3 removes the Delete Marker
    • For Versioning-suspended bucket
      • S3 will insert a Delete Marker object with version ID null and overwrite any object with version ID null
  6. When an object reaches the end of its lifetime, S3 queues it for removal and removes it asynchronously. There may be a delay between the expiration date and the date at which S3 removes an object. Charges for storage time associated with an object that has expired are not incurred.
  7. An early deletion cost is incurred if objects are expired from STANDARD-IA before 30 days, GLACIER before 90 days, or GLACIER_DEEP_ARCHIVE before 180 days.
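For a versioning-enabled bucket, the current-version and noncurrent-version actions described above map to separate rule elements; a hedged boto3 sketch with a placeholder bucket name:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-versioned-bucket",       # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "current-and-noncurrent",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},        # empty prefix applies to all objects
            # Current versions: transition after 30 days, expire after 365 days
            # (expiration inserts a delete marker, making the object noncurrent)
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            "Expiration": {"Days": 365},
            # Noncurrent versions: days are counted from when the version became noncurrent
            "NoncurrentVersionTransitions": [{"NoncurrentDays": 30,
                                              "StorageClass": "GLACIER"}],
            "NoncurrentVersionExpiration": {"NoncurrentDays": 180},
        }],
    },
)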

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated
  • Open to further feedback, discussion and correction.
  1. If an object is stored in the Standard S3 storage class and you want to move it to Glacier, what must you do in order to properly migrate it?
    1. Change the storage class directly on the object.
    2. Delete the object and re-upload it, selecting Glacier as the storage class.
    3. None of the above.
    4. Create a lifecycle policy that will migrate it after a minimum of 30 days. (Any object uploaded to S3 must first be placed into either the Standard, Reduced Redundancy, or Infrequent Access storage class. Once in S3 the only way to move the object to glacier is through a lifecycle policy)
  2. A company wants to store their documents in AWS. Initially, these documents will be used frequently, and after a duration of 6 months, they would not be needed anymore. How would you architect this requirement?
    1. Store the files in Amazon EBS and create a Lifecycle Policy to remove the files after 6 months.
    2. Store the files in Amazon S3 and create a Lifecycle Policy to remove the files after 6 months.
    3. Store the files in Amazon Glacier and create a Lifecycle Policy to remove the files after 6 months.
    4. Store the files in Amazon EFS and create a Lifecycle Policy to remove the files after 6 months.
  3. Your firm has uploaded a large amount of aerial image data to S3. In the past, in your on-premises environment, you used a dedicated group of servers to often process this data and used Rabbit MQ, an open source messaging system, to get job information to the servers. Once processed, the data would go to tape and be shipped offsite. Your manager told you to stay with the current design, and leverage AWS archival storage and messaging services to minimize cost. Which is correct?
    1. Use SQS for passing job messages, use Cloud Watch alarms to terminate EC2 worker instances when they become idle. Once data is processed, change the storage class of the S3 objects to Reduced Redundancy Storage (Need to replace On-Premises Tape functionality)
    2. Setup Auto-Scaled workers triggered by queue depth that use spot instances to process messages in SQS. Once data is processed, change the storage class of the S3 objects to Reduced Redundancy Storage (Need to replace On-Premises Tape functionality)
    3. Setup Auto-Scaled workers triggered by queue depth that use spot instances to process messages in SQS. Once data is processed, change the storage class of the S3 objects to Glacier (Glacier suitable for Tape backup)
    4. Use SNS to pass job messages use Cloud Watch alarms to terminate spot worker instances when they become idle. Once data is processed, change the storage class of the S3 object to Glacier.
  4. You have a proprietary data store on-premises that must be backed up daily by dumping the data store contents to a single compressed 50GB file and sending the file to AWS. Your SLAs state that any dump file backed up within the past 7 days can be retrieved within 2 hours. Your compliance department has stated that all data must be held indefinitely. The time required to restore the data store from a backup is approximately 1 hour. Your on-premise network connection is capable of sustaining 1gbps to AWS. Which backup methods to AWS would be most cost-effective while still meeting all of your requirements?
    1. Send the daily backup files to Glacier immediately after being generated (will not meet the RTO)
    2. Transfer the daily backup files to an EBS volume in AWS and take daily snapshots of the volume (Not cost effective)
    3. Transfer the daily backup files to S3 and use appropriate bucket lifecycle policies to send to Glacier (Store in S3 for seven days and then archive to Glacier)
    4. Host the backup files on a Storage Gateway with Gateway-Cached Volumes and take daily snapshots (Not Cost-effective as local storage as well as S3 storage)
