AWS EC2 Dedicated Host vs Dedicated Instances

EC2 Dedicated Host vs Dedicated Instances

  • Each instance launched into a VPC has a tenancy attribute.
    • default
      • is the default option
      • instances run on shared hardware.
      • all instances launched would be shared, unless you explicitly specify a different tenancy during the instance launch (see the launch sketch after this list).
    • dedicated
      • instance runs on single-tenant hardware.
      • all instances launched would be dedicated
      • can’t be changed to default after creation
    • host
      • instance runs on a Dedicated Host, which is an isolated server with configurations that you can control.
  • default tenancy can’t be changed to dedicated or host, and vice versa. Tenancy changes take effect the next time the instance starts.
  • dedicated tenancy can be changed to host and vice versa, but only while the instance is stopped.
  • Dedicated Hosts and Dedicated Instances can both be used to launch EC2 instances onto physical servers that are dedicated for your use.
  • There are no performance, security, or physical differences between Dedicated Instances and instances on Dedicated Hosts.
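
A minimal boto3 sketch of setting the tenancy attribute at launch; the AMI and subnet IDs are placeholders, not real resources:

```python
# Hedged sketch: launch a Dedicated Instance by setting the Placement
# tenancy at launch time. AMI and subnet IDs are placeholders.
import boto3

ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",  # placeholder subnet
    Placement={"Tenancy": "dedicated"},   # 'default' | 'dedicated' | 'host'
)
print(response["Instances"][0]["InstanceId"])
```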

Dedicated Hosts

  • EC2 Dedicated Host is a physical server with EC2 instance capacity fully dedicated to your use (see the allocation sketch after this list).
  • provides Affinity that allows you to specify which Dedicated Host an instance will run on after it has been stopped and restarted.
  • Dedicated Hosts provide visibility and the option to control how you place your instances on a specific, physical server. This enables you to deploy instances using configurations that help address corporate compliance and regulatory requirements.
  • Dedicated Hosts allow using existing per-socket, per-core, or per-VM software licenses, including Windows Server, Microsoft SQL Server, and SUSE Linux Enterprise Server.
  • Dedicated Host is also integrated with AWS License Manager, a service that helps you manage your software licenses, including Microsoft Windows Server and Microsoft SQL Server licenses.
  • RDS instances are not supported.
  • Dedicated Hosts cannot be launched in placement groups
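
As a rough illustration, a Dedicated Host can be allocated with boto3 and an instance targeted onto it; the AZ, instance type, and AMI below are placeholder choices:

```python
# Hedged sketch: allocate one Dedicated Host, then launch an instance onto it.
import boto3

ec2 = boto3.client("ec2")

host = ec2.allocate_hosts(
    AvailabilityZone="us-east-1a",  # placeholder AZ
    InstanceType="m5.large",
    Quantity=1,
    AutoPlacement="on",             # allow untargeted host-tenancy launches
)
host_id = host["HostIds"][0]

# Target the host explicitly; Affinity='host' keeps the instance on this
# host across stop/start cycles.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    Placement={"HostId": host_id, "Tenancy": "host", "Affinity": "host"},
)
```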

Dedicated Instances

  • Dedicated Instances are EC2 instances that run in a VPC on hardware that’s dedicated to a single customer
  • Dedicated Instances are physically isolated at the host hardware level from the instances that aren’t Dedicated Instances and from instances that belong to other AWS accounts.
  • Dedicated Instances can be launched in two ways (a VPC-tenancy sketch follows this list):
    • Create the VPC with the instance tenancy set to dedicated; all instances launched into this VPC are Dedicated Instances, even if you specify default tenancy at launch.
    • Create the VPC with the instance tenancy set to default, and specify dedicated tenancy for any instances that should be Dedicated Instances when launched.
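
A brief sketch of the first option, creating a dedicated-tenancy VPC (the CIDR is a placeholder):

```python
# Hedged sketch: a VPC created with dedicated instance tenancy forces every
# instance launched into it to run as a Dedicated Instance.
import boto3

ec2 = boto3.client("ec2")

vpc = ec2.create_vpc(
    CidrBlock="10.0.0.0/16",      # placeholder CIDR
    InstanceTenancy="dedicated",  # 'default' | 'dedicated'
)
print(vpc["Vpc"]["VpcId"])

# The VPC tenancy can later be relaxed back to default (but not the reverse):
# ec2.modify_vpc_tenancy(VpcId=vpc["Vpc"]["VpcId"], InstanceTenancy="default")
```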

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A company wants its instances to run on single-tenant hardware for compliance reasons. Which value should they set the instance’s tenancy attribute to?
    1. Dedicated
    2. Isolated
    3. Default
    4. Reserved
  2. A company is performing migration from on-premises to AWS cloud. They have a compliance requirement for application hosting on physical servers to be able to use existing server-bound software licenses. Which AWS EC2 purchase type would help fulfill the requirement?
    1. Spot instances
    2. Reserved instances
    3. On-demand instances
    4. Dedicated Hosts

References

EC2_Dedicated_Hosts_vs_Dedicated_Instances

AWS RDS Proxy

RDS Proxy

  • fully managed, highly available database proxy for RDS that makes applications more secure, more scalable, and more resilient to database failures.
  • allows apps to pool and share DB connections established with the database
  • improves database efficiency by reducing stress on the database resources (e.g. CPU, RAM) by minimizing open connections and creation of new connections.
  • is serverless and scales automatically to accommodate your workload.
  • is highly available and deployed across multiple Availability Zones.
  • increases resiliency to database failures by automatically connecting to a standby DB instance while preserving application connections.
  • reduces RDS and Aurora failover time by up to 66%.
  • protects the database against oversubscription by providing control over the number of database connections that are created.
  • queues or throttles application connections that can’t be served immediately from the pool of connections.
  • supports RDS (MySQL, PostgreSQL, MariaDB) and Aurora
  • is fully managed and there is no need to provision or manage any additional infrastructure.
  • requires no code changes for most apps; just point to the RDS Proxy endpoint instead of the RDS endpoint (see the connection sketch after this list)
  • can enforce IAM authentication for the database and securely store credentials in AWS Secrets Manager
  • is never publicly accessible (must be accessed from within a VPC)
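
A minimal sketch of pointing an app at the proxy endpoint with IAM authentication; the endpoint, user name, and the pymysql driver are assumptions for illustration:

```python
# Hedged sketch: connect through an RDS Proxy endpoint using a short-lived
# IAM auth token instead of a static password. Endpoint/user are placeholders.
import boto3
import pymysql  # assumed MySQL driver; any client library works

proxy_endpoint = "my-proxy.proxy-abc123.us-east-1.rds.amazonaws.com"

rds = boto3.client("rds", region_name="us-east-1")
token = rds.generate_db_auth_token(
    DBHostname=proxy_endpoint,
    Port=3306,
    DBUsername="app_user",
)

conn = pymysql.connect(
    host=proxy_endpoint,
    user="app_user",
    password=token,                            # short-lived IAM token
    ssl={"ca": "/path/to/AmazonRootCA1.pem"},  # TLS is required for IAM auth
)
```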

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A company migrated one of its business-critical database workloads to an Amazon Aurora Multi-AZ DB cluster. The company requires a very low RTO and needs to improve the application recovery time after database failover. Which approach meets these requirements?
    1. Set the max_connections parameter to 16,000 in the instance-level parameter group.
    2. Modify the client connection timeout to 300 seconds.
    3. Create an Amazon RDS Proxy database proxy and update client connections to point to the proxy endpoint.
    4. Enable the query cache at the instance level.
  2. A company is running a serverless application on AWS Lambda that stores data in an Amazon RDS for MySQL DB instance. Usage has steadily increased, and recently there have been numerous “too many connections” errors when the Lambda function attempts to connect to the database. The company already has configured the database to use the maximum max_connections value that is possible. What should a SysOps administrator do to resolve these errors?
    1. Create a read replica of the database. Use Amazon Route 53 to create a weighted DNS record that contains both databases.
    2. Use Amazon RDS Proxy to create a proxy. Update the connection string in the Lambda function.
    3. Increase the value in the max_connect_errors parameter in the parameter group that the database uses.
    4. Update the Lambda function’s reserved concurrency to a higher value.

References

Amazon_RDS_Proxy

AWS EC2 Troubleshooting

An Instance Immediately Terminates

  • EBS volume limit was reached. It’s a soft limit and can be increased by submitting a support request.
  • EBS snapshot is corrupt.
  • Root EBS volume is encrypted and you do not have permission to access the KMS key for decryption.
  • Instance store-backed AMI used to launch the instance is missing a required part
  • Resolution
    • Delete unused volumes
    • Ensure proper permissions to access the KMS keys.
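
When an instance terminates immediately, the reason is surfaced via DescribeInstances; a small boto3 sketch (the instance ID is a placeholder):

```python
# Hedged sketch: read the state transition reason for a terminated instance
# (shown in the console under "State transition reason").
import boto3

ec2 = boto3.client("ec2")

resp = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])  # placeholder
inst = resp["Reservations"][0]["Instances"][0]
print(inst["State"]["Name"])
print(inst.get("StateTransitionReason"))
print(inst.get("StateReason", {}).get("Message"))
```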

EC2 Instance Connectivity Issues

  • Error connecting to your instance: Connection timed out
    • Route table, for the subnet, does not have a route that sends all traffic destined outside the VPC to the Internet gateway for the VPC.
    • Security group does not allow inbound traffic from the public IP address on the proper port
    • ACL does not allow inbound traffic from and outbound traffic to the public IP address on the proper port
    • Private key used to connect does not match with key that corresponds to the key pair selected for the instance during the launch
    • Appropriate user name for the AMI is not used, e.g. the user name for the Amazon Linux AMI is ec2-user, for the Ubuntu AMI it is ubuntu, for RHEL5 and SUSE Linux it can be either root or ec2-user, and for the Fedora AMI it can be fedora or ec2-user.
    • If connecting from a corporate network, the internal firewall does not allow inbound and outbound traffic on port 22 (for Linux instances) or port 3389 (for Windows instances).
    • Instance no longer has the same public IP address, as it changes on stop/start. Associate an Elastic IP address with the instance.
    • CPU load on the instance is high; the server may be overloaded.
  • User key not recognized by the server
    • private key file used to connect has not been converted to the format as required by the server
  • Host key not found, Permission denied (publickey), or Authentication failed, permission denied
    • appropriate user name for the AMI is not used for connecting
    • proper private key file for the instance is not used
  • Unprotected Private Key File
    • private key file is not protected from read and write operations by other users.
  • Server refused our key or No supported authentication methods available
    • appropriate user name for the AMI is not used for connecting

Failed Status Checks

  • System Status Check – Checks the Physical Host
    • Lost Network connectivity
    • Loss of System power
    • Software issues on the physical host
    • Hardware issues on the physical host
    • Resolution
      • Amazon EBS-backed AMI instance – stop and restart the instance
      • Instance-store backed AMI – terminate the instance and launch a replacement.
  • Instance Status Check – Checks Instance or VM
    • Possible reasons
      • Misconfigured networking or startup configuration
      • Exhausted memory
      • Corrupted file system
      • Failed Amazon EBS volume or Physical drive
      • Incompatible kernel
    • Resolution
      • Reboot the instance or make modifications to the operating system or volumes (see the status-check sketch below).
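
Both checks can be inspected programmatically; a sketch with a placeholder instance ID:

```python
# Hedged sketch: SystemStatus reflects the physical host, InstanceStatus
# reflects the VM/OS level.
import boto3

ec2 = boto3.client("ec2")

resp = ec2.describe_instance_status(
    InstanceIds=["i-0123456789abcdef0"],  # placeholder
    IncludeAllInstances=True,             # also report non-running instances
)
for status in resp["InstanceStatuses"]:
    print(status["SystemStatus"]["Status"], status["InstanceStatus"]["Status"])
```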

Instance Capacity Issues

  • InsufficientInstanceCapacity
    • AWS does not currently have enough available capacity to service the request.
    • There is a limit to the number of instances of an instance type that can be launched within a region.
    • Issue is mainly on the AWS side and it can be resolved by (see the retry sketch after this list)
      • reducing the request for the number of instances
      • changing the instance type
      • submitting a request without specifying the Availability Zone.
  • InstanceLimitExceeded
    • Concurrent running instance limit, default is 20, has been reached in a region.
    • Request an instance limit increase on a per-region basis
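
A rough sketch of handling InsufficientInstanceCapacity by retrying without pinning the AZ (the AMI ID is a placeholder):

```python
# Hedged sketch: catch InsufficientInstanceCapacity and retry the launch
# without a specific AZ so EC2 can pick one with available capacity.
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2")

params = dict(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="c5.large",
    MinCount=1,
    MaxCount=1,
    Placement={"AvailabilityZone": "us-east-1a"},
)

try:
    ec2.run_instances(**params)
except ClientError as err:
    if err.response["Error"]["Code"] == "InsufficientInstanceCapacity":
        params.pop("Placement")  # let EC2 choose the AZ
        ec2.run_instances(**params)
    else:
        raise
```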

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A user has launched an EC2 instance. The instance got terminated as soon as it was launched. Which of the below mentioned options is not a possible reason for this?
    1. The user account has reached the maximum EC2 instance limit (Refer link)
    2. The snapshot is corrupt
    3. The AMI is missing. It is the required part
    4. The user account has reached the maximum volume limit
  2. If you’re unable to connect via SSH to your EC2 instance, which of the following should you check and possibly correct to restore connectivity?
    1. Adjust Security Group to permit egress traffic over TCP port 443 from your IP.
    2. Configure the IAM role to permit changes to security group settings.
    3. Modify the instance security group to allow ingress of ICMP packets from your IP.
    4. Adjust the instance’s Security Group to permit ingress traffic over port 22 from your IP
    5. Apply the most recently released Operating System security patches.
  3. You try to connect via SSH to a newly created Amazon EC2 instance and get one of the following error messages: “Network error: Connection timed out” or “Error connecting to [instance], reason: -> Connection timed out: connect,” You have confirmed that the network and security group rules are configured correctly and the instance is passing status checks. What steps should you take to identify the source of the behavior? Choose 2 answers
    1. Verify that the private key file corresponds to the Amazon EC2 key pair assigned at launch.
    2. Verify that your IAM user policy has permission to launch Amazon EC2 instances. (there is no need for an IAM user; you just need SSH keys)
    3. Verify that you are connecting with the appropriate user name for your AMI. (although it gives a different error, this seems the only other logical choice)
    4. Verify that the Amazon EC2 Instance was launched with the proper IAM role. (the role assigned to EC2 is irrelevant for SSH and only controls which AWS resources the instance can access)
    5. Verify that your federation trust to AWS has been established (federation is for authenticating the user)
  4. A user has launched an EBS backed EC2 instance in the us-east-1a region. The user stopped the instance and started it back after 20 days. AWS throws up an ‘Insufficient Instance Capacity’ error. What can be the possible reason for this?
    1. AWS does not have sufficient capacity in that availability zone
    2. AWS zone mapping is changed for that user account
    3. There is some issue with the host capacity on which the instance is launched
    4. The user account has reached the maximum EC2 instance limit
  5. A user is trying to connect to a running EC2 instance using SSH. However, the user gets an Unprotected Private Key File error. Which of the below mentioned options can be a possible reason for rejection?
    1. The private key file has the wrong file permission
    2. The ppk file used for SSH is read only
    3. The public key file has the wrong permission
    4. The user has provided the wrong user name for the OS login
  6. A user has launched an EC2 instance. However, due to some reason the instance was terminated. If the user wants to find out the reason for termination, where can he find the details?
    1. It is not possible to find the details after the instance is terminated
    2. The user can get information from the AWS console, by checking the Instance description under the State transition reason label
    3. The user can get information from the AWS console, by checking the Instance description under the Instance Status Change reason label
    4. The user can get information from the AWS console, by checking the Instance description under the Instance Termination reason label
  7. You have a Linux EC2 web server instance running inside a VPC. The instance is in a public subnet and has an EIP associated with it so you can connect to it over the Internet via HTTP or SSH. The instance was also fully accessible when you last logged in via SSH and was also serving web requests on port 80. Now you are not able to SSH into the host nor does it respond to web requests on port 80, which were working fine last time you checked. You have double-checked that all networking configuration parameters (security groups, route tables, IGW, EIP, NACLs, etc.) are properly configured and you haven’t made any changes to those since you were last able to reach the instance. You look at the EC2 console and notice that the system status check shows “impaired.” Which should be your next step in troubleshooting and attempting to get the instance back to a healthy state so that you can log in again?
    1. Stop and start the instance so that it will be able to be redeployed on a healthy host system that most likely will fix the “impaired” system status (for an impaired system status check, stop and start an EBS-backed instance; terminate and relaunch an instance store-backed instance)
    2. Reboot your instance so that the operating system will have a chance to boot in a clean healthy state that most likely will fix the ‘impaired” system status
    3. Add another dynamic private IP address to the instance and try to connect via that new path, since the networking stack of the OS may be locked up causing the “impaired” system status.
    4. Add another Elastic Network Interface to the instance and try to connect via that new path since the networking stack of the OS may be locked up causing the “impaired” system status
    5. un-map and then re-map the EIP to the instance, since the IGW/NAT gateway may not be working properly, causing the “impaired” system status
  8. A user is trying to connect to a running EC2 instance using SSH. However, the user gets a connection time out error. Which of the below mentioned options is not a possible reason for rejection?
    1. The access key to connect to the instance is wrong (access key is different from ssh private key)
    2. The security group is not configured properly
    3. The private key used to launch the instance is not correct
    4. The instance CPU is heavily loaded
  9. A user is trying to connect to a running EC2 instance using SSH. However, the user gets a Host key not found error. Which of the below mentioned options is a possible reason for rejection?
    1. The user has provided the wrong user name for the OS login
    2. The instance CPU is heavily loaded
    3. The security group is not configured properly
    4. The access key to connect to the instance is wrong (access key is different from ssh private key)

AWS EC2 – Placement Groups

EC2 Placement Groups

  • EC2 Placement groups determine how the instances are placed on the underlying hardware.
  • AWS now provides three types of placement groups (a creation sketch follows this list)
    • Cluster – clusters instances into a low-latency group in a single AZ
    • Partition – spreads instances across logical partitions, ensuring that instances in one partition do not share underlying hardware with instances in other partitions
    • Spread – strictly places a small group of instances across distinct underlying hardware to reduce correlated failures
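
A short boto3 sketch of creating a placement group and launching into it; the group name and AMI are placeholders:

```python
# Hedged sketch: create a cluster placement group and launch two instances
# into it. Strategy can be 'cluster', 'spread', or 'partition'.
import boto3

ec2 = boto3.client("ec2")

ec2.create_placement_group(GroupName="low-latency-pg", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="c5n.18xlarge",      # an enhanced-networking instance type
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "low-latency-pg"},
)
```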

Cluster Placement Groups

  • is a logical grouping of instances within a single Availability Zone
  • don’t span across Availability Zones
  • can span peered VPCs in the same Region
  • impacts High Availability as susceptible to hardware failures for the application
  • recommended for
    • applications that benefit from low network latency, high network throughput, or both.
    • when the majority of the network traffic is between the instances in the group.
  • To provide the lowest latency, and the highest packet-per-second network performance for the placement group, choose an instance type that supports enhanced networking
  • recommended to launch all group instances with the same instance type at the same time to ensure enough capacity
  • instances can be added later, but there are chances of encountering an insufficient capacity error
  • for moving an instance into the placement group,
    • create an AMI from the existing instance,
    • and then launch a new instance from the AMI into a placement group.
  • an instance still runs in the same placement group if stopped and started within the placement group.
  • in case of a capacity error, stop and start all of the instances in the placement group, and try the launch again. Restarting the instances may migrate them to hardware that has capacity for all requested instances
  • is only available within a single AZ either in the same VPC or peered VPCs
  • is more of a hint to AWS that the instances need to be launched physically close to each other
  • enables applications to participate in a low-latency, 10 Gbps network.

Partition Placement Groups

  • is a group of instances spread across partitions i.e. group of instances spread across racks.
  • Partitions are logical groupings of instances, where instances in one partition do not share underlying hardware with instances in other partitions.
  • EC2 divides each group into logical segments called partitions.
  • EC2 ensures that each partition within a placement group has its own set of racks. Each rack has its own network and power source.
  • No two partitions within a placement group share the same racks, allowing isolating the impact of a hardware failure within the application.
  • reduces the likelihood of correlated hardware failures for the application.
  • can have partitions in multiple Availability Zones in the same region
  • can have a maximum of seven partitions per Availability Zone
  • number of instances that can be launched into a partition placement group is limited only by the limits of the account.
  • can be used to spread deployment of large distributed and replicated workloads, such as HDFS, HBase, and Cassandra, across distinct hardware.
  • offer visibility into the partitions and the instances to partitions mapping can be seen. This information can be shared with topology-aware applications, such as HDFS, HBase, and Cassandra. These applications use this information to make intelligent data replication decisions for increasing data availability and durability.

Spread Placement Groups

  • is a group of instances that are each placed on distinct underlying hardware i.e. each instance on a distinct rack with each rack having its own network and power source.
  • recommended for applications that have a small number of critical instances that should be kept separate from each other.
  • reduces the risk of simultaneous failures that might occur when instances share the same underlying hardware.
  • provide access to distinct hardware, and are therefore suitable for mixing instance types or launching instances over time.
  • can span multiple Availability Zones in the same region.
  • can have a maximum of seven running instances per AZ per group
  • maximum number of instances = 1 instance per rack * 7 racks * number of AZs, e.g. in a Region with three AZs, a total of 21 instances in the group (seven per zone) can be launched
  • If the start or launch of an instance in a spread placement group fails because of insufficient unique hardware to fulfill the request, the request can be retried later as EC2 makes more distinct hardware available over time

Placement Group Rules and Limitations

  • Placement group names must be unique within the AWS account for the region.
  • Placement groups cannot be merged.
  • Instances cannot span multiple placement groups.
  • Instances with a tenancy of host (Dedicated Hosts) cannot be launched in placement groups.
  • Cluster Placement groups
    • can’t span multiple Availability Zones.
    • supported by specific instance types which support 10 Gigabit networking
    • maximum network throughput speed of traffic between two instances in a cluster placement group is limited by the slower of the two instances, so choose the instance type properly.
    • can use up to 10 Gbps for single-flow traffic.
    • Traffic to and from S3 buckets within the same region over the public IP address space or through a VPC endpoint can use all available instance aggregate bandwidth.
    • recommended to use the same instance type, i.e. homogeneous instance types. Multiple instance types can be launched into a cluster placement group; however, this reduces the likelihood that the required capacity will be available for the launch to succeed.
    • Network traffic to the internet and over an AWS Direct Connect connection to on-premises resources is limited to 5 Gbps.
  • Partition placement groups
    • supports a maximum of seven partitions per Availability Zone
    • Dedicated Instances can have a maximum of two partitions
    • are not supported for Dedicated Hosts
    • are currently only available through the API or AWS CLI.
  • Spread placement groups
    • supports a maximum of seven running instances per Availability Zone, e.g. in a region that has three AZs, a total of 21 running instances in the group (seven per zone).
    • are not supported for Dedicated Instances or Dedicated Hosts.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. What is a cluster placement group?
    • A collection of Auto Scaling groups in the same Region
    • Feature that enables EC2 instances to interact with each other via high bandwidth, low latency connections
    • A collection of Elastic Load Balancers in the same Region or Availability Zone
    • A collection of authorized Cloud Front edge locations for a distribution
  2. In order to optimize performance for a compute cluster that requires low inter-node latency, which feature in the following list should you use?
    • AWS Direct Connect
    • Cluster Placement Groups
    • VPC private subnets
    • EC2 Dedicated Instances
    • Multiple Availability Zones
  3. What is required to achieve gigabit network throughput on EC2? You already selected cluster-compute, 10GB instances with enhanced networking, and your workload is already network-bound, but you are not seeing 10 gigabit speeds.
    1. Enable biplex networking on your servers, so packets are non-blocking in both directions and there’s no switching overhead.
    2. Ensure the instances are in different VPCs so you don’t saturate the Internet Gateway on any one VPC.
    3. Select PIOPS for your drives and mount several, so you can provision sufficient disk throughput
    4. Use a Cluster placement group for your instances so the instances are physically near each other in the same Availability Zone. (You are not guaranteed 10 gigabit performance, except within a placement group. Using placement groups enables applications to participate in a low-latency, 10 Gbps network)
  4. You need the absolute highest possible network performance for a cluster computing application. You already selected homogeneous instance types supporting 10 gigabit enhanced networking, made sure that your workload was network bound, and put the instances in a placement group. What is the last optimization you can make?
    1. Use 9001 MTU instead of 1500 for Jumbo Frames, to raise packet body to packet overhead ratios. (For instances that are collocated inside a placement group, jumbo frames help to achieve the maximum network throughput possible, and they are recommended in this case)
    2. Segregate the instances into different peered VPCs while keeping them all in a placement group, so each one has its own Internet Gateway.
    3. Bake an AMI for the instances and relaunch, so the instances are fresh in the placement group and do not have noisy neighbors
    4. Turn off SYN/ACK on your TCP stack or begin using UDP for higher throughput.

References

EC2_User_Guide – Placement_Groups

AWS EC2 Storage

EC2 Storage Overview

Storage Types

Elastic Block Store – EBS

  • Elastic Block Store – EBS provides highly available, reliable, durable, block-level storage volumes that can be attached to an EC2 instance.
  • persists independently from the running life of an instance.
  • behaves like a raw, unformatted, external block device that can be attached to a single EC2 instance at a time.
  • is recommended for data that requires frequent and granular updates e.g. running a database or filesystem.
  • is Zonal and can be attached to any instance within the same Availability Zone and can be used like any other physical hard drive.
  • is particularly well-suited for use as the primary storage for file systems, databases, or any applications that require fine granular updates and access to raw, unformatted, block-level storage.

Instance Store Storage

  • Instance store provides temporary or Ephemeral block-level storage
  • is located on the disks that are physically attached to the host computer.
  • consists of one or more instance store volumes exposed as block devices.
  • The size of an instance store varies by instance type.
  • Virtual devices for instance store volumes are ephemeral[0-23], with the first one named ephemeral0 and so on.
  • While an instance store is dedicated to a particular instance, the disk subsystem is shared among instances on a host computer.
  • is ideal for temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers.
  • delivers very high random I/O performance and is a good option for storage with very low latency requirements, when you don’t need the data to persist after the instance terminates or when you can take advantage of fault-tolerant architectures.

Amazon EBS vs Instance Store

More details @ Comparison of EBS vs Instance Store

Simple Storage Service – S3

More details @ AWS S3

Elastic File Store – EFS

  • Elastic File Store – EFS provides a simple, fully managed, easy-to-set-up, scalable, serverless, and cost-optimized file storage
  • can automatically scale from gigabytes to petabytes of data without needing to provision storage.
  • provides managed NFS (network file system) that can be mounted on and accessed by multiple EC2 in multiple AZs simultaneously.
  • is highly durable, highly scalable, and highly available.
    • stores data redundantly across multiple AZs in the same region
    • grows and shrinks automatically as files are added and removed, so there is no need to manage storage procurement or provisioning.
  • supports the Network File System version 4 (NFSv4.1 and NFSv4.0) protocol.
  • provides file system access semantics, such as strong data consistency and file locking.
  • is compatible with all Linux-based AMIs for EC2; it is a POSIX-compliant file system (~Linux) with a standard file API.
  • is a shared POSIX file system for Linux systems and does not work with Windows.
  • offers the ability to encrypt data at rest using KMS and in transit.
  • can be accessed from on-premises using an AWS Direct Connect or AWS VPN connection between the on-premises datacenter and VPC.
  • can be accessed concurrently from servers in the on-premises data center as well as EC2 instances in the VPC.

Block Device Mapping

  • A block device is a storage device that moves data in sequences of bytes or bits (blocks), supports random access, and generally uses buffered I/O, e.g. hard disks, CD-ROMs, etc.
  • Block devices can be physically attached to a computer (like an instance store volume) or can be accessed remotely as if it was attached (like an EBS volume)
  • Block device mapping defines the block devices to be attached to an instance, which can be done either while creating an AMI or when an instance is launched
  • A block device must be mounted on the instance after being attached, before it can be accessed
  • When a block device is detached from an instance, it is unmounted by the operating system and you can no longer access the storage device.
  • Additional Instance store volumes can be attached only when the instance is launched while EBS volumes can be attached to a running instance.
  • Viewing the block device mapping for an instance only shows the EBS volumes and not the instance store volumes. Instance metadata can be used to query the complete block device mapping (see the sketch below).
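
A sketch of querying the full mapping from instance metadata, run on the instance itself (uses the IMDSv2 token flow):

```python
# Hedged sketch: query instance metadata (IMDSv2) from within the instance
# to see the complete block device mapping, including instance store volumes.
import urllib.request

BASE = "http://169.254.169.254/latest"

# IMDSv2 requires fetching a session token first.
token_req = urllib.request.Request(
    f"{BASE}/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req).read().decode()

req = urllib.request.Request(
    f"{BASE}/meta-data/block-device-mapping/",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(req).read().decode())
```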

Public Data Sets

  • Amazon Web Services provides a repository of public data sets that can be seamlessly integrated into AWS cloud-based applications.
  • Amazon stores the data sets at no charge to the community and, as with all AWS services, you pay only for the compute and storage you use for your own applications.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. When you view the block device mapping for your instance, you can see only the EBS volumes, not the instance store volumes.
    1. Depends on the instance type
    2. FALSE
    3. Depends on whether you use API call
    4. TRUE
  2. Amazon EC2 provides a repository of public data sets that can be seamlessly integrated into AWS cloud-based applications. What is the monthly charge for using the public data sets?
    1. A 1 time charge of 10$ for all the datasets.
    2. 1$ per dataset per month
    3. 10$ per month for all the datasets
    4. There is no charge for using the public data sets
  3. How many types of block devices does Amazon EC2 support?
    1. 2
    2. 4
    3. 3
    4. 1

AWS EC2 EBS Monitoring

EBS Monitoring

AWS supports EBS monitoring by automatically providing data, such as CloudWatch metrics and volume status checks, to help monitor EBS volumes.

CloudWatch Monitoring

  • CloudWatch metrics are statistical data that you can use to view, analyze, and set alarms on the operational behaviour of the EBS volumes
  • CloudWatch provides the below by default
    • Basic – data in 5-minute periods at no charge, which includes data from the root device volumes of EBS-backed instances
    • Detailed – Provisioned IOPS (SSD) volumes send one-minute metrics
  • EBS Metrics (a retrieval sketch follows this list)
    • VolumeReadBytes & VolumeWriteBytes
      • Provides information on the I/O operations in a specified period of time, in bytes
    • VolumeReadOps & VolumeWriteOps
      • Total number (count) of I/O operations in a specified period of time
    • VolumeTotalReadTime & VolumeTotalWriteTime
      • Total number of seconds spent by all operations that were completed in a specified period of time
    • VolumeIdleTime
      • Total number of seconds, in a specific period, when the volume was idle (no read and write operations)
    • VolumeQueueLength
      • Number of read and write operations, in a specific period, waiting to be completed
    • VolumeThroughputPercentage (Provisioned IOPS (SSD) volumes only)
      • Percentage of I/O operations per second (IOPS) delivered of the total IOPS provisioned
    • VolumeConsumedReadWriteOps (Provisioned IOPS (SSD) volumes only)
      • Total amount of read and write operations (normalized to 256K capacity units) consumed in a specified period of time
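
These metrics can be pulled with the CloudWatch API; a sketch with a placeholder volume ID:

```python
# Hedged sketch: fetch VolumeReadOps for an EBS volume over the last hour
# in the default 5-minute periods.
import boto3
from datetime import datetime, timedelta

cw = boto3.client("cloudwatch")

resp = cw.get_metric_statistics(
    Namespace="AWS/EBS",
    MetricName="VolumeReadOps",
    Dimensions=[{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],  # placeholder
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Sum"],
)
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"])
```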

Volume Status Checks Monitoring

  • Volume status checks are automated tests that run every 5 minutes and return a pass or fail status.
  • Volume check status
    • Ok – all the status checks passed
    • Impaired – if the status checks failed
    • Insufficient-Data – checks are still in progress
    • Warning – the I/O performance of the volume is below expectations
  • When EBS determines that a volume’s data is potentially inconsistent, it disables I/O to the EBS volume from the attached EC2 instance to prevent any data corruption. This causes the status check to fail and the volume status to be impaired. EBS waits for the I/O to be enabled, giving you an opportunity to perform consistency checks.
  • If the auto-disabling of I/O is not needed, it can be overridden by enabling the Auto-Enabled IO flag, which makes the EBS volume available automatically, immediately after the impaired status.
  • Events would be fired for notification whenever the I/O for an EBS volume is disabled
  • I/O performance status checks, applicable only for PIOPS (SSD) volumes, compare actual volume performance with the expected volume performance and alert if performing below expectations. Status check is performed every 1 min, however, is collected by CloudWatch every 5 mins.
  • While initializing Provisioned IOPS (SSD) volumes that were restored from snapshots, the performance of the volume may drop below 50 percent of its expected level, which causes the volume to display a warning state in the I/O Performance status check. This is expected and can be ignored.

Volume Events Monitoring

  • EBS generates events for volume status checks
  • Each event includes a start time that indicates the time at which the event occurred and a duration that indicates how long I/O for the volume was disabled
  • Event descriptions can be Awaiting Action (to enable I/O), IO Enabled, IO Auto-Enabled, or whether the status check resulted in Normal, Degraded, Severely Degraded, or Stalled status

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A user has configured CloudWatch monitoring on an EBS backed EC2 instance. If the user has not attached any additional device, which of the below mentioned metrics will always show a 0 value?
    1. DiskReadBytes
    2. NetworkIn
    3. NetworkOut
    4. CPUUtilization
  2. What does it mean if you have zero IOPS and a non-empty I/O queue for all EBS volumes attached to a running EC2 instance?
    1. The I/O queue is buffer flushing.
    2. Your EBS disk head(s) is/are seeking magnetic stripes.
    3. The EBS volume is unavailable. (EBS volumes are unavailable when all of the attached volumes perform zero read/write IO, with pending IO in the queue. Refer link)
    4. You need to re-mount the EBS volume in the OS.
  3. While performing the volume status checks, if the status is insufficient-data, what does it mean?
    1. checks may still be in progress on the volume
    2. check has passed
    3. check has failed

AWS Certified Security – Specialty (SCS-C01) Exam Learning Path

I recently re-certified AWS Certified Security – Specialty (SCS-C01) after first clearing it in 2019. The format and domains are pretty much the same; however, the exam has been enhanced to cover all the latest services.

The AWS Certified Security – Specialty (SCS-C01) exam focuses on the AWS Security and Compliance concepts. It basically validates

  • An understanding of specialized data classifications and AWS data protection mechanisms.
  • An understanding of data-encryption methods and AWS mechanisms to implement them.
  • An understanding of secure Internet protocols and AWS mechanisms to implement them.
  • A working knowledge of AWS security services and features of services to provide a secure production environment.
  • Competency gained from two or more years of production deployment experience using AWS security services and features.
  • The ability to make tradeoff decisions with regard to cost, security, and deployment complexity given a set of application requirements.
  • An understanding of security operations and risks.

Refer to AWS Certified Security – Speciality Exam Guide

AWS Certified Security – Speciality (SCS-C01) Exam Resources

AWS Certified Security – Specialty (SCS-C01) Exam Summary

  • AWS Certified Security – Specialty (SCS-C01) exam has 65 questions to be solved in 170 minutes and I made sure I utilized the complete time.
  • AWS Certified Security – Specialty (SCS-C01) exam focuses a lot on Security & Compliance concepts involving Data Encryption at rest or in transit, Data protection, Auditing, Compliance and regulatory requirements, and automated remediation.
  • Each question usually touches multiple AWS services.
  • Most of the questions and answers options have a lot of prose and a lot of reading that needs to be done, so be sure you are prepared and manage your time well.
  • As always, mark the questions for review and move on and come back to them after you are done with all.
  • As always, having a rough architecture or mental picture of the setup helps focus on the areas you need to improve. Trust me, you will be able to eliminate two answers for sure and then need to focus on only the other two. Read the remaining two answers to spot the differences; that would help you reach the right answer or at least have a 50% chance of getting it right.

Security, Identity & Compliance

  • Identity and Access Management (IAM)
    • IAM Roles to grant services and users temporary access to AWS services.
      • IAM Role can be used to give cross-account access and usually involves creating a role within the trusting account with a trust and permission policy, and granting the user in the trusted account permissions to assume the trusting account role.
    • Identity Providers & Federation to grant external user identity (SAML or Open ID compatible IdPs) permissions to AWS resources without having to be created within the AWS account.
    • IAM Policies help define who has access & what actions they can perform.
  • Deep dive into Key Management Service (KMS). There would be quite a few questions on this.
    • is a managed encryption service that allows the creation and control of encryption keys to enable data encryption. 
    • uses Envelope Encryption, which uses a master key to encrypt the data key, which is then used to encrypt the data (see the sketch after this list).
    • Understand how KMS works
    • Understand IAM Policies, Key Policies, Grants to grant access.
      • Key policies are the primary way to control access to KMS keys. Unless the key policy explicitly allows it, you cannot use IAM policies to allow access to a KMS key.
    • KMS keys are regional; however, KMS supports multi-region keys – KMS keys in different AWS Regions that can be used interchangeably, as though you had the same key in multiple Regions.
    • Multi-region keys are not global; each replica key needs to be replicated and managed independently.
    • Understand the difference between CMK with generated and imported key material esp. in rotating keys
    • KMS usage with VPC Endpoint which ensures the communication between the VPC and KMS is conducted entirely within the AWS network.
    • KMS ViaService condition
  • AWS GuardDuty
    • is a threat detection service that continuously monitors the AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation.
    • supports CloudTrail S3 data events and management event logs, DNS logs, EKS audit logs, and VPC flow logs.
  • AWS Inspector
    • is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS.
  • Amazon Macie
    • is a security service that uses machine learning to automatically discover, classify, and protect sensitive data in S3.
  • AWS Artifact is a central resource for compliance-related information that provides on-demand access to AWS’ security and compliance reports and select online agreements
  • AWS Certificate Manager (ACM)
    • helps provision, manage, and deploy public and private SSL/TLS certificates for use with AWS services
    • to use an ACM Certificate with CloudFront, the certificate must be imported into the US East (N. Virginia) region.
    • is regional; certificates must be requested in each region and associated individually in each region.
    • does not support EC2 instances and private keys cannot be exported.
  • Cloud HSM
    • is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud
  • AWS Secrets Manager
    • protects secrets needed to access applications, services, etc.
    • enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle
    • supports automatic rotation of credentials for RDS, DocumentDB, etc.
  • Secrets Manager vs Systems Manager Parameter Store
    • Secrets Manager supports automatic rotation while SSM Parameter Store does not
    • Parameter Store is cost-effective as compared to Secrets Manager.
  • AWS Shield & Shield Advanced
    • for DDoS protection and integrates with Route 53, CloudFront, ALB, and Global Accelerator.
  • AWS WAF
    • protects from common attack techniques like SQL injection and XSS; conditions include IP addresses, HTTP headers, HTTP body, and URI strings.
    • integrates with CloudFront, ALB, and API Gateway.
    • supports Web ACLs and can block traffic based on IPs, Rate limits, and specific countries as well
    • allows IP match set rule to allow/deny specific IP addresses and rate-based rule to limit the number of requests.
    • logs can be sent to the CloudWatch Logs log group, an S3 bucket, or Kinesis Data Firehose.
  • AWS Security Hub is a cloud security posture management service that performs security best practice checks, aggregates alerts, and enables automated remediation.
  • AWS Network Firewall is a stateful, fully managed, network firewall and intrusion detection and prevention service (IDS/IPS) for VPCs.
  • AWS Resource Access Manager helps you securely share your resources across AWS accounts, within your organization or organizational units (OUs), and with IAM roles and users for supported resource types.
  • AWS Signer is a fully managed code-signing service to ensure the trust and integrity of your code.
  • AWS Audit Manager to map your compliance requirements to AWS usage data with prebuilt and custom frameworks and automated evidence collection.
  • AWS Cognito esp. User Pools
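
A minimal envelope-encryption sketch with KMS, as referenced in the KMS notes above (the key alias is a placeholder):

```python
# Hedged sketch: envelope encryption with KMS. GenerateDataKey returns the
# data key in plaintext (use locally, then discard) and encrypted under the
# KMS key (store alongside the ciphertext).
import boto3

kms = boto3.client("kms")

resp = kms.generate_data_key(
    KeyId="alias/my-app-key",  # placeholder key alias
    KeySpec="AES_256",
)
plaintext_key = resp["Plaintext"]       # encrypt data locally, never persist
encrypted_key = resp["CiphertextBlob"]  # safe to store with the data

# Later: recover the plaintext data key to decrypt the data.
decrypted = kms.decrypt(CiphertextBlob=encrypted_key)
assert decrypted["Plaintext"] == plaintext_key
```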

Networking & Content Delivery

  • Virtual Private Cloud – VPC
    • Security Groups, NACLs
      • NACLs are stateless, Security groups are stateful
      • NACLs at subnet level, Security groups at the instance level
      • NACLs need to open ephemeral ports for response traffic.
    • VPC Gateway Endpoints to provide access to S3 and DynamoDB
    • VPC Interface Endpoints or PrivateLink provide access to a variety of services like SQS, Kinesis, or Private APIs exposed through NLB.
    • VPC Peering
      • to enable communication between VPCs within the same or different regions.
      • Route tables need to be configured on either VPC for them to be able to communicate.
      • does not allow cross-region security group reference.
    • VPC Flow Logs help capture information about the IP traffic going to and from network interfaces in the VPC
    • NAT Gateway provides managed NAT service that provides better availability, higher bandwidth, and requires less administrative effort.
  • Virtual Private Network – VPN & Direct Connect to establish secure, low-latency connectivity between an on-premises data center and a VPC.
    • IPSec VPN over Direct Connect to provide secure connectivity.
  • CloudFront 
    • integrates with S3 to improve latency and performance.
    • provides multiple security features
    • supports encryption at rest and end-to-end encryption
      • Viewer Protocol Policy and Origin Protocol Policy to enforce HTTPS – can be configured to require that viewers use HTTPS to request the files so that connections are encrypted when CloudFront communicates with viewers.
      • Integrates with ACM and requires certs to be in the us-east-1 region
      • The underlying origin can use certs from ACM or ones issued by a third party.
    • CloudFront Origin Shield
      • helps improve the cache hit ratio and reduce the load on the origin.
      • requests from other regional caches would hit the Origin shield rather than the Origin.
      • should be placed at the regional cache and not in the edge cache
      • should be deployed to the region closer to the origin server
    • CloudFront provides Encryption at Rest
      • uses SSDs which are encrypted for edge location points of presence (POPs), and encrypted EBS volumes for Regional Edge Caches (RECs).
      • Function code and configuration are always stored in an encrypted format on the encrypted SSDs on the edge location POPs, and in other storage locations used by CloudFront.
    • Restricting access to content
  • Route 53
    • is a highly available and scalable DNS web service.
    • Resolver Query logging
      • logs the queries that originate in specified VPCs, on-premises resources that use inbound resolver or ones using outbound resolver as well as the responses to those DNS queries.
      • can be logged to CloudWatch logs, S3, and Kinesis Data Firehose
    • Route 53 DNSSEC secures DNS traffic and helps protect a domain from DNS spoofing and man-in-the-middle attacks.
  • Elastic Load Balancer
    • End to End encryption
      • can be done with NLB with a TCP listener as pass-through, terminating SSL on the EC2 instances
      • can be done with ALB with SSL termination and using HTTPS between ALB and EC2 instances
  • Gateway Load Balancer – GWLB
    • helps deploy, scale, and manage virtual appliances, such as firewalls, IDS/IPS systems, and deep packet inspection systems.

Management & Governance Tools

  • CloudWatch
  • CloudTrail for audit and governance
    • CloudTrail can be enabled for all regions at one go and supports log file integrity validation
    • With Organizations, the trail can be configured to log CloudTrail from all accounts to a central account.
  • AWS Config
    • AWS Config rules can be used to alert for any changes and Config can be used to check the history of changes. AWS Config can also help check approved AMIs compliance
    • allows you to remediate noncompliant resources using AWS Systems Manager Automation documents.
    • AWS Config -> EventBridge -> Lambda/SNS
  • CloudTrail vs Config
    • CloudTrail provides the WHO and Config provides the WHAT.
  • Systems Manager
    • Parameter Store provides secure, scalable, centralized, hierarchical storage for configuration data and secret management. Does not support secrets rotation. Use Secrets Manager instead
    • Systems Manager Patch Manager helps select and deploy the operating system and software patches automatically across large groups of EC2 or on-premises instances
    • Systems Manager Run Command provides safe, secure remote management of your instances at scale without logging into the servers, replacing the need for bastion hosts, SSH, or remote PowerShell
    • Session Manager provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys.
  • AWS Organizations
    • is an account management service that enables consolidating multiple AWS accounts into an organization that can be managed centrally.
    • can configure Organization Trail to centrally log all CloudTrail logs.
    • Service Control Policies 
      • acts as guardrails and specify the services and actions that users and roles can use in the accounts that the SCP affects.
      • are similar to IAM permission policies except that they don’t grant any permissions.
  • AWS Trusted Advisor
    • inspects the AWS environment to make recommendations for system performance, saving money, availability, and closing security gaps
  • CloudFormation
    • Deletion Policy to prevent, retain, or backup RDS, EBS Volumes
    • Stack policy can prevent stack resources from being unintentionally updated or deleted during a stack update. Stack Policy only applies for Stack updates and not stack deletion.
  • Control Tower
    • to setup, govern, and secure a multi-account environment
    • strongly recommended guardrails cover EBS encryption

Storage & Databases

  • Simple Storage Service – S3
    • Understand S3 Security in detail
    • S3 Encryption supports both data at rest and data in transit encryption.
      • Data in transit encryption can be provided by enabling communication via SSL or using client-side encryption
      • Data at rest encryption can be provided using Server Side or Client Side encryption
      • Enforce S3 encryption at rest using default encryption or bucket policies
      • Enforce S3 encryption in transit using the aws:SecureTransport condition in the S3 bucket policy (see the sketch after this list)
    • S3 permissions can be handled using bucket policies, ACLs, and IAM policies.
    • S3 Object Lock helps to store objects using a WORM model and can help prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely.
    • S3 Block Public Access provides controls across an entire AWS Account or at the individual S3 bucket level to ensure that objects never have public access, now and in the future.
    • S3 Access Points simplify data access for any AWS service or customer application that stores data in S3.
    • S3 Versioning with MFA Delete can be enabled on a bucket to ensure that data in the bucket cannot be accidentally overwritten or deleted.
    • S3 Access Analyzer monitors the access policies, ensuring that the policies provide only the intended access to your S3 resources.
  • EBS Encryption
  • Glacier Vault Lock
  • Relational Database Services – RDS
    • is a web service that makes it easier to set up, operate, and scale a relational database in the cloud.
    • supports the same encryption at rest methods as EBS
    • does not support enabling encryption after creation. Need to create a snapshot, copy the snapshot to an encrypted snapshot and restore it as an encrypted DB.
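
A sketch of the S3 encryption-in-transit enforcement mentioned above, via the aws:SecureTransport condition (the bucket name is a placeholder):

```python
# Hedged sketch: deny any non-TLS request to the bucket using the
# aws:SecureTransport condition key.
import boto3
import json

s3 = boto3.client("s3")
bucket = "my-secure-bucket"  # placeholder

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            f"arn:aws:s3:::{bucket}",
            f"arn:aws:s3:::{bucket}/*",
        ],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```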

Compute

Integration Tools

  • Know how CloudWatch integration with SNS and Lambda can help in notification (Topics are not required to be in detail)

Whitepapers and articles

All the Best…

AWS EC2 Network Features

EC2 Network Features

EC2 Network covers a lot of features for low latency access, High Performance Computing, Enhanced Networking, etc.

EC2 and VPC

  • All the EC2 instance types can be launched in a VPC
  • Instance types C4, M4 & T2 are available in VPC only and cannot be launched in EC2-Classic
  • Launching an EC2 instance within a VPC provides the following benefits
    • Assign static private IP addresses to instances that persist across starts and stops
    • Assign multiple IP addresses to the instances
    • Define network interfaces, and attach one or more network interfaces to the instances
    • Change security group membership for the instances while they’re running
    • Control the outbound traffic from the instances (egress filtering) in addition to controlling the inbound traffic to them (ingress filtering)
    • Add an additional layer of access control to the instances in the form of network access control lists (ACL)
    • Run the instances on single-tenant dedicated hardware

EC2 Instance IP Addressing

  • Private IP address & Internal DNS Hostnames
    • Private IP address is the IP address that’s not reachable over the internet and can be resolved only within the network
    • When an instance is launched, the default network interface eth0 is assigned a private IP address and an internal DNS hostname, which resolves to the private IP address and can be used for communication between the instances in the same network only
    • Private IP address and DNS hostname cannot be resolved outside the network that the instance is in.
    • Private IP address behaviour
      • remains associated with the instance when it is stopped or rebooted
      • is disassociated only when the instance is terminated
    • An instance when launched can be assigned a private IP address or EC2 will automatically assign an IP address to the instance within the address range of the subnet
    • Additional private IP addresses, known as secondary private IP addresses can also be assigned. Unlike primary private IP addresses, secondary private IP addresses can be reassigned from one instance to another.
  • Public IP address and External DNS hostnames
    • A public IP address is reachable from the Internet
    • Each instance assigned a public IP address is also given an External DNS hostname.
    • External DNS hostname resolves to the public IP address outside the network and to the private IP address within the network.
    • Public IP address is associated with the primary Private IP address through NAT
    • Within a VPC, an instance may or may not be assigned a public IP address depending upon the subnet Assign Public IP attribute
    • Public IP address comes from AWS’s pool of public IP addresses and is assigned to the instance, not to the AWS account. It cannot be reused once disassociated, as it is released back to the pool
    • Public IP address behaviour
      • cannot be manually associated or disassociated with an instance
      • is released when an instance is stopped or terminated.
      • a new public IP address is assigned when a stopped instance is started
      • is released when an instance is assigned an Elastic IP address
      • is not assigned if there is more than one network interface attached to the instance
  • Multiple Private IP addresses
    • In EC2-VPC, multiple private IP addresses can be specified to the instances.
    • This can be useful in the following cases
      • Host multiple websites on a single server by using multiple SSL certificates on a single server and associating each certificate with a specific IP address.
      • Operate network appliances, such as firewalls or load balancers, that have multiple private IP addresses for each network interface.
      • Redirect internal traffic to a standby instance in case the instance fails, by reassigning the secondary private IP address to the standby instance (see the sketch after this list)
    • Multiple IP addresses work with Network Interfaces
      • Secondary IP address can be assigned to any network interface, which can be attached or detached from an instance
      • Secondary IP address must be assigned from the CIDR block range of the subnet for the network interface
      • Security groups apply to network interfaces and not to IP addresses
      • Secondary private IP addresses can be assigned and unassigned to ENIs attached to running or stopped instances.
      • Secondary private IP addresses that are assigned to a network interface can be reassigned to another one if you explicitly allow it.
      • Primary private IP addresses, secondary private IP addresses, and any associated Elastic IP addresses remain with the network interface when it is detached from an instance or attached to another instance.
      • Although the primary network interface cannot be moved from an instance, the secondary private IP address of the primary network interface can be reassigned to another network interface.
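
A minimal boto3 sketch of the secondary private IP failover pattern above, assuming hypothetical ENI IDs and a subnet whose CIDR block contains 10.0.0.100; AllowReassignment is what permits moving an IP that is already assigned elsewhere:

    import boto3

    ec2 = boto3.client("ec2")

    # Hypothetical ENI IDs for the active and standby instances
    ACTIVE_ENI = "eni-0123456789abcdef0"
    STANDBY_ENI = "eni-0fedcba9876543210"

    # Assign a secondary private IP to the active instance's ENI;
    # the address must come from the subnet's CIDR block
    ec2.assign_private_ip_addresses(
        NetworkInterfaceId=ACTIVE_ENI,
        PrivateIpAddresses=["10.0.0.100"],
    )

    # On failure, move the same secondary IP to the standby instance's ENI
    ec2.assign_private_ip_addresses(
        NetworkInterfaceId=STANDBY_ENI,
        PrivateIpAddresses=["10.0.0.100"],
        AllowReassignment=True,
    )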

Elastic IP Addresses

  • An Elastic IP address is a static IP address designed for dynamic cloud computing.
  • An elastic IP address can help mask the failure of an instance or software by rapidly remapping the address to another instance in the account.
  • The elastic IP address is associated with the AWS account and it remains associated with the account until released explicitly
  • An elastic IP address is NOT associated with a particular instance
  • When an instance is launched in the default VPC, it is assigned 2 IP addresses, a private and a public IP address; the public IP address is mapped to the private IP address through NAT
  • An instance launched in a non-default VPC is assigned only a private IP address unless a public address is specifically requested or the subnet public IP attribute is enabled
  • When an Elastic IP address is assigned to an instance, the public IP address is disassociated from the instance
  • For an instance without a public IP address to communicate with the internet, it must be assigned an Elastic IP address
  • When the Elastic IP address is disassociated, a new public IP address is assigned to the instance. However, if a secondary network interface is attached to the instance, a public IP address is not automatically assigned
  • Elastic IP addresses are not charged when associated with a running instance
  • Amazon imposes a small hourly fee for an unused Elastic IP address to ensure efficient use of Elastic IP addresses. So charges apply if it is not associated, is associated with an instance in a stopped state, or is associated with an unattached network interface.
  • All AWS accounts are limited to 5 EIPs (soft limit) because public (IPv4) Internet addresses are a scarce public resource
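
The Elastic IP lifecycle described above can be sketched with boto3 as follows (the instance ID is hypothetical); the address belongs to the account from allocation until it is explicitly released:

    import boto3

    ec2 = boto3.client("ec2")
    INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical instance

    # Allocate an Elastic IP to the account (domain 'vpc' for use in a VPC)
    alloc = ec2.allocate_address(Domain="vpc")

    # Associate it with the instance; any auto-assigned public IP is released
    assoc = ec2.associate_address(
        InstanceId=INSTANCE_ID, AllocationId=alloc["AllocationId"]
    )

    # Disassociate and release when done; an unassociated Elastic IP
    # continues to incur the hourly charge until released
    ec2.disassociate_address(AssociationId=assoc["AssociationId"])
    ec2.release_address(AllocationId=alloc["AllocationId"])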

EC2 Classic, Default & Non-Default Subnet Comparison

Elastic Network Interfaces (ENI)

  • Elastic Network Interfaces (ENIs) are virtual network interfaces that can be attached to the instances running in a VPC only
  • ENI consists of the following
    • A primary private IP address.
    • One or more secondary private IP addresses
    • One Elastic IP address per private IP address.
    • One public IP address, which can be auto-assigned to the network interface for eth0 when an instance is launched, but only when a new network interface is created for eth0 instead of using an existing one
    • One or more security groups
    • A MAC address
    • A source/destination check flag
    • A description
  • ENI can be created without being attached to an instance
  • ENI can be attached to an instance, detached from that instance, and attached to another instance. Attributes of an ENI, such as the Elastic IP address and private IP address, follow the ENI; when it is moved from one instance to another, all traffic to the ENI is routed to the new instance.
  • An instance in VPC always has a default primary ENI attached (eth0) with a private IP address assigned from the VPC range and cannot be detached
  • Additional ENI (eth1-ethn) can be attached to the instance and the number varies depending upon the instance type
  • The most important difference between eth0 and eth1 is that eth0 cannot be dynamically attached to or detached from a running instance.
  • The primary ENI (eth0) is created automatically when an EC2 instance is launched and is deleted automatically when the instance is terminated, unless its delete-on-termination attribute has been changed to keep it alive afterwards.
  • Multiple elastic network interfaces are useful for use cases:
    • Create a management network
      • Primary ENI eth0 handles backend with more restrictive control
      • Secondary ENI eth1 handles the public facing traffic
    • Licensing authentication
      • Fixed MAC address associated with a license authentication
    • Use network and security appliances in your VPC
      • configure third-party network and security appliances (load balancers, NAT, proxies) with the secondary ENI
    • Create dual-homed instances with workloads/roles on distinct subnets.
    • Create a low-budget, high-availability solution
      • If one of the instances serving a particular function fails, its elastic network interface can be attached to a replacement or hot standby instance pre-configured for the same role in order to rapidly recover the service
      • As the interface maintains its private IP, EIP, and MAC address, network traffic will begin flowing to the standby instance as soon as it is attached to the replacement instance
  • ENI Best Practices
    • ENI can be attached to an instance when it’s running (hot attach), when it’s stopped (warm attach), or when the instance is being launched (cold attach).
    • Primary (eth0) interface can’t be detached
    • Secondary (ethN) ENI can be detached when the instance is running or stopped.
    • ENI in one subnet can be attached to an instance in another subnet, as long as both are in the same AZ and the same VPC
  • When launching an instance from the CLI or API, both the primary (eth0) and additional elastic network interfaces can be specified
  • Launching an Amazon Linux or Microsoft Windows Server instance with multiple network interfaces automatically configures interfaces, private IP addresses, and route tables on the operating system of the instance.
  • A warm or hot attach of an additional ENI may require bringing up the second interface manually, configuring the private IP address, and modifying the route table accordingly.
  • Instances running Amazon Linux or Microsoft Windows Server automatically recognize the warm or hot attach and configure themselves.
  • Attaching another ENI to an instance is not a method to increase or double the network bandwidth to or from the dual-homed instance.
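
A minimal boto3 sketch of the ENI lifecycle described above (all identifiers hypothetical): create a standalone ENI, hot attach it to a running instance as eth1, and later detach it, e.g. to move it to a standby instance. As noted above, a warm or hot attach may still require OS-level configuration:

    import boto3

    ec2 = boto3.client("ec2")

    # Hypothetical subnet, security group, and instance identifiers
    SUBNET_ID = "subnet-0123456789abcdef0"
    SG_ID = "sg-0123456789abcdef0"
    INSTANCE_ID = "i-0123456789abcdef0"

    # Create a standalone ENI; it exists independently of any instance
    eni = ec2.create_network_interface(
        SubnetId=SUBNET_ID,
        Groups=[SG_ID],
        Description="management interface",
    )
    eni_id = eni["NetworkInterface"]["NetworkInterfaceId"]

    # Hot attach: attach to a running instance as eth1 (DeviceIndex=1)
    attachment = ec2.attach_network_interface(
        NetworkInterfaceId=eni_id, InstanceId=INSTANCE_ID, DeviceIndex=1
    )

    # The secondary ENI can later be detached and re-attached elsewhere
    ec2.detach_network_interface(AttachmentId=attachment["AttachmentId"])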

Placement Groups

  • EC2 Placement groups determine how the instances are placed on the underlying hardware.
  • AWS now provides three types of placement groups
    • Cluster – clusters instances into a low-latency group in a single AZ
    • Partition – spreads instances across logical partitions, ensuring that instances in one partition do not share underlying hardware with instances in other partitions
    • Spread – strictly places a small group of instances across distinct underlying hardware to reduce correlated failures
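
A minimal boto3 sketch creating one placement group per strategy and launching an instance into the cluster group (group names, AMI ID, and instance type are hypothetical):

    import boto3

    ec2 = boto3.client("ec2")

    # One placement group per strategy
    ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")
    ec2.create_placement_group(
        GroupName="app-partitions", Strategy="partition", PartitionCount=3
    )
    ec2.create_placement_group(GroupName="critical-spread", Strategy="spread")

    # Launch an instance into the low-latency cluster placement group
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical AMI
        InstanceType="c5n.18xlarge",
        MinCount=1,
        MaxCount=1,
        Placement={"GroupName": "hpc-cluster"},
    )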

Network Maximum Transmission Unit – MTU

  • MTU of a network connection is the size, in bytes, of the largest permissible packet that can be transferred over the connection.
  • The larger the MTU of the connection, the more data can be transferred in a single packet
  • Largest ethernet packet size supported over most of the internet is 1500 MTU
  • Jumbo Frames
    • Jumbo frames are Ethernet frames that allow more than 1500 bytes of data by increasing the payload size per packet and thus increasing the percentage of the packet that is not packet overhead.
    • Fewer packets are needed to send the same amount of usable data
    • Jumbo frames should be used with caution for Internet-bound traffic or any traffic that leaves a VPC.
    • Packets are fragmented by intermediate systems, which slows down this traffic.
  • Maximum supported MTU for an instance depends on its instance type
  • All EC2 instance types support 1500 MTU, and many current instance sizes support 9001 MTU or Jumbo frames
  • Traffic is limited to a maximum MTU of 1500 in the following cases:
    • Traffic outside of a given AWS Region for EC2-Classic
    • Traffic outside of a single VPC
    • Traffic over an inter-region VPC peering connection
    • Traffic over VPN connections
    • Traffic over an internet gateway
  • For instances that are collocated inside a placement group, jumbo frames help to achieve the maximum network throughput possible, and they are recommended in this case.

Enhanced Networking

  • Enhanced networking results in higher bandwidth, higher packet per second (PPS) performance, lower latency, consistency, scalability, and lower jitter.
  • EC2 provides enhanced networking capabilities using single root I/O virtualization (SR-IOV) only on supported instance types
    • SR-IOV is a method of device virtualization that provides higher I/O performance and lower CPU utilization
  • It can be enabled on other OS distributions by installing the appropriate network driver module and configuring the required attributes (a sketch of checking and enabling the ENA attribute follows below)
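
A small boto3 sketch (the instance ID is hypothetical) for checking and enabling the ENA enhanced networking attribute; the instance must be stopped, and the OS must have the ENA driver module installed, for the change to take effect:

    import boto3

    ec2 = boto3.client("ec2")
    INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical instance

    # Check whether ENA-based enhanced networking is enabled
    attr = ec2.describe_instance_attribute(
        InstanceId=INSTANCE_ID, Attribute="enaSupport"
    )
    print("ENA enabled:", attr.get("EnaSupport", {}).get("Value", False))

    # Enable the ENA attribute on the (stopped) instance
    ec2.modify_instance_attribute(
        InstanceId=INSTANCE_ID, EnaSupport={"Value": True}
    )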

Elastic Fabric Adapter – EFA

  • An Elastic Fabric Adapter (EFA) is a network device that can be attached to the EC2 instance to accelerate High Performance Computing (HPC) and machine learning applications.
  • EFA helps achieve the application performance of an on-premises HPC cluster, with the scalability, flexibility, and elasticity provided by AWS.
  • EFA provides lower and more consistent latency and higher throughput than the TCP transport traditionally used in cloud-based HPC systems.
  • EFA enhances the performance of inter-instance communication which is critical for scaling HPC and machine learning applications.
  • EFA is optimized to work on the existing AWS network infrastructure and it can scale depending on application requirements.
  • EFAs provide all of the same traditional IP networking features as ENAs, and they also support OS-bypass capabilities. OS-bypass enables HPC and machine learning applications to bypass the operating system kernel and to communicate directly with the EFA device.

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. A user is launching an EC2 instance in the US East region. Which of the below mentioned options is recommended by AWS with respect to the selection of the availability zone?
    1. Always select the US-East-1-a zone for HA
    2. Do not select the AZ; instead let AWS select the AZ
    3. The user can never select the availability zone while launching an instance
    4. Always select the AZ while launching an instance
  2. You have multiple Amazon EC2 instances running in a cluster across multiple Availability Zones within the same region. What combination of the following should be used to ensure the highest network performance (packets per second), lowest latency, and lowest jitter? Choose 3 answers
    1. Amazon EC2 placement groups (would not work for multiple AZs. Defaults to Cluster)
    2. Enhanced networking (provides network performance, lowest latency)
    3. Amazon PV AMI (Needs HVM)
    4. Amazon HVM AMI
    5. Amazon Linux (Can work on other flavors of Unix as well)
    6. Amazon VPC (Enhanced networking works only in VPC)
  3. Regarding the attaching of ENI to an instance, what does ‘warm attach’ refer to?
    1. Attaching an ENI to an instance when it is stopped
    2. Attaching an ENI to an instance when it is running
    3. Attaching an ENI to an instance during the launch process
  4. Can I detach the primary (eth0) network interface when the instance is running or stopped?
    1. Yes, You can.
    2. You cannot
    3. Depends on the state of the interface at the time
  5. By default what are ENIs that are automatically created and attached to instances using the EC2 console set to do when the attached instance terminates?
    1. Remain as is
    2. Terminate
    3. Hibernate
    4. Pause
  6. Select the incorrect statement
    1. In Amazon EC2, the private IP addresses are only returned to Amazon EC2 when the instance is stopped or terminated
    2. In Amazon VPC, an instance retains its private IP addresses when the instance is stopped.
    3. In Amazon VPC, an instance does NOT retain its private IP addresses when the instance is stopped
    4. In Amazon EC2, the private IP address is associated exclusively with the instance for its lifetime
  7. To ensure failover capabilities, consider using a _____ for incoming traffic on a network interface.
    1. primary public IP
    2. secondary private IP
    3. secondary public IP
    4. add on secondary IP
  8. Which statements are true about Elastic Network Interface (ENI)? (Choose 2 answers)
    1. You can attach an ENI in one AZ to an instance in another AZ
    2. You can change the security group membership of an ENI
    3. You can attach an instance to two different subnets within a VPC by using two ENIs
    4. You can attach an ENI in one VPC to an instance in another VPC
  9. A user is planning to host a web server as well as an app server on a single EC2 instance, which is a part of the public subnet of a VPC. How can the user setup to have two separate public IPs and separate security groups for both the application as well as the web server?
    1. Launch a VPC instance with two network interfaces. Assign a separate security group to each and AWS will assign a separate public IP to them. (AWS cannot assign public IPs to instances with multiple ENIs)
    2. Launch VPC with two separate subnets and make the instance a part of both the subnets.
    3. Launch a VPC instance with two network interfaces. Assign a separate security group and elastic IP to them.
    4. Launch a VPC with ELB such that it redirects requests to separate VPC instances of the public subnet.
  10. An organization has created multiple components of a single application for compartmentalization. Currently all the components are hosted on a single EC2 instance. Due to security reasons the organization wants to implement two separate SSLs for the separate modules although it is already using VPC. How can the organization achieve this with a single instance?
    1. Create a VPC instance, which will have both the ACL and the security group attached to it and have separate rules for each IP address.
    2. Create a VPC instance, which will have multiple network interfaces with multiple elastic IP addresses.
    3. You have to launch two instances each in a separate subnet and allow VPC peering for a single IP.
    4. Create a VPC instance, which will have multiple subnets attached to it and each will have a separate IP address.
  11. Your system automatically provisions EIPs to EC2 instances in a VPC on boot. The system provisions the whole VPC and stack at once. You have two of them per VPC. On your new AWS account, your attempt to create a Development environment failed, after successfully creating Staging and Production environments in the same region. What happened?
    1. You didn’t choose the Development version of the AMI you are using.
    2. You didn’t set the Development flag to true when deploying EC2 instances.
    3. You hit the soft limit of 5 EIPs per region and requested a 6th. (There is a soft limit of 5 EIPs per Region for VPC on new accounts. The third environment could not allocate the 6th EIP)
    4. You hit the soft limit of 2 VPCs per region and requested a 3rd.
  12. A user has created a VPC with a public subnet. The user has terminated all the instances, which are part of the subnet. Which of the below mentioned statements is true with respect to this scenario?
    1. The user cannot delete the VPC since the subnet is not deleted
    2. All network interface attached with the instances will be deleted
    3. When the user launches a new instance it cannot use the same subnet
    4. The subnet to which the instances were launched with will be deleted

AWS S3 Subresources

AWS S3 Subresources

  • S3 subresources provide support to store and manage bucket configuration information.
  • S3 subresources only exist in the context of a specific bucket or object.
  • S3 subresources are subordinate to buckets and objects; i.e., they do not exist on their own and are always associated with some other entity, such as a bucket or an object.
  • S3 supports various options to configure a bucket, e.g., the bucket can be configured for website hosting, for managing the lifecycle of objects in the bucket, and for logging all access to the bucket.

S3 Object Lifecycle

Refer blog post @ S3 Object Lifecycle Management

Static Website Hosting

  • S3 can be used for Static Website hosting with Client-side scripts.
  • S3 does not support server-side scripting.
  • S3, in conjunction with Route 53, supports hosting a website at the root domain which can point to the S3 website endpoint
  • S3 website endpoints do not support HTTPS or access points
  • For S3 website hosting the content should be made publicly readable which can be provided using a bucket policy or an ACL on an object.
  • Users can configure the index and error documents, as well as conditional redirection based on the object name
  • Bucket policy applies only to objects owned by the bucket owner. If the bucket contains objects not owned by the bucket owner, then public READ permission on those objects should be granted using the object ACL.
  • Requester Pays buckets or DevPay buckets do not allow access through the website endpoint. Any request to such a bucket will receive a 403 Access Denied response
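
A minimal boto3 sketch of enabling static website hosting on a hypothetical bucket: it sets the index and error documents and applies a public-read bucket policy, as described above:

    import json
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "www.example.com"  # hypothetical bucket named after the domain

    # Configure the index and error documents for the website endpoint
    s3.put_bucket_website(
        Bucket=BUCKET,
        WebsiteConfiguration={
            "IndexDocument": {"Suffix": "index.html"},
            "ErrorDocument": {"Key": "error.html"},
        },
    )

    # Make the content publicly readable via a bucket policy
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "PublicReadGetObject",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{BUCKET}/*",
            }
        ],
    }
    s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))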

S3 Versioning

Refer blog post @ S3 Object Versioning

Policy & Access Control List (ACL)

Refer blog post @ S3 Permissions

CORS (Cross Origin Resource Sharing)

  • All browsers implement the Same-Origin policy, for security reasons, where the web page from a domain can only request resources from the same domain.
  • CORS allows client web applications loaded in one domain to access restricted resources in another domain.
  • With CORS support, S3 allows cross-origin access to S3 resources
  • CORS configuration rules identify the origins allowed to access the bucket, the operations (HTTP methods) that would be supported for each origin, and other operation-specific information.
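
A minimal boto3 sketch of a CORS configuration (bucket name and origin are hypothetical) that lets a website served from one domain fetch resources, e.g. web fonts, from the bucket:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_cors(
        Bucket="example-fonts",  # hypothetical bucket
        CORSConfiguration={
            "CORSRules": [
                {
                    "AllowedOrigins": ["http://www.example.com"],
                    "AllowedMethods": ["GET"],
                    "AllowedHeaders": ["*"],
                    # How long browsers may cache the preflight response
                    "MaxAgeSeconds": 3000,
                }
            ]
        },
    )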

S3 Access Logs

  • S3 Access Logs enable tracking access requests to an S3 bucket.
  • S3 Access logs are disabled by default.
  • Each access log record provides details about a single access request, such as the requester, bucket name, request time, request action, response status, and error code, etc.
  • Access log information can be useful in security and access audits and also help learn about the customer base and understand the S3 bill.
  • S3 periodically collects access log records, consolidates the records in log files, and then uploads log files to a target bucket as log objects.
  • Logging can be enabled on multiple source buckets with the same target bucket which will have access logs for all those source buckets, but each log object will report access log records for a specific source bucket.
  • Source and target buckets should be in the same region.
  • Source and target buckets should be different to avoid an infinite loop of logs issue.
  • Target bucket can be encrypted using SSE-S3 default encryption. However, default encryption with AWS KMS keys (SSE-KMS) is not supported.
  • S3 Object Lock cannot be enabled on the target bucket.
  • S3 uses a special log delivery account to write server access logs.
    • AWS recommends updating the bucket policy on the target bucket to grant access to the logging service principal (logging.s3.amazonaws.com) for access log delivery.
    • Access for access log delivery can also be granted to the S3 log delivery group through the bucket ACL, though granting access using the bucket ACL is not recommended.
  • Access log records are delivered on a best-effort basis. The completeness and timeliness of server logging is not guaranteed i.e. log record for a particular request might be delivered long after the request was actually processed, or it might not be delivered at all.
  • S3 Access Logs can be analyzed using data analysis tools or Athena.
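
A minimal boto3 sketch of enabling server access logging (bucket names are hypothetical); it assumes the target bucket is in the same region and that its bucket policy already grants the logging service principal access, per the recommendation above:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_logging(
        Bucket="example-source-bucket",
        BucketLoggingStatus={
            "LoggingEnabled": {
                "TargetBucket": "example-log-bucket",
                # Prefix keeps logs from multiple source buckets separable
                "TargetPrefix": "access-logs/example-source-bucket/",
            }
        },
    )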

Tagging

  • S3 provides the tagging subresource to store and manage tags on a bucket
  • Cost allocation tags can be added to the bucket to categorize and track AWS costs.
  • AWS can generate a cost allocation report with usage and costs aggregated by the tags applied to the buckets.

Location

  • AWS region needs to be specified during bucket creation and it cannot be changed.
  • S3 stores this information in the location subresource and provides an API for retrieving this information

Event Notifications

  • S3 notification feature enables notifications to be triggered when certain events happen in the bucket.
  • Notifications are enabled at the Bucket level
  • Notifications can be configured to be filtered by the prefix and suffix of the key name of objects. However, filtering rules cannot be defined with overlapping prefixes, overlapping suffixes, or overlapping combinations of prefixes and suffixes
  • S3 can publish the following events
    • New Object created events
      • Can be enabled for PUT, POST, or COPY operations
      • You will not receive event notifications from failed operations
    • Object Removal events
      • Can publish delete events for object deletion, version object deletion, or insertion of a delete marker
      • You will not receive event notifications from automatic deletes from lifecycle policies or from failed operations.
    • Restore object events
      • restoration of objects archived to the S3 Glacier storage classes
    • Reduced Redundancy Storage (RRS) object lost events
      • Can be used to reproduce/recreate the Object
    • Replication events
      • for replication configurations that have S3 replication metrics or S3 Replication Time Control (S3 RTC) enabled
  • S3 can publish events to the following destinations: SNS topics, SQS queues, and Lambda functions
  • For S3 to be able to publish events to the destination, the S3 principal should be granted the necessary permissions
  • S3 event notifications are designed to be delivered at least once. Typically, event notifications are delivered in seconds but can sometimes take a minute or longer.
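
A minimal boto3 sketch of an event notification configuration (bucket name and Lambda function ARN are hypothetical) that invokes a function for .jpg objects created under the images/ prefix; per the note above, S3 must separately be granted permission to invoke the function:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_notification_configuration(
        Bucket="example-bucket",
        NotificationConfiguration={
            "LambdaFunctionConfigurations": [
                {
                    "LambdaFunctionArn": (
                        "arn:aws:lambda:us-east-1:123456789012"
                        ":function:process-image"
                    ),
                    "Events": ["s3:ObjectCreated:*"],
                    # Non-overlapping prefix/suffix filter rules
                    "Filter": {
                        "Key": {
                            "FilterRules": [
                                {"Name": "prefix", "Value": "images/"},
                                {"Name": "suffix", "Value": ".jpg"},
                            ]
                        }
                    },
                }
            ]
        },
    )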

Cross-Region Replication & Same-Region Replication

  • S3 Replication enables automatic, asynchronous copying of objects across S3 buckets in the same or different AWS regions.
  • S3 Cross-Region Replication – CRR is used to copy objects across S3 buckets in different AWS Regions.
  • S3 Same-Region Replication – SRR is used to copy objects across S3 buckets in the same AWS Regions.
  • S3 Replication helps to
    • Replicate objects while retaining metadata
    • Replicate objects into different storage classes
    • Maintain object copies under different ownership
    • Keep objects stored over multiple AWS Regions
    • Replicate objects within 15 minutes (with S3 Replication Time Control – S3 RTC)
  • S3 can replicate all or a subset of objects with specific key name prefixes
  • S3 encrypts all data in transit across AWS regions using SSL
  • Object replicas in the destination bucket are exact replicas of the objects in the source bucket with the same key names and the same metadata.
  • Objects may be replicated to a single destination bucket or multiple destination buckets.
  • Cross-Region Replication can be useful for the following scenarios:-
    • Compliance requirement to have data backed up across regions
    • Minimize latency to allow users across geography to access objects
    • Operational reasons, e.g., compute clusters in two different regions that analyze the same set of objects
  • Same-Region Replication can be useful for the following scenarios:-
    • Aggregate logs into a single bucket
    • Configure live replication between production and test accounts
    • Abide by data sovereignty laws to store multiple copies
  • Replication Requirements
    • source and destination buckets must be versioning-enabled
    • for CRR, the source and destination buckets must be in different AWS regions.
    • S3 must have permission to replicate objects from that source bucket to the destination bucket on your behalf.
    • If the source bucket owner also owns the object, the bucket owner has full permission to replicate the object. If not, the source bucket owner must have permission for the S3 actions s3:GetObjectVersion and s3:GetObjectVersionACL to read the object and object ACL
    • Setting up cross-region replication in a cross-account scenario (where the source and destination buckets are owned by different AWS accounts), the source bucket owner must have permission to replicate objects in the destination bucket.
    • if the source bucket has S3 Object Lock enabled, the destination buckets must also have S3 Object Lock enabled.
    • destination buckets cannot be configured as Requester Pays buckets
  • Replicated & Not Replicated
    • Only new objects created after you add a replication configuration are replicated. S3 does NOT retroactively replicate objects that existed before you added replication configuration.
    • Objects created with SSE-S3 are replicated with server-side encryption using the S3-managed encryption key.
    • Objects created with server-side encryption using AWS KMS–managed encryption (SSE-KMS) keys are NOT replicated, by default. It requires additional handling.
    • S3 replicates only objects in the source bucket for which the bucket owner has permission to read objects and read ACLs
    • Any object ACL updates are replicated, although there can be some delay before S3 can bring the two in sync. This applies only to objects created after you add a replication configuration to the bucket.
    • S3 does NOT replicate objects in the source bucket for which the bucket owner does not have permission.
    • Updates to bucket-level S3 subresources are NOT replicated, allowing different bucket configurations on the source and destination buckets
    • Only customer actions are replicated & actions performed by lifecycle configuration are NOT replicated
    • Replication chaining is NOT allowed, Objects in the source bucket that are replicas, created by another replication, are NOT replicated.
    • Objects created with server-side encryption using customer-provided keys (SSE-C) are NOT replicated.
    • S3 does NOT replicate the delete marker by default. However, you can add delete marker replication to non-tag-based rules to override it.
    • S3 does NOT replicate deletion by object version ID. This protects data from malicious deletions.
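
A minimal boto3 sketch of a replication configuration (bucket names and role ARN are hypothetical); both buckets must already be versioning-enabled, and the IAM role must allow S3 to read from the source bucket and replicate into the destination bucket:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_replication(
        Bucket="example-source-bucket",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
            "Rules": [
                {
                    "ID": "replicate-everything",
                    "Status": "Enabled",
                    "Priority": 1,
                    "Filter": {"Prefix": ""},  # empty prefix = all new objects
                    "DeleteMarkerReplication": {"Status": "Disabled"},
                    "Destination": {
                        "Bucket": "arn:aws:s3:::example-destination-bucket",
                        # Replicas may use a different storage class
                        "StorageClass": "STANDARD_IA",
                    },
                }
            ],
        },
    )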

S3 Inventory

  • S3 Inventory helps manage the storage and can be used to audit and report on the replication and encryption status of the objects for business, compliance, and regulatory needs.
  • S3 inventory provides a scheduled alternative to the S3 synchronous List API operation.
  • S3 inventory provides CSV, ORC, or Apache Parquet output files that list the objects and their corresponding metadata on a daily or weekly basis for an S3 bucket or a shared prefix.

Requester Pays

  • By default, buckets are owned by the AWS account that created them (the bucket owner), and the AWS account pays for storage costs, downloads, and data transfer charges associated with the bucket.
  • Using Requester Pays subresource:-
    • Bucket owner specifies that the requester requesting the download will be charged for the download
    • However, the bucket owner still pays the storage costs
  • Enabling Requester Pays on a bucket
    • disables anonymous access to that bucket
    • does not support BitTorrent
    • does not support SOAP requests
    • cannot be enabled on a bucket used as a target bucket for end-user (server access) logging
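
A minimal boto3 sketch of the Requester Pays subresource (bucket and key are hypothetical): the bucket owner enables it, and a requester must then explicitly acknowledge the charges on each request:

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "example-dataset-bucket"  # hypothetical bucket

    # Bucket owner enables Requester Pays (owner still pays storage costs)
    s3.put_bucket_request_payment(
        Bucket=BUCKET,
        RequestPaymentConfiguration={"Payer": "Requester"},
    )

    # Requesters must acknowledge that they will be charged for the download
    s3.get_object(Bucket=BUCKET, Key="data/file.csv", RequestPayer="requester")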

Torrent

  • Default distribution mechanism for S3 data is via client/server download
  • Bucket owner bears the cost of Storage as well as the request and transfer charges which can increase linearly for a popular object
  • S3 also supports the BitTorrent protocol
    • BitTorrent is an open-source Internet distribution protocol
    • BitTorrent addresses this problem by recruiting the very clients that are downloading the object as distributors themselves
    • S3 bandwidth rates are inexpensive, but BitTorrent allows developers to further save on bandwidth costs for a popular piece of data by letting users download from Amazon and other users simultaneously
  • Benefit for a publisher is that for large, popular files the amount of data actually supplied by S3 can be substantially lower than what it would have been serving the same clients via client/server download
  • Any object in S3 that is publicly available and can be read anonymously can be downloaded via BitTorrent
  • Torrent file can be retrieved for any publicly available object by simply adding a “?torrent” query string parameter at the end of the REST GET request for the object
  • Generating the .torrent for an object takes time proportional to the size of that object, so it’s recommended to make the first torrent request yourself to generate the file so that subsequent requests are faster
  • Torrent is enabled only for objects that are less than 5 GB in size.
  • Torrent subresource can only be retrieved, and cannot be created, updated, or deleted

Object ACL

Refer blog post @ S3 Permissions

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. An organization’s security policy requires multiple copies of all critical data to be replicated across at least a primary and backup data center. The organization has decided to store some critical data on Amazon S3. Which option should you implement to ensure this requirement is met?
    1. Use the S3 copy API to replicate data between two S3 buckets in different regions
    2. You do not need to implement anything since S3 data is automatically replicated between regions
    3. Use the S3 copy API to replicate data between two S3 buckets in different facilities within an AWS Region
    4. You do not need to implement anything since S3 data is automatically replicated between multiple facilities within an AWS Region
  2. A customer wants to track access to their Amazon Simple Storage Service (S3) buckets and also use this information for their internal security and access audits. Which of the following will meet the Customer requirement?
    1. Enable AWS CloudTrail to audit all Amazon S3 bucket access.
    2. Enable server access logging for all required Amazon S3 buckets
    3. Enable the Requester Pays option to track access via AWS Billing
    4. Enable Amazon S3 event notifications for Put and Post.
  3. A user is enabling a static website hosting on an S3 bucket. Which of the below mentioned parameters cannot be configured by the user?
    1. Error document
    2. Conditional error on object name
    3. Index document
    4. Conditional redirection on object name
  4. Company ABCD is running their corporate website on Amazon S3 accessed from http://www.companyabcd.com. Their marketing team has published new web fonts to a separate S3 bucket accessed by the S3 endpoint: https://s3-us-west1.amazonaws.com/abcdfonts. While testing the new web fonts, Company ABCD recognized the web fonts are being blocked by the browser. What should Company ABCD do to prevent the web fonts from being blocked by the browser?
    1. Enable versioning on the abcdfonts bucket for each web font
    2. Create a policy on the abcdfonts bucket to enable access to everyone
    3. Add the Content-MD5 header to the request for webfonts in the abcdfonts bucket from the website
    4. Configure the abcdfonts bucket to allow cross-origin requests by creating a CORS configuration
  5. Company ABCD is currently hosting their corporate site in an Amazon S3 bucket with Static Website Hosting enabled. Currently, when visitors go to http://www.companyabcd.com the index.html page is returned. Company C now would like a new page welcome.html to be returned when a visitor enters http://www.companyabcd.com in the browser. Which of the following steps will allow Company ABCD to meet this requirement? Choose 2 answers.
    1. Upload an html page named welcome.html to their S3 bucket
    2. Create a welcome subfolder in their S3 bucket
    3. Set the Index Document property to welcome.html
    4. Move the index.html page to a welcome subfolder
    5. Set the Error Document property to welcome.html