AWS Virtual Private Cloud – VPC

AWS VPC – Virtual Private Cloud

  • AWS VPC – Virtual Private Cloud is a virtual network dedicated to the AWS account. It is logically isolated from other virtual networks in the AWS cloud.
  • VPC allows the users complete control over their virtual networking environment, including the selection of their own IP address range, creation of subnets, and configuration of route tables and network gateways.
  • VPC allows you to use both IPv4 and IPv6 in your VPC for secure and easy access to resources and applications.
  • VPC is a regional service and it spans all of the AZs in the Region. Availability zones (AZ) are multiple, isolated locations within each Region.

  • VPC Sizing
    • VPC needs a set of IP addresses in the form of a Classless Inter-Domain Routing (CIDR) block, e.g. 10.0.0.0/16, which makes 2^16 (65,536) IP addresses available (see the sizing sketch after this list)
    • Allowed CIDR block size is between
      • /28 netmask (minimum, with 2^4 = 16 available IP addresses) and
      • /16 netmask (maximum, with 2^16 = 65,536 available IP addresses)
    • A CIDR block from the private (non-publicly routable) IP address ranges can be assigned
      • 10.0.0.0 – 10.255.255.255 (10/8 prefix)
      • 172.16.0.0 – 172.31.255.255 (172.16/12 prefix)
      • 192.168.0.0 – 192.168.255.255 (192.168/16 prefix)
    • It’s possible to specify a range of publicly routable IP addresses; however, direct access to the Internet is not currently supported from publicly routable CIDR blocks in a VPC
    • The CIDR block, once assigned to the VPC, cannot be modified.  NOTE – VPCs can now be expanded by associating secondary CIDR blocks. Read the AWS blog post.
    • Each VPC is separate from any other VPC created with the same CIDR block even if it resides within the same AWS account
  • Connection between your VPC and corporate or home network can be established; however, the CIDR blocks should not overlap, e.g. a VPC with CIDR 10.0.0.0/16 can communicate with a 10.1.0.0/16 corporate network, but the connections would be dropped if it tries to connect to a 10.0.37.0/16 corporate network because of the overlapping IP addresses.
  • VPC allows you to set tenancy options for the Instances launched in it. By default, the tenancy option is shared. If the dedicated option is selected, all the instances within it are launched on dedicated hardware overriding the individual instance tenancy setting.
  • Deletion of the VPC is possible only after terminating all instances within the VPC and deleting all the components within the VPC e.g. subnets, security groups, network ACLs, route tables, Internet gateways, VPC peering connections, and DHCP options
  • VPC Peering provides a networking connection between two VPCs (same or different account and region) that enables routing of traffic between them using private IPv4 addresses or IPv6 addresses.
  • NAT Gateway enables instances in a private subnet to connect to the Internet but prevents the Internet from initiating connections with the instances.
  • VPC endpoints enable the creation of a private connection between the VPC and supported AWS services and VPC endpoint services powered by PrivateLink, using its private IP address.
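
As a rough sketch of the sizing rules and VPC creation described above (assuming Python with boto3, configured AWS credentials, and an illustrative 10.0.0.0/16 CIDR):

import ipaddress

import boto3

def validate_vpc_cidr(cidr: str) -> ipaddress.IPv4Network:
    # A VPC CIDR must be between a /16 and a /28 netmask
    network = ipaddress.ip_network(cidr)
    if not 16 <= network.prefixlen <= 28:
        raise ValueError(f"{cidr}: VPC CIDR must be between /16 and /28")
    print(f"{cidr} provides {network.num_addresses} IP addresses")
    return network

ec2 = boto3.client("ec2")

validate_vpc_cidr("10.0.0.0/16")        # 65,536 addresses, from the RFC 1918 private range

# Create the VPC; instance tenancy defaults to shared ("default")
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16", InstanceTenancy="default")
print("Created VPC:", vpc["Vpc"]["VpcId"])
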
AWS VPC Components

Subnets

  • Subnet spans a single Availability Zone, a distinct location engineered to be isolated from failures in other AZs, and cannot span across AZs
  • Subnet can be configured with an Internet gateway to enable communication over the Internet, or virtual private gateway (VPN) connection to enable communication with your corporate network
  • Subnet can be Public or Private and it depends on whether it has Internet connectivity i.e. is able to route traffic to the Internet through the IGW
  • Instances within the Public Subnet should be assigned a Public IP or Elastic IP address to be able to communicate with the Internet
  • A Subnet that is not connected to the Internet, but has traffic routed through a Virtual Private Gateway only, is termed a VPN-only subnet
  • Subnets can be configured to enable assignment of a Public IP address to all the Instances launched within the Subnet by default, which can be overridden during the creation of the Instance
  • Subnet Sizing
    • CIDR block assigned to the Subnet can be the same as the VPC CIDR, in this case you can launch only one subnet within your VPC
    • CIDR block assigned to the Subnet can be a subset of the VPC CIDR, which allows you to launch multiple subnets within the VPC
    • CIDR blocks assigned to the subnets should not overlap
    • CIDR block size allowed is between
      • /28 netmask (minimum, with 2^4 = 16 available IP addresses) and
      • /16 netmask (maximum, with 2^16 = 65,536 available IP addresses)
    • AWS reserves 5 IP addresses (the first 4 and the last 1) in each Subnet, which are not available for use and cannot be assigned to an instance, e.g. for a Subnet with a CIDR block 10.0.0.0/24 the following five IPs are reserved
      • 10.0.0.0: Network address
      • 10.0.0.1: Reserved by AWS for the VPC router
      • 10.0.0.2: Reserved by AWS for mapping to Amazon-provided DNS
      • 10.0.0.3: Reserved by AWS for future use
      • 10.0.0.255: Network broadcast address. AWS does not support broadcast in a VPC, therefore the address is reserved.
  • Subnet Routing
    • Each Subnet is associated with a route table that controls the traffic.
  • Subnet Security
    • Subnet security can be configured using Security groups and NACLs
    • Security groups work at the instance level, and NACLs work at the subnet level
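
A minimal sketch of carving subnets out of the VPC CIDR, assuming an existing VPC (the vpc-… ID and AZ names below are placeholders) and boto3; the usable-address count reflects the 5 reserved IPs per subnet:

import ipaddress

import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"        # assumed existing VPC with CIDR 10.0.0.0/16

subnets = [
    ("10.0.0.0/24", "us-east-1a"),      # intended public subnet
    ("10.0.1.0/24", "us-east-1b"),      # intended private subnet
]

for cidr, az in subnets:
    usable = ipaddress.ip_network(cidr).num_addresses - 5   # first 4 + last IP reserved
    resp = ec2.create_subnet(VpcId=vpc_id, CidrBlock=cidr, AvailabilityZone=az)
    print(resp["Subnet"]["SubnetId"], cidr, az, f"{usable} usable addresses")

# Optionally auto-assign public IPs for instances launched in the public subnet:
# ec2.modify_subnet_attribute(SubnetId="subnet-...", MapPublicIpOnLaunch={"Value": True})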

VPC & Subnet Sizing

  • VPC supports IPv4 and IPv6 addressing and has different CIDR block size limits for each
  • IPv6 CIDR block can be optionally associated with the VPC
  • VPC IPv4 CIDR block cannot be modified once created i.e. cannot increase or decrease the size of an existing CIDR block.
  • However, secondary CIDR blocks can be associated with the VPC to extend the VPC
  • Limitations
    • allowed block size is between a /28 netmask and /16 netmask.
    • CIDR block must not overlap with any existing CIDR block that’s associated with the VPC.
    • CIDR block must not be the same as or larger than the CIDR range of a route in any of the VPC route tables, e.g. if a route table has a route with destination 10.0.0.0/24, only smaller CIDR blocks like 10.0.0.0/25 can be associated (see the sketch below)
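
For example, extending a VPC with a secondary CIDR block might look like the following sketch (the VPC ID and CIDR are assumptions):

import boto3

ec2 = boto3.client("ec2")

# The secondary CIDR must not overlap existing CIDRs or conflict with route table entries
resp = ec2.associate_vpc_cidr_block(VpcId="vpc-0123456789abcdef0",
                                    CidrBlock="10.1.0.0/16")
association = resp["CidrBlockAssociation"]
print(association["AssociationId"], association["CidrBlock"])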

IP Addresses

Instances launched in the VPC can have Private, Public, and Elastic IP addresses assigned to them; these addresses are properties of the ENI (Elastic Network Interface) rather than of the instance itself

  • Private IP Addresses
    • Private IP addresses are not reachable over the Internet, and can be used for communication only between the instances within the VPC
    • All instances are assigned a private IP address, from the IP address range of the subnet, on the default network interface
    • Primary IP address is associated with the network interface for its lifetime, even when the instance is stopped and restarted and is released only when the instance is terminated
    • Additional Private IP addresses, known as secondary private IP address, can be assigned to the instances and these can be reassigned from one network interface to another
  • Public IP address
    • Public IP addresses are reachable over the Internet, and can be used for communication between instances and the Internet, or with other AWS services that have public endpoints
    • Public IP address assignment to the Instance depends on whether Public IP Addressing is enabled for the Subnet.
    • Public IP address can also be assigned to the Instance by enabling the Public IP addressing during the creation of the instance, which overrides the subnet’s public IP addressing attribute
    • Public IP address is assigned from AWS pool of IP addresses and it is not associated with the AWS account and hence is released when the instance is stopped and restarted or terminated.
  • Elastic IP address
    • Elastic IP addresses are static, persistent public IP addresses that can be associated and disassociated with the instance, as required
    • Elastic IP address is allocated to the VPC and owned by the account unless released.
    • A Network Interface can be assigned either a Public IP or an Elastic IP. If an Elastic IP is assigned to an instance that already has a Public IP, the Public IP is released
    • Elastic IP addresses can be moved from one instance to another, which can be within the same or different VPC within the same account
    • Elastic IPs are charged when not in use, i.e. when not associated, or associated with a stopped instance or an unattached Network Interface
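
A small sketch of allocating and associating an Elastic IP with boto3 (the instance ID is a placeholder):

import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP owned by the account (billed while not in use)
allocation = ec2.allocate_address(Domain="vpc")
print("Allocated EIP:", allocation["PublicIp"])

# Associate it with an instance; any auto-assigned public IP on eth0 is released
ec2.associate_address(AllocationId=allocation["AllocationId"],
                      InstanceId="i-0123456789abcdef0")

# Disassociate and release it later to stop the non-usage charges
# ec2.release_address(AllocationId=allocation["AllocationId"])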

Elastic Network Interface (ENI)

  • Each Instance is attached to a default elastic network interface (Primary Network Interface eth0), which cannot be detached from the instance
  • ENI can include the following attributes
    • Primary private IP address
    • One or more secondary private IP addresses
    • One Elastic IP address per private IP address
    • One public IP address, which can be auto-assigned to the network interface for eth0 when you launch an instance, but only when you create a network interface for eth0 instead of using an existing ENI
    • One or more security groups
    • A MAC address
    • A source/destination check flag
    • A description
  • ENI’s attributes follow the ENI as it is attached or detached from an instance and reattached to another instance. When an ENI is moved from one instance to another, network traffic is redirected to the new instance.
  • Multiple ENIs can be attached to an instance, which is useful for use cases such as:
    • Create a management network.
    • Use network and security appliances in your VPC.
    • Create dual-homed instances with workloads/roles on distinct subnets.
    • Create a low-budget, high-availability solution.
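
For instance, a management-network style secondary ENI could be created and attached as follows (the subnet, security group, and instance IDs are placeholders; the instance and subnet must be in the same AZ):

import boto3

ec2 = boto3.client("ec2")

# Create a secondary ENI with its own security group
eni = ec2.create_network_interface(SubnetId="subnet-0123456789abcdef0",
                                   Groups=["sg-0123456789abcdef0"],
                                   Description="management interface")
eni_id = eni["NetworkInterface"]["NetworkInterfaceId"]

# Attach it as eth1 (DeviceIndex=1); eth0 is the primary ENI and cannot be detached
ec2.attach_network_interface(NetworkInterfaceId=eni_id,
                             InstanceId="i-0123456789abcdef0",
                             DeviceIndex=1)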

Route Tables

  • Route table defines rules, termed as routes, which determine where network traffic from the subnet would be routed
  • Each VPC has an implicit router to route network traffic
  • Each VPC has a Main Route table and can have multiple custom route tables created
  • Each Subnet within a VPC must be associated with a single route table at a time, while a route table can have multiple subnets associated with it
  • A Subnet, if not explicitly associated with a route table, is implicitly associated with the main route table
  • Every route table contains a local route that enables communication within a VPC which cannot be modified or deleted
  • Route priority is decided by matching the most specific route in the route table that matches the traffic
  • Route tables need to be updated to define routes for Internet gateways, Virtual Private gateways, VPC Peering, VPC Endpoints, NAT Devices, etc.
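
To illustrate how the most specific route wins, here is a self-contained longest-prefix-match sketch in Python; the route table contents are assumptions, not the output of any AWS API:

import ipaddress

# Simplified route table: destination CIDR -> target
routes = {
    "10.0.0.0/16": "local",            # implicit local route, cannot be removed
    "0.0.0.0/0": "igw-12345",          # default route to the Internet gateway
    "172.16.0.0/16": "pcx-abcdef",     # route to a peered VPC
}

def lookup(ip: str) -> str:
    # Return the target of the most specific (longest-prefix) matching route
    addr = ipaddress.ip_address(ip)
    matches = [(ipaddress.ip_network(cidr), target)
               for cidr, target in routes.items()
               if addr in ipaddress.ip_network(cidr)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("10.0.1.5"))       # local (10.0.0.0/16 is more specific than 0.0.0.0/0)
print(lookup("172.16.4.9"))     # pcx-abcdef
print(lookup("8.8.8.8"))        # igw-12345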

Internet Gateways – IGW

  • An Internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in the VPC and the Internet.
  • IGW imposes no availability risks or bandwidth constraints on the network traffic.
  • An Internet gateway serves two purposes:
    • To provide a target in the VPC route tables for Internet-routable traffic,
    • To perform network address translation (NAT) for instances that have been assigned public IPv4 addresses.
  • Enabling Internet access to an Instance requires
    • Attaching Internet gateway to the VPC
    • The subnet's route table should have a route pointing to the Internet gateway
    • Instances should have a Public IP or Elastic IP address assigned
    • Security groups and NACLs associated with the Instance should allow relevant traffic
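
Putting the first two requirements together, a sketch with boto3 (the VPC and subnet IDs are placeholders) might be:

import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"
public_subnet_id = "subnet-0123456789abcdef0"

# 1. Create an Internet gateway and attach it to the VPC
igw = ec2.create_internet_gateway()
igw_id = igw["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# 2. Route table with a default route to the IGW, associated with the public subnet
rt = ec2.create_route_table(VpcId=vpc_id)
rt_id = rt["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=public_subnet_id)

# 3 & 4. Instances still need a public/Elastic IP, and their security groups
#        and the subnet NACL must allow the relevant traffic.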

NAT

  • NAT device enables instances in a private subnet to connect to the Internet or other AWS services, but prevents the Internet from initiating connections with the instances.
  • NAT devices do not support IPv6 traffic; use an egress-only Internet gateway instead.
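
A sketch of setting up a NAT gateway for a private subnet (the subnet and route table IDs are placeholders; the NAT gateway itself must live in a public subnet):

import boto3

ec2 = boto3.client("ec2")
public_subnet_id = "subnet-0aaa0aaa0aaa0aaa0"       # public subnet with an IGW route
private_route_table_id = "rtb-0bbb0bbb0bbb0bbb0"    # route table of the private subnet

# A NAT gateway needs an Elastic IP
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(SubnetId=public_subnet_id,
                             AllocationId=eip["AllocationId"])
nat_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the NAT gateway is available, then route the private subnet's
# Internet-bound traffic through it
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])
ec2.create_route(RouteTableId=private_route_table_id,
                 DestinationCidrBlock="0.0.0.0/0",
                 NatGatewayId=nat_id)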

Refer to My Blog Post about VPC NAT

Egress-only Internet gateway

  • Egress-only Internet gateway works as a NAT gateway, but for IPv6 traffic
  • Egress-only Internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows outbound communication over IPv6 from instances in the VPC to the Internet, and prevents the Internet from initiating an IPv6 connection with the instances.
  • An egress-only Internet gateway is for use with IPv6 traffic only. To enable outbound-only Internet communication over IPv4, use a NAT gateway instead.

Shared VPCs

  • VPC sharing allows multiple AWS accounts to create their application resources, such as EC2 instances, RDS databases, Redshift clusters, and AWS Lambda functions, into shared, centrally-managed VPCs.
  • In this model, the account that owns the VPC (owner) shares one or more subnets with other accounts (participants) that belong to the same organization from AWS Organizations.
  • After a subnet is shared, the participants can view, create, modify, and delete their application resources in the subnets shared with them. Participants cannot view, modify, or delete resources that belong to other participants or the VPC owner.

VPC Endpoints

  • VPC endpoint enables the creation of a private connection between the VPC and supported AWS services and VPC endpoint services powered by PrivateLink, using its private IP address
  • Endpoints do not require a public IP address, access over the Internet, NAT device, a VPN connection, or AWS Direct Connect.
  • Traffic between VPC and AWS service does not leave the Amazon network
  • Endpoints are virtual devices, that are horizontally scaled, redundant, and highly available VPC components that allow communication between instances in the VPC and AWS services without imposing availability risks or bandwidth constraints on your network traffic.
  • Endpoints currently do not support cross-region requests; ensure that the endpoint is created in the same region as the S3 bucket
  • AWS currently supports the following types of Endpoints
    • Interface Endpoints – an ENI with a private IP address, powered by AWS PrivateLink
    • Gateway Load Balancer Endpoints – an ENI that routes traffic to a fleet of virtual appliances through a Gateway Load Balancer
    • Gateway Endpoints – a route table target for traffic destined to S3 or DynamoDB
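
As an example, a Gateway endpoint for S3 could be created roughly as follows (the VPC ID, route table ID, and region are assumptions):

import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint for S3: adds a prefix-list route to the given route tables so
# traffic to S3 stays on the Amazon network without an IGW or NAT device
resp = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
print(resp["VpcEndpoint"]["VpcEndpointId"])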

Refer to My Blog Post about VPC Endpoint

VPC Peering

  • A VPC peering connection is a networking connection between two VPCs that enables the routing of traffic between them using private IPv4 addresses or IPv6 addresses.
  • VPC peering connection is a one-to-one relationship between two VPCs and can be established between your own VPCs, or with a VPC in another AWS account in the same or different region.
  • VPC peering helps instances in either VPC communicate with each other as if they were within the same network. AWS uses the existing infrastructure of a VPC to create a peering connection; it is neither a gateway nor a VPN connection and does not rely on a separate piece of physical hardware.
  • VPC peering does not have any separate charges. However, there are data transfer charges.
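
A same-account, same-region peering sketch (the VPC IDs and CIDRs are placeholders; PeerOwnerId / PeerRegion would be added for cross-account or cross-region peering):

import boto3

ec2 = boto3.client("ec2")

# Request the peering connection between two non-overlapping VPCs
peering = ec2.create_vpc_peering_connection(VpcId="vpc-0aaa0aaa0aaa0aaa0",
                                            PeerVpcId="vpc-0bbb0bbb0bbb0bbb0")
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# The accepter side accepts the request
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Each VPC's route table still needs a route to the other VPC's CIDR, e.g.
# ec2.create_route(RouteTableId="rtb-...", DestinationCidrBlock="10.1.0.0/16",
#                  VpcPeeringConnectionId=pcx_id)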

Refer to My Blog Post about VPC Peering

VPC VPN Connections

Refer to My Blog Post about AWS VPC VPN Connections

VPC Security

  • In a VPC, both Security Groups and Network ACLs (NACLs) together help to build a layered network defense.
  • Security groups – Act as a virtual firewall for associated instances, controlling both inbound and outbound traffic at the instance level
  • Network access control lists (NACLs) – Act as a firewall for associated subnets, controlling both inbound and outbound traffic at the subnet level
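
To make the instance-level vs subnet-level distinction concrete, here is a sketch (the security group and NACL IDs are placeholders) that allows inbound HTTPS at both layers; the NACL needs an explicit outbound rule for the ephemeral-port responses because it is stateless:

import boto3

ec2 = boto3.client("ec2")

# Security group (stateful): allow inbound HTTPS; responses are allowed automatically
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                    "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
)

# NACL (stateless): numbered rules, evaluated in order, separate inbound/outbound
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0", RuleNumber=100, Protocol="6",  # 6 = TCP
    RuleAction="allow", Egress=False, CidrBlock="0.0.0.0/0",
    PortRange={"From": 443, "To": 443},
)
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0", RuleNumber=100, Protocol="6",
    RuleAction="allow", Egress=True, CidrBlock="0.0.0.0/0",
    PortRange={"From": 1024, "To": 65535},   # ephemeral ports for return traffic
)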

Security Groups vs NACLs

Refer to My Blog Post about AWS Security Group vs NACLs

VPC Flow logs

  • VPC Flow Logs help capture information about the IP traffic going to and from network interfaces in the VPC and can help in monitoring the traffic or troubleshooting any connectivity issues.
  • Flow log data can be published to CloudWatch Logs, S3, and Kinesis Data Firehose.
  • Flow log can be created for the entire VPC, subnets, or each network interface. If enabled for the entire VPC or a subnet, all the network interfaces within that resource are monitored.
  • Flow log can be configured to capture the type of traffic (accepted traffic, rejected traffic, or all traffic).
  • Flow logs do not capture real-time log streams for network interfaces.
  • Flow log data is collected outside of the path of the network traffic, and therefore does not affect network throughput or latency.
  • Flow logs can be created for network interfaces that are created by other AWS services; for e.g., ELB, RDS, ElastiCache, Redshift, and WorkSpaces.
  • Flow logs do not capture the following traffic
    • Traffic generated by instances when they contact the Amazon DNS server.
    • Traffic generated by a Windows instance for Amazon Windows license activation.
    • Traffic to and from 169.254.169.254 for instance metadata
    • Traffic to and from 169.254.169.123 for the Amazon Time Sync Service.
    • DHCP traffic.
    • Mirrored traffic.
    • Traffic to the reserved IP address for the default VPC router.
    • Traffic between an endpoint network interface and a Network Load Balancer network interface.
  • Troubleshooting traffic flow
    • An ACCEPT record followed by a REJECT record indicates the inbound traffic was accepted by the Security Group and the NACL, but the response was rejected by the NACL outbound (NACLs are stateless)
    • A REJECT record alone indicates the inbound traffic was rejected by either the Security Group or the NACL
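
Enabling flow logs for a whole VPC and publishing them to CloudWatch Logs could look like this sketch (the VPC ID, log group name, and IAM role ARN are assumptions; the role must allow delivery to CloudWatch Logs):

import boto3

ec2 = boto3.client("ec2")

resp = ec2.create_flow_logs(
    ResourceType="VPC",
    ResourceIds=["vpc-0123456789abcdef0"],
    TrafficType="ALL",                       # or ACCEPT / REJECT only
    LogDestinationType="cloud-watch-logs",
    LogGroupName="my-vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",
)
print(resp["FlowLogIds"])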

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. You have a business-to-business web application running in a VPC consisting of an Elastic Load Balancer (ELB), web servers, application servers and a database. Your web application should only accept traffic from predefined customer IP addresses. Which two options meet this security requirement? Choose 2 answers
    1. Configure web server VPC security groups to allow traffic from your customers’ IPs (Web server is behind the ELB and customer IPs will never reach web servers)
    2. Configure your web servers to filter traffic based on the ELB’s “X-forwarded-for” header (get the customer IPs and create a custom filter to restrict access. Refer link)
    3. Configure ELB security groups to allow traffic from your customers’ IPs and deny all outbound traffic (ELB will see the customer IPs so can restrict access, deny all is basically have no rules in outbound traffic, implicit, and its stateful so would work)
    4. Configure a VPC NACL to allow web traffic from your customers’ IPs and deny all outbound traffic (NACL is stateless, deny all will not work)
  2. A user has created a VPC with public and private subnets using the VPC Wizard. The VPC has CIDR 20.0.0.0/16. The private subnet uses CIDR 20.0.0.0/24. Which of the below mentioned entries are required in the main route table to allow the instances in VPC to communicate with each other?
    1. Destination : 20.0.0.0/24 and Target : VPC
    2. Destination : 20.0.0.0/16 and Target : ALL
    3. Destination : 20.0.0.0/0 and Target : ALL
    4. Destination : 20.0.0.0/16 and Target : Local
  3. A user has created a VPC with two subnets: one public and one private. The user is planning to run the patch update for the instances in the private subnet. How can the instances in the private subnet connect to the internet?
    1. Use the internet gateway with a private IP
    2. Allow outbound traffic in the security group for port 80 to allow internet updates
    3. The private subnet can never connect to the internet
    4. Use NAT with an elastic IP
  4. A user has launched an EC2 instance and installed a website with the Apache webserver. The webserver is running but the user is not able to access the website from the Internet. What can be the possible reason for this failure?
    1. The security group of the instance is not configured properly.
    2. The instance is not configured with the proper key-pairs.
    3. The Apache website cannot be accessed from the Internet.
    4. Instance is not configured with an elastic IP.
  5. A user has created a VPC with public and private subnets using the VPC wizard. Which of the below mentioned statements is true in this scenario?
    1. AWS VPC will automatically create a NAT instance with the micro size
    2. VPC bounds the main route table with a private subnet and a custom route table with a public subnet
    3. User has to manually create a NAT instance
    4. VPC bounds the main route table with a public subnet and a custom route table with a private subnet
  6. A user has created a VPC with public and private subnets. The VPC has CIDR 20.0.0.0/16. The private subnet uses CIDR 20.0.1.0/24 and the public subnet uses CIDR 20.0.0.0/24. The user is planning to host a web server in the public subnet (port 80) and a DB server in the private subnet (port 3306). The user is configuring a security group of the NAT instance. Which of the below mentioned entries is not required for the NAT security group?
    1. For Inbound allow Source: 20.0.1.0/24 on port 80
    2. For Outbound allow Destination: 0.0.0.0/0 on port 80
    3. For Inbound allow Source: 20.0.0.0/24 on port 80
    4. For Outbound allow Destination: 0.0.0.0/0 on port 443
  7. A user has created a VPC with CIDR 20.0.0.0/24. The user has used all the IPs of CIDR and wants to increase the size of the VPC. The user has two subnets: public (20.0.0.0/25) and private (20.0.0.128/25). How can the user change the size of the VPC?
    1. The user can delete all the instances of the subnet. Change the size of the subnets to 20.0.0.0/32 and 20.0.1.0/32, respectively. Then the user can increase the size of the VPC using CLI
    2. It is not possible to change the size of the VPC once it has been created (NOTE – You can now increase the VPC size. Read Post)
    3. User can add a subnet with a higher range so that it will automatically increase the size of the VPC
    4. User can delete the subnets first and then modify the size of the VPC
  8. A user has created a VPC with the public and private subnets using the VPC wizard. The VPC has CIDR 20.0.0.0/16. The public subnet uses CIDR 20.0.1.0/24. The user is planning to host a web server in the public subnet (port 80) and a DB server in the private subnet (port 3306). The user is configuring a security group for the public subnet (WebSecGrp) and the private subnet (DBSecGrp). Which of the below mentioned entries is required in the web server security group (WebSecGrp)?
    1. Configure Destination as DB Security group ID (DbSecGrp) for port 3306 Outbound
    2. Configure port 80 for Destination 0.0.0.0/0 Outbound
    3. Configure port 3306 for source 20.0.0.0/24 InBound
    4. Configure port 80 InBound for source 20.0.0.0/16
  9. A user has created a VPC with CIDR 20.0.0.0/16. The user has created one subnet with CIDR 20.0.0.0/16 by mistake. The user is trying to create another subnet of CIDR 20.0.0.1/24. How can the user create the second subnet?
    1. There is no need to update the subnet as VPC automatically adjusts the CIDR of the first subnet based on the second subnet’s CIDR
    2. The user can modify the first subnet CIDR from the console
    3. It is not possible to create a second subnet as one subnet with the same CIDR as the VPC has been created
    4. The user can modify the first subnet CIDR with AWS CLI
  10. A user has setup a VPC with CIDR 20.0.0.0/16. The VPC has a private subnet (20.0.1.0/24) and a public subnet (20.0.0.0/24). The user’s data centre has CIDR of 20.0.54.0/24 and 20.1.0.0/24. If the private subnet wants to communicate with the data centre, what will happen?
    1. It will allow traffic communication on both the CIDRs of the data centre
    2. It will not allow traffic with data centre on CIDR 20.1.0.0/24 but allows traffic communication on 20.0.54.0/24
    3. It will not allow traffic communication on any of the data centre CIDRs
    4. It will allow traffic with data centre on CIDR 20.1.0.0/24 but does not allow on 20.0.54.0/24 (as the CIDR block would be overlapping)
  11. A user has created a VPC with public and private subnets using the VPC wizard. The VPC has CIDR 20.0.0.0/16. The private subnet uses CIDR 20.0.0.0/24 . The NAT instance ID is i-a12345. Which of the below mentioned entries are required in the main route table attached with the private subnet to allow instances to connect with the internet?
    1. Destination: 0.0.0.0/0 and Target: i-a12345
    2. Destination: 20.0.0.0/0 and Target: 80
    3. Destination: 20.0.0.0/0 and Target: i-a12345
    4. Destination: 20.0.0.0/24 and Target: i-a12345
  12. A user has created a VPC with CIDR 20.0.0.0/16 using the wizard. The user has created a public subnet CIDR (20.0.0.0/24) and VPN only subnets CIDR (20.0.1.0/24) along with the VPN gateway (vgw-12345) to connect to the user’s data centre. The user’s data centre has CIDR 172.28.0.0/12. The user has also setup a NAT instance (i-123456) to allow traffic to the internet from the VPN subnet. Which of the below mentioned options is not a valid entry for the main route table in this scenario?
    1. Destination: 20.0.1.0/24 and Target: i-12345
    2. Destination: 0.0.0.0/0 and Target: i-12345
    3. Destination: 172.28.0.0/12 and Target: vgw-12345
    4. Destination: 20.0.0.0/16 and Target: local
  13. A user has created a VPC with CIDR 20.0.0.0/16. The user has created one subnet with CIDR 20.0.0.0/16 in this VPC. The user is trying to create another subnet with the same VPC for CIDR 20.0.0.1/24. What will happen in this scenario?
    1. The VPC will modify the first subnet CIDR automatically to allow the second subnet IP range
    2. It is not possible to create a subnet with the same CIDR as VPC
    3. The second subnet will be created
    4. It will throw a CIDR overlaps error
  14. A user has created a VPC with CIDR 20.0.0.0/16 using the wizard. The user has created both Public and VPN-Only subnets along with hardware VPN access to connect to the user’s data centre. The user has not yet launched any instance as well as modified or deleted any setup. He wants to delete this VPC from the console. Will the console allow the user to delete the VPC?
    1. Yes, the console will delete all the setups and also delete the virtual private gateway
    2. No, the console will ask the user to manually detach the virtual private gateway first and then allow deleting the VPC
    3. Yes, the console will delete all the setups and detach the virtual private gateway
    4. No, since the NAT instance is running
  15. A user has created a VPC with the public and private subnets using the VPC wizard. The VPC has CIDR 20.0.0.0/16. The public subnet uses CIDR 20.0.1.0/24. The user is planning to host a web server in the public subnet (port 80) and a DB server in the private subnet (port 3306). The user is configuring a security group for the public subnet (WebSecGrp) and the private subnet (DBSecGrp). Which of the below mentioned entries is required in the private subnet database security group (DBSecGrp)?
    1. Allow Inbound on port 3306 for Source Web Server Security Group (WebSecGrp)
    2. Allow Inbound on port 3306 from source 20.0.0.0/16
    3. Allow Outbound on port 3306 for Destination Web Server Security Group (WebSecGrp.
    4. Allow Outbound on port 80 for Destination NAT Instance IP
  16. A user has created a VPC with a subnet and a security group. The user has launched an instance in that subnet and attached a public IP. The user is still unable to connect to the instance. The internet gateway has also been created. What can be the reason for the error?
    1. The internet gateway is not configured with the route table
    2. The private IP is not present
    3. The outbound traffic on the security group is disabled
    4. The internet gateway is not configured with the security group
  17. A user has created a subnet in VPC and launched an EC2 instance within it. The user has not selected the option to assign the IP address while launching the instance. Which of the below mentioned statements is true with respect to the Instance requiring access to the Internet?
    1. The instance will always have a public DNS attached to the instance by default
    2. The user can directly attach an elastic IP to the instance
    3. The instance will never launch if the public IP is not assigned
    4. The user would need to create an internet gateway and then attach an elastic IP to the instance to connect from internet
  18. A user has created a VPC with public and private subnets using the VPC wizard. Which of the below mentioned statements is not true in this scenario?
    1. VPC will create a routing instance and attach it with a public subnet
    2. VPC will create two subnets
    3. VPC will create one internet gateway and attach it to VPC
    4. VPC will launch one NAT instance with an elastic IP
  19. A user has created a VPC with the public subnet. The user has created a security group for that VPC. Which of the below mentioned statements is true when a security group is created?
    1. It can connect to the AWS services, such as S3 and RDS by default
    2. It will have all the inbound traffic by default
    3. It will have all the outbound traffic by default
    4. It will by default allow traffic to the internet gateway
  20. A user has created a VPC with CIDR 20.0.0.0/16 using VPC Wizard. The user has created a public CIDR (20.0.0.0/24) and a VPN only subnet CIDR (20.0.1.0/24) along with the hardware VPN access to connect to the user’s data centre. Which of the below mentioned components is not present when the VPC is setup with the wizard?
    1. Main route table attached with a VPN only subnet
    2. A NAT instance configured to allow the VPN subnet instances to connect with the internet
    3. Custom route table attached with a public subnet
    4. An internet gateway for a public subnet
  21. A user has created a VPC with public and private subnets using the VPC wizard. The user has not launched any instance manually and is trying to delete the VPC. What will happen in this scenario?
    1. It will not allow to delete the VPC as it has subnets with route tables
    2. It will not allow to delete the VPC since it has a running route instance
    3. It will terminate the VPC along with all the instances launched by the wizard
    4. It will not allow to delete the VPC since it has a running NAT instance
  22. A user has created a public subnet with VPC and launched an EC2 instance within it. The user is trying to delete the subnet. What will happen in this scenario?
    1. It will delete the subnet and make the EC2 instance as a part of the default subnet
    2. It will not allow the user to delete the subnet until the instances are terminated
    3. It will delete the subnet as well as terminate the instances
    4. Subnet can never be deleted independently, but the user has to delete the VPC first
  23. A user has created a VPC with CIDR 20.0.0.0/24. The user has created a public subnet with CIDR 20.0.0.0/25 and a private subnet with CIDR 20.0.0.128/25. The user has launched one instance each in the private and public subnets. Which of the below mentioned options cannot be the correct IP address (private IP) assigned to an instance in the public or private subnet?
    1. 20.0.0.255
    2. 20.0.0.132
    3. 20.0.0.122
    4. 20.0.0.55
  24. A user has created a VPC with CIDR 20.0.0.0/16. The user has created public and VPN only subnets along with hardware VPN access to connect to the user’s datacenter. The user wants to make so that all traffic coming to the public subnet follows the organization’s proxy policy. How can the user make this happen?
    1. Setting up a NAT with the proxy protocol and configure that the public subnet receives traffic from NAT
    2. Setting up a proxy policy in the internet gateway connected with the public subnet
    3. It is not possible to setup the proxy policy for a public subnet
    4. Setting the route table and security group of the public subnet which receives traffic from a virtual private gateway
  25. A user has created a VPC with CIDR 20.0.0.0/16 using the wizard. The user has created a public subnet CIDR (20.0.0.0/24) and VPN only subnets CIDR (20.0.1.0/24) along with the VPN gateway (vgw-12345) to connect to the user’s data centre. Which of the below mentioned options is a valid entry for the main route table in this scenario?
    1. Destination: 20.0.0.0/24 and Target: vgw-12345
    2. Destination: 20.0.0.0/16 and Target: ALL
    3. Destination: 20.0.1.0/16 and Target: vgw-12345
    4. Destination: 0.0.0.0/0 and Target: vgw-12345
  26. When attached to an Amazon VPC, which two components provide connectivity with external networks? Choose 2 answers
    1. Elastic IPs (EIP) (Does not provide connectivity, public IP address will do as well)
    2. NAT Gateway (NAT) (Not Attached to VPC and still needs IGW)
    3. Internet Gateway (IGW)
    4. Virtual Private Gateway (VGW)
  27. You are attempting to connect to an instance in Amazon VPC without success. You have already verified that the VPC has an Internet Gateway (IGW), the instance has an associated Elastic IP (EIP), and correct security group rules are in place. Which VPC component should you evaluate next?
    1. The configuration of a NAT instance
    2. The configuration of the Routing Table
    3. The configuration of the internet Gateway (IGW)
    4. The configuration of SRC/DST checking
  28. If you want to launch Amazon Elastic Compute Cloud (EC2) Instances and assign each Instance a predetermined private IP address you should:
    1. Assign a group or sequential Elastic IP address to the instances
    2. Launch the instances in a Placement Group
    3. Launch the instances in the Amazon virtual Private Cloud (VPC)
    4. Use standard EC2 instances since each instance gets a private Domain Name Service (DNS) already
    5. Launch the Instance from a private Amazon Machine image (AMI)
  29. A user has recently started using EC2. The user launched one EC2 instance in the default subnet in EC2-VPC Which of the below mentioned options is not attached or available with the EC2 instance when it is launched?
    1. Public IP address
    2. Internet gateway
    3. Elastic IP
    4. Private IP address
  30. A user has created a VPC with CIDR 20.0.0.0/24. The user has created a public subnet with CIDR 20.0.0.0/25. The user is trying to create the private subnet with CIDR 20.0.0.128/25. Which of the below mentioned statements is true in this scenario?
    1. It will not allow the user to create the private subnet due to a CIDR overlap
    2. It will allow the user to create a private subnet with CIDR as 20.0.0.128/25
    3. This statement is wrong as AWS does not allow CIDR 20.0.0.0/25
    4. It will not allow the user to create a private subnet due to a wrong CIDR range
  31. A user has created a VPC with CIDR 20.0.0.0/16 with only a private subnet and VPN connection using the VPC wizard. The user wants to connect to the instance in a private subnet over SSH. How should the user define the security rule for SSH?
    1. Allow Inbound traffic on port 22 from the user’s network
    2. The user has to create an instance in EC2 Classic with an elastic IP and configure the security group of a private subnet to allow SSH from that elastic IP
    3. The user can connect to a instance in a private subnet using the NAT instance
    4. Allow Inbound traffic on port 80 and 22 to allow the user to connect to a private subnet over the Internet
  32. A company wants to implement their website in a virtual private cloud (VPC). The web tier will use an Auto Scaling group across multiple Availability Zones (AZs). The database will use Multi-AZ RDS MySQL and should not be publicly accessible. What is the minimum number of subnets that need to be configured in the VPC?
    1. 1
    2. 2
    3. 3
    4. 4 (2 public subnets for web instances in multiple AZs and 2 private subnets for RDS Multi-AZ)
  33. Which of the following are characteristics of Amazon VPC subnets? Choose 2 answers
    1. Each subnet maps to a single Availability Zone
    2. A CIDR block mask of /25 is the smallest range supported
    3. Instances in a private subnet can communicate with the Internet only if they have an Elastic IP.
    4. By default, all subnets can route between each other, whether they are private or public
    5. Each subnet spans at least 2 Availability zones to provide a high-availability environment
  34. You need to design a VPC for a web-application consisting of an Elastic Load Balancer (ELB). a fleet of web/application servers, and an RDS database The entire Infrastructure must be distributed over 2 availability zones. Which VPC configuration works while assuring the database is not available from the Internet?
    1. One public subnet for ELB one public subnet for the web-servers, and one private subnet for the database
    2. One public subnet for ELB two private subnets for the web-servers, two private subnets for RDS
    3. Two public subnets for ELB two private subnets for the web-servers and two private subnets for RDS
    4. Two public subnets for ELB two public subnets for the web-servers, and two public subnets for RDS
  35. You have deployed a three-tier web application in a VPC with a CIDR block of 10.0.0.0/28. You initially deploy two web servers, two application servers, two database servers and one NAT instance for a total of seven EC2 instances. The web, application and database servers are deployed across two availability zones (AZs). You also deploy an ELB in front of the two web servers, and use Route53 for DNS. Web traffic gradually increases in the first few days following the deployment, so you attempt to double the number of instances in each tier of the application to handle the new load; unfortunately some of these new instances fail to launch. Which of the following could be the root cause? (Choose 2 answers) [PROFESSIONAL]
    1. The Internet Gateway (IGW) of your VPC has scaled-up adding more instances to handle the traffic spike, reducing the number of available private IP addresses for new instance launches.
    2. AWS reserves one IP address in each subnet’s CIDR block for Route53 so you do not have enough addresses left to launch all of the new EC2 instances.
    3. AWS reserves the first and the last private IP address in each subnet’s CIDR block so you do not have enough addresses left to launch all of the new EC2 instances.
    4. The ELB has scaled-up. Adding more instances to handle the traffic reducing the number of available private IP addresses for new instance launches
    5. AWS reserves the first four and the last IP address in each subnet’s CIDR block so you do not have enough addresses left to launch all of the new EC2 instances.
  36. A user wants to access RDS from an EC2 instance using IP addresses. Both RDS and EC2 are in the same region, but different AZs. Which of the below mentioned options help configure that the instance is accessed faster?
    1. Configure the Private IP of the Instance in RDS security group (Recommended as the data is transferred within the Amazon network and not through the internet – Refer link)
    2. Security group of EC2 allowed in the RDS security group
    3. Configuring the elastic IP of the instance in RDS security group
    4. Configure the Public IP of the instance in RDS security group
  37. In regards to VPC, select the correct statement:
    1. You can associate multiple subnets with the same Route Table.
    2. You can associate multiple subnets with the same Route Table, but you can’t associate a subnet with only one Route Table.
    3. You can’t associate multiple subnets with the same Route Table.
    4. None of these.
  38. You need to design a VPC for a web-application consisting of an ELB a fleet of web application servers, and an RDS DB. The entire infrastructure must be distributed over 2 AZ. Which VPC configuration works while assuring the DB is not available from the Internet?
    1. One Public Subnet for ELB, one Public Subnet for the web-servers, and one private subnet for the DB
    2. One Public Subnet for ELB, two Private Subnets for the web-servers, and two private subnets for the RDS
    3. Two Public Subnets for ELB, two private Subnet for the web-servers, and two private subnet for the RDS
    4. Two Public Subnets for ELB, two Public Subnet for the web-servers, and two public subnets for the RDS
  39. You have an Amazon VPC with one private subnet and one public subnet with a Network Address Translator (NAT) server. You are creating a group of Amazon Elastic Cloud Compute (EC2) instances that configure themselves at startup via downloading a bootstrapping script from Amazon Simple Storage Service (S3) that deploys an application via GIT. Which setup provides the highest level of security?
    1. Amazon EC2 instances in private subnet, no EIPs, route outgoing traffic via the NAT
    2. Amazon EC2 instances in public subnet, no EIPs, route outgoing traffic via the Internet Gateway (IGW)
    3. Amazon EC2 instances in private subnet, assign EIPs, route outgoing traffic via the Internet Gateway (IGW)
    4. Amazon EC2 instances in public subnet, assign EIPs, route outgoing traffic via the NAT
  40. You have launched an Amazon Elastic Compute Cloud (EC2) instance into a public subnet with a primary private IP address assigned, an internet gateway is attached to the VPC, and the public route table is configured to send all Internet-based traffic to the Internet gateway. The instance security group is set to allow all outbound traffic but cannot access the Internet. Why is the Internet unreachable from this instance?
    1. The instance does not have a public IP address
    2. The Internet gateway security group must allow all outbound traffic.
    3. The instance security group must allow all inbound traffic.
    4. The instance “Source/Destination check” property must be enabled.
  41. You have an environment that consists of a public subnet using Amazon VPC and 3 instances that are running in this subnet. These three instances can successfully communicate with other hosts on the Internet. You launch a fourth instance in the same subnet, using the same AMI and security group configuration you used for the others, but find that this instance cannot be accessed from the internet. What should you do to enable Internet access?
    1. Deploy a NAT instance into the public subnet.
    2. Assign an Elastic IP address to the fourth instance
    3. Configure a publically routable IP Address in the host OS of the fourth instance.
    4. Modify the routing table for the public subnet.
  42. You have a load balancer configured for VPC, and all back-end Amazon EC2 instances are in service. However, your web browser times out when connecting to the load balancer’s DNS name. Which options are probable causes of this behavior? Choose 2 answers
    1. The load balancer was not configured to use a public subnet with an Internet gateway configured
    2. The Amazon EC2 instances do not have a dynamically allocated private IP address
    3. The security groups or network ACLs are not property configured for web traffic.
    4. The load balancer is not configured in a private subnet with a NAT instance.
    5. The VPC does not have a VGW configured.
  43. When will you incur costs with an Elastic IP address (EIP)?
    1. When an EIP is allocated.
    2. When it is allocated and associated with a running instance.
    3. When it is allocated and associated with a stopped instance.
    4. Costs are incurred regardless of whether the EIP is associated with a running instance.
  44. A company currently has a VPC with EC2 Instances. A new instance being launched, which will host an application that works on IPv6. You need to ensure that this instance can initiate outgoing traffic to the Internet. At the same time, you need to ensure that no incoming connection can be initiated from the Internet on to the instance. Which of the following would you add to the VPC for this requirement?
    1. A NAT Instance
    2. A NAT Gateway
    3. An Internet Gateway
    4. An egress-only Internet gateway

References

AWS_VPC_User_Guide

AWS Global Infrastructure

  • AWS Global Infrastructure enables Amazon Services to be hosted in multiple locations worldwide.
  • AWS Global Infrastructure provides the ability to place resources and data in multiple locations to improve performance and provide fault tolerance, high availability, and cost optimization.
  • AWS Global Infrastructure includes Regions, Availability Zones, Edge Locations, Regional Edge Caches, and Local Zones.

Regions

  • AWS allows customers to place instances and store data within multiple geographic regions called Regions.
  • Each region
    • is an independent collection of AWS resources in a defined geography.
    • is a separate geographic area and is completely independent
    • is a physical location around the world with clusters of data centers
    • is designed to be completely isolated from the other regions & helps achieve the greatest possible fault tolerance and stability
  • Inter-region communication is across the public Internet and appropriate measures should be taken to protect the data using encryption.
  • Data transfer between regions is charged at the Internet data transfer rate for both the sending and the receiving instances.
  • Resources aren’t replicated across regions unless done explicitly.
  • The selection of a Region can be driven by a lot of factors
    • Latency – Regions can be selected to be close to the targeted user base to reduce data latency
    • Cost – AWS provides the same set of services across all regions; however, costs differ from region to region depending upon the costs (land, electricity, bandwidth, etc.) incurred by Amazon, so one region can be cheaper than another
    • Legal Compliance – A lot of the countries enforce compliance and regulatory requirements for data to reside within the region itself
    • Features – As not all the regions provide all the AWS features and services, the region selection can depend on the Services supported by the region

Availability Zones

  • Each Region consists of multiple, isolated locations known as Availability Zones and each Availability Zone runs on its own physically distinct, independent infrastructure and is engineered to be highly reliable.
  • Each Region has multiple, isolated Availability Zones (ranging from 2-6).
  • Each AZ has independent power, cooling, and physical security and is connected via redundant, ultra-low-latency networks.
  • Each AZ is physically isolated from the others so that an uncommon disaster such as fire, or earthquake would only affect a single AZ.
  • AZs are geographically separated from each other, within the same region, and act as an independent failure zone.
  • AZs are redundantly connected to multiple tier-1 transit providers.
  • All AZs in an AWS Region are interconnected with high-bandwidth, low-latency networking, over fully redundant, dedicated metro fiber providing high-throughput, low-latency networking between AZs.
  • All traffic between AZs is encrypted.
  • Multi-AZ feature, distribution of resources across multiple AZs, can be used to distribute instances across multiple AZ to provide High Availability
  • AWS ensures that resources are distributed across the AZs for a region by independently mapping AZs to identifiers for each account, e.g. the us-east-1a AZ for one account might not be the same location as the us-east-1a AZ for another account. There’s no way for you to coordinate AZs between accounts.

Edge Locations

  • Edge locations are locations maintained by AWS through a worldwide network of data centers for the distribution of content.
  • Edge locations are connected to the AWS Regions through the AWS network backbone – fully redundant, multiple 100GbE parallel fiber that circles the globe and links with tens of thousands of networks for improved origin fetches and dynamic content acceleration.
  • These locations are present in most of the major cities around the world and are used by CloudFront (CDN) to distribute content to end users to reduce latency.

AWS Local Zones

  • AWS Local Zones place compute, storage, database, and other select AWS services closer to end-users.
  • Local Zones allow running highly demanding applications that require single-digit millisecond latencies to the end-users such as media & entertainment content creation, real-time gaming, reservoir simulations, electronic design automation, and machine learning.
  • Each AWS Local Zone location is an extension of an AWS Region where latency-sensitive applications can be hosted using AWS services such as EC2, VPC, EBS, File Storage, and ELB in geographic proximity to end-users.
  • AWS Local Zones provide a high-bandwidth, secure connection between local workloads and those running in the AWS Region.
  • AWS Local Zones helps to seamlessly connect to the full range of services in the AWS Region such as S3 and DynamoDB through the same APIs and toolsets over AWS’s private and high bandwidth network backbone.

AWS Wavelength

  • AWS Wavelength embeds AWS compute and storage services within 5G networks, providing mobile edge computing infrastructure for developing, deploying, and scaling ultra-low-latency applications.
  • AWS Wavelength helps seamlessly access the breadth of AWS services in the region.
  • AWS Wavelength brings AWS services to the edge of the 5G network, minimizing the latency to connect to an application from a mobile device.
  • Application traffic can reach application servers running in Wavelength Zones without leaving the mobile provider’s network, avoiding the extra network hops to the Internet that can result in latencies of more than 100 milliseconds and prevent customers from taking full advantage of the bandwidth and latency advancements of 5G.
  • AWS developers can deploy the applications to Wavelength Zones, which enables developers to build applications that deliver single-digit millisecond latencies to mobile devices and end-users.
  • AWS Wavelength helps deliver applications that require single-digit millisecond latencies such as game and live video streaming, machine learning inference at the edge, and augmented and virtual reality (AR/VR).
  • Wavelength Zones are not available in every Region.

AWS Outposts

  • AWS Outposts is a fully managed service that extends AWS infrastructure, services, APIs, and tools to customer premises.
  • Outposts bring native AWS services, infrastructure, and operating models to virtually any data center, co-location space, or on-premises facility.
  • Outposts provide the same AWS APIs, tools, and infrastructure across on-premises and AWS cloud to deliver a truly consistent hybrid experience
  • AWS operates, monitors, and manages this capacity as part of an AWS Region.
  • Outposts are designed for connected environments and can be used to support workloads that need to remain on-premises due to low latency or local data processing needs.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. George has launched three EC2 instances inside the US-East-1a zone with his AWS account. Ray has launched two EC2 instances in the US-East-1a zone with his AWS account. Which of the below mentioned statements will help George and Ray understand the availability zone (AZ) concept better?
    1. The instances of George and Ray will be running in the same data centre.
    2. All the instances of George and Ray can communicate over a private IP with a minimal cost
    3. All the instances of George and Ray can communicate over a private IP without any cost
    4. us-east-1a region of George and Ray can be different availability zones (Refer link – An Availability Zone is represented by a region code followed by a letter identifier; for example, us-east-1a. To ensure that resources are distributed across the Availability Zones for a region, we independently map Availability Zones to identifiers for each account. For example, your Availability Zone us-east-1a might not be the same location as us-east-1a for another account. There’s no way for you to coordinate Availability Zones between accounts.)

Reference

AWS_Global_Infrastructure

AWS Auto Scaling

Auto Scaling Overview

  • Auto Scaling provides the ability to ensure the correct number of EC2 instances are always running to handle the load of the application
  • Auto Scaling helps
    • achieve better fault tolerance, better availability, and cost management.
    • specify scaling policies that can be used to launch and terminate EC2 instances to handle any increase or decrease in demand.
  • Auto Scaling attempts to distribute instances evenly between the AZs that are enabled for the Auto Scaling group.
  • Auto Scaling does this by attempting to launch new instances in the AZ with the fewest instances. If the attempt fails, it attempts to launch the instances in another AZ until it succeeds.

Auto Scaling Components

Auto Scaling Groups – ASG

  • Auto Scaling groups are the core of Auto Scaling and contain a collection of EC2 instances that share similar characteristics and are treated as a logical grouping for the purposes of automatic scaling and management.
  • ASG requires
    • Launch configuration OR Launch Template
      • determine the EC2 template to use for launching the instance
    • Minimum & Maximum capacity
      • determine the lower and upper bounds for the number of instances when a scaling policy is applied.
      • The number of instances cannot grow or shrink beyond these boundaries
    • Desired capacity
      • to determine the number of instances the ASG must maintain at all times. If missing, it equals the minimum size. 
      • Desired capacity is different from minimum capacity.
      • An Auto Scaling group’s desired capacity is the default number of instances that should be running. A group’s minimum capacity is the fewest number of instances the group can have running
    • Availability Zones or Subnets in which the instances will be launched.
    • Metrics & Health Checks
      • metrics to determine when it should launch or terminate instances and health checks to determine if the instance is healthy or not
  • ASG starts by launching a desired capacity of instances and maintains this number by performing periodic health checks.
  • If an instance becomes unhealthy, the ASG terminates and launches a new instance.
  • ASG can also use scaling policies to increase or decrease the number of instances automatically to meet changing demands
  • An ASG can contain EC2 instances in one or more AZs within the same region.
  • ASGs cannot span multiple regions.
  • ASG can launch On-Demand Instances, Spot Instances, or both when configured to use a launch template.
  • To merge separate single-zone ASGs into a single ASG spanning multiple AZs, rezone one of the single-zone groups into a multi-zone group, and then delete the other groups. This process works for groups with or without a load balancer, as long as the new multi-zone group is in one of the same AZs as the original single-zone groups.
  • ASG can be associated with a single launch configuration or template
  • As the Launch Configuration can’t be modified once created, the only way to update the Launch Configuration for an ASG is to create a new one and associate it with the ASG.
  • When the launch configuration for the ASG is changed, any new instances launched, use the new configuration parameters, but the existing instances are not affected.
  • ASG can be deleted from the CLI only if it has no running instances; otherwise, the minimum and desired capacity must first be set to 0. This is handled automatically when deleting an ASG from the AWS Management Console.
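
A minimal ASG creation sketch with boto3 (the launch template name, subnet IDs, and capacities are assumptions):

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=2,                                   # lower bound
    MaxSize=6,                                   # upper bound
    DesiredCapacity=2,                           # defaults to MinSize if omitted
    VPCZoneIdentifier="subnet-0aaa0aaa0aaa0aaa0,subnet-0bbb0bbb0bbb0bbb0",  # subnets in different AZs
    HealthCheckType="EC2",
    HealthCheckGracePeriod=300,
)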

Launch Configuration

  • Launch configuration is an instance configuration template that an ASG uses to launch EC2 instances.
  • Launch configuration is similar to EC2 configuration and involves the selection of the Amazon Machine Image (AMI), block devices, key pair, instance type, security groups, user data, EC2 instance monitoring, instance profile, kernel, ramdisk, the instance tenancy, whether the instance has a public IP address, and is EBS-optimized.
  • Launch configuration can be associated with multiple ASGs
  • Launch configuration can’t be modified after creation and needs to be created new if any modification is required.
  • Basic or detailed monitoring for the instances in the ASG can be enabled when a launch configuration is created.
  • By default, basic monitoring is enabled when you create the launch configuration using the AWS Management Console, and detailed monitoring is enabled when you create the launch configuration using the AWS CLI or an API
  • AWS recommends using Launch Template instead.
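Since a launch configuration is immutable, an update really means creating a new configuration and pointing the ASG at it. A hedged boto3 sketch of that flow; the configuration name, AMI ID, security group ID, and ASG name are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Create the replacement launch configuration (launch configurations
# cannot be modified after creation).
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc-v2",
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    SecurityGroups=["sg-0123456789abcdef0"],
    InstanceMonitoring={"Enabled": True},    # detailed monitoring
    AssociatePublicIpAddress=False,
    EbsOptimized=False,
)

# Point the ASG at the new configuration; existing instances keep the old
# configuration, only newly launched instances use web-lc-v2.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="my-asg",
    LaunchConfigurationName="web-lc-v2",
)
```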

Launch Template

  • A Launch Template is similar to a launch configuration, with additional features, and is recommended by AWS.
  • Launch Template allows multiple versions of a template to be defined.
  • With versioning, a subset of the full set of parameters can be created and then reused to create other templates or template versions for e.g, a default template that defines common configuration parameters can be created and allow the other parameters to be specified as part of another version of the same template.
  • Launch Template allows the selection of both Spot and On-Demand Instances or multiple instance types.
  • Launch templates support EC2 Dedicated Hosts. Dedicated Hosts are physical servers with EC2 instance capacity that are dedicated to your use.
  • Launch templates provide the following features
    • Support for multiple instance types and purchase options in a single ASG.
    • Launching Spot Instances with the capacity-optimized allocation strategy.
    • Support for launching instances into existing Capacity Reservations through an ASG.
    • Support for unlimited mode for burstable performance instances.
    • Support for Dedicated Hosts.
    • Combining CPU architectures such as Intel, AMD, and ARM (Graviton2)
    • Improved governance through IAM controls and versioning.
    • Automating instance deployment with Instance Refresh.
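A rough boto3 sketch of launch template versioning as described above: a base template with the common parameters and a second version that overrides only the instance type. The template name, AMI ID, and security group ID are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Base template holding the common configuration parameters.
ec2.create_launch_template(
    LaunchTemplateName="web-template",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",
        "InstanceType": "t3.micro",
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)

# A new version based on version 1 that only changes the instance type;
# everything else is inherited from the source version.
ec2.create_launch_template_version(
    LaunchTemplateName="web-template",
    SourceVersion="1",
    LaunchTemplateData={"InstanceType": "t3.small"},
)
```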

Auto Scaling Launch Configuration vs Launch Template

Auto Scaling Launch Template vs Launch Configuration

Auto Scaling Policies

Refer blog post @ Auto Scaling Policies

Auto Scaling Cooldown Period

  • Auto Scaling Cooldown period is a configurable setting for the ASG that helps ensure Auto Scaling doesn’t launch or terminate additional instances before the previous scaling activity takes effect, allowing the newly launched instances to start handling traffic and reduce the load.
  • When the ASG dynamically scales using a simple scaling policy and launches an instance, Auto Scaling waits for the cooldown period (default 300 seconds) to complete before resuming scaling activities.
  • Example Use Case
    • You configure a scale out alarm to increase the capacity, if the CPU utilization increases more than 80%
    • A CPU spike occurs and causes the alarm to be triggered, Auto Scaling launches a new instance
    • However, it would take time for the newly launched instance to be configured, instantiated, and started, let’s say 5 mins
    • Without a cooldown period, if another CPU spike occurs, Auto Scaling would launch yet another instance, and this could continue for the 5 minutes until the previously launched instance is up, running, and handling traffic
    • With a cooldown period, Auto Scaling would suspend the activity for the specified time period enabling the newly launched instance to start handling traffic and reduce the load.
    • After the cooldown period, Auto Scaling resumes acting on the alarms
  • When manually scaling the ASG, the default is not to wait for the cooldown period but can be overridden to honour the cooldown period.
  • Note that if an instance becomes unhealthy, Auto Scaling does not wait for the cooldown period to complete before replacing the unhealthy instance.
  • Cooldown periods are automatically applied to dynamic scaling activities for simple scaling policies and are not supported for step scaling policies.
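A minimal boto3 sketch of the cooldown settings discussed above: the group’s default cooldown plus a simple scaling policy with its own cooldown override; the ASG name, policy name, and values are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Default cooldown (seconds) applied to simple scaling activities for the group.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="my-asg",
    DefaultCooldown=300,
)

# A simple scaling policy can override the group default with its own cooldown.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-asg",
    PolicyName="scale-out-on-cpu",
    PolicyType="SimpleScaling",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
    Cooldown=600,
)
```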

Auto Scaling Termination Policy

  • Termination policy helps Auto Scaling decide which instances it should terminate first when Auto Scaling automatically scales in.
  • Auto Scaling specifies a default termination policy and also provides the ability to create a customized one.

Default Termination Policy

The default termination policy helps ensure that the instances span Availability Zones evenly; instances are selected for termination as follows:

  1. Selection of Availability Zone
    • in a multi-AZ environment, selects the AZ with the most instances and at least one instance that is not protected from scale in.
    • selects the AZ with instances that use the oldest launch configuration, if there is more than one AZ with the same number of instances
  2. Selection of an Instance within the Availability Zone
    • terminates the unprotected instance using the oldest launch configuration if one exists.
    • terminates the unprotected instance closest to the next billing hour, if multiple instances use the oldest launch configuration. This helps maximize the use of EC2 instances that have an hourly charge while minimizing the number of hours billed for EC2 usage.
    • terminates instances at random, if more than one unprotected instance is closest to the next billing hour.

Customized Termination Policy

  1. Auto Scaling first assesses the AZs for any imbalance. If an AZ has more instances than the other AZs that are used by the group, then it applies the specified termination policy on the instances from the imbalanced AZ
  2. If the Availability Zones used by the group are balanced, then Auto Scaling applies the specified termination policy.
  3. The following customized termination policies are supported
    1. OldestInstance – terminates the oldest instance in the group and can be useful to upgrade to new instance types
    2. NewestInstance – terminates the newest instance in the group and can be useful when testing a new launch configuration
    3. OldestLaunchConfiguration – terminates instances that have the oldest launch configuration
    4. OldestLaunchTemplate – terminates instances that have the oldest launch template
    5. ClosestToNextInstanceHour – terminates instances that are closest to the next billing hour and helps to maximize the use of your instances and manage costs.
    6. Default – terminates as per the default termination policy
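Termination policies are set on the ASG and evaluated in the order listed; a small boto3 sketch (the ASG name is a placeholder).

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Prefer terminating instances on the oldest launch template first,
# then fall back to the default termination policy.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="my-asg",
    TerminationPolicies=["OldestLaunchTemplate", "Default"],
)
```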

Instance Refresh

  • Instance refresh can be used to update the instances in the ASG instead of manually replacing instances a few at a time.
  • An instance refresh can be helpful when you have a new AMI or a new user data script.
  • Instance refresh also helps configure the minimum healthy percentage, instance warmup, and checkpoints.
  • To use an instance refresh
    • Create a new launch template that specifies the new AMI or user data script.
    • Start an instance refresh to begin updating the instances in the group immediately.
    • EC2 Auto Scaling starts performing a rolling replacement of the instances.
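A hedged boto3 sketch of starting an instance refresh with a minimum healthy percentage and instance warmup; the ASG name and the values are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Rolling replacement of instances, e.g. after updating the launch template.
response = autoscaling.start_instance_refresh(
    AutoScalingGroupName="my-asg",
    Strategy="Rolling",
    Preferences={
        "MinHealthyPercentage": 90,  # keep at least 90% of desired capacity in service
        "InstanceWarmup": 300,       # seconds before a new instance counts towards capacity
    },
)
print(response["InstanceRefreshId"])
```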

Instance Protection

  • Instance protection controls whether Auto Scaling can terminate a particular instance or not.
  • Instance protection can be enabled on an ASG or an individual instance as well, at any time
  • Instances launched within an ASG with Instance protection enabled would inherit the property.
  • Instance protection starts as soon as the instance is InService and if the Instance is detached, it loses its Instance protection
  • If all instances in an ASG are protected from termination during scale in and a scale-in event occurs, Auto Scaling decrements the desired capacity but cannot terminate any instance until its scale-in protection is removed.
  • Instance protection does not protect instances in the following cases
    • Manual termination through the EC2 console, the terminate-instances command, or the TerminateInstances API.
    • If it fails health checks and must be replaced
    • Spot instances in an ASG from interruption
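A minimal boto3 sketch of enabling scale-in protection, both as the group default for newly launched instances and for one specific running instance; the ASG name and instance ID are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# All instances launched by the group from now on are protected from scale in.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="my-asg",
    NewInstancesProtectedFromScaleIn=True,
)

# Toggle protection for a specific running instance.
autoscaling.set_instance_protection(
    AutoScalingGroupName="my-asg",
    InstanceIds=["i-0123456789abcdef0"],
    ProtectedFromScaleIn=True,
)
```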

Standby State

Auto Scaling allows putting the InService instances in the Standby state during which the instance is still a part of the ASG but does not serve any requests. This can be used to either troubleshoot an instance or update an instance and return the instance back to service.

  • An instance can be put into Standby state and it will continue to remain in the Standby state unless exited.
  • When an instance enters the Standby state, Auto Scaling, by default, decrements the desired capacity for the group so that it does not launch a replacement instance. If the no-decrement option is selected, it launches a replacement instance instead.
  • When the instance is in the standby state, the instance can be updated or used for troubleshooting.
  • If a load balancer is associated with Auto Scaling, the instance is automatically deregistered when the instance is in Standby state and registered again when the instance exits the Standby state
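A small boto3 sketch of moving an instance into Standby for troubleshooting and returning it to service; the ASG name and instance ID are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Move an InService instance to Standby; decrementing the desired capacity
# stops Auto Scaling from launching a replacement while it is out of service.
autoscaling.enter_standby(
    AutoScalingGroupName="my-asg",
    InstanceIds=["i-0123456789abcdef0"],
    ShouldDecrementDesiredCapacity=True,
)

# ... troubleshoot or update the instance, then put it back into service.
autoscaling.exit_standby(
    AutoScalingGroupName="my-asg",
    InstanceIds=["i-0123456789abcdef0"],
)
```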

Suspension

  • Auto Scaling processes can be suspended and then resumed. This can be very useful to investigate a configuration problem or debug an issue with the application, without triggering the Auto Scaling process.
  • Auto Scaling also performs Administrative Suspension where it would suspend processes for ASGs if the ASG has been trying to launch instances for over 24 hours but has not succeeded in launching any instances.
  • Auto Scaling processes include
    • Launch – Adds a new EC2 instance to the group, increasing its capacity.
    • Terminate – Removes an EC2 instance from the group, decreasing its capacity.
    • HealthCheck – Checks the health of the instances.
    • ReplaceUnhealthy – Terminates instances that are marked as unhealthy and subsequently creates new instances to replace them.
    • AlarmNotification – Accepts notifications from CloudWatch alarms that are associated with the group. If suspended, Auto Scaling does not automatically execute policies that would be triggered by an alarm
    • ScheduledActions – Performs scheduled actions that you create.
    • AddToLoadBalancer – Adds instances to the load balancer when they are launched.
    • InstanceRefresh – Terminates and replaces instances using the instance refresh feature.
    • AZRebalance – Balances the number of EC2 instances in the group across the Availability Zones in the region.
      • If an AZ either is removed from the ASG or becomes unhealthy or unavailable, Auto Scaling launches new instances in an unaffected AZ before terminating the unhealthy or unavailable instances
      • When the unhealthy AZ returns to a healthy state, Auto Scaling automatically redistributes the instances evenly across the Availability Zones for the group.
      • Note that if you suspend AZRebalance and a scale out or scale in event occurs, Auto Scaling still tries to balance the Availability Zones for e.g. during scale out, it launches the instance in the Availability Zone with the fewest instances.
      • If you suspend Launch, AZRebalance neither launches new instances nor terminates existing instances. This is because AZRebalance terminates instances only after launching the replacement instances.
      • If you suspend Terminate, the ASG can grow up to 10% larger than its maximum size, because Auto Scaling allows this temporarily during rebalancing activities. If it cannot terminate instances, your ASG could remain above its maximum size until the Terminate process is resumed
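A minimal boto3 sketch of suspending and resuming individual scaling processes while debugging an application issue; the ASG name and the chosen processes are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Suspend alarm-driven scaling and AZ rebalancing during the investigation;
# health checks and unhealthy-instance replacement keep running.
autoscaling.suspend_processes(
    AutoScalingGroupName="my-asg",
    ScalingProcesses=["AlarmNotification", "AZRebalance"],
)

# Resume the suspended processes once the issue is resolved.
autoscaling.resume_processes(
    AutoScalingGroupName="my-asg",
    ScalingProcesses=["AlarmNotification", "AZRebalance"],
)
```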

Auto Scaling Lifecycle

Refer to blog post @ Auto Scaling Lifecycle

Autoscaling & ELB

Refer to blog post @ Autoscaling & ELB

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. A user is trying to setup a scheduled scaling activity using Auto Scaling. The user wants to setup the recurring schedule. Which of the below mentioned parameters is not required in this case?
    1. Maximum size
    2. Auto Scaling group name
    3. End time
    4. Recurrence value
  2. A user has configured Auto Scaling with 3 instances. The user had created a new AMI after updating one of the instances. If the user wants to terminate two specific instances to ensure that Auto Scaling launches instances with the new launch configuration, which command should he run?
    1. as-delete-instance-in-auto-scaling-group <Instance ID> –no-decrement-desired-capacity
    2. as-terminate-instance-in-auto-scaling-group <Instance ID> –update-desired-capacity
    3. as-terminate-instance-in-auto-scaling-group <Instance ID> –decrement-desired-capacity
    4. as-terminate-instance-in-auto-scaling-group <Instance ID> –no-decrement-desired-capacity
  3. A user is planning to scale up an application by 8 AM and scale down by 7 PM daily using Auto Scaling. What should the user do in this case?
    1. Setup the scaling policy to scale up and down based on the CloudWatch alarms
    2. User should increase the desired capacity at 8 AM and decrease it by 7 PM manually
    3. User should setup a batch process which launches the EC2 instance at a specific time
    4. Setup scheduled actions to scale up or down at a specific time
  4. An organization has setup Auto Scaling with ELB. Due to some manual error, one of the instances got rebooted. Thus, it failed the Auto Scaling health check. Auto Scaling has marked it for replacement. How can the system admin ensure that the instance does not get terminated?
    1. Update the Auto Scaling group to ignore the instance reboot event
    2. It is not possible to change the status once it is marked for replacement
    3. Manually add that instance to the Auto Scaling group after reboot to avoid replacement
    4. Change the health of the instance to healthy using the Auto Scaling commands
  5. A user has configured Auto Scaling with the minimum capacity as 2 and the desired capacity as 2. The user is trying to terminate one of the existing instances with the command: as-terminate-instance-in-auto-scaling-group <Instance ID> –decrement-desired-capacity. What will Auto Scaling do in this scenario?
    1. Terminates the instance and does not launch a new instance
    2. Terminates the instance and updates the desired capacity to 1
    3. Terminates the instance and updates the desired capacity & minimum size to 1
    4. Throws an error
  6. An organization has configured Auto Scaling for hosting their application. The system admin wants to understand the Auto Scaling health check process. If the instance is unhealthy, Auto Scaling launches an instance and terminates the unhealthy instance. What is the order execution?
    1. Auto Scaling launches a new instance first and then terminates the unhealthy instance
    2. Auto Scaling performs the launch and terminate processes in a random order
    3. Auto Scaling launches and terminates the instances simultaneously
    4. Auto Scaling terminates the instance first and then launches a new instance
  7. A user has configured ELB with Auto Scaling. The user suspended the Auto Scaling terminate process only for a while. What will happen to the availability zone rebalancing process (AZRebalance) during this period?
    1. Auto Scaling will not launch or terminate any instances
    2. Auto Scaling will allow the instances to grow more than the maximum size
    3. Auto Scaling will keep launching instances till the maximum instance size
    4. It is not possible to suspend the terminate process while keeping the launch active
  8. An organization has configured Auto Scaling with ELB. There is a memory issue in the application which is causing CPU utilization to go above 90%. The higher CPU usage triggers an event for Auto Scaling as per the scaling policy. If the user wants to find the root cause inside the application without triggering a scaling activity, how can he achieve this?
    1. Stop the scaling process until research is completed
    2. It is not possible to find the root cause from that instance without triggering scaling
    3. Delete Auto Scaling until research is completed
    4. Suspend the scaling process until research is completed
  9. A user has configured ELB with Auto Scaling. The user suspended the Auto Scaling Alarm Notification (which notifies Auto Scaling for CloudWatch alarms) process for a while. What will Auto Scaling do during this period?
    1. AWS will not receive the alarms from CloudWatch
    2. AWS will receive the alarms but will not execute the Auto Scaling policy
    3. Auto Scaling will execute the policy but it will not launch the instances until the process is resumed
    4. It is not possible to suspend the AlarmNotification process
  10. An organization has configured two single availability zones. The Auto Scaling groups are configured in separate zones. The user wants to merge the groups such that one group spans across multiple zones. How can the user configure this?
    1. Run the command as-join-auto-scaling-group to join the two groups
    2. Run the command as-update-auto-scaling-group to configure one group to span across zones and delete the other group
    3. Run the command as-copy-auto-scaling-group to join the two groups
    4. Run the command as-merge-auto-scaling-group to merge the groups
  11. An organization has configured Auto Scaling with ELB. One of the instances’ health checks returns the status Impaired to Auto Scaling. What will Auto Scaling do in this scenario?
    1. Perform a health check until cool down before declaring that the instance has failed
    2. Terminate the instance and launch a new instance
    3. Notify the user using SNS for the failed state
    4. Notify ELB to stop sending traffic to the impaired instance
  12. A user has setup an Auto Scaling group. The group has failed to launch a single instance for more than 24 hours. What will happen to Auto Scaling in this condition
    1. Auto Scaling will keep trying to launch the instance for 72 hours
    2. Auto Scaling will suspend the scaling process
    3. Auto Scaling will start an instance in a separate region
    4. The Auto Scaling group will be terminated automatically
  13. A user is planning to setup infrastructure on AWS for the Christmas sales. The user is planning to use Auto Scaling based on the schedule for proactive scaling. What advice would you give to the user?
    1. It is good to schedule now because if the user forgets later on it will not scale up
    2. The scaling should be setup only one week before Christmas
    3. Wait till end of November before scheduling the activity
    4. It is not advisable to use scheduled based scaling
  14. A user is trying to setup a recurring Auto Scaling process. The user has setup one process to scale up every day at 8 am and scale down at 7 PM. The user is trying to setup another recurring process which scales up on the 1st of every month at 8 AM and scales down the same day at 7 PM. What will Auto Scaling do in this scenario
    1. Auto Scaling will execute both processes but will add just one instance on the 1st
    2. Auto Scaling will add two instances on the 1st of the month
    3. Auto Scaling will schedule both the processes but execute only one process randomly
    4. Auto Scaling will throw an error since there is a conflict in the schedule of two separate Auto Scaling Processes
  15. A sys admin is trying to understand the Auto Scaling activities. Which of the below mentioned processes is not performed by Auto Scaling?
    1. Reboot Instance
    2. Schedule Actions
    3. Replace Unhealthy
    4. Availability Zone Re-Balancing
  16. You have started a new job and are reviewing your company’s infrastructure on AWS. You notice one web application where they have an Elastic Load Balancer in front of web instances in an Auto Scaling Group. When you check the metrics for the ELB in CloudWatch you see four healthy instances in Availability Zone (AZ) A and zero in AZ B. There are zero unhealthy instances. What do you need to fix to balance the instances across AZs?
    1. Set the ELB to only be attached to another AZ
    2. Make sure Auto Scaling is configured to launch in both AZs
    3. Make sure your AMI is available in both AZs
    4. Make sure the maximum size of the Auto Scaling Group is greater than 4
  17. You have been asked to leverage Amazon VPC EC2 and SQS to implement an application that submits and receives millions of messages per second to a message queue. You want to ensure your application has sufficient bandwidth between your EC2 instances and SQS. Which option will provide the most scalable solution for communicating between the application and SQS?
    1. Ensure the application instances are properly configured with an Elastic Load Balancer
    2. Ensure the application instances are launched in private subnets with the EBS-optimized option enabled
    3. Ensure the application instances are launched in public subnets with the associate-public-IP-address=true option enabled
    4. Launch application instances in private subnets with an Auto Scaling group and Auto Scaling triggers configured to watch the SQS queue size
  18. You have decided to change the Instance type for instances running in your application tier that are using Auto Scaling. In which area below would you change the instance type definition?
    1. Auto Scaling launch configuration
    2. Auto Scaling group
    3. Auto Scaling policy
    4. Auto Scaling tags
  19. A user is trying to delete an Auto Scaling group from CLI. Which of the below mentioned steps are to be performed by the user?
    1. Terminate the instances with the ec2-terminate-instance command
    2. Terminate the Auto Scaling instances with the as-terminate-instance command
    3. Set the minimum size and desired capacity to 0
    4. There is no need to change the capacity. Run the as-delete-group command and it will reset all values to 0
  20. A user has created a web application with Auto Scaling. The user is regularly monitoring the application and he observed that the traffic is highest on Thursday and Friday between 8 AM and 6 PM. What is the best solution to handle scaling in this case?
    1. Add a new instance manually by 8 AM Thursday and terminate the same by 6 PM Friday
    2. Schedule Auto Scaling to scale up by 8 AM Thursday and scale down after 6 PM on Friday
    3. Schedule a policy which may scale up every day at 8 AM and scales down by 6 PM
    4. Configure a batch process to add an instance by 8 AM and remove it by Friday 6 PM
  21. A user has configured the Auto Scaling group with the minimum capacity as 3 and the maximum capacity as 5. When the user configures the AS group, how many instances will Auto Scaling launch?
    1. 3
    2. 0
    3. 5
    4. 2
  22. A sys admin is maintaining an application on AWS. The application is installed on EC2 and the user has configured ELB and Auto Scaling. Considering future load increase, the user is planning to launch new servers proactively so that they get registered with ELB. How can the user add these instances with Auto Scaling?
    1. Increase the desired capacity of the Auto Scaling group
    2. Increase the maximum limit of the Auto Scaling group
    3. Launch an instance manually and register it with ELB on the fly
    4. Decrease the minimum limit of the Auto Scaling group
  23. In reviewing the auto scaling events for your application you notice that your application is scaling up and down multiple times in the same hour. What design choice could you make to optimize for the cost while preserving elasticity? Choose 2 answers.
    1. Modify the Amazon CloudWatch alarm period that triggers your auto scaling scale down policy.
    2. Modify the Auto scaling group termination policy to terminate the oldest instance first.
    3. Modify the Auto scaling policy to use scheduled scaling actions.
    4. Modify the Auto scaling group cool down timers.
    5. Modify the Auto scaling group termination policy to terminate newest instance first.
  24. You have a business critical two tier web app currently deployed in two availability zones in a single region, using Elastic Load Balancing and Auto Scaling. The app depends on synchronous replication (very low latency connectivity) at the database layer. The application needs to remain fully available even if one application Availability Zone goes off-line, and Auto scaling cannot launch new instances in the remaining Availability Zones. How can the current architecture be enhanced to ensure this? [PROFESSIONAL]
    1. Deploy in two regions using Weighted Round Robin (WRR), with Auto Scaling minimums set for 100% peak load per region.
    2. Deploy in three AZs, with Auto Scaling minimum set to handle 50% peak load per zone.
    3. Deploy in three AZs, with Auto Scaling minimum set to handle 33% peak load per zone. (Loss of one AZ will handle only 66% if the autoscaling also fails)
    4. Deploy in two regions using Weighted Round Robin (WRR), with Auto Scaling minimums set for 50% peak load per region.
  25. A user has created a launch configuration for Auto Scaling where CloudWatch detailed monitoring is disabled. The user wants to now enable detailed monitoring. How can the user achieve this?
    1. Update the Launch config with CLI to set InstanceMonitoringDisabled = false
    2. The user should change the Auto Scaling group from the AWS console to enable detailed monitoring
    3. Update the Launch config with CLI to set InstanceMonitoring.Enabled = true
    4. Create a new Launch Config with detail monitoring enabled and update the Auto Scaling group
  26. A user has created an Auto Scaling group with default configurations from CLI. The user wants to setup the CloudWatch alarm on the EC2 instances, which are launched by the Auto Scaling group. The user has setup an alarm to monitor the CPU utilization every minute. Which of the below mentioned statements is true?
    1. It will fetch the data at every minute but the four data points [corresponding to 4 minutes] will not have value since the EC2 basic monitoring metrics are collected every five minutes
    2. It will fetch the data at every minute as detailed monitoring on EC2 will be enabled by the default launch configuration of Auto Scaling
    3. The alarm creation will fail since the user has not enabled detailed monitoring on the EC2 instances
    4. The user has to first enable detailed monitoring on the EC2 instances to support alarm monitoring at every minute
  27. A customer has a website which shows all the deals available across the market. The site experiences a load of 5 large EC2 instances generally. However, a week before Thanksgiving vacation they encounter a load of almost 20 large instances. The load during that period varies over the day based on the office timings. Which of the below mentioned solutions is cost effective as well as help the website achieve better performance?
    1. Keep only 10 instances running and manually launch 10 instances every day during office hours.
    2. Setup to run 10 instances during the pre-vacation period and only scale up during the office time by launching 10 more instances using the AutoScaling schedule.
    3. During the pre-vacation period setup a scenario where the organization has 15 instances running and 5 instances to scale up and down using Auto Scaling based on the network I/O policy.
    4. During the pre-vacation period setup 20 instances to run continuously.
  28. When Auto Scaling is launching a new instance based on condition, which of the below mentioned policies will it follow?
    1. Based on the criteria defined with cross zone Load balancing
    2. Launch an instance which has the highest load distribution
    3. Launch an instance in the AZ with the fewest instances
    4. Launch an instance in the AZ which has the highest instances
  29. The user has created multiple AutoScaling groups. The user is trying to create a new AS group but it fails. How can the user know that he has reached the AS group limit specified by AutoScaling in that region?
    1. Run the command: as-describe-account-limits
    2. Run the command: as-describe-group-limits
    3. Run the command: as-max-account-limits
    4. Run the command: as-list-account-limits
  30. A user is trying to save some cost on the AWS services. Which of the below mentioned options will not help him save cost?
    1. Delete the unutilized EBS volumes once the instance is terminated
    2. Delete the Auto Scaling launch configuration after the instances are terminated (Auto Scaling Launch config does not cost anything)
    3. Release the elastic IP if not required once the instance is terminated
    4. Delete the AWS ELB after the instances are terminated
  31. To scale up the AWS resources using manual Auto Scaling, which of the below mentioned parameters should the user change?
    1. Maximum capacity
    2. Desired capacity
    3. Preferred capacity
    4. Current capacity
  32. For AWS Auto Scaling, what is the first transition state an existing instance enters after leaving steady state in Standby mode?
    1. Detaching
    2. Terminating:Wait
    3. Pending (You can put any instance that is in an InService state into a Standby state. This enables you to remove the instance from service, troubleshoot or make changes to it, and then put it back into service. Instances in a Standby state continue to be managed by the Auto Scaling group. However, they are not an active part of your application until you put them back into service. Refer link)
    4. EnteringStandby
  33. For AWS Auto Scaling, what is the first transition state an instance enters after leaving steady state when scaling in due to health check failure or decreased load?
    1. Terminating (When Auto Scaling responds to a scale in event, it terminates one or more instances. These instances are detached from the Auto Scaling group and enter the Terminating state. Refer link)
    2. Detaching
    3. Terminating:Wait
    4. EnteringStandby
  34. A user has setup Auto Scaling with ELB on the EC2 instances. The user wants to configure that whenever the CPU utilization is below 10%, Auto Scaling should remove one instance. How can the user configure this?
    1. The user can get an email using SNS when the CPU utilization is less than 10%. The user can use the desired capacity of Auto Scaling to remove the instance
    2. Use CloudWatch to monitor the data and Auto Scaling to remove the instances using scheduled actions
    3. Configure CloudWatch to send a notification to Auto Scaling Launch configuration when the CPU utilization is less than 10% and configure the Auto Scaling policy to remove the instance
    4. Configure CloudWatch to send a notification to the Auto Scaling group when the CPU Utilization is less than 10% and configure the Auto Scaling policy to remove the instance
  35. A user has enabled detailed CloudWatch metric monitoring on an Auto Scaling group. Which of the below mentioned metrics will help the user identify the total number of instances in an Auto Scaling group including pending, terminating and running instances?
    1. GroupTotalInstances (Refer link)
    2. GroupSumInstances
    3. It is not possible to get a count of all the three metrics together. The user has to find the individual number of running, terminating and pending instances and sum it
    4. GroupInstancesCount
  36. Your startup wants to implement an order fulfillment process for selling a personalized gadget that needs an average of 3-4 days to produce with some orders taking up to 6 months you expect 10 orders per day on your first day. 1000 orders per day after 6 months and 10,000 orders after 12 months. Orders coming in are checked for consistency then dispatched to your manufacturing plant for production quality control packaging shipment and payment processing. If the product does not meet the quality standards at any stage of the process employees may force the process to repeat a step. Customers are notified via email about order status and any critical issues with their orders such as payment failure. Your case architecture includes AWS Elastic Beanstalk for your website with an RDS MySQL instance for customer data and orders. How can you implement the order fulfillment process while making sure that the emails are delivered reliably? [PROFESSIONAL]
    1. Add a business process management application to your Elastic Beanstalk app servers and re-use the RDS database for tracking order status. Use one of the Elastic Beanstalk instances to send emails to customers.
    2. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use the decider instance to send emails to customers.
    3. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use SES to send emails to customers.
    4. Use an SQS queue to manage all process tasks. Use an Auto Scaling group of EC2 instances that poll the tasks and execute them. Use SES to send emails to customers.

References

AWS_Auto_Scaling_Developer_Guide

 

AWS ELB Monitoring

AWS ELB Monitoring

  • Elastic Load Balancing publishes data points to CloudWatch about the load balancers and back-end instances
  • Elastic Load Balancing reports metrics to CloudWatch only when requests are flowing through the load balancer.
    • If there are requests flowing through the load balancer, Elastic Load Balancing measures and sends its metrics in 60-second intervals.
    • If there are no requests flowing through the load balancer or no data for a metric, the metric is not reported.

CloudWatch Metrics

  • HealthyHostCount, UnHealthyHostCount
    • Number of healthy and unhealthy instances registered with the load balancer.
    • Most useful statistics are average, min, and max
  • RequestCount
    • Number of requests completed or connections made during the specified interval (1 or 5 minutes).
    • Most useful statistic is sum
  • Latency
    • Time elapsed, in seconds, after the request leaves the load balancer until the headers of the response are received.
    • Most useful statistic is average
  • SurgeQueueLength
    • Total number of requests that are pending routing.
    • Load balancer queues a request if it is unable to establish a connection with a healthy instance in order to route the request.
    • Maximum size of the queue is 1,024. Additional requests are rejected when the queue is full.
    • Most useful statistic is max, because it represents the peak of queued requests.
  • SpilloverCount
    • The total number of requests that were rejected because the surge queue is full. Should ideally be 0
    • Most useful statistic is sum.
  • HTTPCode_ELB_4XX, HTTPCode_ELB_5XX
    • Client (4XX) and server (5XX) error codes generated by the load balancer
    • Most useful statistic is sum.
  • HTTPCode_Backend_2XX, HTTPCode_Backend_3XX, HTTPCode_Backend_4XX, HTTPCode_Backend_5XX
    • Number of HTTP response codes generated by registered instances
    • Most useful statistic is sum.
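A rough boto3 sketch of pulling one of these metrics from CloudWatch, here the peak SurgeQueueLength over the last hour; the load balancer name is a placeholder, and AWS/ELB is the namespace used for Classic Load Balancers.

```python
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch")

# Max is the most useful statistic for SurgeQueueLength (peak of queued requests).
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/ELB",
    MetricName="SurgeQueueLength",
    Dimensions=[{"Name": "LoadBalancerName", "Value": "my-elb"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=60,
    Statistics=["Maximum"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Maximum"])
```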

Elastic Load Balancer Access Logs

  • Elastic Load Balancing provides access logs that capture detailed information about all requests sent to your load balancer.
  • Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses.
  • Elastic Load Balancing captures the logs and stores them in the Amazon S3 bucket
  • Access logging is disabled by default and can be enabled without any additional charge. You are only charged for S3 storage
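A hedged boto3 sketch of enabling Classic Load Balancer access logs delivered to S3 every 5 minutes; the load balancer name, bucket, and prefix are placeholders, and the bucket policy must separately allow ELB log delivery.

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

# Enable access logs; EmitInterval can be 5 or 60 minutes.
elb.modify_load_balancer_attributes(
    LoadBalancerName="my-elb",
    LoadBalancerAttributes={
        "AccessLog": {
            "Enabled": True,
            "S3BucketName": "my-elb-access-logs",
            "S3BucketPrefix": "prod/my-elb",
            "EmitInterval": 5,
        }
    },
)
```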

CloudTrail Logs

  • AWS CloudTrail can be used to capture all calls to the Elastic Load Balancing API made by or on behalf of your AWS account, whether made directly using the Elastic Load Balancing API or indirectly through the AWS Management Console or AWS CLI
  • CloudTrail stores the information as log files in an Amazon S3 bucket that you specify.
  • Logs collected by CloudTrail can be used to monitor the activity of your load balancers and determine what API call was made, what source IP address was used, who made the call, when it was made, and so on

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. An admin is planning to monitor the ELB. Which of the below mentioned services does not help the admin capture the monitoring information about the ELB activity
    1. ELB Access logs
    2. ELB health check
    3. CloudWatch metrics
    4. ELB API calls with CloudTrail
  2. A customer needs to capture all client connection information from their load balancer every five minutes. The company wants to use this data for analyzing traffic patterns and troubleshooting their applications. Which of the following options meets the customer requirements?
    1. Enable AWS CloudTrail for the load balancer.
    2. Enable access logs on the load balancer.
    3. Install the Amazon CloudWatch Logs agent on the load balancer.
    4. Enable Amazon CloudWatch metrics on the load balancer
  3. Your supervisor has requested a way to analyze traffic patterns for your application. You need to capture all connection information from your load balancer every 10 minutes. Pick a solution from below. Choose the correct answer:
    1. Enable access logs on the load balancer
    2. Create a custom metric CloudWatch filter on your load balancer
    3. Use a CloudWatch Logs Agent
    4. Use AWS CloudTrail with your load balancer

References

Elastic Load Balancing Developer Guide

AWS Bastion Host

Bastion Host Overview

  • Bastion means a structure for Fortification to protect things behind it
  • In AWS, a Bastion host (also referred to as a Jump server) can be used to securely access instances in the private subnets.
  • A Bastion host launched in a Public subnet acts as the primary access point from the Internet and as a proxy to the other instances.

Bastion Host

Key points

  • Bastion host is deployed in the Public subnet and acts as a proxy or a gateway between you and your instances
  • Bastion host is a security measure that helps to reduce the attack surface of your infrastructure, letting you concentrate on hardening a single layer
  • Bastion host allows you to login to instances in the Private subnet securely without having to store the private keys on the Bastion host (using ssh-agent forwarding or RDP gateways)
  • Bastion host security can be further tightened to allow SSH/RDP access from specific trusted IPs or corporate IP ranges
  • Bastion host for your AWS infrastructure shouldn’t be used for any other purpose, as that could open unnecessary security holes
  • Security for all the Instances in the private subnet should be hardened to accept SSH/RDP connections only from the Bastion host
  • Deploy a Bastion host within each Availability Zone for HA, because if the Bastion instance or the AZ hosting it goes down, the ability to connect to your private instances is lost completely
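A minimal boto3 sketch of the security-group hardening described above: SSH to the bastion only from a trusted corporate IP, and SSH to the private instances only from the bastion’s security group; all IDs and the CIDR below are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

BASTION_SG = "sg-0aaaa1111bbbb2222c"   # placeholder: bastion host security group
PRIVATE_SG = "sg-0dddd3333eeee4444f"   # placeholder: private-subnet instances security group

# Allow SSH to the bastion only from a trusted corporate IP (/32 = single address).
ec2.authorize_security_group_ingress(
    GroupId=BASTION_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.10/32", "Description": "corporate office"}],
    }],
)

# Private instances accept SSH only from the bastion's security group.
ec2.authorize_security_group_ingress(
    GroupId=PRIVATE_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "UserIdGroupPairs": [{"GroupId": BASTION_SG}],
    }],
)
```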

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. A customer is running a multi-tier web application farm in a virtual private cloud (VPC) that is not connected to their corporate network. They are connecting to the VPC over the Internet to manage all of their Amazon EC2 instances running in both the public and private subnets. They have only authorized the bastion-security-group with Microsoft Remote Desktop Protocol (RDP) access to the application instance security groups, but the company wants to further limit administrative access to all of the instances in the VPC. Which of the following Bastion deployment scenarios will meet this requirement?
    1. Deploy a Windows Bastion host on the corporate network that has RDP access to all instances in the VPC.
    2. Deploy a Windows Bastion host with an Elastic IP address in the public subnet and allow SSH access to the bastion from anywhere.
    3. Deploy a Windows Bastion host with an Elastic IP address in the private subnet, and restrict RDP access to the bastion from only the corporate public IP addresses.
    4. Deploy a Windows Bastion host with an auto-assigned Public IP address in the public subnet, and allow RDP access to the bastion from only the corporate public IP addresses.
  2. You are designing a system that has a Bastion host. This component needs to be highly available without human intervention. Which of the following approaches would you select?
    1. Run the bastion on two instances one in each AZ
    2. Run the bastion on an active Instance in one AZ and have an AMI ready to boot up in the event of failure
    3. Configure the bastion instance in an Auto Scaling group Specify the Auto Scaling group to include multiple AZs but have a min-size of 1 and max-size of 1
    4. Configure an ELB in front of the bastion instance
  3. You’ve been brought in as a solutions architect to assist an enterprise customer with their migration of an e-commerce platform to Amazon Virtual Private Cloud (VPC). The previous architect has already deployed a 3-tier VPC. The configuration is as follows: VPC vpc-2f8bc447
    IGW igw-2d8bc445
    NACL acl-2080c448
    Subnets and Route Tables:
    Web servers: subnet-258bc44d
    Application servers: subnet-248DC44c
    Database servers: subnet-9189c6f9
    Route Tables:
    rtb-2i8bc449
    rtb-238bc44b
    Associations:
    Subnet-258bc44d: rtb-2i8bc449
    Subnet-248DC44c: rtb-238bc44b
    Subnet-9189c6f9: rtb-238bc44b
    You are now ready to begin deploying EC2 instances into the VPC. Web servers must have direct access to the internet Application and database servers cannot have direct access to the internet. Which configuration below will allow you the ability to remotely administer your application and database servers, as well as allow these servers to retrieve updates from the Internet?

    1. Create a bastion and NAT Instance in subnet-258bc44d and add a route from rtb-238bc44b to subnet-258bc44d. (Route should point to the NAT)
    2. Add a route from rtb-238bc44b to igw-2d8bc445 and add a bastion and NAT instance within Subnet-248DC44c. (Adding the IGW to route table rtb-238bc44b would expose the Application and Database servers to the internet. Bastion and NAT should be in the public subnet)
    3. Create a Bastion and NAT Instance in subnet-258bc44d. Add a route from rtb-238bc44b to igw-2d8bc445. And a new NACL that allows access between subnet-258bc44d and subnet-248bc44c. (Route should point to NAT and not Internet Gateway else it would be internet accessible.)
    4. Create a Bastion and NAT instance in subnet-258bc44d and add a route from rtb-238bc44b to the NAT instance. (Bastion and NAT should be in the public subnet. As Web Server has direct access to Internet, the subnet subnet-258bc44d should be public and Route rtb-2i8bc449 pointing to IGW. Route rtb-238bc44b for private subnets should point to NAT for outgoing internet access)
  4. You are tasked with setting up a Linux bastion host for access to Amazon EC2 instances running in your VPC. Only clients connecting from the corporate external public IP address 72.34.51.100 should have SSH access to the host. Which option will meet the customer requirement?
    1. Security Group Inbound Rule: Protocol – TCP. Port Range – 22, Source 72.34.51.100/32
    2. Security Group Inbound Rule: Protocol – UDP, Port Range – 22, Source 72.34.51.100/32
    3. Network ACL Inbound Rule: Protocol – UDP, Port Range – 22, Source 72.34.51.100/32
    4. Network ACL Inbound Rule: Protocol – TCP, Port Range-22, Source 72.34.51.100/0

AWS Consolidated Billing

AWS Consolidated Billing

  • Consolidated billing enables consolidating payments from multiple AWS accounts (Linked or Member Accounts) within the organization to a single account by designating it to be the Management or Payer Account.
  • Every organization in AWS Organizations has a management account that pays the charges of all the member accounts.
  • Consolidated billing
    • is strictly an accounting and billing feature.
    • allows receiving a combined view of charges incurred by all the associated accounts as well as each of the accounts.
    • is not a method for controlling accounts, or provisioning resources for accounts.
  • Management/Payer account is billed for all charges of the member accounts.
  • Each linked account is still an independent account in every other way.
  • The AWS Organizations Consolidated Billing feature does not allow the Payer account to access data belonging to the linked account owners; the organization’s All Features mode needs to be enabled for management capabilities beyond billing.
  • However, access for Payer account users can be granted through Cross-Account Access roles
  • AWS limits work on the account level only and AWS support is per account only

Consolidated Billing Process

  • AWS Organizations provides consolidated billing so that the combined costs of all the member accounts in your organization can be tracked.
  • Create an Organization
  • Create member accounts or invite existing accounts to join the organization.
  • Each month AWS charges your management account for all the member accounts in a consolidated bill.

Consolidated Billing Scenarios

  • have multiple accounts and want to get a single bill and track each account’s charges for e.g. multiple projects, each with its own AWS account or separate environments (Dev, Prod) within the same project
  • have multiple cost centers to track.
  • have acquired a project or company with its own existing AWS account and you want a consolidated bill with your other AWS accounts.

Consolidated Billing Benefits

  • One Bill
    • A single bill with a combined view of AWS costs incurred by all accounts is generated
  • Easy Tracking
    • Detailed cost reports & charges for each of the individual AWS accounts associated with the “paying account” can be easily tracked
  • Combined Usage & Volume Discounts
    • Charges might actually decrease because AWS combines usage from all the accounts to qualify you for volume pricing discounts
  • Free Tier
    • Customers that use Consolidated Billing to consolidate payment across multiple accounts will only have access to one free usage tier and it is not combined across accounts

Volume Pricing Discounts

  • For billing purposes, AWS treats all the accounts in the organization on the consolidated bill as if they were one account.
  • AWS combines the usage from all accounts to determine which volume pricing tiers to apply, giving you a lower overall price whenever possible.

Volume Discounts Example

Consolidate Billing Example

  • Example AWS Pricing – AWS charges $0.17/GB for the first 10 TB of data transfer out used, and $0.13/GB for the next 40 TB used that translates into $174.08 per TB for the first 10 TB, and $133.12 per TB for the next 40 TB
  • Usage – Bob uses 8 TB of data transfer out during the month, and Susan uses 4 TB (for a total of 12 TB used).
  • Actual Individual Bill – AWS would have charged Bob and Susan each $174.08 per TB for their usage, for a total of $2088.96
  • Volume Discount Bill – Combined 12 TB total that Bob and Susan used, would cost the paying account ($174.08 * 10 TB) + ($133.12 * 2 TB) = $1740.80 + $266.24 = $2007.04
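A tiny Python sketch of the tiered arithmetic above, treating 1 TB as 1,024 GB (which is how $0.17/GB becomes $174.08/TB); the rates are the example rates from this article, not current AWS pricing.

```python
# Tiered data-transfer pricing from the example: $0.17/GB for the first
# 10 TB and $0.13/GB for the next 40 TB (1 TB = 1,024 GB here).
def transfer_cost(tb_used: float) -> float:
    gb = tb_used * 1024
    first_tier_gb = min(gb, 10 * 1024)        # charged at $0.17/GB
    next_tier_gb = max(0.0, gb - 10 * 1024)   # charged at $0.13/GB
    return first_tier_gb * 0.17 + next_tier_gb * 0.13

separate = transfer_cost(8) + transfer_cost(4)   # Bob and Susan billed individually
combined = transfer_cost(8 + 4)                  # one consolidated bill
print(f"Separate: ${separate:,.2f}  Consolidated: ${combined:,.2f}")
# Separate: $2,088.96  Consolidated: $2,007.04
```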

Reserved Instances

  • All member accounts in an Organization on a consolidated bill can receive the hourly cost-benefit of Reserved Instances that are purchased by any other account.
  • The management account of an organization can turn off Reserved Instance (RI) discount and Savings Plans discount sharing for any accounts in that organization, including the management account.
  • RIs and Savings Plans discounts aren’t shared between any accounts that have sharing turned off. To share an RI or Savings Plans discount with an account, both accounts must have sharing turned on.
  • For e.g., Bob and Susan each have an account on Bob’s consolidated bill. Susan has 5 Reserved Instances of the same type, and Bob has none. During one particular hour, Susan uses 3 instances and Bob uses 6, for a total of 9 instances used on Bob’s consolidated bill. AWS will bill 5 as Reserved Instances, and the remaining 4 as normal instances.

Consolidated Billing Best Practices

  • Paying account should be used solely for accounting and billing purposes
  • Consolidated billing works best with Resource tagging, as tags are included in the detailed billing report, which enables cost to be analyzed and decomposed across multiple dimensions and aggregation levels.
  • Paying account owners should secure their accounts by using MFA (multi-factor authentication) and a strong password

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. An organization is planning to create 5 different AWS accounts considering various security requirements. The organization wants to use a single payee account by using the consolidated billing option. Which of the below mentioned statements is true with respect to the above information?
    • Master (Payee) account will get only the total bill and cannot see the cost incurred by each account
    • Master (Payee) account can view only the AWS billing details of the linked accounts
    • It is not recommended to use consolidated billing since the payee account will have access to the linked accounts
    • Each AWS account needs to create an AWS billing policy to provide permission to the payee account
  2. An organization has setup consolidated billing with 3 different AWS accounts. Which of the below mentioned advantages will organization receive in terms of the AWS pricing?
    • The consolidated billing does not bring any cost advantage for the organization
    • All AWS accounts will be charged for S3 storage by combining the total storage of each account
    • EC2 instances of each account will receive a total of 750*3 micro instance hours free
    • The free usage tier for all the 3 accounts will be 3 years and not a single year
  3. An organization has added 3 of his AWS accounts to consolidated billing. One of the AWS accounts has purchased a Reserved Instance (RI) of a small instance size in the us-east-1a zone. All other AWS accounts are running instances of a small size in the same zone. What will happen in this case for the RI pricing?
    • Only the account that has purchased the RI will get the advantage of RI pricing
    • One instance of a small size and running in the us-east-1a zone of each AWS account will get the benefit of RI pricing
    • Any single instance from all the three accounts can get the benefit of AWS RI pricing if they are running in the same zone and are of the same size
    • If there are more than one instances of a small size running across multiple accounts in the same zone no one will get the benefit of RI
  4. An organization is planning to use AWS for 5 different departments. The finance department is responsible to pay for all the accounts. However, they want the cost separation for each account to map with the right cost centre. How can the finance department achieve this?
    • Create 5 separate accounts and make them a part of one consolidated billing
    • Create 5 separate accounts and use the IAM cross account access with the roles for better management
    • Create 5 separate IAM users and set a different policy for their access
    • Create 5 separate IAM groups and add users as per the department’s employees
  5. An AWS account wants to be part of the consolidated billing of his organization’s payee account. How can the owner of that account achieve this?
    • The payee account has to request AWS support to link the other accounts with his account
    • The owner of the linked account should add the payee account to his master account list from the billing console
    • The payee account will send a request to the linked account to be a part of consolidated billing (Check Process)
    • The owner of the linked account requests the payee account to add his account to consolidated billing
  6. You are looking to migrate your Development (Dev) and Test environments to AWS. You have decided to use separate AWS accounts to host each environment. You plan to link each accounts bill to a Master AWS account using Consolidated Billing. To make sure you keep within budget you would like to implement a way for administrators in the Master account to have access to stop, delete and/or terminate resources in both the Dev and Test accounts. Identify which option will allow you to achieve this goal.
    • Create IAM users in the Master account with full Admin permissions. Create cross-account roles in the Dev and Test accounts that grant the Master account access to the resources in the account by inheriting permissions from the Master account.
    • Create IAM users and a cross-account role in the Master account that grants full Admin permissions to the Dev and Test accounts.
    • Create IAM users in the Master account. Create cross-account roles in the Dev and Test accounts that have full Admin permissions and grant the Master account access.
    • Link the accounts using Consolidated Billing. This will give IAM users in the Master account access to resources in the Dev and Test accounts
  7. When using consolidated billing there are two account types. What are they?
    • Paying account and Linked account
    • Parent account and Child account
    • Main account and Sub account.
    • Main account and Secondary account.
  8. A customer needs corporate IT governance and cost oversight of all AWS resources consumed by its divisions. The divisions want to maintain administrative control of the discrete AWS resources they consume and keep those resources separate from the resources of other divisions. Which of the following options, when used together will support the autonomy/control of divisions while enabling corporate IT to maintain governance and cost oversight? Choose 2 answers
    • Use AWS Consolidated Billing and disable AWS root account access for the child accounts. (Need to link accounts and disabling root access is just a best practice)
    • Enable IAM cross-account access for all corporate IT administrators in each child account. (Provides IT governance)
    • Create separate VPCs for each division within the corporate IT AWS account.
    • Use AWS Consolidated Billing to link the divisions’ accounts to a parent corporate account (Will provide cost oversight)
    • Write all child AWS CloudTrail and Amazon CloudWatch logs to each child account’s Amazon S3 ‘Log’ bucket (The preferred approach would be to store logs from multiple accounts in a single S3 bucket with CloudTrail for IT governance and CloudWatch alerts for cost oversight)
  9. An organization has 10 departments. The organization wants to track the AWS usage of each department. Which of the below mentioned options meets the requirement?
    1. Setup IAM groups for each department and track their usage
    2. Create separate accounts for each department, but use consolidated billing for payment and tracking
    3. Create separate accounts for each department and track them separately
    4. Setup IAM users for each department and track their usage

References

 

 

AWS Support Tiers – Certification

AWS Support Tiers Plan

AWS provides Four Support tiers and is per AWS Account (except Enterprise)
    • Basic
        • Customer Service: one-on-one responses to account and billing questions
        • Support forums
        • Service health checks
        • Documentation, whitepapers, and best-practice guides
    • Developer
        • All the features from Basic Support Tier
        • Best-practice guidance
        • Client-side diagnostic tools
        • Building-block architecture support: guidance on how to use AWS products, features, and services together
    • Business
        • Phone/Email/Chat support, 1 hour response time
        • All the features from Developer Support Tier
        • Use-case guidance: what AWS products, features, and services to use to best support your specific needs
        • IAM for controlling individuals’ access to AWS Support
        • AWS Trusted Advisor, which inspects customer environments and identifies opportunities to save money, close security gaps, and improve system reliability and performance
        • An API for interacting with Support Center and Trusted Advisor, allowing for automated support case management and Trusted Advisor operations (see the sketch after this list)
        • Third-party software support: help with EC2 instance operating systems as well as the configuration and performance of the most popular third-party software components on AWS
    • Enterprise
        • 15 min response time, dedicated Technical Account Manager
        • All the features from Business Support Tier
        • Application architecture guidance: consultative partnership supporting specific use cases and applications
        • Infrastructure event management: short-term engagement with AWS Support to partner with your technical and project resources to gain a deep understanding of your use case and provide architectural and scaling guidance for an event
        • AWS Concierge
        • Technical account manager
        • White-glove case routing
        • Management business reviews
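
As a rough illustration of the Support API and Trusted Advisor access included with the Business and Enterprise tiers, a minimal boto3 sketch could look like the one below. This is not part of the original notes; the Support API endpoint is served from us-east-1 and the calls fail on Basic and Developer plans.

    import boto3

    # The AWS Support API is available only on Business and Enterprise plans
    support = boto3.client("support", region_name="us-east-1")

    # List open support cases
    for case in support.describe_cases(includeResolvedCases=False)["cases"]:
        print(case["caseId"], case["subject"], case["status"])

    # List Trusted Advisor checks and fetch the result of the first one
    checks = support.describe_trusted_advisor_checks(language="en")["checks"]
    result = support.describe_trusted_advisor_check_result(checkId=checks[0]["id"])
    print(checks[0]["name"], result["result"]["status"])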

Support plan Comparison

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. What are the four levels of AWS Premium Support?
    • Basic, Developer, Business, Enterprise
    • Basic, Startup, Business, Enterprise
    • Free, Bronze, Silver, Gold
    • All support is free
  2. What is the maximum response time for a Business level Premium Support case?
    • 120 seconds
    • 1 hour
    • 10 minutes
    • 12 hours

References

AWS Security – Whitepaper – Certification

AWS Security Whitepaper

The AWS Security whitepaper is one of the most important whitepapers from the Certification perspective.

Shared Security Responsibility Model

In the Shared Security Responsibility Model, AWS is responsible for securing the underlying infrastructure that supports the cloud, and you’re responsible for anything you put on the cloud or connect to the cloud.
AWS Security Shared Responsibility Model

AWS Security Responsibilities

  • AWS is responsible for protecting the global infrastructure that runs all of the services offered in the AWS cloud. This infrastructure is comprised of the hardware, software, networking, and facilities that run AWS services.
  • AWS provides several reports from third-party auditors who have verified their compliance with a variety of computer security standards and regulations
  • AWS is responsible for the security configuration of its products that are considered managed services for e.g. RDS, DynamoDB
  • For Managed Services, AWS will handle basic security tasks like guest operating system (OS) and database patching, firewall configuration, and disaster recovery.

Customer Security Responsibilities

  • AWS Infrastructure as a Service (IaaS) products for e.g. EC2, VPC, S3 are completely under your control and require you to perform all of the necessary security configuration and management tasks.
  • Management of the guest OS (including updates and security patches), any application software or utilities installed on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance
  • For most of these managed services, all you have to do is configure logical access controls for the resources and protect the account credentials

AWS Global Infrastructure Security 

AWS Compliance Program

IT infrastructure that AWS provides to its customers is designed and managed in alignment with security best practices and a variety of IT security standards, including:
  • SOC 1/SSAE 16/ISAE 3402 (formerly SAS 70)
  • SOC 2
  • SOC 3
  • FISMA, DIACAP, and FedRAMP
  • DOD CSM Levels 1-5
  • PCI DSS Level 1
  • ISO 9001 / ISO 27001
  • ITAR
  • FIPS 140-2
  • MTCS Level 3
And meet several industry-specific standards, including:
  • Criminal Justice Information Services (CJIS)
  • Cloud Security Alliance (CSA)
  • Family Educational Rights and Privacy Act (FERPA)
  • Health Insurance Portability and Accountability Act (HIPAA)
  • Motion Picture Association of America (MPAA)

Physical and Environmental Security 

Storage Decommissioning

  • When a storage device has reached the end of its useful life, AWS procedures include a decommissioning process that is designed to prevent customer data from being exposed to unauthorized individuals.
  • AWS uses the techniques detailed in DoD 5220.22-M (National Industrial Security Program Operating Manual) or NIST 800-88 (Guidelines for Media Sanitization) to destroy data as part of the decommissioning process.
  • All decommissioned magnetic storage devices are degaussed and physically destroyed in accordance with industry-standard practices.

Network Security

Amazon Corporate Segregation

  • AWS Production network is segregated from the Amazon Corporate network and requires a separate set of credentials for logical access.
  • The Amazon Corporate network relies on user IDs, passwords, and Kerberos, while the AWS Production network requires SSH public-key authentication through a bastion host.

Networking Monitoring & Protection

AWS utilizes a wide variety of automated monitoring systems to provide a high level of service performance and availability. These tools monitor server and network usage, port scanning activities, application usage, and unauthorized intrusion attempts. The tools have the ability to set custom performance metrics thresholds for unusual activity.
AWS network provides protection against traditional network security issues
  1. DDoS – AWS uses proprietary DDoS mitigation techniques. Additionally, AWS’s networks are multi-homed across a number of providers to achieve Internet access diversity.
  2. Man in the Middle attacks – AWS APIs are available via SSL-protected endpoints which provide server authentication
  3. IP spoofing – AWS-controlled, host-based firewall infrastructure will not permit an instance to send traffic with a source IP or MAC address other than its own.
  4. Port Scanning – Unauthorized port scans by Amazon EC2 customers are a violation of the AWS Acceptable Use Policy. When unauthorized port scanning is detected by AWS, it is stopped and blocked. Penetration/Vulnerability testing can be performed only on your own instances, with mandatory prior approval, and must not violate the AWS Acceptable Use Policy.
  5. Packet Sniffing by other tenants – It is not possible for a virtual instance running in promiscuous mode to receive or “sniff” traffic that is intended for a different virtual instance. While you can place your interfaces into promiscuous mode, the hypervisor will not deliver any traffic to them that is not addressed to them. Even two virtual instances that are owned by the same customer located on the same physical host cannot listen to each other’s traffic.

Secure Design Principles

AWS’s development process follows :-
  • Secure software development best practices, which include formal design reviews by the AWS Security Team, threat modeling, and completion of a risk assessment
  • Static code analysis tools are run as a part of the standard build process
  • Recurring penetration testing performed by carefully selected industry experts

AWS Account Security Features

AWS account security features include credentials for access control, HTTPS endpoints for encrypted data transmission, the creation of separate IAM user accounts, user activity logging for security monitoring, and Trusted Advisor security checks

AWS Credentials

AWS IAM Credentials

Individual User Accounts

Do not use the Root account; instead, create an IAM user for each user, provide them with a unique set of credentials, and grant least privilege as required to perform their job function
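
For example, a minimal boto3 sketch of creating an individual IAM user with a narrowly scoped managed policy might look like this. The user name, password, and policy are placeholders chosen for illustration.

    import boto3

    iam = boto3.client("iam")

    # Individual user instead of shared root credentials
    iam.create_user(UserName="alice")
    iam.create_login_profile(UserName="alice",
                             Password="ChangeMe#2024",   # placeholder; rotate and enable MFA
                             PasswordResetRequired=True)

    # Grant only what the job function needs, e.g. read-only access to S3
    iam.attach_user_policy(UserName="alice",
                           PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess")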

Secure HTTPS Access Points

Use HTTPS, provided by all AWS services, for data transmissions, which uses public-key cryptography to prevent eavesdropping, tampering, and forgery

Security Logs

Use Amazon CloudTrail which provides logs of all requests for AWS resources within the account and captures information about every API call to every AWS resource you use, including sign-in events
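
A minimal boto3 sketch of querying CloudTrail for recent console sign-in events follows; the lookup attribute and time window are illustrative, not part of the original notes.

    from datetime import datetime, timedelta
    import boto3

    cloudtrail = boto3.client("cloudtrail")

    # Console sign-in events from the last 24 hours
    events = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
        StartTime=datetime.utcnow() - timedelta(hours=24),
        EndTime=datetime.utcnow())
    for event in events["Events"]:
        print(event["EventTime"], event["EventName"], event.get("Username"))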

Trusted Advisor Security Checks

Use Trusted Advisor service which helps inspect AWS environment and provide recommendations when opportunities may exist to optimize cost, improve system performance, or close security gaps

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. In the shared security model, AWS is responsible for which of the following security best practices (check all that apply) :
    1. Penetration testing
    2. Operating system account security management (User responsibility)
    3. Threat modeling
    4. User group access management (User responsibility)
    5. Static code analysis (AWS development cycle responsibility)
  2. You are running a web-application on AWS consisting of the following components: an Elastic Load Balancer (ELB), an Auto-Scaling Group of EC2 instances running Linux/PHP/Apache, and Relational Database Service (RDS) MySQL. Which security measures fall into AWS’s responsibility?
    1. Protect the EC2 instances against unsolicited access by enforcing the principle of least-privilege access (User responsibility)
    2. Protect against IP spoofing or packet sniffing
    3. Assure all communication between EC2 instances and ELB is encrypted (User responsibility)
    4. Install latest security patches on ELB, RDS and EC2 instances (User responsibility)
  3. In AWS, which security aspects are the customer’s responsibility? Choose 4 answers
    1. Controlling physical access to compute resources (AWS responsibility)
    2. Patch management on the EC2 instances operating system
    3. Encryption of EBS (Elastic Block Storage) volumes
    4. Life-cycle management of IAM credentials
    5. Decommissioning storage devices (AWS responsibility)
    6. Security Group and ACL (Access Control List) settings
  4. Per the AWS Acceptable Use Policy, penetration testing of EC2 instances:
    1. May be performed by AWS, and will be performed by AWS upon customer request.
    2. May be performed by AWS, and is periodically performed by AWS.
    3. Are expressly prohibited under all circumstances.
    4. May be performed by the customer on their own instances with prior authorization from AWS.
    5. May be performed by the customer on their own instances, only if performed from EC2 instances
  5. Which is an operational process performed by AWS for data security?
    1. AES-256 encryption of data stored on any shared storage device (User responsibility)
    2. Decommissioning of storage devices using industry-standard practices
    3. Background virus scans of EBS volumes and EBS snapshots (No virus scan is performed by AWS on User instances)
    4. Replication of data across multiple AWS Regions (AWS does not replicate data across regions unless done by User)
    5. Secure wiping of EBS data when an EBS volume is unmounted (data is not wiped off on EBS volume when unmounted and it can be remounted on other EC2 instance)

References

AWS S3 Data Durability

Sample Exam Question :-
A customer is leveraging Amazon Simple Storage Service in eu-west-1 to store static content for web-based property. The customer is storing objects using the Standard Storage class. Where are the customers’ objects replicated?
  1. Single facility in eu-west-1 and a single facility in eu-central-1
  2. Single facility in eu-west-1 and a single facility in us-east-1
  3. Multiple facilities in eu-west-1
  4. A single facility in eu-west-1
Answer :-
The question targets S3 Data Durability, which states that objects are stored redundantly across multiple facilities within the same Amazon S3 region.

Amazon S3 provides a highly durable storage infrastructure designed for mission-critical and primary data storage. Objects are redundantly stored on multiple devices across multiple facilities in an Amazon S3 region. To help better ensure data durability, Amazon S3 PUT and PUT Object copy operations synchronously store your data across multiple facilities before returning SUCCESS. Once the objects are stored, Amazon S3 maintains their durability by quickly detecting and repairing any lost redundancy.
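
As a small illustration, uploading an object with the Standard storage class via boto3 might look like the sketch below; the bucket name is a placeholder and must already exist in eu-west-1.

    import boto3

    s3 = boto3.client("s3", region_name="eu-west-1")

    # S3 stores the object redundantly across multiple facilities in eu-west-1
    # before returning SUCCESS for the PUT
    s3.put_object(Bucket="my-static-content-bucket",
                  Key="index.html",
                  Body=b"<html>hello</html>",
                  StorageClass="STANDARD")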

AWS EBS vs Instance Store

AWS EBS vs Instance Store

  • EC2 instances support two types of block-level storage
  • EC2 Instances can be launched using either Elastic Block Store (EBS) or Instance Store volume as root volumes and additional volumes.
  • EC2 instances can be launched by choosing between AMIs backed by EC2 instance store and AMIs backed by EBS. However, AWS recommends using EBS backed AMIs because they launch faster and use persistent storage.

Instance Store (Ephemeral storage)

  • An Instance store backed instance is an EC2 instance using an Instance store as root device volume created from a template stored in S3.
  • Instance store volumes access storage from disks that are physically attached to the host computer.
  • When an Instance store-backed instance is launched, the image that is used to boot the instance is copied to the root volume (typically sda1).
  • Instance store provides temporary block-level storage for instances.
  • Data on an instance store volume persists only during the life of the associated instance; if an instance is stopped or terminated, any data on instance store volumes is lost.

Key points for Instance store backed Instance

  1. Boot time is slower than for EBS-backed instances, usually less than 5 min
  2. Can be selected as Root Volume and attached as additional volumes
  3. Instance store backed Instances can be of a maximum 10GiB volume size
  4. Instance store volumes can be attached as additional volumes only when the instance is being launched and cannot be attached once the Instance is up and running (see the launch sketch after this list)
  5. Instance store backed Instances cannot be stopped; when stopped and started, AWS does not guarantee the instance would be launched on the same host, and hence the data would be lost
  6. Data on Instance store volume is LOST in the following scenarios:-
    • Failure of an underlying drive
    • Stopping an EBS-backed instance where instance stores are attached as additional volumes
    • Termination of the Instance
  7. Data on Instance store volume is NOT LOST when the instance is rebooted
  8. For EC2 instance store-backed instances AWS recommends to
    1. distribute the data on the instance stores across multiple AZs
    2. back up critical data from the instance store volumes to persistent storage on a regular basis.
  9. AMI creation requires the usage of AMI tools and needs to be executed from within the running instance.
  10. Instance store backed Instances cannot be upgraded
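
As a minimal boto3 sketch of mapping an instance store volume at launch time, the snippet below maps an ephemeral device in the launch request; the AMI id and instance type are placeholders, and only instance types that provide instance store volumes honour such mappings.

    import boto3

    ec2 = boto3.client("ec2")

    # Instance store volumes can only be mapped when the instance is launched
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",     # placeholder AMI
        InstanceType="m3.medium",            # instance type with instance store volumes
        MinCount=1, MaxCount=1,
        BlockDeviceMappings=[
            {"DeviceName": "/dev/sdb", "VirtualName": "ephemeral0"},
        ])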

Elastic Block Store (EBS)

  • An “EBS-backed” instance means that the root device for an instance launched from the AMI is an EBS volume created from an EBS snapshot
  • An EBS volume behaves like a raw, unformatted, external block device that can be attached to a single instance and is not physically attached to the Instance host computer (more like network-attached storage).
  • Volume persists independently from the running life of an instance.
  • After an EBS volume is attached to an instance, you can use it like any other physical hard drive.
  • EBS volume can be detached from one instance and attached to another instance (see the sketch after this list)
  • EBS volumes can be created as encrypted volumes using the EBS encryption feature
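
A minimal boto3 sketch of the create/attach/detach lifecycle described above follows; the region, AZ, volume size/type, and instance ids are placeholders, and the volume and instances must be in the same AZ.

    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")

    # Create an encrypted 100 GiB gp3 volume in a specific AZ
    volume = ec2.create_volume(AvailabilityZone="eu-west-1a",
                               Size=100, VolumeType="gp3", Encrypted=True)
    ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

    # Attach to one instance, detach, then attach to another instance in the same AZ
    ec2.attach_volume(VolumeId=volume["VolumeId"],
                      InstanceId="i-0123456789abcdef0", Device="/dev/sdf")
    ec2.detach_volume(VolumeId=volume["VolumeId"])
    ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
    ec2.attach_volume(VolumeId=volume["VolumeId"],
                      InstanceId="i-0fedcba9876543210", Device="/dev/sdf")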

Key points for EBS backed Instance

  1. Boot time is very fast usually less than a min
  2. Can be selected as Root Volume and attached as additional volumes
  3. EBS backed Instances can be of a maximum 64TiB volume size, depending upon the OS
  4. EBS volume can be attached as additional volumes when the Instance is launched and even when the Instance is up and running
  5. Data on the EBS volume is LOST in the following scenarios (see the DeleteOnTermination sketch after this list):-
    1. EBS Root volume, if the Delete On Termination flag is enabled, which is the default.
    2. Attached EBS volumes, if the Delete On Termination flag is enabled. It is disabled by default.
  6. Data on EBS volume is NOT LOST in the following scenarios:-
    • Reboot of the Instance
    • Stopping an EBS-backed instance
    • Termination of the Instance for the additional EBS volumes. Additional EBS volumes are detached with their data intact
  7. When an EBS-backed instance is in a stopped state, various instance- and volume-related tasks can be done, e.g. you can modify the properties of the instance, change the size of your instance or update the kernel it is using, or attach your root volume to a different running instance for debugging or any other purpose
  8. EBS volumes are AZ scoped and tied to the single AZ in which they were created.
  9. EBS volumes are automatically replicated within that zone to prevent data loss due to the failure of any single hardware component
  10. AMI creation is easy using a Single command
  11. EBS backed Instances can be upgraded for instance type, Kernel, RAM disk, and user data
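
A minimal boto3 sketch of flipping the root volume's DeleteOnTermination flag on a running instance follows; the instance id and device name are placeholders, and the root device name varies by AMI.

    import boto3

    ec2 = boto3.client("ec2")

    # Keep the root EBS volume when the instance is terminated
    ec2.modify_instance_attribute(
        InstanceId="i-0123456789abcdef0",
        BlockDeviceMappings=[{"DeviceName": "/dev/xvda",
                              "Ebs": {"DeleteOnTermination": False}}])

    # Verify the flag
    attr = ec2.describe_instance_attribute(InstanceId="i-0123456789abcdef0",
                                           Attribute="blockDeviceMapping")
    for m in attr["BlockDeviceMappings"]:
        print(m["DeviceName"], m["Ebs"]["DeleteOnTermination"])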

EBS vs Instance Store Comparison

EBS-Backed vs Instance Store-Backed

Boot Times

  • EBS-backed AMIs launch faster than EC2 instance store-backed AMIs.
  • When an EC2 instance store-backed AMI is launched, all the parts have to be retrieved from S3 before the instance is available.
  • When an EBS-backed AMI is launched, parts are lazily loaded and only the parts required to boot the instance need to be retrieved from the snapshot before the instance is available.
  • However, the performance of an instance that uses an EBS volume for its root device is slower for a short time while the remaining parts are retrieved from the snapshot and loaded into the volume.
  • When you stop and restart the instance, it launches quickly, because the state is stored in an EBS volume.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. An EC2 EBS-backed (EBS root) instance is stopped. What happens to the data on any ephemeral store volumes?
    1. Data is automatically saved in an EBS volume.
    2. Data is unavailable until the instance is restarted.
    3. Data will be deleted and will no longer be accessible.
    4. Data is automatically saved as an EBS snapshot.
  2. When an EC2 instance that is backed by an S3-based AMI is terminated, what happens to the data on the root volume?
    1. Data is automatically saved as an EBS snapshot.
    2. Data is automatically saved as an EBS volume.
    3. Data is unavailable until the instance is restarted.
    4. Data is automatically deleted.
  3. Which of the following will occur when an EC2 instance in a VPC (Virtual Private Cloud) with an associated Elastic IP is stopped and started? (Choose 2 answers)
    1. The Elastic IP will be dissociated from the instance
    2. All data on instance-store devices will be lost
    3. All data on EBS (Elastic Block Store) devices will be lost
    4. The ENI (Elastic Network Interface) is detached
    5. The underlying host for the instance is changed
  4. Which of the following provides the fastest storage medium?
    1. Amazon S3
    2. Amazon EBS using Provisioned IOPS (PIOPS)
    3. SSD Instance (ephemeral) store (SSD Instance Storage provides 100,000 IOPS on some instance types, much faster than any network-attached storage)
    4. AWS Storage Gateway

References

AWS_EC2_Root_Device_Storage