AWS Disaster Recovery – Whitepaper


The AWS Disaster Recovery whitepaper is one of the most important whitepapers for both the Associate & Professional AWS Certification exams

Disaster Recovery Overview

  • AWS Disaster Recovery whitepaper highlights AWS services and features that can be leveraged for disaster recovery (DR) processes to significantly minimize the impact on data, system, and overall business operations.
  • It outlines best practices to improve your DR processes, from minimal investments to full-scale availability and fault tolerance, and describes how AWS services can be used to reduce cost and ensure business continuity during a DR event
  • Disaster recovery (DR) is about preparing for and recovering from a disaster. Any event that has a negative impact on a company’s business continuity or finances could be termed a disaster. One of the AWS best practices is to always design your systems for failure

Disaster Recovery Key AWS services

  1. Region
    • AWS services are available in multiple regions around the globe, and the DR site location can be selected as appropriate, in addition to the primary site location
  2. Storage
    • Amazon S3
      • provides a highly durable (99.999999999%) storage infrastructure designed for mission-critical and primary data storage.
      • stores Objects redundantly on multiple devices across multiple facilities within a region
    • Amazon Glacier
      • provides extremely low-cost storage for data archiving and backup.
      • Objects are optimized for infrequent access, for which retrieval times of several (3-5) hours are adequate.
    • Amazon EBS
      • provides the ability to create point-in-time snapshots of data volumes.
      • Snapshots can then be used to create volumes that can be attached to running instances (see the snapshot-copy sketch after this list)
    • AWS Storage Gateway
      • a service that provides seamless and highly secure integration between an on-premises IT environment and the storage infrastructure of AWS.
    • AWS Import/Export
      • accelerates moving large amounts of data into and out of AWS by using portable storage devices for transport, bypassing the Internet
      • transfers data directly onto and off of storage devices using Amazon’s high-speed internal network
  3. Compute
    • Amazon EC2
      • provides resizable compute capacity in the cloud which can be easily created and scaled.
      • EC2 instances can be created quickly using preconfigured AMIs
      • EC2 instances can be launched in multiple AZs, which are engineered to be insulated from failures in other AZs
  4. Networking
    • Amazon Route 53
      • is a highly available and scalable DNS web service
      • includes a number of global load-balancing capabilities that can be effective when dealing with DR scenarios, e.g., DNS endpoint health checks and the ability to fail over between multiple endpoints
    • Elastic IP
      • Elastic IP addresses are static IP addresses designed for dynamic cloud computing.
      • They enable masking of instance or Availability Zone failures by programmatically remapping the address to another instance.
    • Elastic Load Balancing (ELB)
      • performs health checks and automatically distributes incoming application traffic across multiple EC2 instances
    • Amazon Virtual Private Cloud (Amazon VPC)
      • allows provisioning of a private, isolated section of the AWS cloud where resources can be launched in a defined virtual network
    • AWS Direct Connect
      • makes it easy to set up a dedicated network connection from an on-premises environment to AWS
  5. Databases
    • RDS, DynamoDB, and Redshift are provided as fully managed RDBMS, NoSQL, and data warehouse solutions that can scale up easily
    • DynamoDB offers cross-region replication
    • RDS provides Multi-AZ and Read Replicas, as well as the ability to copy snapshots from one region to another
  6. Deployment Orchestration
    • CloudFormation
      • gives developers and systems administrators an easy way to create a collection of related AWS resources and provision them in an orderly and predictable fashion
    • Elastic Beanstalk
      • is an easy-to-use service for deploying and scaling web applications and services
    • OpsWorks
      • is an application management service that makes it easy to deploy and operate applications of all types and sizes.
      • An environment can be defined as a series of layers, with each layer configured as a tier of the application.
      • has automatic host replacement, so in the event of an instance failure it will be automatically replaced.
      • can be used in the preparation phase to template the environment, and combined with AWS CloudFormation in the recovery phase.
      • Stacks can be quickly provisioned from the stored configuration to support the defined RTO.
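
As a concrete illustration of the storage and region building blocks above, here is a minimal boto3 sketch (Python) of a common DR preparation step: taking a point-in-time EBS snapshot and copying it into a DR region so volumes can be recreated there. The volume ID and region names are placeholder assumptions, not values from the whitepaper.

```python
import boto3

# placeholder regions: primary site in us-east-1, DR site in us-west-2
source = boto3.client("ec2", region_name="us-east-1")
dr = boto3.client("ec2", region_name="us-west-2")

# create a point-in-time snapshot of a data volume (placeholder volume ID)
snap = source.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="nightly DR snapshot",
)
source.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# copy the completed snapshot into the DR region; volumes can then be
# created from the copy and attached to recovery instances in that region
copy = dr.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId=snap["SnapshotId"],
    Description="DR copy of nightly snapshot",
)
print("DR snapshot ready:", copy["SnapshotId"])
```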

Key factors for Disaster Planning

Disaster Recovery RTO and RPO Definitions

Recovery Time Objective (RTO) – The time it takes after a disruption to restore a business process to its service level, as defined by the operational level agreement (OLA), e.g., if the RTO is 1 hour and a disaster occurs at 12:00 p.m. (noon), then the DR process should restore the systems to an acceptable service level within an hour, i.e., by 1:00 p.m.

Recovery Point Objective (RPO) – The acceptable amount of data loss measured in time before the disaster occurs, e.g., if a disaster occurs at 12:00 p.m. (noon) and the RPO is one hour, the system should recover all data that was in the system before 11:00 a.m.

Disaster Recovery Scenarios

  • Disaster Recovery scenarios can be implemented with the primary infrastructure running in your data center in conjunction with AWS
  • Disaster Recovery scenarios still apply if the primary site is running in AWS, using AWS multi-region capabilities
  • Combinations and variations of the options below are always possible

Disaster Recovery Scenarios Options

  1. Backup & Restore (Data backed up and restored)
  2. Pilot Light (Only Minimal critical functionalities)
  3. Warm Standby (Fully Functional Scaled down version)
  4. Multi-Site (Active-Active)

For the DR scenario options, RTO and RPO decrease while cost increases as you move from the Backup & Restore option (left) to the Multi-Site option (right)

Backup & Restore

AWS can be used to back up data in a cost-effective, durable, and secure manner, as well as to recover it quickly and reliably.

Backup phase

In most traditional environments, data is backed up to tape and sent off-site regularly, which means restoring systems in the event of a disruption or disaster takes a long time.

[Figure: Backup Restore - Backup Phase]

  1. Amazon S3 can be used to back up data and perform quick restores; it is also accessible from any location (see the backup sketch after this list)
  2. AWS Import/Export can be used to transfer large data sets by shipping storage devices directly to AWS bypassing the Internet
  3. Amazon Glacier can be used for archiving data, where retrieval times of several hours are adequate and acceptable
  4. AWS Storage Gateway enables snapshots (which can be used to create EBS volumes) of the on-premises data volumes to be transparently copied into S3 for backup. It can be used either as a backup solution (Gateway-stored volumes) or as a primary data store (Gateway-cached volumes)
  5. AWS Direct Connect can be used to transfer data directly from on-premises to AWS consistently and at high speed
  6. Snapshots of Amazon EBS volumes, Amazon RDS databases, and Amazon Redshift data warehouses can be stored in Amazon S3
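
A minimal sketch of the backup phase, assuming a local database dump and a hypothetical S3 bucket named example-dr-backups: the dump is uploaded to S3 for quick restores, and a lifecycle rule transitions older backups to Glacier for low-cost archival.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-dr-backups"  # hypothetical bucket name

# upload the latest database dump to S3 for fast, durable backup storage
s3.upload_file("/backups/db-dump.sql.gz", bucket, "db/db-dump.sql.gz")

# lifecycle rule: archive backups older than 30 days to Glacier,
# where multi-hour retrieval times are acceptable
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-old-backups",
            "Filter": {"Prefix": "db/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }]
    },
)
```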

Restore phase

The backed-up data can then be used to quickly restore systems and create compute and database instances (see the restore sketch after the steps below)

[Figure: Backup Restore - Recovery Phase]

Key steps for Backup and Restore:
1. Select an appropriate tool or method to back up the data into AWS.
2. Ensure an appropriate retention policy for this data.
3. Ensure appropriate security measures are in place for this data, including encryption and access policies.
4. Regularly test the recovery of this data and the restoration of the system.
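
To illustrate the restore phase, a minimal boto3 sketch that recreates a data volume in the DR region from a copied snapshot and attaches it to a recovery instance launched from a preconfigured AMI; all IDs and the region are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")  # assumed DR region

# recreate the data volume from the snapshot copied during the backup phase
vol = ec2.create_volume(
    SnapshotId="snap-0123456789abcdef0",   # placeholder snapshot ID
    AvailabilityZone="us-west-2a",
    VolumeType="gp3",
)
ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])

# attach it to the recovery instance launched from a preconfigured AMI
ec2.attach_volume(
    VolumeId=vol["VolumeId"],
    InstanceId="i-0123456789abcdef0",      # placeholder instance ID
    Device="/dev/sdf",
)
```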

Pilot Light

In the Pilot Light Disaster Recovery scenario, a minimal version of the environment is always running in the cloud, hosting only the critical core of the application, e.g., databases.

In this approach:

  1. Maintain a pilot light by configuring and running the most critical core elements of the system in AWS, e.g., databases, where the data needs to be replicated and kept up to date.
  2. During recovery, a full-scale production environment, e.g., application and web servers, can be rapidly provisioned around this critical core (using preconfigured AMIs and EBS volume snapshots)
  3. For networking, use either an ELB to distribute traffic to multiple instances with DNS pointing to the load balancer, or preallocated Elastic IP addresses associated with the instances
Preparation phase steps:
  1. Set up Amazon EC2 or RDS instances to replicate or mirror critical data.
  2. Ensure that all supporting custom software packages are available in AWS.
  3. Create and maintain AMIs of key servers where fast recovery is required (see the sketch after this list).
  4. Regularly run these servers, test them, and apply any software updates and configuration changes.
  5. Consider automating the provisioning of AWS resources.
[Figure: Pilot Light Scenario - Preparation Phase]
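
A minimal sketch of step 3 above: baking and maintaining an AMI of a key server so it can be launched quickly during recovery. The instance ID and image name are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# bake a fresh AMI of a key server so it can be launched quickly during
# recovery; NoReboot avoids downtime at the cost of file-system consistency
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",      # placeholder instance ID
    Name="app-server-golden-2024-01-01",   # placeholder image name
    NoReboot=True,
)
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])
print("AMI ready for DR:", image["ImageId"])
```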

Recovery phase steps:

  1. Start the application EC2 instances from your custom AMIs.
  2. Resize existing database/data store instances to process the increased traffic, e.g., RDS instances can be easily scaled vertically, while data stores on EC2 instances can be scaled horizontally
  3. Add additional database/data store instances to give the DR site resilience in the data tier, e.g., turn on Multi-AZ for RDS to improve resilience.
  4. Change DNS to point at the Amazon EC2 servers (see the Route 53 sketch below).
  5. Install and configure any non-AMI based systems, ideally in an automated way.

[Figure: Pilot Light Scenario - Recovery Phase]
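
A minimal Route 53 sketch for step 4 (changing DNS to point at the recovered environment); the hosted zone ID, record name, and Elastic IP are placeholder assumptions.

```python
import boto3

r53 = boto3.client("route53")

# repoint the application record at the recovered environment's Elastic IP;
# a low TTL keeps the cut-over fast
r53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",  # placeholder hosted zone ID
    ChangeBatch={
        "Comment": "DR failover: point app at recovered environment",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com.",
                "Type": "A",
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }],
    },
)
```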

Warm Standby

  • In the Warm Standby DR scenario, a scaled-down but fully functional version of the environment, identical to the business-critical systems, is always running in the cloud
  • This setup can be used for testing, quality assurance, or internal use.
  • In case of a disaster, the system can be quickly scaled up or out to handle the production load.

Preparation phase steps:

  1. Set up Amazon EC2 instances to replicate or mirror data.
  2. Create and maintain AMIs for faster provisioning
  3. Run the application using a minimal footprint of EC2 instances or AWS infrastructure.
  4. Patch and update software and configuration files in line with your live environment.

[Figure: Warm Standby - Preparation Phase]

Recovery phase steps:

  1. Increase the size of the Amazon EC2 fleets in service with the load balancer (horizontal scaling).
  2. Start applications on larger Amazon EC2 instance types as needed (vertical scaling).
  3. Either manually change the DNS records, or use Route 53 automated health checks to route all the traffic to the AWS environment.
  4. Consider using Auto Scaling to right-size the fleet or accommodate the increased load.
  5. Add resilience or scale up the database to guard against the DR site itself going down (a scaling sketch follows this list)

[Figure: Warm Standby - Recovery Phase]
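
A minimal sketch of steps 1 and 4 (scaling the standby fleet out to production capacity with Auto Scaling); the group name and sizes are placeholder assumptions.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# scale the warm-standby fleet from its minimal footprint
# out to full production capacity
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="warm-standby-web",  # placeholder group name
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,
)
```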

Multi-Site

  • Multi-Site is an active-active DR approach, wherein an identical solution runs on AWS alongside the on-site infrastructure.
  • Traffic can be distributed across both infrastructures as needed using a DNS weighted routing approach.
  • In case of a disaster, DNS can be adjusted to send all traffic to the AWS environment, and the AWS infrastructure scaled accordingly.

Preparation phase steps:

  1. Set up your AWS environment to duplicate the production environment.
  2. Set up DNS weighting, or a similar traffic-routing technology, to distribute incoming requests to both sites (see the weighted-routing sketch below).
  3. Configure automated failover to re-route traffic away from the affected site, e.g., the application can check whether the primary DB is available and, if not, redirect to the AWS DB
[Figure: Multi-Site - Preparation Phase]
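
A minimal sketch of step 2, assuming a Route 53 hosted zone: weighted A records split traffic between the on-premises site and AWS; during recovery the same call can be re-issued with weights 0 and 100 to shift all traffic to AWS. Zone ID, record name, and IPs are placeholders.

```python
import boto3

r53 = boto3.client("route53")

def weighted_record(set_id, ip, weight):
    # one weighted A record per site; Route 53 distributes queries
    # in proportion to the weights
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com.",
            "Type": "A",
            "SetIdentifier": set_id,
            "Weight": weight,
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    }

# normal operation: most traffic on-premises, a share to AWS;
# for DR, re-issue with weights 0 (onprem) and 100 (aws)
r53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",  # placeholder hosted zone ID
    ChangeBatch={"Changes": [
        weighted_record("onprem", "198.51.100.10", 70),
        weighted_record("aws", "203.0.113.10", 30),
    ]},
)
```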

Recovery phase steps:

  1. Either manually or by using DNS failover, change the DNS weighting so that all requests are sent to the AWS site.
  2. Have application logic for failover to use the local AWS database servers for all queries.
  3. Consider using Auto Scaling to automatically right-size the AWS fleet.

[Figure: Multi-Site - Recovery Phase]

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Which of these Disaster Recovery options costs the least?
    1. Pilot Light (most systems are down and brought up only after disaster)
    2. Fully Working Low capacity Warm standby
    3. Multi site Active-Active
  2. Your company currently has a 2-tier web application running in an on-premises data center. You have experienced several infrastructure failures in the past two months resulting in significant financial losses. Your CIO strongly agrees to move the application to AWS. While working on achieving buy-in from the other company executives, he asks you to develop a disaster recovery plan to help improve business continuity in the short term. He specifies a target Recovery Time Objective (RTO) of 4 hours and a Recovery Point Objective (RPO) of 1 hour or less. He also asks you to implement the solution within 2 weeks. Your database is 200GB in size and you have a 20Mbps Internet connection. How would you do this while minimizing costs?
    1. Create an EBS backed private AMI which includes a fresh install of your application. Setup a script in your data center to backup the local database every 1 hour and to encrypt and copy the resulting file to an S3 bucket using multi-part upload (while an AMI is the right approach to keep costs down, upload to S3 over the 20Mbps link is very slow)
    2. Install your application on a compute-optimized EC2 instance capable of supporting the application’s average load. Synchronously replicate transactions from your on-premises database to a database instance in AWS across a secure Direct Connect connection. (a running compute-optimized EC2 instance as well as Direct Connect are expensive to start with; also, Direct Connect cannot be implemented in 2 weeks)
    3. Deploy your application on EC2 instances within an Auto Scaling group across multiple availability zones. Asynchronously replicate transactions from your on-premises database to a database instance in AWS across a secure VPN connection. (while a VPN can be set up quickly and asynchronous replication over it would work, keeping the full fleet of instances running in the DR site is expensive)
    4. Create an EBS backed private AMI that includes a fresh install of your application. Develop a CloudFormation template which includes your AMI and the required EC2, Auto Scaling, and ELB resources to support deploying the application across multiple Availability Zones. Asynchronously replicate transactions from your on-premises database to a database instance in AWS across a secure VPN connection. (Pilot Light approach, with only the DB running and replicating, while you have preconfigured AMIs and Auto Scaling configuration)
  3. You are designing an architecture that can recover from a disaster very quickly with minimum down time to the end users. Which of the following approaches is best?
    1. Leverage Route 53 health checks to automatically fail over to backup site when the primary site becomes unreachable
    2. Implement the Pilot Light DR architecture so that traffic can be processed seamlessly in case the primary site becomes unreachable
    3. Implement either Fully Working Low Capacity Standby or Multi-site Active-Active architecture so that the end users will not experience any delay even if the primary site becomes unreachable
    4. Implement multi-region architecture to ensure high availability
  4. Your customer wishes to deploy an enterprise application to AWS that will consist of several web servers, several application servers and a small (50GB) Oracle database. Information is stored, both in the database and the file systems of the various servers. The backup system must support database recovery, whole server and whole disk restores, and individual file restores with a recovery time of no more than two hours. They have chosen to use RDS Oracle as the database. Which backup architecture will meet these requirements?
    1. Backup RDS using automated daily DB backups. Backup the EC2 instances using AMIs and supplement with file-level backup to S3 using traditional enterprise backup software to provide file level restore (RDS automated backups with file-level backups can be used)
    2. Backup RDS using a Multi-AZ Deployment. Backup the EC2 instances using AMIs, and supplement by copying file system data to S3 to provide file level restore (Multi-AZ is more of a disaster recovery solution, not a backup)
    3. Backup RDS using automated daily DB backups. Backup the EC2 instances using EBS snapshots and supplement with file-level backups to Amazon Glacier using traditional enterprise backup software to provide file level restore (Glacier not an option with the 2 hours RTO)
    4. Backup RDS database to S3 using Oracle RMAN. Backup the EC2 instances using AMIs, and supplement with EBS snapshots for individual volume restore. (Will use RMAN only if Database hosted on EC2 and not when using RDS)
  5. Which statements are true about the Pilot Light Disaster recovery architecture pattern?
    1. Pilot Light is a hot standby (Cold Standby)
    2. Enables replication of all critical data to AWS
    3. Very cost-effective DR pattern
    4. Can scale the system as needed to handle current production load
  6. An ERP application is deployed across multiple AZs in a single region. In the event of failure, the Recovery Time Objective (RTO) must be less than 3 hours, and the Recovery Point Objective (RPO) must be 15 minutes. The customer realizes that data corruption occurred roughly 1.5 hours ago. What DR strategy could be used to achieve this RTO and RPO in the event of this kind of failure?
    1. Take hourly DB backups to S3, with transaction logs stored in S3 every 5 minutes
    2. Use synchronous database master-slave replication between two availability zones. (replication won’t help to backtrack, as the corruption would be replicated synchronously as well)
    3. Take hourly DB backups to EC2 Instance store volumes with transaction logs stored in S3 every 5 minutes. (instance store is not a durable or preferred storage option)
    4. Take 15 minute DB backups stored in Glacier with transaction logs stored in S3 every 5 minutes. (Glacier does not meet the RTO)
  7. Your company’s on-premises content management system has the following architecture:
    – Application Tier – Java code on a JBoss application server
    – Database Tier – Oracle database regularly backed up to Amazon Simple Storage Service (S3) using the Oracle RMAN backup utility
    – Static Content – stored on a 512GB Gateway-stored Storage Gateway volume attached to the application server via the iSCSI interface

    Which AWS-based disaster recovery strategy will give you the best RTO?

    1. Deploy the Oracle database and the JBoss app server on EC2. Restore the RMAN Oracle backups from Amazon S3. Generate an EBS volume of static content from the Storage Gateway and attach it to the JBoss EC2 server.
    2. Deploy the Oracle database on RDS. Deploy the JBoss app server on EC2. Restore the RMAN Oracle backups from Amazon Glacier. Generate an EBS volume of static content from the Storage Gateway and attach it to the JBoss EC2 server. (Glacier does not help to give the best RTO)
    3. Deploy the Oracle database and the JBoss app server on EC2. Restore the RMAN Oracle backups from Amazon S3. Restore the static content by attaching an AWS Storage Gateway running on Amazon EC2 as an iSCSI volume to the JBoss EC2 server. (no need to attach the Storage Gateway as an iSCSI volume; an EBS volume can simply be created from the gateway snapshot)
    4. Deploy the Oracle database and the JBoss app server on EC2. Restore the RMAN Oracle backups from Amazon S3. Restore the static content from an AWS Storage Gateway-VTL running on Amazon EC2 (VTL is Virtual Tape library and doesn’t fit the RTO)


AWS Services with Root Privileges

  • AWS provides the root or system privileges only for a limited set of services, which includes
    • Elastic Compute Cloud (EC2)
    • Elastic MapReduce (EMR)
    • Elastic Beanstalk
    • OpsWorks
  • AWS does not provide root privileges for managed services like RDS, DynamoDB, S3, Glacier, etc.
  • For RDS, if you need admin privileges or want to use features not enabled by RDS, you can go with the database-on-EC2 approach

Sample Exam Questions

  1. Which services allow the customer to retain full administrative privileges of the underlying EC2 instances? Choose 2 answers
    1. Amazon Elastic Map Reduce
    2. Elastic Load Balancing
    3. AWS Elastic Beanstalk
    4. Amazon ElastiCache
    5. Amazon Relational Database service
  2. Which of the services provide root access?
    1. Elastic Beanstalk
    2. EC2
    3. OpsWorks
    4. DynamoDB
    5. RDS
    6. S3
  3. A client application requires operating system privileges on a relational database server. What is an appropriate configuration for highly available database architecture?
    1. A standalone Amazon EC2 instance
    2. Amazon RDS in a Multi-AZ configuration
    3. Amazon EC2 instances in a replication configuration utilizing a single Availability Zone
    4. Amazon EC2 instances in a replication configuration utilizing two different Availability Zones

AWS Autoscaling Troubleshooting

Exam Question Scenario

EC2 instances fail to launch with Autoscaling configuration

Description

  • An Auto Scaling configuration requires the following:
  • An Auto Scaling launch configuration, which allows you to select the:
    • AMI
    • Instance type
    • IAM role (optional)
    • Security group
    • Key pair file
  • An Auto Scaling group configuration, which allows you to select the AZs used to launch the EC2 instances with the selected launch configuration

Troubleshooting key points:

  • AMI id does not exist or is still pending and cannot be used to launch instances
  • Security group provided in the launch configuration does not exist
  • Key pair associated with the EC2 instance does not exist
  • Autoscaling group not found or is incorrectly configured
  • AZ configured with the Auto Scaling group is no longer supported because it is no longer available
  • Invalid EBS block device mappings
  • Instance type is not supported in the AZ
  • Capacity limits reached, either because of the restriction on the number of instances that can be launched in a region or because AWS is unable to provision the specified instance type in the AZ (e.g., no more Spot or On-Demand instance availability); the scaling activity history usually reports the exact cause, as sketched below
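
When instances fail to launch, the Auto Scaling activity history usually states the exact cause from the list above; here is a minimal boto3 sketch (the group name is a placeholder):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# the scaling activity history records why each launch succeeded or failed
# (bad AMI, missing key pair or security group, capacity limits, AZ issues)
resp = autoscaling.describe_scaling_activities(
    AutoScalingGroupName="my-asg",  # placeholder group name
    MaxRecords=20,
)
for activity in resp["Activities"]:
    print(activity["StatusCode"], "-",
          activity.get("StatusMessage", activity["Description"]))
```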

References

More details @ AWS Autoscaling Developer Guide

AWS VPC Security – Security Group vs NACLs

Security Groups vs NACLs


  • In a VPC, both Security Groups and Network ACLs (NACLs) together help to build a layered network defence.
  • Security groups – Act as a virtual firewall for associated instances, controlling both inbound and outbound traffic at the instance level
  • Network access control lists (NACLs) – Act as a firewall for associated subnets, controlling both inbound and outbound traffic at the subnet level


Security Groups

  • Act at the instance level, not the subnet level.
  • Each instance within a subnet can be assigned a different set of security groups
  • An instance can be assigned up to 5 security groups, with each security group having up to 60 rules.
  • allows separate rules for inbound and outbound traffic.
  • allows adding or removing rules (authorizing or revoking access) for both Inbound (ingress) and Outbound (egress) traffic to the instance
    • Default Security group allows no external inbound traffic but allows inbound traffic from instances with the same security group
    • Default Security group allows all outbound traffic
    • New Security groups start with only an outbound rule that allows all traffic to leave the instances.
  • can specify only Allow rules, but not deny rules
  • can grant access to a specific IP, CIDR range, or to another security group in the VPC or in a peer VPC (requires a VPC peering connection)
  • are evaluated as a whole (cumulative) set of rules, with the most permissive rule taking precedence, e.g., if one rule allows access to TCP port 22 (SSH) from IP address 203.0.113.1 and another rule allows access to TCP port 22 from everyone, everyone has access to TCP port 22.
  • are Stateful – responses to allowed inbound traffic are allowed to flow outbound regardless of outbound rules, and vice versa. Hence an Outbound rule for the response is not needed
  • Instances associated with a security group can’t talk to each other unless rules allowing the traffic are added.
  • are associated with ENIs (network interfaces).
  • are associated with the instance and can be changed, which changes the security groups associated with the primary network interface (eth0); rule changes apply immediately to all instances associated with the security group (a rule sketch follows this list).
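
A minimal sketch of granting access to another security group rather than an IP range, assuming hypothetical app-tier and DB-tier group IDs: because security groups are stateful, no matching outbound rule is needed for the responses.

```python
import boto3

ec2 = boto3.client("ec2")

# allow the app-tier security group to reach the DB tier on MySQL port 3306;
# responses flow back automatically because security groups are stateful
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",        # placeholder: DB-tier group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [
            {"GroupId": "sg-0fedcba9876543210"}  # placeholder: app-tier group
        ],
    }],
)
```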

Connection Tracking

  • Security groups are Stateful as they use Connection tracking to track information about traffic to and from the instance.
  • Responses to inbound traffic are allowed to flow out of the instance regardless of outbound security group rules, and vice versa.
  • Connection Tracking is maintained only if there is no explicit Outbound rule for an Inbound request (and vice versa)
  • However, if there is an explicit Outbound rule for an Inbound request, the response traffic is allowed on the basis of the Outbound rule and not on the Tracking information
  • Tracking flow e.g.
    • If an instance (host A) initiates traffic to host B and uses a protocol other than TCP, UDP, or ICMP, the instance’s firewall only tracks the IP address & protocol number for the purpose of allowing response traffic from host B.
    • If host B initiates traffic to the instance in a separate request within 600 seconds of the original request or response, the instance accepts it regardless of inbound security group rules, because it’s regarded as response traffic.
  • This can be controlled by modifying the security group’s outbound rules to permit only certain types of outbound traffic. Alternatively, NACLs can be used for the subnet, since network ACLs are stateless and therefore do not automatically allow response traffic.

Network Access Control Lists – NACLs

  • A Network ACL (NACL) is an optional layer of security for the VPC that acts as a firewall for controlling traffic in and out of one or more subnets.
  • are not for granular control; they are assigned at the subnet level and apply to all instances in that subnet
  • has separate inbound and outbound rules, and each rule can either allow or deny traffic
    • Default ACL allows all inbound and outbound traffic.
    • A newly created ACL denies all inbound and outbound traffic.
  • A Subnet can be assigned only 1 NACL and if not associated explicitly would be associated implicitly with the default NACL
  • can associate a network ACL with multiple subnets
  • is a numbered list of rules that are evaluated in order, starting with the lowest numbered rule, to determine whether traffic is allowed in or out of any subnet associated with the network ACL, e.g., if Rule #100 allows all traffic and Rule #110 denies all traffic, Rule #100 takes precedence and all the traffic will be allowed (see the deny-rule sketch after this list)
  • are stateless; responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa), e.g., if you enable inbound SSH on port 22 from a specific IP address, you need to add an outbound rule for the response as well.
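
A minimal sketch of NACL rule numbering, assuming a placeholder NACL ID: a deny rule for an offending CIDR is given a lower rule number than the broad allow rule, so it is evaluated, and wins, first. This is also how NACLs block specific IP ranges, something security groups cannot do.

```python
import boto3

ec2 = boto3.client("ec2")

# rules are evaluated lowest-number-first, so this deny (rule 90) takes
# effect before a broader allow rule at, say, number 100
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",  # placeholder NACL ID
    RuleNumber=90,
    Protocol="-1",                # all protocols
    RuleAction="deny",
    Egress=False,                 # inbound rule
    CidrBlock="198.51.100.0/24",  # offending IP block
)
```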

Security Group vs NACLs

[Figure: Security Groups vs NACLs comparison]

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Instance A and instance B are running in two different subnets A and B of a VPC. Instance A is not able to ping instance B. What are two possible reasons for this? (Pick 2 correct answers)
    1. The routing table of subnet A has no target route to subnet B
    2. The security group attached to instance B does not allow inbound ICMP traffic
    3. The policy linked to the IAM role on instance A is not configured correctly
    4. The NACL on subnet B does not allow outbound ICMP traffic
  2. An instance is launched into a VPC subnet with the network ACL configured to allow all inbound traffic and deny all outbound traffic. The instance’s security group is configured to allow SSH from any IP address and deny all outbound traffic. What changes need to be made to allow SSH access to the instance?
    1. The outbound security group needs to be modified to allow outbound traffic.
    2. The outbound network ACL needs to be modified to allow outbound traffic.
    3. Nothing, it can be accessed from any IP address using SSH.
    4. Both the outbound security group and outbound network ACL need to be modified to allow outbound traffic.
  3. From which services can I block incoming/outgoing IPs?
    1. Security Groups
    2. DNS
    3. ELB
    4. VPC subnet
    5. IGW
    6. NACL
  4. What is the difference between a security group in VPC and a network ACL in VPC (choose 3 correct answers)
    1. Security group restricts access to a Subnet while ACL restricts traffic to EC2
    2. Security group restricts access to EC2 while ACL restricts traffic to a subnet
    3. Security group can work outside the VPC also while ACL only works within a VPC
    4. Network ACL performs stateless filtering and Security group provides stateful filtering
    5. Security group can only set Allow rule, while ACL can set Deny rule also
  5. You are currently hosting multiple applications in a VPC and have logged numerous port scans coming in from a specific IP address block. Your security team has requested that all access from the offending IP address block be denied for the next 24 hours. Which of the following is the best method to quickly and temporarily deny access from the specified IP address block?
    1. Create an AD policy to modify Windows Firewall settings on all hosts in the VPC to deny access from the IP address block
    2. Modify the Network ACLs associated with all public subnets in the VPC to deny access from the IP address block
    3. Add a rule to all of the VPC’s Security Groups to deny access from the IP address block
    4. Modify the Windows Firewall settings on all Amazon Machine Images (AMIs) that your organization uses in that VPC to deny access from the IP address block
  6. You have two Elastic Compute Cloud (EC2) instances inside a Virtual Private Cloud (VPC) in the same Availability Zone (AZ) but in different subnets. One instance is running a database and the other instance an application that will interface with the database. You want to confirm that they can talk to each other for your application to work properly. Which two things do we need to confirm in the VPC settings so that these EC2 instances can communicate inside the VPC? Choose 2 answers
    1. A network ACL that allows communication between the two subnets.
    2. Both instances are the same instance class and using the same Key-pair.
    3. That the default route is set to a NAT instance or Internet Gateway (IGW) for them to communicate.
    4. Security groups are set to allow the application host to talk to the database on the right port/protocol
  7. A benefits enrollment company is hosting a 3-tier web application running in a VPC on AWS, which includes a NAT (Network Address Translation) instance in the public Web tier. There is enough provisioned capacity for the expected workload for the new fiscal year benefit enrollment period, plus some extra overhead. Enrollment proceeds nicely for two days, and then the web tier becomes unresponsive. Upon investigation using CloudWatch and other monitoring tools, it is discovered that there is an extremely large and unanticipated amount of inbound traffic coming from a set of 15 specific IP addresses over port 80 from a country where the benefits company has no customers. The web tier instances are so overloaded that benefit enrollment administrators cannot even SSH into them. Which activity would be useful in defending against this attack?
    1. Create a custom route table associated with the web tier and block the attacking IP addresses from the IGW (internet Gateway)
    2. Change the EIP (Elastic IP Address) of the NAT instance in the web tier subnet and update the Main Route Table with the new EIP
    3. Create 15 Security Group rules to block the attacking IP addresses over port 80
    4. Create an inbound NACL (Network Access control list) associated with the web tier subnet with deny rules to block the attacking IP addresses
  8. Which of the following statements describes network ACLs? (Choose 2 answers)
    1. Responses to allowed inbound traffic are allowed to flow outbound regardless of outbound rules, and vice versa (are stateless)
    2. Using network ACLs, you can deny access from a specific IP range
    3. Keep network ACL rules simple and use a security group to restrict application level access
    4. NACLs are associated with a single Availability Zone (associated with Subnet)
  9. You are designing security inside your VPC. You are considering the options for establishing separate security zones and enforcing network traffic rules across the different zones to limit how instances can communicate. How would you accomplish these requirements? Choose 2 answers
    1. Configure a security group for every zone. Configure a default allow all rule. Configure explicit deny rules for the zones that shouldn’t be able to communicate with one another (Security group does not allow deny rules)
    2. Configure your instances to use pre-set IP addresses with an IP address range for every security zone. Configure NACLs to explicitly allow or deny communication between the different IP address ranges, as required for interzone communication
    3. Configure a security group for every zone. Configure allow rules only between zone that need to be able to communicate with one another. Use implicit deny all rule to block any other traffic
    4. Configure multiple subnets in your VPC, one for each zone. Configure routing within your VPC in such a way that each subnet only has routes to other subnets with which it needs to communicate, and doesn’t have routes to subnets with which it shouldn’t be able to communicate. (the local route within a VPC cannot be removed, so subnets can always route to each other)
  10. Your entire AWS infrastructure lives inside of one Amazon VPC. You have an infrastructure monitoring application running on an Amazon instance in Availability Zone (AZ) A of the region, and another application instance running in AZ B. The monitoring application needs to make use of ICMP ping to confirm network reachability of the instance hosting the application. Can you configure the security groups for these instances to only allow the ICMP ping to pass from the monitoring instance to the application instance and nothing else? If so, how?
    1. No. Two instances in two different AZs can’t talk directly to each other via ICMP ping, as that protocol is not allowed across subnet (i.e. broadcast) boundaries (they can communicate)
    2. Yes. Both the monitoring instance and the application instance have to be part of the same security group, and that security group needs to allow inbound ICMP (they need not be part of the same security group)
    3. Yes. The security group for the monitoring instance needs to allow outbound ICMP, and the application instance’s security group needs to allow inbound ICMP (security groups are stateful, so just allow outbound ICMP from the monitoring instance and inbound ICMP on the monitored instance)
    4. Yes. Both the monitoring instance’s security group and the application instance’s security group need to allow both inbound and outbound ICMP ping packets, since ICMP is not a connection-oriented protocol (security groups are stateful)
  11. A user has configured a VPC with a new subnet. The user has created a security group. The user wants to configure that instances of the same subnet communicate with each other. How can the user configure this with the security group?
    1. There is no need for a security group modification as all the instances can communicate with each other inside the same subnet
    2. Configure the subnet as the source in the security group and allow traffic on all the protocols and ports
    3. Configure the security group itself as the source and allow traffic on all the protocols and ports
    4. The user has to use VPC peering to configure this
  12. You are designing a data leak prevention solution for your VPC environment. You want your VPC Instances to be able to access software depots and distributions on the Internet for product updates. The depots and distributions are accessible via third party CDNs by their URLs. You want to explicitly deny any other outbound connections from your VPC instances to hosts on the Internet. Which of the following options would you consider?
    1. Configure a web proxy server in your VPC and enforce URL-based rules for outbound access. Remove default routes. (security groups, NACLs, and route tables cannot use URLs in their rules)
    2. Implement security groups and configure outbound rules to only permit traffic to software depots.
    3. Move all your instances into private VPC subnets remove default routes from all routing tables and add specific routes to the software depots and distributions only.
    4. Implement network access control lists to all specific destinations, with an Implicit deny as a rule.
  13. You have an EC2 Security Group with several running EC2 instances. You change the Security Group rules to allow inbound traffic on a new port and protocol, and launch several new instances in the same Security Group. The new rules apply:
    1. Immediately to all instances in the security group.
    2. Immediately to the new instances only.
    3. Immediately to the new instances, but old instances must be stopped and restarted before the new rules apply.
    4. To all instances, but it may take several minutes for old instances to see the changes.
