AWS S3 Subresources – Certification

AWS S3 Subresources

  • Amazon S3 subresources provide support to store and manage bucket configuration information
  • S3 subresources only exist in the context of a specific bucket or object
  • S3 defines a set of subresources associated with buckets and objects.
  • S3 subresources are subordinate to buckets and objects; that is, they do not exist on their own and are always associated with some other entity, such as a bucket or an object
  • S3 supports various options to configure a bucket, e.g., a bucket can be configured for website hosting, with a lifecycle configuration to manage the objects in the bucket, and with logging of all access to the bucket.

Bucket Subresources

Lifecycle

Refer to My Blog Post about S3 Object Lifecycle Management

Static Website hosting

  • S3 can be used for static website hosting with client-side scripts.
  • S3 does not support server-side scripting
  • S3, in conjunction with Route 53, supports hosting a website at the root domain, which can point to the S3 website endpoint
  • S3 website endpoints do not support HTTPS
  • For S3 website hosting, the content should be made publicly readable, which can be granted using a bucket policy or an ACL on the objects
  • User can configure the index and error documents, as well as conditional redirection based on the object key name
  • Bucket policy applies only to objects owned by the bucket owner. If the bucket contains objects not owned by the bucket owner, public READ permission on those objects should be granted using the object ACL.
  • Requester Pays buckets or DevPay buckets do not allow access through the website endpoint. Any request to such a bucket will receive a 403 Access Denied response
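
As an illustration only, here is a minimal boto3 sketch of enabling website hosting and making the content publicly readable via a bucket policy; the bucket name and document keys are hypothetical and not from any specific setup.

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "example-bucket"  # hypothetical bucket name

# Configure index and error documents for the website endpoint
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Grant public read on all objects owned by the bucket owner via a bucket policy
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```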

Versioning

Refer to My Blog Post about S3 Object Versioning

Policy & Access Control List (ACL)

Refer to My Blog Post about S3 Permissions

CORS (Cross Origin Resource Sharing)

  • All browsers implement the Same-Origin policy, for security reasons, where a web page from one domain can only request resources from the same domain.
  • CORS allows client web applications loaded in one domain to access restricted resources in another domain
  • With CORS support in S3, you can selectively allow cross-origin access to your S3 resources
  • CORS configuration rules identify the origins allowed to access the bucket, the operations (HTTP methods) that would be supported for each origin, and other operation-specific information
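
A minimal CORS configuration sketch using boto3; the bucket name and origin are hypothetical and should be replaced with your own.

```python
import boto3

s3 = boto3.client("s3")

# Allow GET requests from a hypothetical website origin to read objects in this bucket
s3.put_bucket_cors(
    Bucket="example-fonts-bucket",  # hypothetical bucket name
    CORSConfiguration={
        "CORSRules": [{
            "AllowedOrigins": ["http://www.example.com"],
            "AllowedMethods": ["GET"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }]
    },
)
```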

Logging

  • Logging, disabled by default, enables tracking of access requests made to an S3 bucket
  • Each access log record provides details about a single access request, such as the requester, bucket name, request time, request action, response status, and error code, if any.
  • Access log information can be useful in security and access audits and also help learn about the customer base and understand the S3 bill
  • S3 periodically collects access log records, consolidates the records in log files, and then uploads log files to a target bucket as log objects.
  • If logging is enabled on multiple source buckets with the same target bucket, the target bucket will have access logs for all those source buckets, but each log object will report access log records for a specific source bucket
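
A minimal boto3 sketch of enabling server access logging; the source and target bucket names are hypothetical, and the target bucket is assumed to already grant write permission to the Log Delivery group (see the bucket ACL notes later in this post).

```python
import boto3

s3 = boto3.client("s3")

# Deliver access logs for the source bucket to a target bucket under a per-bucket prefix
s3.put_bucket_logging(
    Bucket="example-source-bucket",  # hypothetical source bucket
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "example-log-bucket",  # hypothetical target bucket
            "TargetPrefix": "logs/example-source-bucket/",
        }
    },
)
```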

Tagging

  • S3 provides the tagging subresource to store and manage tags on a bucket
  • Cost allocation tags can be added to the bucket to categorize and track AWS costs
  • AWS can generate a cost allocation report with usage and costs aggregated by the tags applied to the buckets
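
A minimal boto3 sketch of applying cost allocation tags via the tagging subresource; the bucket name and tag keys are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Add cost allocation tags on the bucket's tagging subresource
s3.put_bucket_tagging(
    Bucket="example-bucket",  # hypothetical bucket name
    Tagging={"TagSet": [
        {"Key": "CostCenter", "Value": "Marketing"},
        {"Key": "Project", "Value": "WebsiteAssets"},
    ]},
)
```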

Location

  • When you create a bucket, the AWS region where the bucket will be created needs to be specified
  • S3 stores this information in the location subresource and provides an API for retrieving this information
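
A minimal boto3 sketch of reading the location subresource; the bucket name is hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Retrieve the region stored in the bucket's location subresource
# (LocationConstraint is empty/None for buckets created in us-east-1)
resp = s3.get_bucket_location(Bucket="example-bucket")  # hypothetical bucket name
print(resp.get("LocationConstraint") or "us-east-1")
```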

Notification

  • S3 notification feature enables notifications to be triggered when certain events happen in your bucket
  • Notifications are enabled at Bucket level
  • Notifications can be configured to be filtered by the prefix and suffix of the key name of objects. However, filtering rules cannot be defined with overlapping prefixes, overlapping suffixes, or overlapping prefix and suffix combinations
  • S3 can publish the following events
    • New object created events
      • Can be enabled for PUT, POST or COPY operations
      • You will not receive event notifications from failed operations
    • Object Removal event
      • Can publish delete events for object deletion, versioned object deletion, or insertion of a delete marker
      • You will not receive event notifications from automatic deletes from lifecycle policies or from failed operations.
    • Reduced Redundancy Storage (RRS) object lost event
      • Can be used to reproduce/recreate the Object
  • S3 can publish events to the following destinations
    • Amazon SNS topic
    • Amazon SQS queue
    • AWS Lambda
  • For S3 to be able to publish events to the destination, S3 principal should be granted necessary permissions
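
A minimal boto3 sketch of a notification configuration with a prefix/suffix filter publishing to an SQS queue; the bucket name and queue ARN are hypothetical, and the queue policy is assumed to already allow the S3 principal to send messages.

```python
import boto3

s3 = boto3.client("s3")

# Publish "object created" events for keys under images/ ending in .jpg to an SQS queue
s3.put_bucket_notification_configuration(
    Bucket="example-bucket",  # hypothetical bucket name
    NotificationConfiguration={
        "QueueConfigurations": [{
            "QueueArn": "arn:aws:sqs:us-east-1:123456789012:example-queue",  # hypothetical ARN
            "Events": ["s3:ObjectCreated:*"],
            "Filter": {
                "Key": {
                    "FilterRules": [
                        {"Name": "prefix", "Value": "images/"},
                        {"Name": "suffix", "Value": ".jpg"},
                    ]
                }
            },
        }]
    },
)
```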

Cross Region Replication

  • Cross-region replication is a bucket-level feature that enables automatic, asynchronous copying of objects across buckets in different AWS regions
  • S3 can replicate all or a subset of objects with specific key name prefixes
  • S3 encrypts all data in transit across AWS regions using SSL
  • Object replicas in the destination bucket are exact replicas of the objects in the source bucket with the same key names and the same metadata.
  • Cross Region Replication can be useful for the following scenarios:
    • Compliance requirement to have data backed up across regions
    • Minimize latency for users in different geographic locations by keeping object copies closer to them
    • Operational reasons, e.g., compute clusters in two different regions that analyze the same set of objects
  • Requirements
    • source and destination buckets must be versioning-enabled
    • source and destination buckets must be in different AWS regions
    • objects can be replicated from a source bucket to only one destination bucket
    • S3 must have permission to replicate objects from that source bucket to the destination bucket on your behalf.
    • If the source bucket owner also owns the object, the bucket owner has full permissions to replicate the object. If not, the source bucket owner must have permission for the S3 actions s3:GetObjectVersion and s3:GetObjectVersionACL to read the object and object ACL
    • If you are setting up cross-region replication in a cross-account scenario (where the source and destination buckets are owned by different AWS accounts), the source bucket owner must have permission to replicate objects in the destination bucket.
  • Replicated & Not Replicated
    • Any new objects created after you add a replication configuration are replicated
    • S3 does not retroactively replicate objects that existed before you added replication configuration.
    • Only objects created with SSE-S3 (server-side encryption with Amazon S3-managed keys) are replicated
    • Objects created with server-side encryption using either customer-provided (SSE-C) or AWS KMS–managed encryption (SSE-KMS) keys are not replicated
    • S3 replicates only objects in the source bucket for which the bucket owner has permission to read objects and read ACLs
    • S3 does not replicate objects in the source bucket for which the bucket owner does not have permissions.
    • Any object ACL updates are replicated, although there can be some delay before Amazon S3 can bring the two in sync. This applies only to objects created after you add a replication configuration to the bucket.
    • Updates to bucket-level S3 subresources are not replicated, allowing different bucket configurations on the source and destination buckets
    • Only customer actions are replicated & actions performed by lifecycle configuration are not replicated
    • Objects in the source bucket that are replicas, created by another cross-region replication, are not replicated.
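
A minimal boto3 sketch of a replication configuration; the bucket names and IAM role ARN are hypothetical, and both buckets are assumed to be versioning-enabled.

```python
import boto3

s3 = boto3.client("s3")

# Replicate objects under the "reports/" prefix to a destination bucket in another region;
# the IAM role must allow S3 to read from the source and replicate into the destination
s3.put_bucket_replication(
    Bucket="example-source-bucket",  # hypothetical source bucket
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/example-replication-role",  # hypothetical ARN
        "Rules": [{
            "ID": "replicate-reports",
            "Prefix": "reports/",
            "Status": "Enabled",
            "Destination": {"Bucket": "arn:aws:s3:::example-destination-bucket"},
        }],
    },
)
```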

Requester Pays

  • By default, buckets are owned by the AWS account that created them (the bucket owner), and the AWS account pays for the storage costs, downloads, and data transfer charges associated with the bucket.
  • Using the Requester Pays subresource:
    • Bucket owner specifies that the requester requesting the download will be charged for the download
    • However, the bucket owner still pays the storage costs
  • Enabling Requester Pays on a bucket
    • disables anonymous access to that bucket
    • does not support BitTorrent
    • does not support SOAP requests
    • cannot be enabled on a bucket used as a target bucket for end-user access logging
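
A minimal boto3 sketch of enabling Requester Pays and of a requester acknowledging the charge on a download; the bucket and key names are hypothetical.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-bucket"  # hypothetical bucket name

# Bucket owner enables Requester Pays on the bucket
s3.put_bucket_request_payment(
    Bucket=bucket,
    RequestPaymentConfiguration={"Payer": "Requester"},
)

# A requester downloading an object must explicitly acknowledge the charge
obj = s3.get_object(Bucket=bucket, Key="data/report.csv", RequestPayer="requester")
print(obj["ContentLength"])
```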

Object Subresources

Torrent

  • Default distribution mechanism for S3 data is via client/server download
  • Bucket owner bears the cost of storage as well as the request and transfer charges, which can increase linearly for a popular object
  • S3 also supports the BitTorrent protocol
    • BitTorrent is an open source Internet distribution protocol
    • BitTorrent addresses this problem by recruiting the very clients that are downloading the object as distributors themselves
    • S3 bandwidth rates are inexpensive, but BitTorrent allows developers to further save on bandwidth costs for a popular piece of data by letting users download from Amazon and other users simultaneously
  • The benefit for the publisher is that, for large popular files, the amount of data actually supplied by S3 can be substantially lower than what it would have been serving the same clients via client/server download
  • Any object in S3 that is publicly available and can be read anonymously can be downloaded via BitTorrent
  • A .torrent file can be retrieved for any publicly available object by simply adding a “?torrent” query string parameter at the end of the REST GET request for the object
  • Generating the .torrent for an object takes time proportional to the size of that object, so it's recommended to make the first torrent request yourself to generate the file so that subsequent requests are faster
  • Torrents are enabled only for objects that are less than 5 GB in size.
  • The torrent subresource can only be retrieved, and cannot be created, updated or deleted
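
A minimal boto3 sketch of retrieving the torrent subresource for a publicly readable object, equivalent to a REST GET on the object with the "?torrent" query string; the bucket and key names are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Fetch the .torrent subresource for a publicly readable object (< 5 GB)
resp = s3.get_object_torrent(
    Bucket="example-bucket",    # hypothetical bucket name
    Key="big-public-file.iso",  # hypothetical object key
)
with open("big-public-file.iso.torrent", "wb") as f:
    f.write(resp["Body"].read())
```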

Object ACL

Refer to My Blog Post about S3 Permissions

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. An organization’s security policy requires multiple copies of all critical data to be replicated across at least a primary and backup data center. The organization has decided to store some critical data on Amazon S3. Which option should you implement to ensure this requirement is met?
    1. Use the S3 copy API to replicate data between two S3 buckets in different regions
    2. You do not need to implement anything since S3 data is automatically replicated between regions
    3. Use the S3 copy API to replicate data between two S3 buckets in different facilities within an AWS Region
    4. You do not need to implement anything since S3 data is automatically replicated between multiple facilities within an AWS Region
  2. A customer wants to track access to their Amazon Simple Storage Service (S3) buckets and also use this information for their internal security and access audits. Which of the following will meet the Customer requirement?
    1. Enable AWS CloudTrail to audit all Amazon S3 bucket access.
    2. Enable server access logging for all required Amazon S3 buckets
    3. Enable the Requester Pays option to track access via AWS Billing
    4. Enable Amazon S3 event notifications for Put and Post.
  3. A user is enabling a static website hosting on an S3 bucket. Which of the below mentioned parameters cannot be configured by the user?
    1. Error document
    2. Conditional error on object name
    3. Index document
    4. Conditional redirection on object name
  4. Company ABCD is running their corporate website on Amazon S3 accessed from http://www.companyabcd.com. Their marketing team has published new web fonts to a separate S3 bucket accessed by the S3 endpoint: https://s3-us-west1.amazonaws.com/abcdfonts. While testing the new web fonts, Company ABCD recognized the web fonts are being blocked by the browser. What should Company ABCD do to prevent the web fonts from being blocked by the browser?
    1. Enable versioning on the abcdfonts bucket for each web font
    2. Create a policy on the abcdfonts bucket to enable access to everyone
    3. Add the Content-MD5 header to the request for webfonts in the abcdfonts bucket from the website
    4. Configure the abcdfonts bucket to allow cross-origin requests by creating a CORS configuration
  5. Company ABCD is currently hosting their corporate site in an Amazon S3 bucket with Static Website Hosting enabled. Currently, when visitors go to http://www.companyabcd.com the index.html page is returned. Company C now would like a new page welcome.html to be returned when a visitor enters http://www.companyabcd.com in the browser. Which of the following steps will allow Company ABCD to meet this requirement? Choose 2 answers.
    1. Upload an html page named welcome.html to their S3 bucket
    2. Create a welcome subfolder in their S3 bucket
    3. Set the Index Document property to welcome.html
    4. Move the index.html page to a welcome subfolder
    5. Set the Error Document property to welcome.html

AWS S3 Permissions – Certification

S3 Permissions Overview

  • By default, all S3 buckets, objects and related subresources are private
  • User is the AWS account or the IAM user who accesses the resource
  • Bucket owner is the AWS account that created a bucket
  • Object owner is the AWS account that uploads the object, even if the bucket is not owned by that account
  • Only the Resource owner, the AWS account that creates the resource, can access the resource
  • Resource owner can be
    • AWS account that creates the bucket or object owns those resources
    • If an IAM user creates the bucket or object, the AWS account of the IAM user owns the resource
    • If the bucket owner grants cross-account permissions to other AWS account users to upload objects to the buckets, the objects are owned by the AWS account of the user who uploaded the object and not the bucket owner except for the following conditions
      • Bucket owner can deny access to the object, as it is still the bucket owner who pays for the object
      • Bucket owner can delete or apply archival rules to the object and perform restoration

S3 Permissions Classification

S3 permissions are classified into Resource based policies and User policies

User policies

  • User based policies use IAM with S3 to control the type of access a user or group of users has to specific parts of an S3 bucket the AWS account owns
  • User based policy is always attached to a User, Group or a Role; anonymous permissions cannot be granted
  • If an AWS account that owns a bucket wants to grant permission to users in its account, it can use either a bucket policy or a user policy

Resource based policies

Bucket policies and access control lists (ACLs) are resource-based because they are attached to the Amazon S3 resources

Bucket Policy & Bucket ACL

Bucket Policies

  • Bucket policy can be used to grant cross-account access to other AWS accounts or IAM users in other accounts for the bucket and objects in it.
  • Bucket policies provide centralized access control to buckets and objects based on a variety of conditions, including S3 operations, requesters, resources, and aspects of the request (e.g. IP address)
  • If an AWS account that owns a bucket wants to grant permission to users in its account, it can use either a bucket policy or a user policy
  • Permissions attached to a bucket apply to all of the objects in that bucket created and owned by the bucket owner
  • Policies can either add or deny permissions across all (or a subset) of objects within a bucket
  • Only the bucket owner is allowed to associate a policy with a bucket
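
A minimal boto3 sketch of a bucket policy granting cross-account read access with an IP address condition; the bucket name, account ID and CIDR range are hypothetical.

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "example-bucket"  # hypothetical bucket name

# Grant another (hypothetical) AWS account permission to list the bucket and get objects,
# restricted to requests from a specific source IP range
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "CrossAccountRead",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": ["s3:ListBucket", "s3:GetObject"],
        "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```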

Access Control Lists (ACLs)

  • Each bucket and object has an ACL associated with it.
  • An ACL is a list of grants identifying grantee and permission granted
  • ACLs are used to grant basic read/write permissions on resources to other AWS accounts.
  • ACLs support a limited set of permissions and
    • cannot grant conditional permissions, nor can you explicitly deny permissions
    • cannot be used to grant permissions for bucket subresources
  • Permission can be granted to an AWS account by the email address or the canonical user ID (an obfuscated form of the account ID). If an email address is provided, S3 will still find the canonical user ID for the user and add it to the ACL.
  • It is recommended to use the canonical user ID, as email addresses are not supported in all regions
  • Bucket ACL
    • The only recommended use case for a bucket ACL is to grant write permission to the S3 Log Delivery group to write access log objects to the bucket
    • A bucket ACL grants the Log Delivery group write permission on the bucket if access log delivery is needed to the bucket
    • The only way to grant the necessary permissions to the Log Delivery group is via a bucket ACL
  • Object ACL
    • Object ACLs control only Object-level Permissions
    • Object ACL is the only way to manage permissions on an object that is not owned by the bucket owner, i.e. if the bucket owner allows cross-account object uploads and the object owner is different from the bucket owner, the only way for the object owner to grant permissions on the object is through the object ACL
    • If the Bucket and Object is owned by the same AWS account, Bucket policy can be used to manage the permissions
    • If the Object and User is owned by the same AWS account, User policy can be used to manage the permissions
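
A minimal boto3 sketch of an object owner granting read on an object via the object ACL; the bucket name, key and canonical user ID are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Object owner grants read on an object it uploaded into another account's bucket,
# using the grantee's canonical user ID (placeholder value shown)
s3.put_object_acl(
    Bucket="example-bucket",  # hypothetical bucket owned by another account
    Key="shared/report.csv",  # hypothetical object key
    GrantRead='id="79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be"',
)
```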

Amazon S3 Request Authorization

When Amazon S3 receives a request, it must evaluate all the user policies, bucket policies and ACLs to determine whether to authorize or deny the request.

S3 evaluates the policies in 3 contexts

  • User context is basically the context in which S3 evaluates the User policy that the parent AWS account (context authority) attaches to the user
  • Bucket context is the context in which S3 evaluates the access policies owned by the bucket owner (context authority) to check if the bucket owner has not explicitly denied access to the resource
  • Object context is the context where S3 evaluates policies owned by the Object owner (context authority)

Analogy

  • Consider 3 Parents (AWS Account) A, B and C with Child (IAM User) AA, BA and CA respectively
  • Parent A owns a Toy box (Bucket) with Toy AAA and also allows toys (Objects) to be dropped and picked up
  • Parent A can grant permission (User Policy OR Bucket policy OR both) to his Child AA to access the Toy box and the toys
  • Parent A can grant permissions (Bucket policy) to Parent B (different AWS account) to drop toys into the toys box. Parent B can grant permissions (User policy) to his Child BA to drop Toy BAA
  • Parent B can grant permissions (Object ACL) to Parent A to access Toy BAA
  • Parent A can grant permissions (Bucket Policy) to Parent C to pick up the Toy AAA who in turn can grant permission (User Policy) to his Child CA to access the toy
  • Parent A can grant permission (through IAM Role) to Parent C to pick up the Toy BAA who in turn can grant permission (User Policy) to his Child CA to access the toy

Bucket Operation Authorization


  1. If the requester is an IAM user, the user must have permission (User Policy) from the parent AWS account to which it belongs
  2. Amazon S3 evaluates a subset of policies owned by the parent account. This subset of policies includes the user policy that the parent account attaches to the user.
  3. If the parent also owns the resource in the request (in this case, the bucket), Amazon S3 also evaluates the corresponding resource policies (bucket policy and bucket ACL) at the same time.
  4. Requester must also have permissions (Bucket Policy or ACL) from the bucket owner to perform a specific bucket operation.
  5. Amazon S3 evaluates a subset of policies owned by the AWS account that owns the bucket. The bucket owner can grant permission by using a bucket policy or bucket ACL.
  6. Note that, if the AWS account that owns the bucket is also the parent account of an IAM user, then it can configure bucket permissions in a user policy or bucket policy or both

Object Operation Authorization


  1. If the requester is an IAM user, the user must have permission (User Policy) from the parent AWS account to which it belongs.
  2. Amazon S3 evaluates a subset of policies owned by the parent account. This subset of policies includes the user policy that the parent attaches to the user.
  3. If the parent also owns the resource in the request (bucket, object), Amazon S3 evaluates the corresponding resource policies (bucket policy, bucket ACL, and object ACL) at the same time.
  4. If the parent AWS account owns the resource (bucket or object), it can grant resource permissions to its IAM user by using either the user policy or the resource policy.
  5. S3 evaluates policies owned by the AWS account that owns the bucket.
  6. If the AWS account that owns the object in the request is not the same as the bucket owner, in the bucket context Amazon S3 checks the policies to see whether the bucket owner has explicitly denied access to the object.
  7. If there is an explicit deny set on the object, Amazon S3 does not authorize the request.
  8. Requester must have permissions from the object owner (Object ACL) to perform a specific object operation.
  9. Amazon S3 evaluates the object ACL.
  10. If bucket and object owners are the same, access to the object can be granted in the bucket policy, which is evaluated at the bucket context.
  11. If the owners are different, the object owners must use an object ACL to grant permissions.
  12. If the AWS account that owns the object is also the parent account to which the IAM user belongs, it can configure object permissions in a user policy, which is evaluated at the user context.

Permission Delegation

  • If an AWS account owns a resource, it can grant those permissions to another AWS account.
  • That account can then delegate those permissions, or a subset of them, to users in the account. This is referred to as permission delegation.
  • But an account that receives permissions from another account cannot delegate permission cross-account to another AWS account.
  • If the bucket owner wants to grant permission on an object it does not own to another AWS account, it cannot do so through cross-account permissions and needs to define an IAM role that can be assumed by the other AWS account to gain access
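
A minimal boto3 sketch of the IAM role approach, assuming hypothetical names, a hypothetical trusted account ID and a read-only permission set.

```python
import json
import boto3

iam = boto3.client("iam")
bucket = "example-bucket"         # hypothetical bucket name
trusted_account = "111122223333"  # hypothetical account to delegate access to

# Role the other account can assume to access objects in the bucket
iam.create_role(
    RoleName="example-s3-cross-account-role",
    AssumeRolePolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{trusted_account}:root"},
            "Action": "sts:AssumeRole",
        }],
    }),
)

# Permissions the assumed role receives on the bucket's objects
iam.put_role_policy(
    RoleName="example-s3-cross-account-role",
    PolicyName="example-s3-object-access",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    }),
)
```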

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Which features can be used to restrict access to data in S3? Choose 2 answers
    1. Set an S3 ACL on the bucket or the object.
    2. Create a CloudFront distribution for the bucket.
    3. Set an S3 bucket policy.
    4. Enable IAM Identity Federation
    5. Use S3 Virtual Hosting
  2. Which method can be used to prevent an IP address block from accessing public objects in an S3 bucket?
    1. Create a bucket policy and apply it to the bucket
    2. Create a NACL and attach it to the VPC of the bucket
    3. Create an ACL and apply it to all objects in the bucket
    4. Modify the IAM policies of any users that would access the bucket
  3. A user has granted read/write permission of his S3 bucket using ACL. Which of the below mentioned options is a valid ID to grant permission to other AWS accounts (grantee) using ACL?
    1. IAM User ID
    2. S3 Secure ID
    3. Access ID
    4. Canonical user ID
  4. A root account owner has given full access of his S3 bucket to one of the IAM users using the bucket ACL. When the IAM user logs in to the S3 console, which actions can he perform?
    1. He can just view the content of the bucket
    2. He can do all the operations on the bucket
    3. It is not possible to give access to an IAM user using ACL
    4. The IAM user can perform all operations on the bucket using only API/SDK
  5. A root AWS account owner is trying to understand various options to set the permission to AWS S3. Which of the below mentioned options is not the right option to grant permission for S3?
    1. User Access Policy
    2. S3 Object Policy
    3. S3 Bucket Policy
    4. S3 ACL
  6. A system admin is managing buckets, objects and folders with AWS S3. Which of the below mentioned statements is true and should be taken in consideration by the sysadmin?
    1. Folders support only ACL
    2. Both the object and bucket can have an Access Policy but folder cannot have policy
    3. Folders can have a policy
    4. Both the object and bucket can have ACL but folders cannot have ACL
  7. A user has created an S3 bucket which is not publicly accessible. The bucket is having thirty objects which are also private. If the user wants to make the objects public, how can he configure this with minimal efforts?
    1. User should select all objects from the console and apply a single policy to mark them public
    2. User can write a program which programmatically makes all objects public using S3 SDK
    3. Set the AWS bucket policy which marks all objects as public
    4. Make the bucket ACL as public so it will also mark all objects as public
  8. You need to configure an Amazon S3 bucket to serve static assets for your public-facing web application. Which methods ensure that all objects uploaded to the bucket are set to public read? Choose 2 answers
    1. Set permissions on the object to public read during upload.
    2. Configure the bucket ACL to set all objects to public read.
    3. Configure the bucket policy to set all objects to public read.
    4. Use AWS Identity and Access Management roles to set the bucket to public read.
    5. Amazon S3 objects default to public read, so no action is needed.
  9. Amazon S3 doesn’t automatically give a user who creates _____ permission to perform other actions on that bucket or object.
    1. a file
    2. a bucket or object
    3. a bucket or file
    4. a object or file
  10. A root account owner is trying to understand the S3 bucket ACL. Which of the below mentioned options cannot be used to grant ACL on the object using the authorized predefined group?
    1. Authenticated user group
    2. All users group
    3. Log Delivery Group
    4. Canonical user group
  11. A user is enabling logging on a particular bucket. Which of the below mentioned options may be best suitable to allow access to the log bucket?
    1. Create an IAM policy and allow log access
    2. It is not possible to enable logging on the S3 bucket
    3. Create an IAM Role, which has access to the log bucket
    4. Provide ACL for the logging group
  12. A user is trying to configure access with S3. Which of the following options is not possible to provide access to the S3 bucket / object?
    1. Define the policy for the IAM user
    2. Define the ACL for the object
    3. Define the policy for the object
    4. Define the policy for the bucket
  13. A user is having access to objects of an S3 bucket, which is not owned by him. If he is trying to set the objects of that bucket public, which of the below mentioned options may be a right fit for this action?
    1. Make the bucket public with full access
    2. Define the policy for the bucket
    3. Provide ACL on the object
    4. Create an IAM user with permission
  14. A bucket owner has allowed another account’s IAM users to upload or access objects in his bucket. The IAM user of Account A is trying to access an object created by the IAM user of account B. What will happen in this scenario?
    1. The bucket policy may not be created as S3 will give error due to conflict of Access Rights
    2. It is not possible to give permission to multiple IAM users
    3. AWS S3 will verify proper rights given by the owner of Account A, the bucket owner as well as by the IAM user B to the object
    4. It is not possible that the IAM user of one account accesses objects of the other IAM user

AWS VPC Security – Security Group vs NACLs – Certification

AWS VPC Security Overview

  • In a VPC, both Security Groups and Network ACLs (NACLs) together help build a layered network defense.
  • Security groups – Act as a firewall for associated Amazon instances, controlling both inbound and outbound traffic at the instance level
  • Network access control lists (NACLs) – Act as a firewall for associated subnets, controlling both inbound and outbound traffic at the subnet level

Security Groups vs NACLs

Security Groups

  • Acts at an Instance level and not at the subnet level.
  • Each instance within a subnet can be assigned a different set of Security groups
  • An instance can be assigned up to 5 security groups, with each security group having up to 50 rules
  • Security groups allow you to add or remove rules (authorizing or revoking access) for both Inbound (ingress) and Outbound (egress) traffic to the instance
    • Default Security group allows no external inbound traffic but allows inbound traffic from instances with the same security group
    • Default Security group allows all outbound traffic
    • New Security groups start with only an outbound rule that allows all traffic to leave the instances
  • Security groups can specify only Allow rules, but not deny rules
  • Security groups can grant access to a specific CIDR range, or to another security group in the VPC or in a peer VPC (requires a VPC peering connection)
  • Security groups are evaluated as a whole, or cumulative, bunch of rules; e.g., if there is an Allow rule for SSH from a specific IP address and another Allow rule for SSH from all IP addresses, SSH would be allowed from all IP addresses, as the most permissive rule applies
  • Security groups are Stateful – responses to allowed inbound traffic are allowed to flow outbound regardless of outbound rules, and vice versa. Hence an Outbound rule for the response is not needed
  • Security groups are associated with ENI (network interfaces).
  • Security groups associated with an instance can be changed, which changes the security groups associated with the primary network interface (eth0), and the changes apply immediately to all instances associated with the security group
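
A minimal boto3 sketch of adding inbound security group rules, one from a CIDR range and one referencing another security group; the group IDs, ports and CIDR range are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2")
sg_id = "sg-0123456789abcdef0"  # hypothetical security group ID

# Allow inbound SSH from a specific CIDR range and inbound MySQL from another
# security group in the same VPC; as security groups are stateful, no outbound
# rule is needed for the response traffic
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[
        {
            "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
            "IpRanges": [{"CidrIp": "203.0.113.0/24"}],
        },
        {
            "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
            "UserIdGroupPairs": [{"GroupId": "sg-0fedcba9876543210"}],  # hypothetical source SG
        },
    ],
)
```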

Connection Tracking

  • Security groups are stateful and use connection tracking to track information about traffic to and from the instance.
  • Responses to inbound traffic are allowed to flow out of the instance regardless of outbound security group rules, and vice versa.
  • Connection Tracking is maintained only if there is no explicit Outbound rule for an Inbound request (and vice versa)
  • However, if there is an explicit Outbound rule for an Inbound request, the response traffic is allowed on the basis of the Outbound rule and not on the Tracking information
  • If an instance (host A) initiates traffic to host B and uses a protocol other than TCP, UDP, or ICMP, the instance’s firewall only tracks the IP address & protocol number for the purpose of allowing response traffic from host B.
  • If host B initiates traffic to the instance in a separate request within 600 seconds of the original request or response, the instance accepts it regardless of inbound security group rules, because it’s regarded as response traffic.
  • This can be controlled by modifying the security group’s outbound rules to permit only certain types of outbound traffic. Alternatively, Network ACLs (NACLs) can be used for the subnet; network ACLs are stateless and therefore do not automatically allow response traffic.

NACLs

  • A Network ACL (NACL) is an optional layer of security for the VPC that acts as a firewall for controlling traffic in and out of one or more subnets.
  • NACLs are not for granular control; they are assigned at a subnet level and are applicable to all the instances in that subnet
  • Network ACL has separate inbound and outbound rules, and each rule can either allow or deny traffic
    • Default ACL allows all inbound and outbound traffic.
    • Newly created ACL denies all inbound and outbound traffic
  • A subnet can be assigned only 1 NACL; if not associated explicitly, it would be associated implicitly with the default NACL
  • A Network ACL is a numbered list of rules that are evaluated in order, starting with the lowest numbered rule, to determine whether traffic is allowed in or out of any subnet associated with the network ACL; e.g., if you have Rule No. 100 with Allow All and Rule No. 110 with Deny All, the Allow All would take precedence and all the traffic will be allowed
  • Network ACLs are stateless; responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa), e.g., if you enable inbound SSH on port 22 from a specific IP address, you would need to add an outbound rule for the response as well
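
A minimal boto3 sketch of adding numbered, stateless NACL rules; the NACL ID, CIDR range and ports are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2")
nacl_id = "acl-0123456789abcdef0"  # hypothetical network ACL ID

# Rules are evaluated in order of RuleNumber, lowest first. Because NACLs are
# stateless, the inbound SSH allow needs a matching outbound allow for the
# ephemeral response ports.
ec2.create_network_acl_entry(
    NetworkAclId=nacl_id, RuleNumber=100, Egress=False,
    Protocol="6", RuleAction="allow",  # protocol 6 = TCP
    CidrBlock="203.0.113.0/24", PortRange={"From": 22, "To": 22},
)
ec2.create_network_acl_entry(
    NetworkAclId=nacl_id, RuleNumber=100, Egress=True,
    Protocol="6", RuleAction="allow",
    CidrBlock="203.0.113.0/24", PortRange={"From": 1024, "To": 65535},
)
```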

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Instance A and instance B are running in two different subnets A and B of a VPC. Instance A is not able to ping instance B. What are two possible reasons for this? (Pick 2 correct answers)
    1. The routing table of subnet A has no target route to subnet B
    2. The security group attached to instance B does not allow inbound ICMP traffic
    3. The policy linked to the IAM role on instance A is not configured correctly
    4. The NACL on subnet B does not allow outbound ICMP traffic
  2. An instance is launched into a VPC subnet with the network ACL configured to allow all inbound traffic and deny all outbound traffic. The instance’s security group is configured to allow SSH from any IP address and deny all outbound traffic. What changes need to be made to allow SSH access to the instance?
    1. The outbound security group needs to be modified to allow outbound traffic.
    2. The outbound network ACL needs to be modified to allow outbound traffic.
    3. Nothing, it can be accessed from any IP address using SSH.
    4. Both the outbound security group and outbound network ACL need to be modified to allow outbound traffic.
  3. From which services can incoming/outgoing IPs be blocked?
    1. Security Groups
    2. DNS
    3. ELB
    4. VPC subnet
    5. IGW
    6. NACL
  4. What is the difference between a security group in VPC and a network ACL in VPC (choose 3 correct answers)
    1. Security group restricts access to a Subnet while ACL restricts traffic to EC2
    2. Security group restricts access to EC2 while ACL restricts traffic to a subnet
    3. Security group can work outside the VPC also while ACL only works within a VPC
    4. Network ACL performs stateless filtering and Security group provides stateful filtering
    5. Security group can only set Allow rule, while ACL can set Deny rule also
  5. You are currently hosting multiple applications in a VPC and have logged numerous port scans coming in from a specific IP address block. Your security team has requested that all access from the offending IP address block be denied for the next 24 hours. Which of the following is the best method to quickly and temporarily deny access from the specified IP address block?
    1. Create an AD policy to modify Windows Firewall settings on all hosts in the VPC to deny access from the IP address block
    2. Modify the Network ACLs associated with all public subnets in the VPC to deny access from the IP address block
    3. Add a rule to all of the VPC 5 Security Groups to deny access from the IP address block
    4. Modify the Windows Firewall settings on all Amazon Machine Images (AMIs) that your organization uses in that VPC to deny access from the IP address block
  6. You have two Elastic Compute Cloud (EC2) instances inside a Virtual Private Cloud (VPC) in the same Availability Zone (AZ) but in different subnets. One instance is running a database and the other instance an application that will interface with the database. You want to confirm that they can talk to each other for your application to work properly. Which two things do we need to confirm in the VPC settings so that these EC2 instances can communicate inside the VPC? Choose 2 answers
    1. A network ACL that allows communication between the two subnets.
    2. Both instances are the same instance class and using the same Key-pair.
    3. That the default route is set to a NAT instance or Internet Gateway (IGW) for them to communicate.
    4. Security groups are set to allow the application host to talk to the database on the right port/protocol
  7. A benefits enrollment company is hosting a 3-tier web application running in a VPC on AWS, which includes a NAT (Network Address Translation) instance in the public Web tier. There is enough provisioned capacity for the expected workload for the new fiscal year benefit enrollment period plus some extra overhead. Enrollment proceeds nicely for two days and then the web tier becomes unresponsive; upon investigation using CloudWatch and other monitoring tools it is discovered that there is an extremely large and unanticipated amount of inbound traffic coming from a set of 15 specific IP addresses over port 80 from a country where the benefits company has no customers. The web tier instances are so overloaded that benefit enrollment administrators cannot even SSH into them. Which activity would be useful in defending against this attack?
    1. Create a custom route table associated with the web tier and block the attacking IP addresses from the IGW (internet Gateway)
    2. Change the EIP (Elastic IP Address) of the NAT instance in the web tier subnet and update the Main Route Table with the new EIP
    3. Create 15 Security Group rules to block the attacking IP addresses over port 80
    4. Create an inbound NACL (Network Access control list) associated with the web tier subnet with deny rules to block the attacking IP addresses
  8. Which of the following statements describes network ACLs? (Choose 2 answers)
    1. Responses to allowed inbound traffic are allowed to flow outbound regardless of outbound rules, and vice versa (are stateless)
    2. Using network ACLs, you can deny access from a specific IP range
    3. Keep network ACL rules simple and use a security group to restrict application level access
    4. NACLs are associated with a single Availability Zone (associated with Subnet)
  9. You are designing security inside your VPC. You are considering the options for establishing separate security zones and enforcing network traffic rules across different zones to limit how instances can communicate. How would you accomplish these requirements? Choose 2 answers
    1. Configure a security group for every zone. Configure a default allow all rule. Configure explicit deny rules for the zones that shouldn’t be able to communicate with one another (Security group does not allow deny rules)
    2. Configure your instances to use pre-set IP addresses with an IP address range for every security zone. Configure NACLs to explicitly allow or deny communication between the different IP address ranges, as required for interzone communication
    3. Configure a security group for every zone. Configure allow rules only between zone that need to be able to communicate with one another. Use implicit deny all rule to block any other traffic
    4. Configure multiple subnets in your VPC, one for each zone. Configure routing within your VPC in such a way that each subnet only has routes to other subnets with which it needs to communicate, and doesn’t have routes to subnets with which it shouldn’t be able to communicate. (default routes are unmodifiable)
  10. Your entire AWS infrastructure lives inside of one Amazon VPC. You have an Infrastructure monitoring application running on an Amazon instance in Availability Zone (AZ) A of the region, and another application instance running in AZ B. The monitoring application needs to make use of ICMP ping to confirm network reachability of the instance hosting the application. Can you configure the security groups for these instances to only allow the ICMP ping to pass from the monitoring instance to the application instance and nothing else? If so, how?
    1. No Two instances in two different AZ’s can’t talk directly to each other via ICMP ping as that protocol is not allowed across subnet (i.e. broadcast) boundaries (Can communicate)
    2. Yes Both the monitoring instance and the application instance have to be a part of the same security group, and that security group needs to allow inbound ICMP (Need not have to be part of same security group)
    3. Yes, The security group for the monitoring instance needs to allow outbound ICMP and the application instance’s security group needs to allow Inbound ICMP (is stateful, so just allow outbound ICMP from monitoring and inbound ICMP on monitored instance)
    4. Yes, Both the monitoring instance’s security group and the application instance’s security group need to allow both inbound and outbound ICMP ping packets since ICMP is not a connection-oriented protocol (Security groups are stateful)
  11. A user has configured a VPC with a new subnet. The user has created a security group. The user wants to configure that instances of the same subnet communicate with each other. How can the user configure this with the security group?
    1. There is no need for a security group modification as all the instances can communicate with each other inside the same subnet
    2. Configure the subnet as the source in the security group and allow traffic on all the protocols and ports
    3. Configure the security group itself as the source and allow traffic on all the protocols and ports
    4. The user has to use VPC peering to configure this
  12. You are designing a data leak prevention solution for your VPC environment. You want your VPC Instances to be able to access software depots and distributions on the Internet for product updates. The depots and distributions are accessible via third party CDNs by their URLs. You want to explicitly deny any other outbound connections from your VPC instances to hosts on the Internet. Which of the following options would you consider?
    1. Configure a web proxy server in your VPC and enforce URL-based rules for outbound access Remove default routes. (Security group and NACL cannot have URLs in the rules nor does the route)
    2. Implement security groups and configure outbound rules to only permit traffic to software depots.
    3. Move all your instances into private VPC subnets remove default routes from all routing tables and add specific routes to the software depots and distributions only.
    4. Implement network access control lists to all specific destinations, with an Implicit deny as a rule.
  13. You have an EC2 Security Group with several running EC2 instances. You change the Security Group rules to allow inbound traffic on a new port and protocol, and launch several new instances in the same Security Group. The new rules apply:
    1. Immediately to all instances in the security group.
    2. Immediately to the new instances only.
    3. Immediately to the new instances, but old instances must be stopped and restarted before the new rules apply.
    4. To all instances, but it may take several minutes for old instances to see the changes.