AWS EC2 Security

  • IAM helps control whether users in the organization can perform a task using specific EC2 API actions and whether they can use specific AWS resources.
  • Use IAM roles to avoid having to share, manage, and rotate the security credentials that the applications use.
  • Security groups act as a virtual firewall that controls the traffic to the EC2 instances. They can help specify rules that control the inbound traffic that’s allowed to reach the instances and the outbound traffic that’s allowed to leave the instance.
  • Use AWS Systems Manager Session Manager to connect to the instance as it provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys.
  • Use EC2 Instance Connect to connect to your instances using Secure Shell (SSH) without the need to share and manage SSH keys.
  • Use AWS Systems Manager Run Command to automate common administrative tasks instead of opening inbound SSH ports and managing SSH keys (a minimal Run Command sketch follows this list).
  • Use Systems Manager Patch Manager to automate patching and installing security-related updates for both the operating system and applications.
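
Where the bullets above recommend Run Command over direct SSH, a minimal boto3 sketch of invoking it might look like the following; the region, instance ID, and shell command are hypothetical placeholders.

    import boto3

    # Run Command executes commands through the SSM agent, so no inbound
    # SSH port or key pair is needed on the target instance.
    ssm = boto3.client('ssm', region_name='us-east-1')

    response = ssm.send_command(
        InstanceIds=['i-0123456789abcdef0'],         # hypothetical instance
        DocumentName='AWS-RunShellScript',           # AWS-managed document
        Parameters={'commands': ['yum update -y']},  # example admin task
    )
    command_id = response['Command']['CommandId']

    # Poll the invocation result for auditing/verification.
    result = ssm.get_command_invocation(
        CommandId=command_id,
        InstanceId='i-0123456789abcdef0',
    )
    print(result['Status'], result.get('StandardOutputContent', ''))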

EC2 Key Pairs

  • EC2 uses public-key cryptography to encrypt & decrypt login information
  • Public-key cryptography uses a public key to encrypt a piece of data, such as a password, then the recipient uses the private key to decrypt the data.
  • Public and private keys are known as a key pair.
  • To log in to an EC2 instance, a key pair needs to be created and specified when the instance is launched, and the private key is then used to connect to the instance (a minimal key-pair sketch follows this list)
  • Linux instances have no password, and the key pair is used for SSH login
  • For Windows instances, the key pair can be used to obtain the administrator password and then log in using RDP
  • EC2 stores the public key only, and the private key resides with the user. EC2 doesn’t keep a copy of your private key
  • Public key content (on Linux instances) is placed in an entry within ~/.ssh/authorized_keys at boot time, enabling the user to securely access the instance without passwords
  • Public key specified for an instance when launched is also available through its instance metadata http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key
  • EC2 Security Best Practice: Store the private keys in a secure place as anyone who possesses the private key can decrypt the login information
  • Also, if the private key is lost, there is no way to recover it.
    • For instance store-backed instances, access to the instance is lost permanently
    • For EBS-backed Linux instances, access can be regained:
      • EBS-backed instance can be stopped, its root volume detached and attached to another instance as a data volume
      • Modify the authorized_keys file, move the volume back to the original instance, and restart the instance
  • Key pair associated with the instances can either be
    • Generated by EC2
      • Keys that EC2 uses are 2048-bit SSH-2 RSA keys.
    • Created separately (using third-party tools) and imported into EC2
      • EC2 only accepts RSA keys and does not accept DSA keys
      • Supported lengths: 1024, 2048, and 4096 bits
  • EC2 supports up to 5,000 key pairs per Region
  • Deleting a key pair only deletes the public key and does not impact the servers already launched with the key.
  • As noted above, prefer AWS Systems Manager Session Manager for instance access to avoid opening inbound ports, maintaining bastion hosts, or managing SSH keys altogether.
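
As referenced above, a minimal boto3 sketch of generating a key pair, and alternatively importing an externally created RSA public key; the key names and file paths are hypothetical.

    import boto3

    ec2 = boto3.client('ec2', region_name='us-east-1')

    # Option 1: let EC2 generate the key pair; the private key is returned
    # exactly once and never stored by AWS, so persist it immediately.
    key = ec2.create_key_pair(KeyName='my-key')
    with open('my-key.pem', 'w') as f:
        f.write(key['KeyMaterial'])

    # Option 2: import the public half of a key pair created with a
    # third-party tool (RSA only); the private key never leaves your machine.
    with open('my-key.pub', 'rb') as f:
        ec2.import_key_pair(KeyName='my-imported-key',
                            PublicKeyMaterial=f.read())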

EC2 Security Groups

  • An EC2 instance, when launched, can be associated with one or more security groups, which act as a virtual firewall that controls the traffic to that instance
  • Security groups help specify rules that control the inbound traffic that’s allowed to reach the instances and the outbound traffic that’s allowed to leave the instance
  • Security groups are associated with network interfaces. Changing an instance’s security groups changes the security groups associated with the primary network interface (eth0)
  • An ENI can be associated with up to 5 security groups, and each security group supports up to 60 rules (default limits)
  • Rules for a security group can be modified at any time; the new rules are automatically applied to all instances associated with the security group.
  • All the rules from all associated security groups are evaluated to decide where to allow traffic to an instance
  • Security Group features
    • For the VPC default security group, it allows all inbound traffic from other instances associated with the default security group
    • By default, VPC default security groups or newly created security groups allow all outbound traffic
    • Security group rules are always permissive; deny rules can’t be created
    • Rules can be added and removed at any time (a minimal boto3 sketch follows this list)
    • Any modification to the rules are automatically applied to the instances associated with the security group after a short period, depending on the connection tracking for the traffic
    • Security groups are stateful — if you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules. For VPC security groups, this also means that responses to allowed inbound traffic are allowed to flow out, regardless of outbound rules
    • If multiple rules are defined for the same protocol and port, the most permissive rule is applied; e.g. given rules for TCP port 22 allowing both a specific IP and everyone, everyone is granted access, as that is the most permissive rule
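
As referenced in the rules bullet above, a minimal boto3 sketch of adding and removing an inbound rule; the security group ID and CIDR are hypothetical.

    import boto3

    ec2 = boto3.client('ec2')

    # Allow inbound SSH from a single trusted CIDR; the change propagates
    # automatically to all instances associated with the security group.
    ssh_rule = [{
        'IpProtocol': 'tcp',
        'FromPort': 22,
        'ToPort': 22,
        'IpRanges': [{'CidrIp': '203.0.113.10/32'}],
    }]
    ec2.authorize_security_group_ingress(GroupId='sg-0123456789abcdef0',
                                         IpPermissions=ssh_rule)

    # Security groups have no deny rules; "blocking" traffic means
    # revoking the allow rule instead.
    ec2.revoke_security_group_ingress(GroupId='sg-0123456789abcdef0',
                                      IpPermissions=ssh_rule)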

Connection Tracking

  • Security groups are Stateful and they use Connection tracking to track information about traffic to and from the instance.
  • This allows responses to inbound traffic to flow out of the instance regardless of outbound security group rules, and vice versa.
  • Connection tracking is maintained only if there is no explicit outbound rule allowing the response to an inbound request (and vice versa)
  • However, if there is an explicit outbound rule for an inbound request, the response traffic is allowed on the basis of the outbound rule and not on the tracking information
  • Any existing flow of traffic that is tracked is not interrupted even if the rules for the security groups are changed. To ensure traffic is immediately interrupted, use NACLs, as they are stateless and therefore do not automatically allow response traffic.
  • Also, if the instance (host A) initiates traffic to host B using a protocol other than TCP, UDP, or ICMP, the instance’s firewall only tracks the IP address and protocol number for the purpose of allowing response traffic from host B. If host B initiates traffic to the instance in a separate request within 600 seconds of the original request or response, the instance accepts it regardless of inbound security group rules, because it’s regarded as response traffic.
  • This behavior can be controlled by modifying the security group’s outbound rules to permit only certain types of outbound traffic, or by using NACLs

IAM with EC2

  • An IAM policy can be defined to allow or deny a user access to the EC2 resources and actions (a minimal sketch of creating such a policy follows this list)
  • EC2 partially supports resource-level permissions. For some EC2 API actions, you cannot specify which resource a user is allowed to work with for that action; instead, you have to allow users to work with all resources for that action
  • IAM only controls which actions a user can perform on the EC2 resources; it cannot be used to grant users access to log in to the instances
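
As referenced above, a minimal boto3 sketch of defining such a policy; the policy name and choice of actions are hypothetical.

    import boto3
    import json

    iam = boto3.client('iam')

    # Allow describing instances and starting/stopping them, nothing else.
    # Resource is '*' because EC2 only partially supports resource-level
    # permissions (e.g. Describe* actions cannot be scoped to a resource).
    policy_doc = {
        'Version': '2012-10-17',
        'Statement': [{
            'Effect': 'Allow',
            'Action': ['ec2:Describe*',
                       'ec2:StartInstances',
                       'ec2:StopInstances'],
            'Resource': '*',
        }],
    }
    iam.create_policy(PolicyName='EC2OperatorPolicy',
                      PolicyDocument=json.dumps(policy_doc))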

EC2 with IAM Role

  • EC2 instances can be launched with IAM roles so that the applications can securely make API requests from the instances.
  • IAM roles remove the need to share, manage, and rotate the security credentials that the applications use.
  • IAM role can be added to an existing running EC2 instance.
  • EC2 uses an instance profile as a container for an IAM role.
    • Creation of an IAM role using the console, creates an instance profile automatically and gives it the same name as the role it corresponds to.
    • When using the AWS CLI, API, or an AWS SDK to create a role, the role and instance profile needs to be created as separate actions, and they can be given different names.
  • To launch an instance with an IAM role, the name of its instance profile needs to be specified (a minimal sketch follows this list).
  • An application on the instance can retrieve the security credentials provided by the role from the instance metadata item http://169.254.169.254/latest/meta-data/iam/security-credentials/role-name.
  • Security credentials are temporary and rotated automatically; new credentials are made available at least five minutes before the old credentials expire.
  • Best Practice: Always launch EC2 instance with IAM role instead of hardcoded credentials
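
As referenced above, a minimal boto3 sketch of creating a role with its instance profile and launching an instance with it; the role, profile, and AMI names are hypothetical.

    import boto3
    import json

    iam = boto3.client('iam')
    ec2 = boto3.client('ec2')

    # Trust policy letting the EC2 service assume the role.
    trust = {
        'Version': '2012-10-17',
        'Statement': [{'Effect': 'Allow',
                       'Principal': {'Service': 'ec2.amazonaws.com'},
                       'Action': 'sts:AssumeRole'}],
    }
    iam.create_role(RoleName='app-role',
                    AssumeRolePolicyDocument=json.dumps(trust))

    # Outside the console, the instance profile is a separate resource.
    iam.create_instance_profile(InstanceProfileName='app-profile')
    iam.add_role_to_instance_profile(InstanceProfileName='app-profile',
                                     RoleName='app-role')

    # Launch with the instance profile (hypothetical AMI ID).
    ec2.run_instances(ImageId='ami-0123456789abcdef0',
                      InstanceType='t3.micro', MinCount=1, MaxCount=1,
                      IamInstanceProfile={'Name': 'app-profile'})

    # A role can also be attached to an already-running instance:
    # ec2.associate_iam_instance_profile(
    #     IamInstanceProfile={'Name': 'app-profile'},
    #     InstanceId='i-0123456789abcdef0')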

EC2 Resiliency

  • EC2 offers the following features to support your data resiliency (a minimal cross-Region copy sketch follows this list):
    • Copying AMIs across Regions
    • Copying EBS snapshots across Regions
    • Automating EBS-backed AMIs using Data Lifecycle Manager
    • Automating EBS snapshots using Data Lifecycle Manager
    • Maintaining the health and availability of the fleet using EC2 Auto Scaling
    • Distributing incoming traffic across multiple instances in a single AZ or multiple AZs using Elastic Load Balancing
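
A minimal boto3 sketch of the first two resiliency features, copying an AMI and an EBS snapshot across Regions; the source IDs and Region names are hypothetical.

    import boto3

    # Copies are initiated from the destination Region's client.
    ec2_west = boto3.client('ec2', region_name='us-west-2')

    # Copy an AMI from us-east-1 into us-west-2.
    ec2_west.copy_image(Name='my-ami-copy',
                        SourceImageId='ami-0123456789abcdef0',
                        SourceRegion='us-east-1')

    # Copy an EBS snapshot the same way.
    ec2_west.copy_snapshot(SourceSnapshotId='snap-0123456789abcdef0',
                           SourceRegion='us-east-1',
                           Description='cross-Region DR copy')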

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. You launch an Amazon EC2 instance without an assigned AWS identity and Access Management (IAM) role. Later, you decide that the instance should be running with an IAM role. Which action must you take in order to have a running Amazon EC2 instance with an IAM role assigned to it?
    1. Create an image of the instance, and register the image with an IAM role assigned and an Amazon EBS volume mapping.
    2. Create a new IAM role with the same permissions as an existing IAM role, and assign it to the running instance. (As per AWS latest enhancement, this is possible now)
    3. Create an image of the instance, add a new IAM role with the same permissions as the desired IAM role, and deregister the image with the new role assigned.
    4. Create an image of the instance, and use this image to launch a new instance with the desired IAM role assigned (This was correct before, as it was not possible to add an IAM role to an existing instance)
  2. What does the following command do with respect to the Amazon EC2 security groups? ec2-revoke RevokeSecurityGroupIngress
    1. Removes one or more security groups from a rule.
    2. Removes one or more security groups from an Amazon EC2 instance.
    3. Removes one or more rules from a security group
    4. Removes a security group from our account.
  3. Which of the following cannot be used in Amazon EC2 to control who has access to specific Amazon EC2 instances?
    1. Security Groups
    2. IAM System
    3. SSH keys
    4. Windows passwords
  4. You must assign each server to at least _____ security group
    1. 3
    2. 2
    3. 4
    4. 1
  5. A company is building software on AWS that requires access to various AWS services. Which configuration should be used to ensure that AWS credentials (i.e., Access Key ID/Secret Access Key combination) are not compromised?
    1. Enable Multi-Factor Authentication for your AWS root account.
    2. Assign an IAM role to the Amazon EC2 instance
    3. Store the AWS Access Key ID/Secret Access Key combination in software comments.
    4. Assign an IAM user to the Amazon EC2 Instance.
  6. Which of the following items are required to allow an application deployed on an EC2 instance to write data to a DynamoDB table? Assume that no security keys are allowed to be stored on the EC2 instance. (Choose 2 answers)
    1. Create an IAM Role that allows write access to the DynamoDB table
    2. Add an IAM Role to a running EC2 instance. (As per AWS latest enhancement, this is possible now)
    3. Create an IAM User that allows write access to the DynamoDB table.
    4. Add an IAM User to a running EC2 instance.
    5. Launch an EC2 Instance with the IAM Role included in the launch configuration (This was correct before, as it was not possible to add an IAM role to an existing instance)
  7. You have an application running on an EC2 Instance, which will allow users to download files from a private S3 bucket using a pre-assigned URL. Before generating the URL the application should verify the existence of the file in S3. How should the application use AWS credentials to access the S3 bucket securely?
    1. Use the AWS account access Keys the application retrieves the credentials from the source code of the application.
    2. Create a IAM user for the application with permissions that allow list access to the S3 bucket launch the instance as the IAM user and retrieve the IAM user’s credentials from the EC2 instance user data.
    3. Create an IAM role for EC2 that allows list access to objects in the S3 bucket. Launch the instance with the role, and retrieve the role’s credentials from the EC2 Instance metadata
    4. Create an IAM user for the application with permissions that allow list access to the S3 bucket. The application retrieves the IAM user credentials from a temporary directory with permissions that allow read access only to the application user.
  8. A user has created an application, which will be hosted on EC2. The application makes calls to DynamoDB to fetch certain data. The application is using the DynamoDB SDK to connect with DynamoDB from the EC2 instance. Which of the below mentioned statements is true with respect to the best practice for security in this scenario?
    1. The user should attach an IAM role with DynamoDB access to the EC2 instance
    2. The user should create an IAM user with DynamoDB access and use its credentials within the application to connect with DynamoDB
    3. The user should create an IAM role, which has EC2 access so that it will allow deploying the application
    4. The user should create an IAM user with DynamoDB and EC2 access. Attach the user with the application so that it does not use the root account credentials
  9. Your application is leveraging IAM Roles for EC2 for accessing objects stored in S3. Which two of the following IAM policies control access to your S3 objects?
    1. An IAM trust policy allows the EC2 instance to assume an EC2 instance role.
    2. An IAM access policy allows the EC2 role to access S3 objects
    3. An IAM bucket policy allows the EC2 role to access S3 objects. (Bucket policy is defined with S3 and not with IAM)
    4. An IAM trust policy allows applications running on the EC2 instance to assume an EC2 role (Trust policy allows EC2 instance to assume the role)
    5. An IAM trust policy allows applications running on the EC2 instance to access S3 objects. (Applications can access S3 through EC2 assuming the role)

AWS Encrypting Data at Rest – Whitepaper – Certification

Encrypting Data at Rest

  • AWS delivers a secure, scalable cloud computing platform with high availability, offering the flexibility to build a wide range of applications
  • AWS offers several options for encrypting data at rest as an additional layer of security, ranging from a completely automated AWS encryption solution to manual client-side options
  • Encryption requires 3 things
    • Data to encrypt
    • Encryption keys
    • A cryptographic algorithm to encrypt the data
  • AWS provides different models for securing data at rest, based on the following parameters
    • Encryption method
      • Encryption algorithm selection involves evaluating security, performance, and compliance requirements specific to your application
    • Key Management Infrastructure (KMI)
      • KMI enables managing & protecting the encryption keys from unauthorized access
      • KMI provides
        • A storage layer that protects plain text keys
        • A management layer that authorizes key usage
  • Hardware Security Module (HSM)
    • Common way to protect keys in a KMI is using HSM
    • An HSM is a dedicated storage and data processing device that performs cryptographic operations using keys on the device.
    • An HSM typically provides tamper evidence, or resistance, to protect keys from unauthorized use.
    • A software-based authorization layer controls who can administer the HSM and which users or applications can use which keys in the HSM
  • AWS CloudHSM
    • AWS CloudHSM appliance has both physical and logical tamper detection and response mechanisms that trigger zeroization of the appliance.
    • Zeroization erases the HSM’s volatile memory where any keys in the process of being decrypted were stored and destroys the key that encrypts stored objects, effectively causing all keys on the HSM to be inaccessible and unrecoverable.
    • AWS CloudHSM can be used to generate and store key material and can perform encryption and decryption operations.
    • AWS CloudHSM, however, does not perform any key lifecycle management functions (e.g., access control policy, key rotation) and needs a compatible KMI.
    • KMI can be deployed either on-premises or within Amazon EC2 and can communicate with the AWS CloudHSM instance securely over SSL to help protect data and encryption keys.
    • As the AWS CloudHSM service uses SafeNet Luna appliances, any key management server that supports the SafeNet Luna platform can also be used with AWS CloudHSM
  • AWS Key Management Service (KMS)
    • AWS KMS is a managed encryption service that allows you to provision and use keys to encrypt data in AWS services and your applications.
    • Master keys, after creation, are designed never to be exported from the service.
    • AWS KMS gives you centralized control over who can access your master keys to encrypt and decrypt data, and it gives you the ability to audit this access.
    • Data can be sent into KMS to be encrypted or decrypted under a specific master key within your account.
    • AWS KMS is natively integrated with other AWS services (for e.g. Amazon EBS, Amazon S3, and Amazon Redshift) and AWS SDKs to simplify encryption of your data within those services or custom applications
    • AWS KMS provides global availability, low latency, and a high level of durability for your keys.

Encryption Models in AWS

Encryption models in AWS depend on how you or AWS provide the encryption method and the KMI

  • You control the encryption method and the entire KMI
  • You control the encryption method, AWS provides the storage component of the KMI, and you provide the management layer of the KMI.
  • AWS controls the encryption method and the entire KMI.

Model A: You control the encryption method and the entire KMI

  • You use your own KMI to generate, store, and manage access to keys as well as control all encryption methods in your applications
  • Proper storage, management, and use of keys to ensure the confidentiality, integrity, and availability of your data is your responsibility
  • AWS has no access to your keys and cannot perform encryption or decryption on your behalf.
  • Amazon S3
    • Encryption of the data is done before the object is sent to AWS S3
    • Encryption of the data can be done using any encryption method and the encrypted data can be uploaded using the PUT request in the Amazon S3 API
    • Key used to encrypt the data needs to be stored securely in your KMI
    • To decrypt this data, the encrypted object can be downloaded from Amazon S3 using the GET request in the Amazon S3 API and then decrypted using the key in your KMI
    • AWS provides client-side encryption handling, where you can provide your key to the AWS S3 encryption client, which will encrypt and decrypt the data on your behalf; AWS never has access to the keys or the unencrypted data (a minimal client-side sketch follows this list)
  • Amazon EBS
    • Amazon Elastic Block Store (Amazon EBS) provides block-level storage volumes for use with Amazon EC2 instances. Amazon EBS volumes are network-attached, and persist independently from the life of an instance.
    • Because Amazon EBS volumes are presented to an instance as a block device, you can leverage most standard encryption tools for file system-level or block-level encryption
    • Block level encryption
      • Block level encryption tools usually operate below the file system layer using kernel space device drivers to perform encryption and decryption of data.
      • These tools are useful when you want all data written to a volume to be encrypted regardless of what directory the data is stored in
    • File System level encryption
      • File system level encryption usually works by stacking an encrypted file system on top of an existing file system.
      • This method is typically used to encrypt a specific directory
    • These solutions require you to provide keys, either manually or from your KMI.
    • Both block-level and file system-level encryption tools can only be used to encrypt data volumes that are not Amazon EBS boot volumes, as they don’t allow you to automatically make a trusted key available to the boot volume at startup
    • There are third party solutions available, which can help encrypt both the boot and data volumes as well as supplying and protecting keys
  • AWS Storage Gateway
    • AWS Storage Gateway is a service connecting an on-premises software appliance with Amazon S3. Data on disk volumes attached to the AWS Storage Gateway will be automatically uploaded to Amazon S3 based on policy
    • Encryption of the source data on the disk volumes can be done either before writing to the disk or by using block-level encryption on the iSCSI endpoint that AWS Storage Gateway exposes, to encrypt all data on the disk volume.
  • Amazon RDS
    • Because Amazon RDS doesn’t expose the attached disks it uses for data storage, transparent disk encryption using the techniques described in the EBS section cannot be applied.
    • However, individual fields of data can be encrypted before the data is written to RDS and decrypted after reading it.
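
As referenced in the Amazon S3 bullets above, a minimal sketch of Model A client-side encryption, encrypting locally before a PUT and decrypting after a GET; the bucket and key names are hypothetical, and the cryptography library's Fernet recipe stands in for whatever encryption method your KMI dictates (this is not the AWS S3 encryption client itself).

    import boto3
    from cryptography.fernet import Fernet

    s3 = boto3.client('s3')

    # The key lives only in your KMI; AWS never sees it.
    key = Fernet.generate_key()   # store this securely in your KMI
    f = Fernet(key)

    # Encrypt before upload, so S3 only ever stores ciphertext.
    ciphertext = f.encrypt(b'sensitive payload')
    s3.put_object(Bucket='my-bucket', Key='secret.bin', Body=ciphertext)

    # Download, then decrypt with the key retrieved from your KMI.
    obj = s3.get_object(Bucket='my-bucket', Key='secret.bin')
    plaintext = f.decrypt(obj['Body'].read())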

Model B: You control the encryption method, AWS provides the KMI storage component, and you provide the KMI management layer

  • Model B is similar to Model A in that the encryption method is managed by you
  • Model B differs from Model A in that the keys are maintained in AWS CloudHSM rather than in an on-premises key storage system
  • Only you have access to the cryptographic partitions within the dedicated HSM to use the keys

Model C: AWS controls the encryption method and the entire KMI

  • AWS provides and manages the server-side encryption of your data, transparently managing the encryption method and the keys.
  • AWS KMS and other services that encrypt your data directly use a method called envelope encryption to provide a balance between performance and security.
  • Envelope Encryption method (a minimal KMS sketch follows this list)
    • A master key is defined either by you or AWS
    • A data key (data encryption key) is generated by the AWS service at the time when data encryption is requested
    • Data key is used to encrypt your data.
    • Data key is then encrypted with a key-encrypting key (master key) unique to the service storing your data.
    • Encrypted data key and the encrypted data are then stored by the AWS storage service on your behalf.
  • Master keys (key-encrypting keys) used to encrypt data keys are stored and managed separately from the data and the data keys
  • For decryption of the data, the process is reversed. Encrypted data key is decrypted using the key-encrypting key; the data key is then used to decrypt your data
  • Authorized use of encryption keys is done automatically and is securely managed by AWS.
  • Because unauthorized access to those keys could lead to the disclosure of your data, AWS has built systems and processes with strong access controls that minimize the chance of unauthorized access and had these systems verified by third-party audits to achieve security certifications including SOC 1, 2, and 3, PCI-DSS, and FedRAMP.
  • Amazon S3
    • SSE-S3
      • AWS encrypts each object using a unique data key
      • Data key is encrypted with a periodically rotated master key managed by S3
      • Amazon S3 server-side encryption uses 256-bit Advanced Encryption Standard (AES) keys for both object and master keys
    • SSE-KMS
      • Master keys are defined and managed in KMS for your account
      • Object Encryption
        • When an object is uploaded, a request is sent to KMS to create an object key.
        • KMS generates a unique object key and encrypts it using the master key; KMS then returns this encrypted object key along with the plaintext object key to Amazon S3.
        • Amazon S3 web server encrypts your object using the plaintext object key and stores the now encrypted object (with the encrypted object key) and deletes the plaintext object key from memory.
      • Object Decryption
        • To retrieve the encrypted object, Amazon S3 sends the encrypted object key to AWS KMS.
        • AWS KMS decrypts the object key using the correct master key and returns the decrypted (plaintext) object key to S3.
        • Amazon S3 decrypts the encrypted object, with the plaintext object key, and returns it to you.
    • SSE-C
      • Amazon S3 is provided an encryption key while uploading the object
      • Encryption key is used by Amazon S3 to encrypt your data using AES-256
      • After object encryption, Amazon S3 deletes the encryption key
      • For downloading, you need to provide the same encryption key; AWS verifies it matches, decrypts the object, and returns it
  • Amazon EBS
    • When Amazon EBS volume is created, you can choose the master key in KMS to be used for encrypting the volume
    • Volume encryption
      • Amazon EC2 server sends an authenticated request to AWS KMS to create a volume key.
      • AWS KMS generates this volume key, encrypts it using the master key, and returns the plaintext volume key and the encrypted volume key to the Amazon EC2 server.
      • Plaintext volume key is stored in memory to encrypt and decrypt all data going to and from your attached EBS volume.
    • Volume decryption
      • When the encrypted volume (or any encrypted snapshot derived from that volume) needs to be re-attached to an instance, a call is made to AWS KMS to decrypt the encrypted volume key.
      • AWS KMS decrypts this encrypted volume key with the correct master key and returns the decrypted volume key to Amazon EC2.
  • Amazon Glacier
    • Glacier provides encryption of the data by default
    • Before it’s written to disk, data is always automatically encrypted using 256-bit AES keys unique to the Amazon Glacier service that are stored in separate systems under AWS control
  • AWS Storage Gateway
    • AWS Storage Gateway transfers your data to AWS over SSL
    • AWS Storage Gateway stores data encrypted at rest in Amazon S3 or Amazon Glacier using their respective server side encryption schemes.
  • Amazon RDS – Oracle
    • Oracle Advanced Security option for Oracle on Amazon RDS can be used to leverage the native Transparent Data Encryption (TDE) and Native Network Encryption (NNE) features
    • Oracle encryption module creates data and key-encrypting keys to encrypt the database
    • Key-encrypting keys specific to your Oracle instance on Amazon RDS are themselves encrypted by a periodically rotated 256-bit AES master key.
    • Master key is unique to the Amazon RDS service and is stored in separate systems under AWS control
  • Amazon RDS – SQL Server
    • Transparent Data Encryption (TDE) can be provisioned for Microsoft SQL Server on Amazon RDS.
    • SQL Server encryption module creates data and key-encrypting keys to encrypt the database.
    • Key-encrypting keys specific to your SQL Server instance on Amazon RDS are themselves encrypted by a periodically rotated, regional 256-bit AES master key
    • Master key is unique to the Amazon RDS service and is stored in separate systems under AWS control
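
As referenced in the envelope encryption bullets above, a minimal sketch of the pattern using AWS KMS and a local AES-GCM cipher; the master key alias is hypothetical, and a real service would persist the nonce and the encrypted data key alongside the ciphertext exactly as sketched here.

    import os
    import boto3
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    kms = boto3.client('kms')

    # 1. KMS generates a data key under the master key; the plaintext copy
    #    is used locally, the encrypted copy is stored with the data.
    dk = kms.generate_data_key(KeyId='alias/my-master-key',
                               KeySpec='AES_256')

    # 2. Encrypt the data with the plaintext data key, then discard it.
    nonce = os.urandom(12)
    ciphertext = AESGCM(dk['Plaintext']).encrypt(nonce, b'my data', None)
    stored = {'ciphertext': ciphertext, 'nonce': nonce,
              'encrypted_key': dk['CiphertextBlob']}

    # 3. To decrypt, KMS first unwraps the stored data key under the
    #    master key, then the data key decrypts the data.
    plain_key = kms.decrypt(CiphertextBlob=stored['encrypted_key'])['Plaintext']
    plaintext = AESGCM(plain_key).decrypt(stored['nonce'],
                                          stored['ciphertext'], None)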

Sample Exam Questions

  1. How can you secure data at rest on an EBS volume?
    1. Encrypt the volume using the S3 server-side encryption service
    2. Attach the volume to an instance using EC2’s SSL interface.
    3. Create an IAM policy that restricts read and write access to the volume.
    4. Write the data randomly instead of sequentially.
    5. Use an encrypted file system on top of the EBS volume
  2. Your company policies require encryption of sensitive data at rest. You are considering the possible options for protecting data while storing it at rest on an EBS data volume, attached to an EC2 instance. Which of these options would allow you to encrypt your data at rest? (Choose 3 answers)
    1. Implement third party volume encryption tools
    2. Do nothing as EBS volumes are encrypted by default
    3. Encrypt data inside your applications before storing it on EBS
    4. Encrypt data using native data encryption drivers at the file system level
    5. Implement SSL/TLS for all services running on the server
  3. A company is storing data on Amazon Simple Storage Service (S3). The company’s security policy mandates that data is encrypted at rest. Which of the following methods can achieve this? Choose 3 answers
    1. Use Amazon S3 server-side encryption with AWS Key Management Service managed keys
    2. Use Amazon S3 server-side encryption with customer-provided keys
    3. Use Amazon S3 server-side encryption with EC2 key pair.
    4. Use Amazon S3 bucket policies to restrict access to the data at rest.
    5. Encrypt the data on the client-side before ingesting to Amazon S3 using their own master key
    6. Use SSL to encrypt the data while in transit to Amazon S3.
  4. Which 2 services provide native encryption
    1. Amazon EBS
    2. Amazon Glacier
    3. Amazon Redshift (is optional)
    4. Amazon RDS (is optional)
    5. Amazon Storage Gateway
  5. With which AWS services can CloudHSM be used? (Select 2)
    1. S3
    2. DynamoDb
    3. RDS
    4. ElastiCache
    5. Amazon Redshift

AWS Interaction Tools – Certification

AWS Interaction Tools Overview

AWS is API driven, and its interaction tools provide plenty of options to interact with its services, including the following.

AWS Management console

  • AWS Management Console is a graphical user interface to access AWS
  • The Management Console requires credentials in the form of a user name and password to log in, and uses the Query APIs under the hood for its interaction with AWS

AWS Command Line Interface (CLI)

  • AWS Command Line Interface (CLI) is a unified tool that provides a consistent interface for interacting with all parts of AWS
  • Provides commands for a broad set of AWS products, and is supported on Windows, Mac, and Linux
  • CLI requires Access Key & Secret Key credentials and uses the Query APIs under the hood for its interaction with AWS
  • CLI constructs and sends requests to AWS for you, and as part of that process, signs the requests using an access key that you provide.
  • CLI also takes care of many of the connection details, such as calculating signatures, handling request retries, and error handling.

Software Development Kit (SDKs)

  • Software Development Kits (SDKs) simplify using AWS services in your applications with an API tailored to your programming language or platform
  • SDKs currently support a wide range of languages, including Java, PHP, Ruby, Python, .NET, Go, Node.js, etc. (a minimal Python SDK sketch follows this list)
  • SDKs construct and send requests to AWS for you, and as part of that process, they sign the requests using an access key that you provide.
  • SDKs also take care of many of the connection details, such as calculating signatures, handling request retries, and error handling.
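
As referenced above, and since this page's examples use Python, a minimal sketch with boto3 (the officially supported Python SDK); the credentials shown are placeholders, and in practice they would come from a credentials file, environment variables, or an IAM role.

    import boto3

    # The SDK signs each request with the access key you provide and
    # handles retries and error handling for you.
    ec2 = boto3.client(
        'ec2',
        region_name='us-east-1',
        aws_access_key_id='AKIAEXAMPLE',          # placeholder
        aws_secret_access_key='wJalrEXAMPLEKEY',  # placeholder
    )

    # One SDK call; underneath, a signed HTTPS Query request is sent.
    for region in ec2.describe_regions()['Regions']:
        print(region['RegionName'])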

Query APIs

  • Query APIs are HTTP or HTTPS requests that use the HTTP verb GET or POST and a query parameter named “Action”
  • Query APIs require Access Key & Secret Key credentials for their interaction
  • The Query API is the core of all the access tools and requires you to calculate signatures and attach them to the requests

AWS Certification Exam Practice Questions

  1. REST or Query requests are HTTP or HTTPS requests that use an HTTP verb (such as GET or POST) and a parameter named Action or Operation that specifies the API you are calling.
    1. FALSE
    2. TRUE (Refer link)
  2. Through which of the following interfaces is AWS Identity and Access Management available?
    A) AWS Management Console
    B) Command line interface (CLI)
    C) IAM Query API
    D) Existing libraries

    1. Only through Command line interface (CLI)
    2. A, B and C
    3. A and C
    4. All of the above
  3. Which of the following programming languages have an officially supported AWS SDK? Choose 2 answers
    1. PHP
    2. Pascal
    3. Java
    4. SQL
    5. Perl
  4. HTTP Query-based requests are HTTP requests that use the HTTP verb GET or POST and a Query parameter named_____________.
    1. Action
    2. Value
    3. Reset
    4. Retrieve

AWS DDoS Resiliency – Best Practices – Whitepaper

AWS DDoS Resiliency Whitepaper

  • Denial of Service (DoS) is an attack, carried out by a single attacker, which attempts to make a website or application unavailable to the end users.
  • Distributed Denial of Service (DDoS) is an attack, carried out by multiple attackers either controlled or compromised by a group of collaborators, which generates a flood of requests to the application, making it unavailable to the legitimate end users

Mitigation techniques

Minimize the Attack Surface Area

  • This is all about reducing the attack surface, i.e. the different Internet entry points that allow access to your application
  • Strategy to minimize the Attack surface area
    • reduce the number of necessary Internet entry points,
    • don’t expose back end servers,
    • eliminate non-critical Internet entry points,
    • separate end user traffic from management traffic,
    • obfuscate necessary Internet entry points to the level that untrusted end users cannot access them, and
    • decouple Internet entry points to minimize the effects of attacks.
  • Benefits
    • Minimizes the effective attack vectors and targets
    • Less to monitor and protect
  • Strategy can be achieved using AWS Virtual Private Cloud (VPC)
    • helps define a logically isolated virtual network within AWS
    • provides the ability to create public & private subnets to launch the Internet-facing and non-public facing instances accordingly
    • provides a NAT gateway, which allows instances in the private subnet to have Internet access without the need to launch them in public subnets with public IPs
    • allows creation of Bastion host which can be used to connect to instances in the private subnets
    • provides the ability to configure security groups for instances and NACLs for subnets, which act as a firewall, to control and limit outbound and inbound traffic

Figure: VPC Architecture

Be Ready to Scale to Absorb the Attack

  • DDoS attacks mainly aim to load systems to the point where they can no longer handle the load and are rendered unusable.
  • Scaling out Benefits
    • help build a resilient architecture
    • makes the attacker work harder
    • gives you time to think, analyze and adapt
  • AWS provided services:
    • Auto Scaling & ELB
      • Horizontal scaling using Auto Scaling with ELB
      • Auto Scaling allows instances to be added and removed as the demand changes
      • ELB helps distribute the traffic across multiple EC2 instances while acting as a Single point of contact.
      • Auto Scaling automatically registers and deregisters EC2 instances with the ELB during scale out and scale in events
    • EC2 Instance
      • Vertical scaling can be achieved by using appropriate EC2 instance types, e.g. EBS-optimized instances or ones with 10 Gigabit network connectivity, to handle the load
    • Enhanced Networking
      • Use Instances with Enhanced Networking capabilities which can provide high packet-per-second performance, low latency networking, and improved scalability
    • Amazon CloudFront
      • CloudFront is a CDN, acts as a proxy between end users and the Origin servers, and helps distribute content to the end users without sending traffic to the Origin servers.
      • CloudFront has the inherent ability to help mitigate against both infrastructure and some application layer DDoS attacks by dispersing the traffic across multiple locations.
      • AWS has multiple Internet connections for capacity and redundancy at each location, which allows it to isolate attack traffic while serving content to legitimate end users
      • CloudFront also has filtering capabilities to ensure that only valid TCP connections and HTTP requests are made while dropping invalid requests. This takes the burden of handling invalid traffic (commonly used in UDP & SYN floods, and slow reads) off the origin.
    • Route 53
      • DDoS attacks also target DNS, because if the DNS is unavailable, your application is effectively unavailable.
      • AWS Route 53 is a highly available and scalable DNS service that has capabilities to ensure access to the application even when under DDoS attack
        • Shuffle Sharding – Shuffle sharding is similar to the concept of database sharding, where horizontal partitions of data are spread across separate database servers to spread load and provide redundancy. Similarly, Amazon Route 53 uses shuffle sharding to spread DNS requests over numerous PoPs, thus providing multiple paths and routes for your application.
        • Anycast Routing – Anycast routing increases redundancy by advertising the same IP address from multiple PoPs. In the event that a DDoS attack overwhelms one endpoint, shuffle sharding isolates failures while providing additional routes to your infrastructure.

Safeguard Exposed & Hard to Scale Expensive Resources

  • If entry points cannot be limited, take additional measures to restrict access and protect those entry points without interrupting legitimate end-user traffic
  • AWS provided services:
    • CloudFront
      • CloudFront can restrict access to content using Geo Restriction and Origin Access Identity
      • With Geo Restriction, access can be restricted to a set of whitelisted countries, or access from a set of blacklisted countries can be prevented
      • Origin Access Identity (OAI) is the CloudFront special user that allows access to the resources only through CloudFront while denying direct access to the origin content, e.g. if S3 is the origin for CloudFront, S3 can be configured to allow access only from the OAI and hence deny direct access
    • Route 53
      • Route 53 provides two features Alias Record sets & Private DNS to make it easier to scale infrastructure and respond to DDoS attacks
    • WAF
      • WAFs act as filters that apply a set of rules to web traffic. Generally, these rules cover exploits like cross-site scripting (XSS) and SQL injection (SQLi) but can also help build resiliency against DDoS by mitigating HTTP GET or POST floods
      • WAF provides a lot of features like
        • OWASP Top 10
        • HTTP rate limiting (where only a certain number of requests are allowed per user in a timeframe),
        • Whitelist or blacklist (customizable rules)
        • inspect and identify requests with abnormal patterns,
        • CAPTCHA etc
      • To prevent the WAF from being a single point of failure, a WAF sandwich pattern can be implemented, where an auto-scaled WAF tier sits between the Internet and an internal load balancer

Figure: DDoS Resiliency – WAF Sandwich Architecture

Learn Normal Behavior

  • Understand the normal levels and patterns of traffic for your application and use that as a benchmark for identifying abnormal traffic levels or resource spike patterns
  • Benefits
    • allows one to spot abnormalities
    • configure Alarms with accurate thresholds
    • assists with generating forensic data
  • AWS provided services for tracking
    • AWS CloudWatch monitoring
      • CloudWatch can be used to monitor the infrastructure and applications running in AWS; it can collect metrics and log files, and set alarms for when metrics pass predetermined thresholds (a minimal alarm sketch follows this list)
    • VPC Flow Logs
      • Flow logs help capture information about the traffic to the instances in a VPC and can be used to understand traffic patterns
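
As referenced above, a minimal boto3 sketch of alarming on abnormal traffic; the instance ID, SNS topic, and NetworkIn threshold are hypothetical and should be derived from your own measured baseline.

    import boto3

    cw = boto3.client('cloudwatch')

    # Alert when inbound traffic stays well above the learned baseline
    # (hypothetical threshold of ~100 MB per 5-minute period).
    cw.put_metric_alarm(
        AlarmName='ddos-networkin-spike',
        Namespace='AWS/EC2',
        MetricName='NetworkIn',
        Dimensions=[{'Name': 'InstanceId',
                     'Value': 'i-0123456789abcdef0'}],
        Statistic='Average',
        Period=300,
        EvaluationPeriods=2,
        Threshold=100_000_000,
        ComparisonOperator='GreaterThanThreshold',
        AlarmActions=['arn:aws:sns:us-east-1:123456789012:ops-alerts'],
    )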

Create a Plan for Attacks

  • Have a plan in place before an attack, which ensures that:
    • Architecture has been validated and techniques selected work for the infrastructure
    • Costs for increased resiliency have been evaluated and the goals of your defense are understood
    • Contact points have been identified

AWS Certification Exam Practice Questions

  1. You are designing a social media site and are considering how to mitigate distributed denial-of-service (DDoS) attacks. Which of the below are viable mitigation techniques? (Choose 3 answers)
    1. Add multiple elastic network interfaces (ENIs) to each EC2 instance to increase the network bandwidth.
    2. Use dedicated instances to ensure that each instance has the maximum performance possible.
    3. Use an Amazon CloudFront distribution for both static and dynamic content.
    4. Use an Elastic Load Balancer with auto scaling groups at the web app and Amazon Relational Database Service (RDS) tiers
    5. Add alert Amazon CloudWatch to look for high Network in and CPU utilization.
    6. Create processes and capabilities to quickly add and remove rules to the instance OS firewall.
  2. You’ve been hired to enhance the overall security posture for a very large e-commerce site. They have a well architected multi-tier application running in a VPC that uses ELBs in front of both the web and the app tier with static assets served directly from S3. They are using a combination of RDS and DynamoDB for their dynamic data and then archiving nightly into S3 for further processing with EMR. They are concerned because they found questionable log entries and suspect someone is attempting to gain unauthorized access. Which approach provides a cost effective scalable mitigation to this kind of attack?
    1. Recommend that they lease space at a DirectConnect partner location and establish a 1G DirectConnect connection to their VPC they would then establish Internet connectivity into their space, filter the traffic in hardware Web Application Firewall (WAF). And then pass the traffic through the DirectConnect connection into their application running in their VPC. (Not cost effective)
    2. Add previously identified hostile source IPs as an explicit INBOUND DENY NACL to the web tier subnet. (does not protect against new source)
    3. Add a WAF tier by creating a new ELB and an AutoScaling group of EC2 Instances running a host-based WAF. They would redirect Route 53 to resolve to the new WAF tier ELB. The WAF tier would then pass the traffic to the current web tier. The web tier Security Groups would be updated to only allow traffic from the WAF tier Security Group
    4. Remove all but TLS 1.2 from the web tier ELB and enable Advanced Protocol Filtering This will enable the ELB itself to perform WAF functionality. (No advanced protocol filtering in ELB)

References

DDoS Whitepaper

AWS ELB Monitoring

AWS ELB Monitoring

  • Elastic Load Balancing publishes data points to CloudWatch about the load balancers and back-end instances
  • Elastic Load Balancing reports metrics to CloudWatch only when requests are flowing through the load balancer.
    • If there are requests flowing through the load balancer, Elastic Load Balancing measures and sends its metrics in 60-second intervals.
    • If there are no requests flowing through the load balancer or no data for a metric, the metric is not reported.

CloudWatch Metrics

  • HealthyHostCount, UnHealthyHostCount
    • Number of healthy and unhealthy instances registered with the load balancer.
    • Most useful statistics are average, min, and max
  • RequestCount
    • Number of requests completed or connections made during the specified interval (1 or 5 minutes).
    • Most useful statistic is sum
  • Latency
    • Time elapsed, in seconds, after the request leaves the load balancer until the headers of the response are received.
    • Most useful statistic is average (a minimal sketch of retrieving this metric follows this list)
  • SurgeQueueLength
    • Total number of requests that are pending routing.
    • Load balancer queues a request if it is unable to establish a connection with a healthy instance in order to route the request.
    • Maximum size of the queue is 1,024. Additional requests are rejected when the queue is full.
    • Most useful statistic is max, because it represents the peak of queued requests.
  • SpilloverCount
    • The total number of requests that were rejected because the surge queue is full. Should ideally be 0
    • Most useful statistic is sum.
  • HTTPCode_ELB_4XX, HTTPCode_ELB_5XX
    • Client and server error codes generated by the load balancer
    • Most useful statistic is sum.
  • HTTPCode_Backend_2XX, HTTPCode_Backend_3XX, HTTPCode_Backend_4XX, HTTPCode_Backend_5XX
    • Number of HTTP response codes generated by registered instances
    • Most useful statistic is sum.
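
As referenced in the Latency bullet above, a minimal boto3 sketch of pulling one of these metrics from CloudWatch; the load balancer name is a hypothetical placeholder.

    import boto3
    from datetime import datetime, timedelta

    cw = boto3.client('cloudwatch')

    # Average ELB latency over the last hour, in 60-second intervals.
    stats = cw.get_metric_statistics(
        Namespace='AWS/ELB',
        MetricName='Latency',
        Dimensions=[{'Name': 'LoadBalancerName', 'Value': 'my-elb'}],
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=60,
        Statistics=['Average'],
    )
    for point in sorted(stats['Datapoints'], key=lambda p: p['Timestamp']):
        print(point['Timestamp'], point['Average'])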

Elastic Load Balancer Access Logs

  • Elastic Load Balancing provides access logs that capture detailed information about all requests sent to your load balancer.
  • Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses.
  • Elastic Load Balancing captures the logs and stores them in the Amazon S3 bucket
  • Access logging is disabled by default and can be enabled without any additional charge; you are charged only for the S3 storage (a minimal sketch of enabling it follows this list)
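
As referenced above, a minimal boto3 sketch of enabling classic ELB access logs; the load balancer name and bucket are hypothetical, and the bucket needs a policy allowing ELB to write to it.

    import boto3

    elb = boto3.client('elb')

    # Publish access logs to S3 every 5 minutes.
    elb.modify_load_balancer_attributes(
        LoadBalancerName='my-elb',
        LoadBalancerAttributes={
            'AccessLog': {
                'Enabled': True,
                'S3BucketName': 'my-elb-logs',
                'S3BucketPrefix': 'prod',
                'EmitInterval': 5,   # minutes; 5 and 60 are supported
            }
        },
    )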

CloudTrail Logs

  • AWS CloudTrail can be used to capture all calls to the Elastic Load Balancing API made by or on behalf of your AWS account, whether made using the Elastic Load Balancing API directly or indirectly through the AWS Management Console or AWS CLI
  • CloudTrail stores the information as log files in an Amazon S3 bucket that you specify.
  • Logs collected by CloudTrail can be used to monitor the activity of your load balancers and determine what API call was made, what source IP address was used, who made the call, when it was made, and so on

AWS Certification Exam Practice Questions

  1. An admin is planning to monitor the ELB. Which of the below mentioned services does not help the admin capture the monitoring information about the ELB activity
    1. ELB Access logs
    2. ELB health check
    3. CloudWatch metrics
    4. ELB API calls with CloudTrail
  2. A customer needs to capture all client connection information from their load balancer every five minutes. The company wants to use this data for analyzing traffic patterns and troubleshooting their applications. Which of the following options meets the customer requirements?
    1. Enable AWS CloudTrail for the load balancer.
    2. Enable access logs on the load balancer.
    3. Install the Amazon CloudWatch Logs agent on the load balancer.
    4. Enable Amazon CloudWatch metrics on the load balancer
  3. Your supervisor has requested a way to analyze traffic patterns for your application. You need to capture all connection information from your load balancer every 10 minutes. Pick a solution from below. Choose the correct answer:
    1. Enable access logs on the load balancer
    2. Create a custom metric CloudWatch filter on your load balancer
    3. Use a CloudWatch Logs Agent
    4. Use AWS CloudTrail with your load balancer

References

Elastic Load Balancing Developer Guide

AWS Bastion Host

Bastion Host Overview

  • A bastion is a structure built for fortification, to protect things behind it
  • In AWS, a bastion host (also referred to as a jump server) can be used to securely access instances in the private subnets.
  • A bastion host launched in the public subnet acts as a primary access point from the Internet and as a proxy to the other instances.

Figure: Bastion Host

Key points

  • Bastion host is deployed in the public subnet and acts as a proxy or a gateway between you and your instances
  • Bastion host is a security measure that helps reduce the attack surface of your infrastructure, leaving you to concentrate on hardening a single layer
  • Bastion host allows you to log in to instances in the private subnet securely without having to store the private keys on the bastion host (using ssh-agent forwarding or RDP gateways)
  • Bastion host security can be further tightened to allow SSH/RDP access only from specific trusted IPs or corporate IP ranges
  • Bastion host shouldn’t be used for any other purpose, as that could open unnecessary security holes
  • Security for all the instances in the private subnet should be hardened to accept SSH/RDP connections only from the bastion host (a minimal sketch follows this list)
  • Deploy a bastion host within each Availability Zone for HA, because if the bastion instance or the AZ hosting it goes down, the ability to connect to your private instances is lost completely
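
As referenced above, a minimal boto3 sketch of funneling all SSH access through the bastion via security group rules; both group IDs and the corporate CIDR are hypothetical.

    import boto3

    ec2 = boto3.client('ec2')

    # Bastion SG: accept SSH only from the trusted corporate range.
    ec2.authorize_security_group_ingress(
        GroupId='sg-bastion0123456789',
        IpPermissions=[{'IpProtocol': 'tcp', 'FromPort': 22, 'ToPort': 22,
                        'IpRanges': [{'CidrIp': '203.0.113.0/24'}]}])

    # Private-instance SG: accept SSH only from the bastion's security
    # group rather than any CIDR, so all access goes through the bastion.
    ec2.authorize_security_group_ingress(
        GroupId='sg-private0123456789',
        IpPermissions=[{'IpProtocol': 'tcp', 'FromPort': 22, 'ToPort': 22,
                        'UserIdGroupPairs': [
                            {'GroupId': 'sg-bastion0123456789'}]}])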

AWS Certification Exam Practice Questions

  1. A customer is running a multi-tier web application farm in a virtual private cloud (VPC) that is not connected to their corporate network. They are connecting to the VPC over the Internet to manage all of their Amazon EC2 instances running in both the public and private subnets. They have only authorized the bastion-security-group with Microsoft Remote Desktop Protocol (RDP) access to the application instance security groups, but the company wants to further limit administrative access to all of the instances in the VPC. Which of the following Bastion deployment scenarios will meet this requirement?
    1. Deploy a Windows Bastion host on the corporate network that has RDP access to all instances in the VPC.
    2. Deploy a Windows Bastion host with an Elastic IP address in the public subnet and allow SSH access to the bastion from anywhere.
    3. Deploy a Windows Bastion host with an Elastic IP address in the private subnet, and restrict RDP access to the bastion from only the corporate public IP addresses.
    4. Deploy a Windows Bastion host with an auto-assigned Public IP address in the public subnet, and allow RDP access to the bastion from only the corporate public IP addresses.
  2. You are designing a system that has a Bastion host. This component needs to be highly available without human intervention. Which of the following approaches would you select?
    1. Run the bastion on two instances one in each AZ
    2. Run the bastion on an active Instance in one AZ and have an AMI ready to boot up in the event of failure
    3. Configure the bastion instance in an Auto Scaling group Specify the Auto Scaling group to include multiple AZs but have a min-size of 1 and max-size of 1
    4. Configure an ELB in front of the bastion instance
  3. You’ve been brought in as solutions architect to assist an enterprise customer with their migration of an ecommerce platform to Amazon Virtual Private Cloud (VPC) The previous architect has already deployed a 3- tier VPC. The configuration is as follows: VPC vpc-2f8t>C447
    IGW ig-2d8bc445
    NACL acl-2080c448
    Subnets and Route Tables:
    Web server’s subnet-258bc44d
    Application server’s subnet-248DC44c
    Database server’s subnet-9189c6f9
    Route Tables:
    rtb-2i8bc449
    rtb-238bc44b
    Associations:
    Subnet-258bc44d: rtb-2i8bc449
    Subnet-248DC44c: rtb-238bc44b
    Subnet-9189c6f9: rtb-238bc44b
    You are now ready to begin deploying EC2 instances into the VPC. Web servers must have direct access to the internet Application and database servers cannot have direct access to the internet. Which configuration below will allow you the ability to remotely administer your application and database servers, as well as allow these servers to retrieve updates from the Internet?

    1. Create a bastion and NAT Instance in subnet-258bc44d and add a route from rtb-238bc44b to subnet-258bc44d. (Route should point to the NAT)
    2. Add a route from rtb-238bc44b to igw-2d8bc445 and add a bastion and NAT instance within Subnet-248DC44c. (Adding IGW to route table rtb-238bc44b would expose the Application and Database server to internet. Bastion and NAT should be in public subnet)
    3. Create a Bastion and NAT Instance in subnet-258bc44d. Add a route from rtb-238bc44b to igw-2d8bc445. And a new NACL that allows access between subnet-258bc44d and subnet-248bc44c. (Route should point to NAT and not Internet Gateway else it would be internet accessible.)
    4. Create a Bastion and NAT instance in subnet-258bc44d and add a route from rtb-238bc44b to the NAT instance. (Bastion and NAT should be in the public subnet. As Web Server has direct access to Internet, the subnet subnet-258bc44d should be public and Route rtb-2i8bc449 pointing to IGW. Route rtb-238bc44b for private subnets should point to NAT for outgoing internet access)
  4. You are tasked with setting up a Linux bastion host for access to Amazon EC2 instances running in your VPC. Only clients connecting from the corporate external public IP address 72.34.51.100 should have SSH access to the host. Which option will meet the customer requirement?
    1. Security Group Inbound Rule: Protocol – TCP, Port Range – 22, Source 72.34.51.100/32 (Correct – restricts SSH to the single corporate IP; sketched below)
    2. Security Group Inbound Rule: Protocol – UDP, Port Range – 22, Source 72.34.51.100/32
    3. Network ACL Inbound Rule: Protocol – UDP, Port Range – 22, Source 72.34.51.100/32
    4. Network ACL Inbound Rule: Protocol – TCP, Port Range – 22, Source 72.34.51.100/0
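A minimal boto3 sketch of the rule in option 1; the security group ID is a hypothetical placeholder:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow SSH (TCP/22) only from the corporate public IP. The /32 mask
# restricts the rule to that single address.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # hypothetical bastion security group
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [
                {"CidrIp": "72.34.51.100/32", "Description": "Corporate SSH"}
            ],
        }
    ],
)
```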

AWS Consolidated Billing

AWS Consolidated Billing

  • Consolidated billing consolidates payment for multiple AWS accounts (linked or member accounts) within the organization under a single account, designated as the Management or Payer Account.
  • Every organization in AWS Organizations has a management account that pays the charges of all the member accounts.
  • Consolidated billing
    • is strictly an accounting and billing feature.
    • allows receiving a combined view of charges incurred by all the associated accounts as well as each of the accounts.
    • is not a method for controlling accounts, or provisioning resources for accounts.
  • Management/Payer account is billed for all charges of the member accounts.
  • Each linked account is still an independent account in every other way.
  • The AWS Organizations consolidated billing feature does not allow the payer account to access data belonging to the linked account owners; management features beyond billing require All Features mode to be enabled.
  • However, payer account users can be granted access through cross-account access roles (see the sketch after this list).
  • AWS service limits apply at the account level only, and AWS Support plans are per account.
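A sketch of such a cross-account role being assumed from the payer account; the role ARN, account ID, and session name are hypothetical placeholders:

```python
import boto3

sts = boto3.client("sts")

# Assume a role defined in the member account that trusts the payer
# account. Returns temporary credentials scoped to that role.
resp = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/PayerAccess",  # hypothetical
    RoleSessionName="payer-audit",
)
creds = resp["Credentials"]

# Act inside the member account with the temporary credentials.
member_ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(len(member_ec2.describe_instances()["Reservations"]))
```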

Consolidated Billing Process

  • AWS Organizations provides consolidated billing so that the combined costs of all the member accounts in your organization can be tracked.
  • Create an Organization
  • Create member accounts or invite existing accounts to join the organization (see the sketch after this list).
  • Each month AWS charges your management account for all the member accounts in a consolidated bill.
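A minimal boto3 sketch of the same process; the member account ID is a hypothetical placeholder:

```python
import boto3

org = boto3.client("organizations")

# Create an organization with the current account as the management
# (payer) account. FeatureSet="ALL" enables management features such
# as SCPs; "CONSOLIDATED_BILLING" would enable billing features only.
org.create_organization(FeatureSet="ALL")

# Invite an existing account to join as a member (linked) account.
org.invite_account_to_organization(
    Target={"Id": "111122223333", "Type": "ACCOUNT"}  # hypothetical ID
)
```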

Consolidated Billing Scenarios

  • have multiple accounts and want a single bill while still tracking each account’s charges, e.g. multiple projects each with its own AWS account, or separate environments (Dev, Prod) within the same project
  • have multiple cost centers to track.
  • have acquired a project or company with its own existing AWS account and you want a consolidated bill with your other AWS accounts.

Consolidated Billing Benefits

  • One Bill
    • A single bill with a combined view of AWS costs incurred by all accounts is generated
  • Easy Tracking
    • Detailed cost reports & charges for each of the individual AWS accounts associated with the “paying account” can be easily tracked
  • Combined Usage & Volume Discounts
    • Charges might actually decrease because AWS combines usage from all the accounts to qualify you for volume pricing discounts
  • Free Tier
    • Customers that use consolidated billing to consolidate payment across multiple accounts have access to only one free usage tier; it is not combined across accounts

Volume Pricing Discounts

  • For billing purposes, AWS treats all the accounts in the organization on the consolidated bill as if they were one account.
  • AWS combines the usage from all accounts to determine which volume pricing tiers to apply, giving you a lower overall price whenever possible.

Volume Discounts Example

Consolidated Billing Example

  • Example AWS Pricing – AWS charges $0.17/GB for the first 10 TB of data transfer out and $0.13/GB for the next 40 TB, which translates into $174.08 per TB for the first 10 TB and $133.12 per TB for the next 40 TB
  • Usage – Bob uses 8 TB of data transfer out during the month, and Susan uses 4 TB (for a total of 12 TB used).
  • Actual Individual Bill – AWS would have charged Bob and Susan each $174.08 per TB for their usage, for a total of $2088.96
  • Volume Discount Bill – The combined 12 TB that Bob and Susan used would cost the paying account ($174.08 × 10 TB) + ($133.12 × 2 TB) = $1740.80 + $266.24 = $2007.04 (verified in the sketch below)
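The tier arithmetic above can be double-checked with a few lines of Python; the rates and usage are those from the example:

```python
# Tiered data-transfer pricing: $174.08/TB for the first 10 TB,
# then $133.12/TB for the next 40 TB.
def consolidated_cost(total_tb):
    first_tier = min(total_tb, 10)
    second_tier = max(total_tb - 10, 0)
    return first_tier * 174.08 + second_tier * 133.12

bob, susan = 8, 4
separate = (bob + susan) * 174.08       # each billed at the first-tier rate
combined = consolidated_cost(bob + susan)
print(round(separate, 2), round(combined, 2))  # 2088.96 2007.04
```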

Reserved Instances

  • All member accounts in an Organization on a consolidated bill can receive the hourly cost-benefit of Reserved Instances that are purchased by any other account.
  • The management account of an organization can turn off Reserved Instance (RI) discount and Savings Plans discount sharing for any accounts in that organization, including the management account.
  • RIs and Savings Plans discounts aren’t shared between any accounts that have sharing turned off. To share an RI or Savings Plans discount with an account, both accounts must have sharing turned on.
  • For example, Bob and Susan each have an account on Bob’s consolidated bill. Susan has 5 Reserved Instances of the same type, and Bob has none. During one particular hour, Susan uses 3 instances and Bob uses 6, for a total of 9 instances on Bob’s consolidated bill. AWS will bill 5 as Reserved Instances and the remaining 4 as on-demand instances (checked in the sketch below).
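A back-of-the-envelope check of the example, using the values from the text:

```python
# RI discounts on a consolidated bill: usage from all accounts is
# pooled before Reserved Instances are applied.
reserved = 5                        # Susan's Reserved Instances
usage = {"Susan": 3, "Bob": 6}      # instances running in the same hour
total = sum(usage.values())         # 9 instances in total
ri_billed = min(reserved, total)    # 5 billed at the RI rate
on_demand = total - ri_billed       # 4 billed at the on-demand rate
print(ri_billed, on_demand)         # 5 4
```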

Consolidated Billing Best Practices

  • Paying account should be used solely for accounting and billing purposes
  • Consolidated billing works best with resource tagging, as tags are included in the detailed billing report, which enables cost to be analyzed and decomposed across multiple dimensions and aggregation levels (see the tagging sketch after this list)
  • Paying account owners should secure their accounts by using MFA (multi-factor authentication) and a strong password
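A minimal boto3 tagging sketch; the instance ID and tag values are hypothetical, and tag keys must additionally be activated as cost allocation tags in the Billing console before they appear in cost reports:

```python
import boto3

ec2 = boto3.client("ec2")

# Tag resources so their costs can be decomposed by project or cost
# centre in the detailed billing report.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],  # hypothetical instance ID
    Tags=[
        {"Key": "CostCentre", "Value": "finance"},
        {"Key": "Project", "Value": "ecommerce"},
    ],
)
```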

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day, so both the questions and answers might become outdated soon; research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if an underlying feature has changed, the question might not reflect it.
  • Open to further feedback, discussion, and correction.
  1. An organization is planning to create 5 different AWS accounts considering various security requirements. The organization wants to use a single payee account by using the consolidated billing option. Which of the below mentioned statements is true with respect to the above information?
    • Master (Payee) account will get only the total bill and cannot see the cost incurred by each account
    • Master (Payee) account can view only the AWS billing details of the linked accounts
    • It is not recommended to use consolidated billing since the payee account will have access to the linked accounts
    • Each AWS account needs to create an AWS billing policy to provide permission to the payee account
  2. An organization has setup consolidated billing with 3 different AWS accounts. Which of the below mentioned advantages will organization receive in terms of the AWS pricing?
    • The consolidated billing does not bring any cost advantage for the organization
    • All AWS accounts will be charged for S3 storage by combining the total storage of each account
    • EC2 instances of each account will receive a total of 750*3 micro instance hours free
    • The free usage tier for all the 3 accounts will be 3 years and not a single year
  3. An organization has added 3 of his AWS accounts to consolidated billing. One of the AWS accounts has purchased a Reserved Instance (RI) of a small instance size in the us-east-1a zone. All other AWS accounts are running instances of a small size in the same zone. What will happen in this case for the RI pricing?
    • Only the account that has purchased the RI will get the advantage of RI pricing
    • One instance of a small size and running in the us-east-1a zone of each AWS account will get the benefit of RI pricing
    • Any single instance from all the three accounts can get the benefit of AWS RI pricing if they are running in the same zone and are of the same size
    • If there are more than one instances of a small size running across multiple accounts in the same zone no one will get the benefit of RI
  4. An organization is planning to use AWS for 5 different departments. The finance department is responsible to pay for all the accounts. However, they want the cost separation for each account to map with the right cost centre. How can the finance department achieve this?
    • Create 5 separate accounts and make them a part of one consolidated billing
    • Create 5 separate accounts and use the IAM cross account access with the roles for better management
    • Create 5 separate IAM users and set a different policy for their access
    • Create 5 separate IAM groups and add users as per the department’s employees
  5. An AWS account wants to be part of the consolidated billing of his organization’s payee account. How can the owner of that account achieve this?
    • The payee account has to request AWS support to link the other accounts with his account
    • The owner of the linked account should add the payee account to his master account list from the billing console
    • The payee account will send a request to the linked account to be a part of consolidated billing (Correct – with AWS Organizations, the payer/management account sends the invitation and the member account accepts it)
    • The owner of the linked account requests the payee account to add his account to consolidated billing
  6. You are looking to migrate your Development (Dev) and Test environments to AWS. You have decided to use separate AWS accounts to host each environment. You plan to link each accounts bill to a Master AWS account using Consolidated Billing. To make sure you keep within budget you would like to implement a way for administrators in the Master account to have access to stop, delete and/or terminate resources in both the Dev and Test accounts. Identify which option will allow you to achieve this goal.
    • Create IAM users in the Master account with full Admin permissions. Create cross-account roles in the Dev and Test accounts that grant the Master account access to the resources in the account by inheriting permissions from the Master account.
    • Create IAM users and a cross-account role in the Master account that grants full Admin permissions to the Dev and Test accounts.
    • Create IAM users in the Master account. Create cross-account roles in the Dev and Test accounts that have full Admin permissions and grant the Master account access.
    • Link the accounts using Consolidated Billing. This will give IAM users in the Master account access to resources in the Dev and Test accounts
  7. When using consolidated billing there are two account types. What are they?
    • Paying account and Linked account
    • Parent account and Child account
    • Main account and Sub account.
    • Main account and Secondary account.
  8. A customer needs corporate IT governance and cost oversight of all AWS resources consumed by its divisions. The divisions want to maintain administrative control of the discrete AWS resources they consume and keep those resources separate from the resources of other divisions. Which of the following options, when used together will support the autonomy/control of divisions while enabling corporate IT to maintain governance and cost oversight? Choose 2 answers
    • Use AWS Consolidated Billing and disable AWS root account access for the child accounts. (Need to link accounts; disabling root access is just a best practice)
    • Enable IAM cross-account access for all corporate IT administrators in each child account. (Provides IT governance)
    • Create separate VPCs for each division within the corporate IT AWS account.
    • Use AWS Consolidated Billing to link the divisions’ accounts to a parent corporate account (Will provide cost oversight)
    • Write all child AWS CloudTrail and Amazon CloudWatch logs to each child account’s Amazon S3 ‘Log’ bucket (Preferred approach would be to store logs from multiple accounts in a single S3 bucket, with CloudTrail for IT governance and CloudWatch alerts for cost oversight)
  9. An organization has 10 departments. The organization wants to track the AWS usage of each department. Which of the below mentioned options meets the requirement?
    1. Setup IAM groups for each department and track their usage
    2. Create separate accounts for each department, but use consolidated billing for payment and tracking
    3. Create separate accounts for each department and track them separately
    4. Setup IAM users for each department and track their usage

AWS Support Tiers – Certification

AWS Support Tiers Plan

AWS provides four Support tiers; a support plan applies per AWS account (except Enterprise, which extends across the accounts under the payer)
  • Basic
    • Customer Service: one-on-one responses to account and billing questions
    • Support forums
    • Service health checks
    • Documentation, whitepapers, and best-practice guides
  • Developer
    • All the features of the Basic support tier
    • Best-practice guidance
    • Client-side diagnostic tools
    • Building-block architecture support: guidance on how to use AWS products, features, and services together
  • Business
    • All the features of the Developer support tier
    • Phone/email/chat support, 1-hour response time
    • Use-case guidance: which AWS products, features, and services to use to best support your specific needs
    • IAM for controlling individuals’ access to AWS Support
    • AWS Trusted Advisor, which inspects customer environments and identifies opportunities to save money, close security gaps, and improve system reliability and performance
    • An API for interacting with Support Center and Trusted Advisor, allowing for automated support case management and Trusted Advisor operations (see the sketch after this list)
    • Third-party software support: help with EC2 instance operating systems as well as the configuration and performance of the most popular third-party software components on AWS
  • Enterprise
    • All the features of the Business support tier
    • 15-minute response time and a dedicated Technical Account Manager (TAM)
    • Application architecture guidance: consultative partnership supporting specific use cases and applications
    • Infrastructure event management: short-term engagement with AWS Support to partner with your technical and project resources to gain a deep understanding of your use case and provide architectural and scaling guidance for an event
    • AWS Concierge
    • White-glove case routing
    • Management business reviews
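A sketch of the Support API mentioned under the Business tier; the service, category, and severity codes below are illustrative placeholders, and valid codes can be listed with describe_services:

```python
import boto3

# The Support API requires a Business or Enterprise support plan and
# is served from the us-east-1 endpoint.
support = boto3.client("support", region_name="us-east-1")

# Open a support case programmatically; field values are illustrative.
case = support.create_case(
    subject="EC2 instance connectivity issue",
    serviceCode="amazon-elastic-compute-cloud-linux",
    severityCode="low",
    categoryCode="instance-issue",
    communicationBody="Instance i-0123456789abcdef0 is unreachable.",
)
print(case["caseId"])
```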

Support Plan Comparison

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day, so both the questions and answers might become outdated soon; research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if an underlying feature has changed, the question might not reflect it.
  • Open to further feedback, discussion, and correction.
  1. What are the four levels of AWS Premium Support?
    • Basic, Developer, Business, Enterprise
    • Basic, Startup, Business, Enterprise
    • Free, Bronze, Silver, Gold
    • All support is free
  2. What is the maximum response time for a Business level Premium Support case?
    • 120 seconds
    • 1 hour
    • 10 minutes
    • 12 hours

AWS Security – Whitepaper – Certification

AWS Security Whitepaper

The AWS Security whitepaper is one of the most important whitepapers from the certification perspective.

Shared Security Responsibility Model

In the Shared Security Responsibility Model, AWS is responsible for securing the underlying infrastructure that supports the cloud, and you’re responsible for anything you put on the cloud or connect to the cloud.
AWS Security Shared Responsibility Model

AWS Security Responsibilities

  • AWS is responsible for protecting the global infrastructure that runs all of the services offered in the AWS cloud. This infrastructure comprises the hardware, software, networking, and facilities that run AWS services.
  • AWS provides several reports from third-party auditors who have verified its compliance with a variety of computer security standards and regulations.
  • AWS is responsible for the security configuration of its products that are considered managed services for e.g. RDS, DynamoDB
  • For Managed Services, AWS will handle basic security tasks like guest operating system (OS) and database patching, firewall configuration, and disaster recovery.

Customer Security Responsibilities

  • AWS Infrastructure as a Service (IaaS) products, e.g. EC2, VPC, and S3, are completely under your control and require you to perform all of the necessary security configuration and management tasks.
  • This includes management of the guest OS (including updates and security patches), any application software or utilities installed on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance.
  • For managed services, all you have to do is configure logical access controls for the resources and protect the account credentials.

AWS Global Infrastructure Security 

AWS Compliance Program

IT infrastructure that AWS provides to its customers is designed and managed in alignment with security best practices and a variety of IT security standards, including:
  • SOC 1/SSAE 16/ISAE 3402 (formerly SAS 70)
  • SOC 2
  • SOC 3
  • FISMA, DIACAP, and FedRAMP
  • DOD CSM Levels 1-5
  • PCI DSS Level 1
  • ISO 9001 / ISO 27001
  • ITAR
  • FIPS 140-2
  • MTCS Level 3
And meet several industry-specific standards, including:
  • Criminal Justice Information Services (CJIS)
  • Cloud Security Alliance (CSA)
  • Family Educational Rights and Privacy Act (FERPA)
  • Health Insurance Portability and Accountability Act (HIPAA)
  • Motion Picture Association of America (MPAA)

Physical and Environmental Security 

Storage Decommissioning

  • When a storage device has reached the end of its useful life, AWS procedures include a decommissioning process that is designed to prevent customer data from being exposed to unauthorized individuals.
  • AWS uses the techniques detailed in DoD 5220.22-M (National Industrial Security Program Operating Manual) or NIST 800-88 (Guidelines for Media Sanitization) to destroy data as part of the decommissioning process.
  • All decommissioned magnetic storage devices are degaussed and physically destroyed in accordance with industry-standard practices.

Network Security

Amazon Corporate Segregation

  • AWS Production network is segregated from the Amazon Corporate network and requires a separate set of credentials for logical access.
  • The Amazon Corporate network relies on user IDs, passwords, and Kerberos, while the AWS Production network requires SSH public-key authentication through a bastion host.

Networking Monitoring & Protection

AWS utilizes a wide variety of automated monitoring systems to provide a high level of service performance and availability. These tools monitor server and network usage, port scanning activities, application usage, and unauthorized intrusion attempts. The tools have the ability to set custom performance metrics thresholds for unusual activity.
AWS network provides protection against traditional network security issues
  1. DDoS – AWS uses proprietary DDoS mitigation techniques. Additionally, AWS’s networks are multi-homed across a number of providers to achieve Internet access diversity.
  2. Man in the Middle attacks – AWS APIs are available via SSL-protected endpoints which provide server authentication
  3. IP spoofing – AWS-controlled, host-based firewall infrastructure will not permit an instance to send traffic with a source IP or MAC address other than its own.
  4. Port Scanning – Unauthorized port scans by Amazon EC2 customers are a violation of the AWS Acceptable Use Policy. When unauthorized port scanning is detected by AWS, it is stopped and blocked. Penetration/Vulnerability testing can be performed only on your own instances, with mandatory prior approval, and must not violate the AWS Acceptable Use Policy.
  5. Packet Sniffing by other tenants – It is not possible for a virtual instance running in promiscuous mode to receive or “sniff” traffic that is intended for a different virtual instance. While you can place your interfaces into promiscuous mode, the hypervisor will not deliver any traffic to them that is not addressed to them. Even two virtual instances that are owned by the same customer located on the same physical host cannot listen to each other’s traffic.

Secure Design Principles

AWS’s development process follows:
  • Secure software development best practices, which include formal design reviews by the AWS Security Team, threat modeling, and completion of a risk assessment
  • Static code analysis tools are run as a part of the standard build process
  • Recurring penetration testing performed by carefully selected industry experts

AWS Account Security Features

AWS account security features include credentials for access control, HTTPS endpoints for encrypted data transmission, the creation of separate IAM user accounts, user activity logging for security monitoring, and Trusted Advisor security checks

AWS Credentials

AWS IAM Credentials

Individual User Accounts

Do not use the root account; instead, create an IAM user for each person, provide each with a unique set of credentials, and grant the least privilege required to perform their job function
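A minimal boto3 sketch; the user name and policy choice are illustrative, and the narrowest policy for the job function should be picked:

```python
import boto3

iam = boto3.client("iam")

# Create an individual IAM user instead of sharing the root account.
iam.create_user(UserName="alice")  # illustrative user name

# Grant least privilege: attach only what the job function needs,
# e.g. read-only EC2 access for an auditor.
iam.attach_user_policy(
    UserName="alice",
    PolicyArn="arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess",
)
```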

Secure HTTPS Access Points

Use HTTPS, provided by all AWS services, for data transmissions, which uses public-key cryptography to prevent eavesdropping, tampering, and forgery

Security Logs

Use AWS CloudTrail, which provides logs of all requests for AWS resources within the account and captures information about every API call to every AWS resource used, including sign-in events
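Recorded events, including console sign-ins, can be queried with a sketch like this:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Look up recent console sign-in events recorded by CloudTrail.
resp = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}
    ],
    MaxResults=10,
)
for event in resp["Events"]:
    print(event["EventTime"], event["EventName"])
```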

Trusted Advisor Security Checks

Use the Trusted Advisor service, which inspects the AWS environment and provides recommendations when opportunities exist to save money, improve system performance, or close security gaps
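With a Business or Enterprise support plan, the same checks are also available through the Support API; a minimal sketch:

```python
import boto3

# Trusted Advisor is exposed via the Support API (Business/Enterprise
# plans only), served from the us-east-1 endpoint.
support = boto3.client("support", region_name="us-east-1")

checks = support.describe_trusted_advisor_checks(language="en")
for check in checks["checks"]:
    print(check["category"], "-", check["name"])
```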

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day, so both the questions and answers might become outdated soon; research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if an underlying feature has changed, the question might not reflect it.
  • Open to further feedback, discussion, and correction.
  1. In the shared security model, AWS is responsible for which of the following security best practices (check all that apply):
    1. Penetration testing
    2. Operating system account security management (User responsibility)
    3. Threat modeling
    4. User group access management (User responsibility)
    5. Static code analysis (AWS development cycle responsibility)
  2. You are running a web application on AWS consisting of the following components: an Elastic Load Balancer (ELB), an Auto Scaling group of EC2 instances running Linux/PHP/Apache, and Relational Database Service (RDS) MySQL. Which security measures fall into AWS’s responsibility?
    1. Protect the EC2 instances against unsolicited access by enforcing the principle of least-privilege access (User responsibility)
    2. Protect against IP spoofing or packet sniffing
    3. Assure all communication between EC2 instances and ELB is encrypted (User responsibility)
    4. Install latest security patches on ELB, RDS, and EC2 instances (User responsibility)
  3. In AWS, which security aspects are the customer’s responsibility? Choose 4 answers
    1. Controlling physical access to compute resources (AWS responsibility)
    2. Patch management on the EC2 instances operating system
    3. Encryption of EBS (Elastic Block Storage) volumes
    4. Life-cycle management of IAM credentials
    5. Decommissioning storage devices (AWS responsibility)
    6. Security Group and ACL (Access Control List) settings
  4. Per the AWS Acceptable Use Policy, penetration testing of EC2 instances:
    1. May be performed by AWS, and will be performed by AWS upon customer request.
    2. May be performed by AWS, and is periodically performed by AWS.
    3. Are expressly prohibited under all circumstances.
    4. May be performed by the customer on their own instances with prior authorization from AWS.
    5. May be performed by the customer on their own instances, only if performed from EC2 instances
  5. Which is an operational process performed by AWS for data security?
    1. AES-256 encryption of data stored on any shared storage device (User responsibility)
    2. Decommissioning of storage devices using industry-standard practices
    3. Background virus scans of EBS volumes and EBS snapshots (No virus scan is performed by AWS on User instances)
    4. Replication of data across multiple AWS Regions (AWS does not replicate data across regions unless done by User)
    5. Secure wiping of EBS data when an EBS volume is unmounted (data is not wiped from an EBS volume when it is unmounted, and the volume can be remounted on another EC2 instance)

AWS S3 Data Durability

Sample Exam Question:
A customer is leveraging Amazon Simple Storage Service in eu-west-1 to store static content for a web-based property. The customer is storing objects using the Standard Storage class. Where are the customer’s objects replicated?
  1. Single facility in eu-west-1 and a single facility in eu-central-1
  2. Single facility in eu-west-1 and a single facility in us-east-1
  3. Multiple facilities in eu-west-1
  4. A single facility in eu-west-1
Answer:
The question targets S3 data durability: objects are stored redundantly across multiple facilities within the same Amazon S3 region, so the answer is multiple facilities in eu-west-1 (option 3).

Amazon S3 provides a highly durable storage infrastructure designed for mission-critical and primary data storage. Objects are redundantly stored on multiple devices across multiple facilities in an Amazon S3 region. To help better ensure data durability, Amazon S3 PUT and PUT Object copy operations synchronously store your data across multiple facilities before returning SUCCESS. Once the objects are stored, Amazon S3 maintains their durability by quickly detecting and repairing any lost redundancy.