Google Cloud Storage Security

Google Cloud Storage Security includes controlling access using

  • Uniform bucket-level or fine-grained (ACL) access control policies
  • Data encryption at rest and in transit
  • Retention policies and Retention Policy Locks
  • Signed URLs

GCS Access Control

  • Cloud Storage offers two systems for granting users permission to access the buckets and objects: IAM and Access Control Lists (ACLs)
  • When IAM and ACLs are used on the same resource, Cloud Storage grants the broader of the two permission sets on the resource
  • Cloud Storage access control can be performed using
    • Uniform (recommended)
      • Uniform bucket-level access allows using IAM alone to manage permissions.
      • IAM applies permissions to all the objects contained inside the bucket or groups of objects with common name prefixes.
      • IAM also allows using features that are not available when working with ACLs, such as IAM Conditions and Cloud Audit Logs.
      • Enabling uniform bucket-level access disables ACLs; the change can be reversed within 90 days
    • Fine-grained
      • Fine-grained option enables using IAM and Access Control Lists (ACLs) together to manage permissions.
      • ACLs are a legacy access control system for Cloud Storage designed for interoperability with S3.
      • Access can be granted and permissions applied at both the bucket level and per individual object.
  • Objects in a bucket can be made public using the ACL entry AllUsers:R or by granting the IAM objectViewer role (roles/storage.objectViewer) to allUsers, as sketched below
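
A minimal sketch of the Uniform option above, using the google-cloud-storage Python client: it switches a bucket to uniform bucket-level access and then grants allUsers read access purely through IAM. The bucket name and role binding are illustrative assumptions, not values from this post.

    from google.cloud import storage

    client = storage.Client()
    bucket = client.get_bucket("example-bucket")        # assumed bucket name

    # Switch the bucket to uniform bucket-level access (disables ACLs)
    bucket.iam_configuration.uniform_bucket_level_access_enabled = True
    bucket.patch()

    # Make all objects publicly readable through IAM (allUsers : objectViewer)
    policy = bucket.get_iam_policy(requested_policy_version=3)
    policy.bindings.append({"role": "roles/storage.objectViewer", "members": {"allUsers"}})
    bucket.set_iam_policy(policy)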

Data Encryption

  • Cloud Storage always encrypts the data on the server-side, before it is written to disk, at no additional charge.
  • Cloud Storage supports the following encryption options
    • Server-side encryption: encryption that occurs after Cloud Storage receives the data, but before the data is written to disk and stored.
      • Google-managed encryption keys
        • Cloud Storage always encrypts the data on the server-side, before it is written to disk
        • Cloud Storage manages server-side encryption keys using the same hardened key management systems that Google uses for its own encrypted data, including strict key access controls and auditing.
        • Cloud Storage encrypts user data at rest using AES-256.
        • Data is automatically decrypted when read by an authorized user
      • Customer-supplied encryption keys (CSEK)
        • Customers create and manage their own encryption keys and supply them to Cloud Storage with each request.
        • Cloud Storage does not permanently store the key on Google’s servers or otherwise manage the key.
        • The customer provides the key for each operation, and the key is purged from Google’s servers after the operation is complete.
        • Cloud Storage stores only a cryptographic hash of the key so that future requests can be validated against the hash; the key cannot be recovered from this hash, and the hash cannot be used to decrypt the data.
        • The key can be supplied client-side, e.g. via the encryption_key=[YOUR_ENCRYPTION_KEY] entry in the .boto configuration file used by gsutil (see the sketch after this list).
      • Customer-managed encryption keys (CMEK)
        • Customers manage their own encryption keys, which are generated and held in Cloud Key Management Service (KMS); Cloud Storage uses the KMS key on the customer’s behalf to encrypt and decrypt object data.
    • Client-side encryption: encryption that occurs before data is sent to Cloud Storage, encrypted at the client-side. This data also undergoes server-side encryption.
  • Cloud Storage supports Transport Layer Security (TLS/HTTPS) for encrypting data in transit
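
The two customer-controlled key options can be exercised from the google-cloud-storage Python client as sketched below; the bucket name, object names, key material, and KMS key path are placeholders, and a reasonably recent client library version is assumed.

    import os
    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("example-bucket")

    # Customer-supplied encryption key (CSEK): a raw 32-byte AES-256 key you manage yourself
    csek = os.urandom(32)
    blob = bucket.blob("csek-object.txt", encryption_key=csek)
    blob.upload_from_string("encrypted with a customer-supplied key")
    # The same key must be supplied again to read the object back
    data = bucket.blob("csek-object.txt", encryption_key=csek).download_as_bytes()

    # Customer-managed encryption key (CMEK): reference a Cloud KMS key
    kms_key = "projects/my-proj/locations/us/keyRings/my-ring/cryptoKeys/my-key"
    bucket.blob("cmek-object.txt", kms_key_name=kms_key).upload_from_string(
        "encrypted with a Cloud KMS key")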

Signed URLs

  • Signed URLs provide time-limited read or write access to an object through a generated URL.
  • Anyone having access to the URL can access the object for the duration of time specified, regardless of whether or not they have a Google account.
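
As an example, the time-limited access described above can be produced with the google-cloud-storage Python client as sketched below (V4 signing requires service-account credentials; the bucket and object names are placeholders).

    import datetime
    from google.cloud import storage

    client = storage.Client()
    blob = client.bucket("example-bucket").blob("report.pdf")

    url = blob.generate_signed_url(
        version="v4",
        expiration=datetime.timedelta(hours=4),   # access expires after 4 hours
        method="GET",                             # read-only
    )
    # Anyone with this URL can read the object until it expires,
    # no Google account required.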

Signed Policy Documents

  • Signed policy documents help specify what can be uploaded to a bucket.
  • Policy documents allow greater control over size, content type, and other upload characteristics than signed URLs, and can be used by website owners to allow visitors to upload files to Cloud Storage.
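
A hedged sketch of a signed policy document, assuming a google-cloud-storage client version that provides generate_signed_post_policy_v4; the bucket, object name, and size limit are illustrative only. The returned URL and form fields are handed to the website visitor's upload form.

    import datetime
    from google.cloud import storage

    client = storage.Client()
    policy = client.generate_signed_post_policy_v4(
        bucket_name="example-uploads",
        blob_name="visitor-upload.png",
        expiration=datetime.datetime.now() + datetime.timedelta(minutes=10),
        conditions=[["content-length-range", 0, 1048576]],   # uploads capped at 1 MiB
    )
    # policy["url"] and policy["fields"] are used to build the HTML POST form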

Retention Policies

  • Retention policy on a bucket ensures that all current and future objects in the bucket cannot be deleted or replaced until they reach the defined age
  • Retention policy can be applied when creating a bucket or to an existing bucket
  • Retention policy retroactively applies to existing objects in the bucket as well as new objects added to the bucket.

Retention Policy Locks

  • Retention policy locks will lock a retention policy on a bucket, which prevents the policy from ever being removed or the retention period from ever being reduced (although it can be increased)
  • Once a retention policy is locked, the bucket cannot be deleted until every object in the bucket has met the retention period.
  • Locking a retention policy is irreversible
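
A minimal sketch of setting and then locking a retention policy with the google-cloud-storage Python client; the bucket name and retention period are assumptions, and the lock step is irreversible as noted above.

    from google.cloud import storage

    client = storage.Client()
    bucket = client.get_bucket("example-archive")

    bucket.retention_period = 30 * 24 * 3600   # retain every object for 30 days
    bucket.patch()

    # Locking permanently prevents the policy from being removed or reduced
    bucket.reload()                            # fetch the current metageneration
    bucket.lock_retention_policy()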

Bucket Lock

  • Bucket Lock feature provides immutable storage i.e. Write Once Read Many (WORM) on Cloud Storage
  • Bucket Lock feature allows configuring a data retention policy for a bucket that governs how long objects in the bucket must be retained
  • Bucket Lock feature also locks the data retention policy, permanently preventing the policy from being reduced or removed.
  • Bucket Lock can help with regulatory, legal, and compliance requirements

Object Holds

  • Object holds, when set on individual objects, prevent the object from being deleted or replaced; metadata, however, can still be edited.
  • Cloud Storage offers the following types of holds:
    • Event-based holds.
    • Temporary holds.
  • When an object is stored in a bucket without a retention policy, both hold types behave exactly the same.
  • When an object is stored in a bucket with a retention policy, the hold types have different effects on the object when the hold is released:
    • An event-based hold resets the object’s time in the bucket for the purposes of the retention period.
    • A temporary hold does not affect the object’s time in the bucket for the purposes of the retention period.
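
Placing and releasing a hold is a simple metadata update; a sketch with the google-cloud-storage Python client follows (bucket and object names are placeholders).

    from google.cloud import storage

    client = storage.Client()
    blob = client.bucket("example-archive").blob("invoice-2023.pdf")

    blob.event_based_hold = True     # or blob.temporary_hold = True
    blob.patch()                     # the object can no longer be deleted or replaced

    # Releasing an event-based hold restarts the object's retention-period clock
    blob.event_based_hold = False
    blob.patch()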

GCP Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • GCP services are updated every day, and both the answers and questions might become outdated soon, so research accordingly.
  • GCP exam questions are not updated to keep pace with GCP updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. You have an object in a Cloud Storage bucket that you want to share with an external company. The object contains sensitive data. You want access to the content to be removed after four hours. The external company does not have a Google account to which you can grant specific user-based access privileges. You want to use the most secure method that requires the fewest steps. What should you do?
    1. Create a signed URL with a four-hour expiration and share the URL with the company.
    2. Set object access to “public” and use object lifecycle management to remove the object after four hours.
    3. Configure the storage bucket as a static website and furnish the object’s URL to the company. Delete the object from the storage bucket after four hours.
    4. Create a new Cloud Storage bucket specifically for the external company to access. Copy the object to that bucket. Delete the bucket after four hours have passed

AWS Elastic Block Store – EBS

EC2 Elastic Block Storage – EBS

  • Elastic Block Storage – EBS provides highly available, reliable, durable, block-level storage volumes that can be attached to an EC2 instance.
  • EBS as a primary storage device is recommended for data that requires frequent and granular updates e.g. running a database or filesystem.
  • An EBS volume
    • behaves like a raw, unformatted, external block device that can be attached to a single EC2 instance at a time.
    • persists independently from the running life of an instance.
    • is Zonal and can be attached to any instance within the same Availability Zone and can be used like any other physical hard drive.
    • is particularly well-suited for use as the primary storage for file systems, databases, or any applications that require fine granular updates and access to raw, unformatted, block-level storage.

Elastic Block Storage Features

  • EBS Volumes are created in a specific Availability Zone and can be attached to any instance in that same AZ.
  • Volumes can be backed up by creating a snapshot of the volume, which is stored in S3.
  • Volumes can be created from a snapshot that can be attached to another instance within the same region.
  • Volumes can be made available outside of the AZ by creating and restoring the snapshot to a new volume anywhere in that region.
  • Snapshots can also be copied to other regions and then restored to new volumes, making it easier to leverage multiple AWS regions for geographical expansion, data center migration, and disaster recovery.
  • Volumes allow encryption using the EBS encryption feature. All data stored at rest, disk I/O, and snapshots created from the volume are encrypted.
  • Encryption occurs on the EC2 instance, providing encryption of data-in-transit from EC2 to the EBS volume.
  • Elastic Volumes help easily adapt volumes as application needs change, allowing you to dynamically increase the size, tune the performance (e.g. provisioned IOPS), and change the type of any new or existing current-generation volume on live production volumes, with no downtime or performance impact.
  • General Purpose (SSD) volumes support up to 16,000 IOPS and 250 MB/s of throughput, and Provisioned IOPS (SSD) volumes support up to 64,000 IOPS and 1,000 MB/s of throughput.
  • EBS Magnetic volumes can be created from 1 GiB to 1 TiB in size; EBS General Purpose (SSD) and Provisioned IOPS (SSD) volumes can be created up to 16 TiB in size.
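
As a quick illustration of the zonal nature, encryption, and volume types above, a hedged boto3 sketch that creates an encrypted gp3 volume and attaches it to an instance in the same Availability Zone (the region, AZ, IDs, and device name are placeholders):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    volume = ec2.create_volume(
        AvailabilityZone="us-east-1a",   # EBS volumes are zonal
        Size=100,                        # GiB
        VolumeType="gp3",
        Encrypted=True,                  # default KMS key unless KmsKeyId is given
    )
    ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

    ec2.attach_volume(
        VolumeId=volume["VolumeId"],
        InstanceId="i-0123456789abcdef0",
        Device="/dev/sdf",
    )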

EBS Benefits

  • Data Availability
    • Data is automatically replicated in an Availability Zone to prevent data loss due to the failure of any single hardware component.
  • Data Persistence
    • persists independently of the running life of an EC2 instance
    • persists when an instance is stopped, started, or rebooted
    • Root volume is deleted, by default, on Instance termination but the behaviour can be changed using the DeleteOnTermination flag
    • All other attached (non-root) volumes persist, by default, on instance termination
  • Data Encryption
    • can be encrypted by the EBS encryption feature
    • uses 256-bit AES (AES-256) and an Amazon-managed key infrastructure.
    • Encryption occurs on the server that hosts the EC2 instance, providing encryption of data-in-transit from the EC2 instance to EBS storage
    • Snapshots of encrypted EBS volumes are automatically encrypted.
  • Snapshots
    • provides the ability to create snapshots (backups) of any EBS volume and write a copy of the data in the volume to S3, where it is stored redundantly in multiple Availability Zones.
    • can be used to create new volumes, increase the size of the volumes or replicate data across Availability Zones or Regions.
    • are incremental backups and store only the data that was changed from the time the last snapshot was taken.
    • Snapshot size is likely to be smaller than the volume size, as the data is compressed before being saved to S3.
    • Even though snapshots are saved incrementally, the snapshot deletion process is designed so that you need to retain only the most recent snapshot in order to restore the volume.
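
The snapshot workflow described above can be scripted with boto3 roughly as follows; the volume ID and AZs are placeholders for illustration.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Back up the volume to S3-backed snapshot storage
    snapshot = ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",
        Description="nightly backup",
    )
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])

    # Restore the snapshot into a different AZ of the same region
    restored = ec2.create_volume(
        SnapshotId=snapshot["SnapshotId"],
        AvailabilityZone="us-east-1b",
    )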

EBS Volume Types

Refer blog post @ EBS Volume Types

EBS Volume

EBS Volume Creation

  • Creating New volumes
    • Created completely new from the console or command-line tools, and can then be attached to an EC2 instance in the same Availability Zone.
  • Restore volume from Snapshots
    • Volumes can also be restored from previously created snapshots
    • New volumes created from existing snapshots are loaded lazily in the background.
    • There is no need to wait for all of the data to transfer from S3 to the volume before the attached instance can start accessing the volume and all its data.
    • If the instance accesses the data that hasn’t yet been loaded, the volume immediately downloads the requested data from S3, and continues loading the rest of the data in the background.
    • Volumes restored from encrypted snapshots are always encrypted, by default.
  • Volumes can be created and attached to a running EC2 instance by specifying a block device mapping

EBS Volume Detachment

  • EBS volumes can be detached from an instance explicitly or by terminating the instance.
  • EBS root volumes can be detached only after stopping the instance.
  • EBS data volumes, attached to a running instance, can be detached by unmounting the volume from the instance first.
  • If the volume is detached without being unmounted, it might result in the volume being stuck in a busy state and could possibly damage the file system or the data it contains.
  • EBS volume can be force detached from an instance, using the Force Detach option, but it might lead to data loss or a corrupted file system as the instance does not get an opportunity to flush file system caches or file system metadata.
  • Charges are still incurred for the volume after its detachment
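
A short boto3 sketch of an explicit detach (the volume ID is a placeholder); the file system should be unmounted on the instance before calling it.

    import boto3

    ec2 = boto3.client("ec2")
    ec2.detach_volume(VolumeId="vol-0123456789abcdef0")   # normal detach
    # Last resort only; may corrupt the file system because caches are not flushed:
    # ec2.detach_volume(VolumeId="vol-0123456789abcdef0", Force=True)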

EBS Volume Deletion

  • Deleting an EBS volume wipes out its data, and the volume can no longer be attached to any instance. The data can, however, be backed up before deletion using EBS snapshots

EBS Volume Resize

  • EBS Elastic Volumes can be modified to increase the volume size, change the volume type, or adjust the performance of your EBS volumes.
  • If the instance supports Elastic Volumes, changes can be performed without detaching the volume or restarting the instance.
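
A hedged boto3 sketch of an Elastic Volumes modification on a live volume (the values are placeholders); progress can be tracked with describe_volumes_modifications.

    import boto3

    ec2 = boto3.client("ec2")
    ec2.modify_volume(
        VolumeId="vol-0123456789abcdef0",
        Size=500,          # GiB; size can only be increased
        VolumeType="io1",
        Iops=10000,        # provisioned IOPS for the new type
    )
    status = ec2.describe_volumes_modifications(VolumeIds=["vol-0123456789abcdef0"])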

EBS Volume Snapshots

Refer blog post @ EBS Snapshot

EBS Encryption

  • Encrypted EBS volumes can be created and attached to supported instance types, and the encryption covers the following types of data
    • Data at rest
    • All disk I/O, i.e. all data moving between the volume and the instance
    • All snapshots created from the volume
    • All volumes created from those snapshots
  • Encryption occurs on the servers that host EC2 instances, providing encryption of data-in-transit from EC2 instances to EBS storage.
  • EBS encryption is supported with all EBS volume types (gp2, io1, st1, and sc1), and has the same IOPS performance on encrypted volumes as with unencrypted volumes, with a minimal effect on latency
  • EBS encryption is only available on select instance types.
  • Volumes created from encrypted snapshots and snapshots of encrypted volumes are automatically encrypted using the same encryption key.
  • EBS encryption uses AWS KMS customer master keys (CMK) when creating encrypted volumes and any snapshots created from the encrypted volumes.
  • EBS volumes can be encrypted using either
    • a default CMK created for you automatically.
    • a CMK that you created separately using AWS KMS, giving you more flexibility, including the ability to create, rotate, disable, define access controls, and audit the encryption keys used to protect your data.
  • Encrypted snapshots cannot be made public, and snapshots encrypted with the default CMK cannot be shared with other accounts; the data must first be copied to an unencrypted snapshot (or re-encrypted with a customer-managed CMK) before such sharing.
  • Existing unencrypted volumes cannot be encrypted directly, but can be migrated by
    • Option 1
      • create an unencrypted snapshot from the volume
      • create an encrypted copy of an unencrypted snapshot
      • create an encrypted volume from the encrypted snapshot
    • Option 2
      • create an unencrypted snapshot from the volume
      • create an encrypted volume from an unencrypted snapshot
  • An encrypted snapshot can be created from an unencrypted snapshot by creating an encrypted copy of the unencrypted snapshot.
  • An unencrypted volume cannot be created directly from an encrypted volume or snapshot; the data needs to be migrated, e.g. copied at the OS level to a new unencrypted volume
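
Option 1 above can be scripted with boto3 roughly as sketched below; the volume ID, region, AZ, and KMS key alias are placeholders.

    import boto3

    region = "us-east-1"
    ec2 = boto3.client("ec2", region_name=region)

    # 1. Snapshot the unencrypted volume
    snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0")
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

    # 2. Copy the snapshot with encryption enabled
    enc_copy = ec2.copy_snapshot(
        SourceRegion=region,
        SourceSnapshotId=snap["SnapshotId"],
        Encrypted=True,
        KmsKeyId="alias/my-ebs-key",   # omit to use the default EBS CMK
    )
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[enc_copy["SnapshotId"]])

    # 3. Restore an encrypted volume from the encrypted snapshot copy
    encrypted_volume = ec2.create_volume(
        SnapshotId=enc_copy["SnapshotId"],
        AvailabilityZone=region + "a",
    )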

EBS Multi-Attach

Refer blog Post @ EBS Multi-Attach

EBS Performance

Refer blog Post @ EBS Performance

EBS vs Instance Store

Refer blog post @ EBS vs Instance Store

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day, and both the answers and questions might become outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. _____ is a durable, block-level storage volume that you can attach to a single, running Amazon EC2 instance.
    1. Amazon S3
    2. Amazon EBS
    3. None of these
    4. All of these
  2. Which Amazon storage do you think is the best for my database-style applications that frequently encounter many random reads and writes across the dataset?
    1. None of these.
    2. Amazon Instance Storage
    3. Any of these
    4. Amazon EBS
  3. What does Amazon EBS stand for?
    1. Elastic Block Storage
    2. Elastic Business Server
    3. Elastic Blade Server
    4. Elastic Block Store
  4. Which Amazon Storage behaves like raw, unformatted, external block devices that you can attach to your instances?
    1. None of these.
    2. Amazon Instance Storage
    3. Amazon EBS
    4. All of these
  5. A user has created numerous EBS volumes. What is the general limit for each AWS account for the maximum number of EBS volumes that can be created?
    1. 10000
    2. 5000
    3. 100
    4. 1000
  6. Select the correct set of steps for exposing the snapshot only to specific AWS accounts
    1. Select Public for all the accounts and check mark those accounts with whom you want to expose the snapshots and click save.
    2. Select Private and enter the IDs of those AWS accounts, and click Save.
    3. Select Public, enter the IDs of those AWS accounts, and click Save.
    4. Select Public, mark the IDs of those AWS accounts as private, and click Save.
  7. If an Amazon EBS volume is the root device of an instance, can I detach it without stopping the instance?
    1. Yes but only if Windows instance
    2. No
    3. Yes
    4. Yes but only if a Linux instance
  8. Can we attach an EBS volume to more than one EC2 instance at the same time?
    1. Yes
    2. No
    3. Only EC2-optimized EBS volumes.
    4. Only in read mode.
  9. Do the Amazon EBS volumes persist independently from the running life of an Amazon EC2 instance?
    1. Only if instructed to when created
    2. Yes
    3. No
  10. Can I delete a snapshot of the root device of an EBS volume used by a registered AMI?
    1. Only via API
    2. Only via Console
    3. Yes
    4. No
  11. By default, EBS volumes that are created and attached to an instance at launch are deleted when that instance is terminated. You can modify this behavior by changing the value of the flag_____ to false when you launch the instance
    1. DeleteOnTermination
    2. RemoveOnDeletion
    3. RemoveOnTermination
    4. TerminateOnDeletion
  12. Your company policies require encryption of sensitive data at rest. You are considering the possible options for protecting data while storing it at rest on an EBS data volume, attached to an EC2 instance. Which of these options would allow you to encrypt your data at rest? (Choose 3 answers)
    1. Implement third party volume encryption tools
    2. Do nothing as EBS volumes are encrypted by default
    3. Encrypt data inside your applications before storing it on EBS
    4. Encrypt data using native data encryption drivers at the file system level
    5. Implement SSL/TLS for all services running on the server
  13. Which of the following are true regarding encrypted Amazon Elastic Block Store (EBS) volumes? Choose 2 answers
    1. Supported on all Amazon EBS volume types
    2. Snapshots are automatically encrypted
    3. Available to all instance types
    4. Existing volumes can be encrypted
    5. Shared volumes can be encrypted
  14. How can you secure data at rest on an EBS volume?
    1. Encrypt the volume using the S3 server-side encryption service
    2. Attach the volume to an instance using EC2’s SSL interface.
    3. Create an IAM policy that restricts read and write access to the volume.
    4. Write the data randomly instead of sequentially.
    5. Use an encrypted file system on top of the EBS volume
  15. A user has deployed an application on an EBS backed EC2 instance. For a better performance of application, it requires dedicated EC2 to EBS traffic. How can the user achieve this?
    1. Launch the EC2 instance as EBS dedicated with PIOPS EBS
    2. Launch the EC2 instance as EBS enhanced with PIOPS EBS
    3. Launch the EC2 instance as EBS dedicated with PIOPS EBS
    4. Launch the EC2 instance as EBS optimized with PIOPS EBS
  16. A user is trying to launch an EBS backed EC2 instance under free usage. The user wants to achieve encryption of the EBS volume. How can the user encrypt the data at rest?
    1. Use AWS EBS encryption to encrypt the data at rest (Encryption is allowed on micro instances)
    2. User cannot use EBS encryption and has to encrypt the data manually or using a third party tool (Encryption was not allowed on micro instances before)
    3. The user has to select the encryption enabled flag while launching the EC2 instance
    4. Encryption of volume is not available as a part of the free usage tier
  17. A user is planning to schedule a backup for an EBS volume. The user wants security of the snapshot data. How can the user achieve data encryption with a snapshot?
    1. Use encrypted EBS volumes so that the snapshot will be encrypted by AWS
    2. While creating a snapshot select the snapshot with encryption
    3. By default the snapshot is encrypted by AWS
    4. Enable server side encryption for the snapshot using S3
  18. A user has launched an EBS backed EC2 instance. The user has rebooted the instance. Which of the below mentioned statements is not true with respect to the reboot action?
    1. The private and public address remains the same
    2. The Elastic IP remains associated with the instance
    3. The volume is preserved
    4. The instance runs on a new host computer
  19. A user has launched an EBS backed EC2 instance. What will be the difference while performing the restart or stop/start options on that instance?
    1. For restart it does not charge for an extra hour, while every stop/start it will be charged as a separate hour
    2. Every restart is charged by AWS as a separate hour, while multiple start/stop actions during a single hour will be counted as a single hour
    3. For every restart or start/stop it will be charged as a separate hour
    4. For restart it charges extra only once, while for every stop/start it will be charged as a separate hour
  20. A user has launched an EBS backed instance. The user started the instance at 9 AM in the morning. Between 9 AM to 10 AM, the user is testing some script. Thus, he stopped the instance twice and restarted it. In the same hour the user rebooted the instance once. For how many instance hours will AWS charge the user?
    1. 3 hours
    2. 4 hours
    3. 2 hours
    4. 1 hour
  21. You are running a database on an EC2 instance, with the data stored on Elastic Block Store (EBS) for persistence. At times throughout the day, you are seeing large variance in the response times of the database queries. Looking into the instance with the iostat command, you see a lot of wait time on the disk volume that the database’s data is stored on. What two ways can you improve the performance of the database’s storage while maintaining the current persistence of the data? Choose 2 answers
    1. Move to an SSD backed instance
    2. Move the database to an EBS-Optimized Instance
    3. Use Provisioned IOPs EBS
    4. Use the ephemeral storage on an m2.4xLarge Instance Instead
  22. An organization wants to move to Cloud. They are looking for a secure encrypted database storage option. Which of the below mentioned AWS functionalities helps them to achieve this?
    1. AWS MFA with EBS
    2. AWS EBS encryption
    3. Multi-tier encryption with Redshift
    4. AWS S3 server-side storage
  23. A user has stored data on an encrypted EBS volume. The user wants to share the data with his friend’s AWS account. How can user achieve this?
    1. Create an AMI from the volume and share the AMI
    2. Copy the data to an unencrypted volume and then share
    3. Take a snapshot and share the snapshot with a friend
    4. If both the accounts are using the same encryption key then the user can share the volume directly
  24. A user is using an EBS backed instance. Which of the below mentioned statements is true?
    1. The user will be charged for volume and instance only when the instance is running
    2. The user will be charged for the volume even if the instance is stopped
    3. The user will be charged only for the instance running cost
    4. The user will not be charged for the volume if the instance is stopped
  25. A user is planning to use EBS for his DB requirement. The user already has an EC2 instance running in the VPC private subnet. How can the user attach the EBS volume to a running instance?
    1. The user must create EBS within the same VPC and then attach it to a running instance.
    2. The user can create EBS in the same zone as the subnet of instance and attach that EBS to instance. (Should be in the same AZ)
    3. It is not possible to attach an EBS to an instance running in VPC until the instance is stopped.
    4. The user can specify the same subnet while creating EBS and then attach it to a running instance.
  26. A user is creating an EBS volume. He asks for your advice. Which advice mentioned below should you not give to the user for creating an EBS volume?
    1. Take the snapshot of the volume when the instance is stopped
    2. Stripe multiple volumes attached to the same instance
    3. Create an AMI from the attached volume (AMI is created from the snapshot)
    4. Attach multiple volumes to the same instance
  27. An EC2 instance has one additional EBS volume attached to it. How can a user attach the same volume to another running instance in the same AZ?
    1. Terminate the first instance and only then attach to the new instance
    2. Attach the volume as read only to the second instance
    3. Detach the volume first and attach to new instance
    4. No need to detach. Just select the volume and attach it to the new instance, it will take care of mapping internally
  28. What is the scope of an EBS volume?
    1. VPC
    2. Region
    3. Placement Group
    4. Availability Zone

Reference

Amazon_EBS

AWS Encrypting Data at Rest – Whitepaper – Certification

Encrypting Data at Rest

  • AWS delivers a secure, scalable cloud computing platform with high availability, offering the flexibility for you to build a wide range of applications
  • AWS offers several options for encrypting data at rest as an additional layer of security, ranging from completely automated AWS encryption solutions to manual client-side options
  • Encryption requires 3 things
    • Data to encrypt
    • Encryption keys
    • A cryptographic algorithm to encrypt the data
  • AWS provides different models for Securing data at rest on the following parameters
    • Encryption method
      • Encryption algorithm selection involves evaluating security, performance, and compliance requirements specific to your application
    • Key Management Infrastructure (KMI)
      • KMI enables managing & protecting the encryption keys from unauthorized access
      • KMI provides
        • Storage layer that protects plaintext keys
        • Management layer that authorizes key usage
  • Hardware Security Module (HSM)
    • Common way to protect keys in a KMI is using HSM
    • An HSM is a dedicated storage and data processing device that performs cryptographic operations using keys on the device.
    • An HSM typically provides tamper evidence, or resistance, to protect keys from unauthorized use.
    • A software-based authorization layer controls who can administer the HSM and which users or applications can use which keys in the HSM
  • AWS CloudHSM
    • AWS CloudHSM appliance has both physical and logical tamper detection and response mechanisms that trigger zeroization of the appliance.
    • Zeroization erases the HSM’s volatile memory where any keys in the process of being decrypted were stored and destroys the key that encrypts stored objects, effectively causing all keys on the HSM to be inaccessible and unrecoverable.
    • AWS CloudHSM can be used to generate and store key material, and can perform encryption and decryption operations.
    • AWS CloudHSM, however, does not perform any key lifecycle management functions (e.g., access control policy, key rotation) and needs a compatible KMI.
    • KMI can be deployed either on-premises or within Amazon EC2 and can communicate to the AWS CloudHSM instance securely over SSL to help protect data and encryption keys.
    • AWS CloudHSM service uses SafeNet Luna appliances, so any key management server that supports the SafeNet Luna platform can also be used with AWS CloudHSM
  • AWS Key Management Service (KMS)
    • AWS KMS is a managed encryption service that allows you to provision and use keys to encrypt data in AWS services and your applications.
    • Master keys, after creation, are designed never to be exported from the service.
    • AWS KMS gives you centralized control over who can access your master keys to encrypt and decrypt data, and it gives you the ability to audit this access.
    • Data can be sent to KMS to be encrypted or decrypted under a specific master key in your account.
    • AWS KMS is natively integrated with other AWS services (for e.g. Amazon EBS, Amazon S3, and Amazon Redshift) and AWS SDKs to simplify encryption of your data within those services or custom applications
    • AWS KMS provides global availability, low latency, and a high level of durability for your keys.
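
For small secrets (up to 4 KB), data can be sent directly to AWS KMS for encryption under a master key, as in the hedged boto3 sketch below (the key alias is a placeholder).

    import boto3

    kms = boto3.client("kms")

    ciphertext = kms.encrypt(KeyId="alias/my-master-key",
                             Plaintext=b"database password")["CiphertextBlob"]

    plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]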

Encryption Models in AWS

Encryption models in AWS depend on how you and/or AWS provide the encryption method and the KMI

  • You control the encryption method and the entire KMI
  • You control the encryption method, AWS provides the storage component of the KMI, and you provide the management layer of the KMI.
  • AWS controls the encryption method and the entire KMI.


Model A: You control the encryption method and the entire KMI

  • You use your own KMI to generate, store, and manage access to keys as well as control all encryption methods in your applications
  • Proper storage, management, and use of keys to ensure the confidentiality, integrity, and availability of your data is your responsibility
  • AWS has no access to your keys and cannot perform encryption or decryption on your behalf.
  • Amazon S3
    • Encryption of the data is done before the object is sent to AWS S3
    • Encryption of the data can be done using any encryption method and the encrypted data can be uploaded using the PUT request in the Amazon S3 API
    • Key used to encrypt the data needs to be stored securely in your KMI
    • To decrypt this data, the encrypted object can be downloaded from Amazon S3 using the GET request in the Amazon S3 API and then decrypted using the key in your KMI
    • AWS provides client-side encryption handling, where you can provide your key to the Amazon S3 encryption client, which encrypts and decrypts the data on your behalf; AWS never has access to the keys or the unencrypted data
  • Amazon EBS
    • Amazon Elastic Block Store (Amazon EBS) provides block-level storage volumes for use with Amazon EC2 instances. Amazon EBS volumes are network-attached, and persist independently from the life of an instance.
    • Because Amazon EBS volumes are presented to an instance as a block device, you can leverage most standard encryption tools for file system-level or block-level encryption
    • Block level encryption
      • Block level encryption tools usually operate below the file system layer using kernel space device drivers to perform encryption and decryption of data.
      • These tools are useful when you want all data written to a volume to be encrypted regardless of what directory the data is stored in
    • File System level encryption
      • File system level encryption usually works by stacking an encrypted file system on top of an existing file system.
      • This method is typically used to encrypt a specific directory
    • These solutions require you to provide keys, either manually or from your KMI.
    • Both block-level and file system-level encryption tools can only be used to encrypt data volumes that are not Amazon EBS boot volumes, as they don’t allow you to automatically make a trusted key available to the boot volume at startup
    • There are third party solutions available, which can help encrypt both the boot and data volumes as well as supplying and protecting keys
  • AWS Storage Gateway
    • AWS Storage Gateway is a service connecting an on-premises software appliance with Amazon S3. Data on disk volumes attached to the AWS Storage Gateway will be automatically uploaded to Amazon S3 based on policy
    • Encryption of the source data on the disk volumes can be either done before writing to the disk or using block level encryption on the iSCSI endpoint that AWS Storage Gateway exposes to encrypt all data on the disk volume.
  • Amazon RDS
    • Since Amazon RDS doesn’t expose the attached disks it uses for data storage, transparent disk encryption using the techniques described in the EBS section cannot be applied.
    • However, data in individual fields can be encrypted before it is written to RDS and decrypted after it is read.
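
A minimal sketch of Model A applied to S3: the key is generated and kept entirely in your own KMI, the object is encrypted client-side, and only ciphertext reaches AWS. It uses the third-party cryptography package purely for illustration; bucket and object names are placeholders.

    import boto3
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()              # stored and protected in your own KMI
    ciphertext = Fernet(key).encrypt(b"sensitive payload")

    s3 = boto3.client("s3")
    s3.put_object(Bucket="example-bucket", Key="payload.enc", Body=ciphertext)

    # To read it back: GET the ciphertext and decrypt locally with the same key
    obj = s3.get_object(Bucket="example-bucket", Key="payload.enc")
    plaintext = Fernet(key).decrypt(obj["Body"].read())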

Model B: You control the encryption method, AWS provides the KMI storage component, and you provide the KMI management layer

  • Model B is similar to Model A where the encryption method is managed by you
  • Model B differs from Model A in that the keys are maintained in AWS CloudHSM rather than in an on-premises key storage system
  • Only you have access to the cryptographic partitions within the dedicated HSM to use the keys


Model C: AWS controls the encryption method and the entire KMI

  • AWS provides and manages the server-side encryption of your data, transparently managing the encryption method and the keys.
  • AWS KMS and other services that encrypt your data directly use a method called envelope encryption to provide a balance between performance and security.
  • Envelope Encryption method
    • A master key is defined either by you or AWS
    • A data key (data encryption key) is generated by the AWS service at the time when data encryption is requested
    • Data key is used to encrypt your data.
    • Data key is then encrypted with a key-encrypting key (master key) unique to the service storing your data.
    • Encrypted data key and the encrypted data are then stored by the AWS storage service on your behalf.
  • Master keys (key-encrypting keys) used to encrypt data keys are stored and managed separately from the data and the data keys
  • For decryption of the data, the process is reversed. Encrypted data key is decrypted using the key-encrypting key; the data key is then used to decrypt your data
  • Authorized use of encryption keys is done automatically and is securely managed by AWS.
  • Because unauthorized access to those keys could lead to the disclosure of your data, AWS has built systems and processes with strong access controls that minimize the chance of unauthorized access and had these systems verified by third-party audits to achieve security certifications including SOC 1, 2, and 3, PCI-DSS, and FedRAMP.
  • Amazon S3
    • SSE-S3
      • AWS encrypts each object using a unique data key
      • Data key is encrypted with a periodically rotated master key managed by S3
      • Amazon S3 server-side encryption uses 256-bit Advanced Encryption Standard (AES) keys for both object and master keys
    • SSE-KMS
      • Master keys are defined and managed in KMS for your account
      • Object Encryption
        • When an object is uploaded, a request is sent to KMS to create an object key.
        • KMS generates a unique object key and encrypts it using the master key; KMS then returns this encrypted object key along with the plaintext object key to Amazon S3.
        • Amazon S3 web server encrypts your object using the plaintext object key and stores the now encrypted object (with the encrypted object key) and deletes the plaintext object key from memory.
      • Object Decryption
        • To retrieve the encrypted object, Amazon S3 sends the encrypted object key to AWS KMS.
        • AWS KMS decrypts the object key using the correct master key and returns the decrypted (plaintext) object key to S3.
        • Amazon S3 decrypts the encrypted object, with the plaintext object key, and returns it to you.
    • SSE-C
      • An encryption key is provided to Amazon S3 while uploading the object
      • Encryption key is used by Amazon S3 to encrypt your data using AES-256
      • After object encryption, Amazon S3 deletes the encryption key
      • For downloading, you need to provide the same encryption key, which AWS matches, decrypts and returns the object
  • Amazon EBS
    • When Amazon EBS volume is created, you can choose the master key in KMS to be used for encrypting the volume
    • Volume encryption
      • Amazon EC2 server sends an authenticated request to AWS KMS to create a volume key.
      • AWS KMS generates this volume key, encrypts it using the master key, and returns the plaintext volume key and the encrypted volume key to the Amazon EC2 server.
      • Plaintext volume key is stored in memory to encrypt and decrypt all data going to and from your attached EBS volume.
    • Volume decryption
      • When the encrypted volume (or any encrypted snapshot derived from that volume) needs to be re-attached to an instance, a call is made to AWS KMS to decrypt the encrypted volume key.
      • AWS KMS decrypts this encrypted volume key with the correct master key and returns the decrypted volume key to Amazon EC2.
  • Amazon Glacier
    • Glacier provides encryption of the data by default
    • Before it’s written to disk, data is always automatically encrypted using 256-bit AES keys unique to the Amazon Glacier service that are stored in separate systems under AWS control
  • AWS Storage Gateway
    • AWS Storage Gateway transfers your data to AWS over SSL
    • AWS Storage Gateway stores data encrypted at rest in Amazon S3 or Amazon Glacier using their respective server side encryption schemes.
  • Amazon RDS – Oracle
    • Oracle Advanced Security option for Oracle on Amazon RDS can be used to leverage the native Transparent Data Encryption (TDE) and Native Network Encryption (NNE) features
    • Oracle encryption module creates data and key-encrypting keys to encrypt the database
    • Key-encrypting keys specific to your Oracle instance on Amazon RDS are themselves encrypted by a periodically rotated 256-bit AES master key.
    • Master key is unique to the Amazon RDS service and is stored in separate systems under AWS control
  • Amazon RDS -SQL server
    • Transparent Data Encryption (TDE) can be provisioned for Microsoft SQL Server on Amazon RDS.
    • SQL Server encryption module creates data and key-encrypting keys to encrypt the database.
    • Key-encrypting keys specific to your SQL Server instance on Amazon RDS are themselves encrypted by a periodically rotated, regional 256-bit AES master key
    • Master key is unique to the Amazon RDS service and is stored in separate systems under AWS control
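
The envelope-encryption flow described under Model C can be reproduced with AWS KMS GenerateDataKey, as in this hedged sketch (the key alias is a placeholder, and the AES-GCM step uses the third-party cryptography package for illustration):

    import os
    import boto3
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    kms = boto3.client("kms")

    # 1. KMS returns a data key in both plaintext and encrypted (wrapped) form
    dk = kms.generate_data_key(KeyId="alias/my-master-key", KeySpec="AES_256")

    # 2. Encrypt the data locally with the plaintext data key, then discard it
    nonce = os.urandom(12)
    ciphertext = AESGCM(dk["Plaintext"]).encrypt(nonce, b"my data", None)
    stored = {"ciphertext": ciphertext, "nonce": nonce,
              "wrapped_key": dk["CiphertextBlob"]}      # kept alongside the data

    # 3. To decrypt, KMS unwraps the data key, which then decrypts the data
    data_key = kms.decrypt(CiphertextBlob=stored["wrapped_key"])["Plaintext"]
    plaintext = AESGCM(data_key).decrypt(stored["nonce"], stored["ciphertext"], None)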

Sample Exam Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day, and both the answers and questions might become outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. How can you secure data at rest on an EBS volume?
    1. Encrypt the volume using the S3 server-side encryption service
    2. Attach the volume to an instance using EC2’s SSL interface.
    3. Create an IAM policy that restricts read and write access to the volume.
    4. Write the data randomly instead of sequentially.
    5. Use an encrypted file system on top of the EBS volume
  2. Your company policies require encryption of sensitive data at rest. You are considering the possible options for protecting data while storing it at rest on an EBS data volume, attached to an EC2 instance. Which of these options would allow you to encrypt your data at rest? (Choose 3 answers)
    1. Implement third party volume encryption tools
    2. Do nothing as EBS volumes are encrypted by default
    3. Encrypt data inside your applications before storing it on EBS
    4. Encrypt data using native data encryption drivers at the file system level
    5. Implement SSL/TLS for all services running on the server
  3. A company is storing data on Amazon Simple Storage Service (S3). The company’s security policy mandates that data is encrypted at rest. Which of the following methods can achieve this? Choose 3 answers
    1. Use Amazon S3 server-side encryption with AWS Key Management Service managed keys
    2. Use Amazon S3 server-side encryption with customer-provided keys
    3. Use Amazon S3 server-side encryption with EC2 key pair.
    4. Use Amazon S3 bucket policies to restrict access to the data at rest.
    5. Encrypt the data on the client-side before ingesting to Amazon S3 using their own master key
    6. Use SSL to encrypt the data while in transit to Amazon S3.
  4. Which 2 services provide native encryption
    1. Amazon EBS
    2. Amazon Glacier
    3. Amazon Redshift (is optional)
    4. Amazon RDS (is optional)
    5. Amazon Storage Gateway
  5. With which AWS services CloudHSM can be used (select 2)
    1. S3
    2. DynamoDb
    3. RDS
    4. ElastiCache
    5. Amazon Redshift

References

AWS S3 Data Protection

AWS S3 Data Protection

  • S3 protects data using a highly durable storage infrastructure designed for mission-critical and primary data storage.
  • Objects are redundantly stored on multiple devices across multiple facilities in an S3 region.
  • S3 PUT and PUT Object copy operations synchronously store the data across multiple facilities before returning SUCCESS.
  • Once the objects are stored, S3 maintains its durability by quickly detecting and repairing any lost redundancy.
  • S3 also regularly verifies the integrity of data stored using checksums. If  S3 detects data corruption, it is repaired using redundant data.
  • In addition, S3 calculates checksums on all network traffic to detect corruption of data packets when storing or retrieving data
  • Data protection against accidental overwrites and deletions can be added by enabling Versioning to preserve, retrieve and restore every version of the object stored
  • S3 also provides the ability to protect data in transit (as it travels to and from S3) and at rest (while it is stored in S3)
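
Enabling versioning, mentioned above as the protection against accidental overwrites and deletions, is a one-call change; a hedged boto3 sketch follows (the bucket name is a placeholder).

    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_versioning(
        Bucket="example-bucket",
        VersioningConfiguration={"Status": "Enabled"},
    )
    # Overwrites and deletes now create new versions / delete markers
    # instead of destroying the previous data.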

S3 Encryption

Refer blog post @ S3 Encryption

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day, and both the answers and questions might become outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. A customer is leveraging Amazon Simple Storage Service in eu-west-1 to store static content for a web-based property. The customer is storing objects using the Standard Storage class. Where are the customers objects replicated?
    1. A single facility in eu-west-1 and a single facility in eu-central-1
    2. A single facility in eu-west-1 and a single facility in us-east-1
    3. Multiple facilities in eu-west-1
    4. A single facility in eu-west-1
  2. A system admin is planning to encrypt all objects being uploaded to S3 from an application. The system admin does not want to implement his own encryption algorithm; instead he is planning to use server side encryption by supplying his own key (SSE-C). Which parameter is not required while making a call for SSE-C?
    1. x-amz-server-side-encryption-customer-key-AES-256
    2. x-amz-server-side-encryption-customer-key
    3. x-amz-server-side-encryption-customer-algorithm
    4. x-amz-server-side-encryption-customer-key-MD5

References

AWS_S3_Security