AWS Database Services Cheat Sheet

AWS Database Services

Relational Database Service – RDS

  • provides a managed relational database service
  • supports MySQL, MariaDB, PostgreSQL, Oracle, Microsoft SQL Server, and the MySQL- and PostgreSQL-compatible Amazon Aurora DB engine
  • as it is a managed service, shell (root ssh) access is not provided
  • manages backups, software patching, automatic failure detection, and recovery
  • supports user-initiated manual backups and snapshots
  • daily automated backups with database transaction logs enable Point-in-Time Recovery up to the last five minutes of database usage
  • snapshots are user-initiated storage volume snapshots of the DB instance, backing up the entire DB instance and not just individual databases, and can be restored as an independent RDS instance
  • RDS Security
    • support encryption at rest using KMS as well as encryption in transit using SSL endpoints
    • supports IAM database authentication, which removes the need to store static user credentials in the database, as authentication is managed externally using IAM
    • supports Encryption only during creation of an RDS DB instance
    • an existing unencrypted DB cannot be encrypted directly; create a snapshot, create an encrypted copy of the snapshot, and restore it as an encrypted DB
    • supports Secrets Manager for storing and rotating secrets
    • for encrypted database
      • logs, snapshots, backups, read replicas are all encrypted as well
      • cross-region replicas and snapshot copies did not work across regions (Note – this is now possible with the latest AWS enhancements)
  • Multi-AZ deployment
    • provides high availability and automatic failover support and is NOT a scaling solution
    • maintains a synchronous standby replica in a different AZ
    • transaction success is returned only if the commit is successful both on the primary and the standby DB
    • Oracle, PostgreSQL, MySQL, and MariaDB DB instances use Amazon technology, while SQL Server DB instances use SQL Server Mirroring
    • snapshots and backups are taken from the standby, eliminating I/O freezes on the primary
    • automatic failover is seamless; RDS switches to the standby instance and updates the DNS record to point to the standby
    • failover can be forced with the Reboot with failover option
  • Read Replicas
    • uses the PostgreSQL, MySQL, and MariaDB DB engines’ built-in replication functionality to create a separate Read Only instance
    • updates are asynchronously copied to the Read Replica, and data might be stale
    • can help scale applications and reduce read only load
    • requires automatic backups enabled
    • replicates all databases in the source DB instance
    • for disaster recovery, can be promoted to a full-fledged database
    • can be created in a different region for disaster recovery, migration and low latency across regions
    • encrypted read replicas can’t be created from an unencrypted DB or read replica
  • RDS does not support all the features of the underlying databases; if such features are required, the database can instead be run on an EC2 instance
  • RDS Components
    • DB parameter groups contain engine configuration values that can be applied to one or more DB instances of the same instance type, e.g. SSL, max connections, etc.
    • the default DB parameter group cannot be modified; create a custom one and attach it to the DB
    • Supports static and dynamic parameters
      • changes to dynamic parameters are applied immediately (irrespective of apply immediately setting)
      • changes to static parameters are NOT applied immediately and require a manual reboot.
  • RDS Monitoring & Notification
    • integrates with CloudWatch and CloudTrail
    • CloudWatch provides metrics about CPU utilization from the hypervisor for a DB instance, and Enhanced Monitoring gathers its metrics from an agent on the instance
    • Performance Insights is a database performance tuning and monitoring feature that helps illustrate the database’s performance and analyze any issues that affect it
    • supports RDS Event Notification, which uses SNS to provide notifications when an RDS event such as creation, deletion, or snapshot creation occurs
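
A minimal boto3 sketch of the options above (Multi-AZ, encryption at creation, automated backups, and a read replica); the identifiers, instance class, and credentials are hypothetical placeholders:

  import boto3

  rds = boto3.client("rds")

  # Multi-AZ, encrypted MySQL instance; BackupRetentionPeriod > 0 enables the
  # daily automated backups needed for Point-in-Time Recovery and read replicas.
  rds.create_db_instance(
      DBInstanceIdentifier="app-db",        # hypothetical identifier
      Engine="mysql",
      DBInstanceClass="db.t3.medium",
      AllocatedStorage=100,
      MasterUsername="admin",
      MasterUserPassword="REPLACE_ME",      # prefer Secrets Manager in practice
      MultiAZ=True,                         # synchronous standby in another AZ
      StorageEncrypted=True,                # KMS encryption, only at creation
      BackupRetentionPeriod=7,              # days of automated backups
  )

  # Read replica for scaling read-only traffic.
  rds.create_db_instance_read_replica(
      DBInstanceIdentifier="app-db-replica",
      SourceDBInstanceIdentifier="app-db",
  )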

Aurora

  • is a relational database engine that combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases
  • is a managed service and handles time-consuming tasks such as provisioning, patching, backup, recovery, failure detection and repair
  • is a proprietary technology from AWS (not open sourced)
  • provides PostgreSQL and MySQL compatibility
  • is “AWS cloud optimized” and claims a 5x performance improvement over MySQL on RDS and over 3x the performance of PostgreSQL on RDS
  • scales storage automatically in increments of 10GB, up to 64 TB with no impact to database performance. Storage is striped across 100s of volumes.
  • no need to provision storage in advance.
  • provides self-healing storage. Data blocks and disks are continuously scanned for errors and repaired automatically.
  • provides instantaneous failover
  • replicates each chunk of the database volume six ways across three Availability Zones, i.e. 6 copies of the data across 3 AZs
    • requires 4 out of 6 copies for a write quorum
    • requires 3 out of 6 copies for a read quorum
  • costs more than RDS (20% more) – but is more efficient
  • Read Replicas
    • can have 15 replicas while MySQL has 5, and the replication process is faster (sub 10 ms replica lag)
    • share the same data volume as the primary instance in the same AWS Region, so there is virtually no replication lag
    • supports automated failover for the master in less than 30 seconds
    • supports Cross Region Replication using either physical or logical replication.
  • Security
    • supports Encryption at rest using KMS
    • supports Encryption in flight using SSL (same process as MySQL or Postgres)
    • Automated backups, snapshots and replicas are also encrypted
    • supports authentication using IAM tokens (same method as RDS)
    • supports protecting the instance with security groups
    • does not support SSH access to the underlying servers
  • Aurora Serverless
    • provides automated database instantiation and on-demand autoscaling based on actual usage
    • provides a relatively simple, cost-effective option for infrequent, intermittent, or unpredictable workloads
    • automatically starts up, shuts down, and scales capacity up or down based on the application’s needs. No capacity planning needed
    • Pay per second, can be more cost-effective
  • Aurora Global Database
    • allows a single Aurora database to span multiple AWS regions.
    • provides Physical replication, which uses dedicated infrastructure that leaves the databases entirely available to serve the application
    • supports 1 Primary Region (read / write)
    • replicates across up to 5 secondary (read-only) regions, replication lag is less than 1 second
    • supports up to 16 Read Replicas per secondary region
    • recommended for low-latency global reads and disaster recovery with an RTO of < 1 minute
    • failover is not automated; if the primary region becomes unavailable, a secondary region can be manually removed from the Aurora Global Database and promoted to take full reads and writes. The application needs to be updated to point to the newly promoted region.
  • Aurora Backtrack
    • Backtracking “rewinds” the DB cluster to the specified time
    • Backtracking performs an in-place restore and does not create a new instance. There is minimal downtime associated with it.
  • Aurora Clone feature allows quick and cost-effective creation of Aurora Cluster duplicates
  • supports parallel or distributed query using Aurora Parallel Query, which refers to the ability to push down and distribute the computational load of a single query across thousands of CPUs in Aurora’s storage layer.
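
A sketch of how an Aurora cluster is provisioned with boto3; note that no storage size is specified, since the shared cluster volume scales automatically (identifiers and credentials are hypothetical):

  import boto3

  rds = boto3.client("rds")

  # The cluster owns the shared, auto-scaling storage volume.
  rds.create_db_cluster(
      DBClusterIdentifier="app-aurora",     # hypothetical identifier
      Engine="aurora-mysql",
      MasterUsername="admin",
      MasterUserPassword="REPLACE_ME",
      StorageEncrypted=True,
  )

  # Writer/reader instances attach to the cluster; they provision compute only.
  rds.create_db_instance(
      DBInstanceIdentifier="app-aurora-writer",
      DBClusterIdentifier="app-aurora",
      Engine="aurora-mysql",
      DBInstanceClass="db.r5.large",
  )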

DynamoDB

  • fully managed NoSQL database service
  • synchronously replicates data across three facilities in an AWS Region, giving high availability and data durability
  • runs exclusively on SSDs to provide high I/O performance
  • provides provisioned table reads and writes
  • automatically partitions, reallocates, and re-partitions the data and provisions additional server capacity as data or throughput changes
  • creates and maintains indexes for the primary key attributes for efficient access to data in the table
  • DynamoDB Table classes currently support
    • DynamoDB Standard table class is the default and is recommended for the vast majority of workloads.
    • DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA) table class which is optimized for tables where storage is the dominant cost.
  • supports Secondary Indexes
    • allows querying attributes other than the primary key attributes without impacting performance.
    • are automatically maintained as sparse objects
  • Local secondary index vs Global secondary index
    • shares partition key + different sort key vs different partition + sort key
    • search limited to the partition vs across all partitions
    • unique attributes vs non-unique attributes
    • linked to the base table vs independent separate index
    • only created during the base table creation vs can be created later
    • cannot be deleted after creation vs can be deleted
    • consumes provisioned throughput capacity of the base table vs independent throughput
    • returns all attributes for item vs only projected attributes
    • Eventually or Strongly vs Only Eventually consistent reads
    • size limited to 10GB per partition key value vs unlimited
  • DynamoDB Consistency
    • provides Eventually consistent (by default) or Strongly Consistent option to be specified during a read operation
    • supports Strongly consistent reads for a few operations like Query, GetItem, and BatchGetItem using the ConsistentRead parameter
  • DynamoDB Throughput Capacity
    • supports On-demand and Provisioned read/write capacity modes
    • Provisioned mode requires the number of reads and writes per second as required by the application to be specified
    • On-demand mode provides flexible billing option capable of serving thousands of requests per second without capacity planning
  • DynamoDB Auto Scaling helps dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns.
  • DynamoDB Adaptive capacity is a feature that enables DynamoDB to run imbalanced workloads indefinitely.
  • DynamoDB Global Tables provide multi-master, cross-region replication capability of DynamoDB to support data access locality and regional fault tolerance for database workloads.
  • DynamoDB Streams provides a time-ordered sequence of item-level changes made to data in a table
  • DynamoDB Time to Live (TTL)
    • enables a per-item timestamp to determine when an item expires
    • expired items are deleted from the table without consuming any write throughput.
  • DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from milliseconds to microseconds – even at millions of requests per second.
  • DynamoDB cross-region replication
    • allows identical copies (called replicas) of a DynamoDB table (called master table) to be maintained in one or more AWS regions.
    • implemented using DynamoDB Streams, which leverages a Kinesis-style interface and provides a time-ordered sequence of item-level changes, helping achieve lower RPO and lower RTO for disaster recovery
  • DynamoDB Triggers (just like database triggers) are a feature that allows the execution of custom actions based on item-level updates on a table.
  • VPC Gateway Endpoints provide private access to DynamoDB from within a VPC without the need for an internet gateway or NAT gateway.

ElastiCache

  • managed web service that provides in-memory caching to deploy and run Memcached or Redis protocol-compliant cache clusters
  • ElastiCache with Redis,
    • like RDS, supports Multi-AZ, Read Replicas and Snapshots
    • Read Replicas are created across AZ within same region using Redis’s asynchronous replication technology
    • Multi-AZ differs from RDS as there is no standby, but if the primary goes down a Read Replica is promoted as primary
    • unlike RDS, Read Replicas cannot span across regions
    • cannot be scaled out, and if scaled up, cannot be scaled down
    • allows snapshots for backup and restore
    • AOF can be enabled for recovery scenarios, to recover the data in case the node fails or service crashes. But it does not help in case the underlying hardware fails
    • enabling Redis Multi-AZ is a better approach to fault tolerance
  • ElastiCache with Memcached
    • can be scaled up by increasing size and scaled out by adding nodes
    • nodes can span across multiple AZs within the same region
    • cached data is spread across the nodes, and a node failure will always result in some data loss from the cluster
    • supports auto discovery
    • every node should be homogeneous and of the same instance type
  • ElastiCache Redis vs Memcached
    • complex data objects vs simple key value storage
    • persistent vs non persistent, pure caching
    • automatic failover with Multi-AZ vs Multi-AZ not supported
    • scaling using Read Replicas vs using multiple nodes
    • backup & restore supported vs not supported
  • can be used for state management to keep the web application stateless
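
A minimal sketch of the session-state use case above, using the redis-py client against a hypothetical ElastiCache Redis endpoint:

  import json
  import redis

  # Hypothetical ElastiCache Redis primary endpoint.
  r = redis.Redis(host="app-cache.example.use1.cache.amazonaws.com", port=6379)

  def save_session(session_id, data, ttl_seconds=3600):
      # Store the session with an expiry so stale sessions age out automatically.
      r.setex(f"session:{session_id}", ttl_seconds, json.dumps(data))

  def load_session(session_id):
      raw = r.get(f"session:{session_id}")
      return json.loads(raw) if raw else None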

Redshift

  • fully managed, fast and powerful, petabyte scale data warehouse service
  • uses replication and continuous backups to enhance availability and improve data durability and can automatically recover from node and component failures
  • provides Massive Parallel Processing (MPP) by distributing & parallelizing queries across multiple physical resources
  • uses columnar data storage, improving query performance and allowing advanced compression techniques
  • only supports Single-AZ deployments and the nodes are available within the same AZ, if the AZ supports Redshift clusters
  • spot instances are NOT an option
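
A minimal boto3 sketch of provisioning a cluster; the identifier, node type, and credentials are hypothetical:

  import boto3

  redshift = boto3.client("redshift")

  # Multi-node cluster; queries are parallelized (MPP) across the compute nodes.
  redshift.create_cluster(
      ClusterIdentifier="analytics-dw",     # hypothetical identifier
      NodeType="dc2.large",
      NumberOfNodes=2,
      DBName="analytics",
      MasterUsername="admin",
      MasterUserPassword="REPLACE_ME",
  )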

AWS DynamoDB

  • Amazon DynamoDB is a fully managed NoSQL database service that
    • makes it simple and cost-effective to store and retrieve any amount of data and serve any level of request traffic.
    • provides fast and predictable performance with seamless scalability
  • DynamoDB enables customers to offload the administrative burdens of operating and scaling distributed databases to AWS, without having to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling.
  • DynamoDB tables do not have fixed schemas, and the table consists of items and each item may have a different number of attributes.
  • DynamoDB synchronously replicates data across three facilities in an AWS Region, giving high availability and data durability.
  • DynamoDB supports fast in-place updates. A numeric attribute can be incremented or decremented in an item using a single API call.
  • DynamoDB uses proven cryptographic methods to securely authenticate users and prevent unauthorized data access.
  • Durability, performance, reliability, and security are built in, with SSD (solid state drive) storage and automatic 3-way replication.
  • DynamoDB supports two different kinds of primary keys:
    • Partition Key (previously called the Hash key)
      • A simple primary key, composed of one attribute
      • The partition key value is used as input to an internal hash function; the output from the hash function determines the partition where the item will be stored.
      • No two items in a table can have the same partition key value.
    • Partition Key and Sort Key (previously called the Hash and Range key)
      • A composite primary key is composed of two attributes. The first attribute is the partition key, and the second attribute is the sort key.
      • The partition key value is used as input to an internal hash function; the output from the hash function determines the partition where the item will be stored.
      • All items with the same partition key are stored together, in sorted order by sort key value.
      • The combination of the partition key and sort key must be unique.
      • It is possible for two items to have the same partition key value, but those two items must have different sort key values.
  • DynamoDB Table classes currently support
    • DynamoDB Standard table class is the default and is recommended for the vast majority of workloads.
    • DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA) table class which is optimized for tables where storage is the dominant cost.
  • DynamoDB Throughput Capacity determines the read/write capacity for processing reads and writes on the tables and it currently supports
    • Provisioned – maximum amount of capacity in terms of reads/writes per second that an application can consume from a table or index
    • On-demand – serves thousands of requests per second without capacity planning.
  • DynamoDB Secondary indexes
    • add flexibility to the queries, without impacting performance.
    • are automatically maintained as sparse objects; items appear in an index only if they exist in the table on which the index is defined, making queries against an index very efficient
  • DynamoDB throughput and single-digit millisecond latency make it a great fit for gaming, ad tech, mobile, and many other applications
  • ElastiCache or DAX can be used in front of DynamoDB to offload a high volume of reads for infrequently changed data
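
A sketch of defining the composite (partition + sort) primary key described above, using boto3 and a hypothetical sensor-data table:

  import boto3

  dynamodb = boto3.client("dynamodb")

  dynamodb.create_table(
      TableName="SensorData",                               # hypothetical table
      AttributeDefinitions=[
          {"AttributeName": "SensorId", "AttributeType": "S"},
          {"AttributeName": "Timestamp", "AttributeType": "N"},
      ],
      KeySchema=[
          {"AttributeName": "SensorId", "KeyType": "HASH"},    # partition key
          {"AttributeName": "Timestamp", "KeyType": "RANGE"},  # sort key
      ],
      BillingMode="PAY_PER_REQUEST",                        # on-demand capacity
  )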

DynamoDB Consistency

  • Each DynamoDB table is automatically stored in three geographically distributed locations for durability.
  • Read consistency represents the manner and timing in which the successful write or update of a data item is reflected in a subsequent read operation of that same item.
  • DynamoDB allows the user to specify whether the read should be eventually consistent or strongly consistent at the time of the request
    • Eventually Consistent Reads (Default)
      • Eventual consistency option maximizes the read throughput.
      • Consistency across all copies is usually reached within a second
      • However, an eventually consistent read might not reflect the results of a recently completed write.
      • Repeating a read after a short time should return the updated data.
      • DynamoDB uses eventually consistent reads, by default.
    • Strongly Consistent Reads
      • Strongly consistent read returns a result that reflects all writes that received a successful response prior to the read
      • Strongly consistent reads are 2x the cost of Eventually consistent reads
      • Strongly Consistent Reads come with disadvantages
        • A strongly consistent read might not be available if there is a network delay or outage. In this case, DynamoDB may return a server error (HTTP 500).
        • Strongly consistent reads may have higher latency than eventually consistent reads.
        • Strongly consistent reads are not supported on global secondary indexes.
        • Strongly consistent reads use more throughput capacity than eventually consistent reads.
  • Read operations (such as GetItem, Query, and Scan) provide a ConsistentRead parameter; if set to true, DynamoDB uses strongly consistent reads during the operation.
  • Query, GetItem, and BatchGetItem operations perform eventually consistent reads by default.
    • Query and GetItem operations can be forced to be strongly consistent
    • Query operations cannot perform strongly consistent reads on Global Secondary Indexes
    • BatchGetItem operations can be forced to be strongly consistent on a per-table basis
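
A sketch of the ConsistentRead parameter on a read operation, reusing the hypothetical SensorData table from the earlier example:

  import boto3

  dynamodb = boto3.client("dynamodb")

  resp = dynamodb.get_item(
      TableName="SensorData",
      Key={"SensorId": {"S": "sensor-42"}, "Timestamp": {"N": "1700000000"}},
      ConsistentRead=True,  # strongly consistent; omit (or False) for eventual
  )
  item = resp.get("Item")   # absent if no matching item exists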

DynamoDB Throughput Capacity

  • DynamoDB throughput capacity depends on the read/write capacity modes for processing reads and writes on the tables.
  • DynamoDB supports two types of read/write capacity modes:
    • Provisioned – maximum amount of capacity in terms of reads/writes per second that an application can consume from a table or index
    • On-demand – serves thousands of requests per second without capacity planning.
  • DynamoDB Auto Scaling helps dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns.
  • DynamoDB Adaptive capacity is a feature that enables DynamoDB to run imbalanced workloads indefinitely.

DynamoDB Secondary Indexes

  • DynamoDB Secondary indexes
    • add flexibility to the queries, without impacting performance.
    • are automatically maintained as sparse objects; items appear in an index only if they exist in the table on which the index is defined, making queries against an index very efficient
  • DynamoDB Secondary indexes on a table allow efficient access to data with attributes other than the primary key.
  • DynamoDB Secondary indexes support two types
    • Global secondary index – an index with a partition key and a sort key that can be different from those on the base table.
    • Local secondary index – an index that has the same partition key as the base table, but a different sort key.
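
Since a GSI (unlike an LSI) can be added after table creation, a sketch using UpdateTable on a hypothetical table; for a provisioned-mode table, the Create block would also need a ProvisionedThroughput setting:

  import boto3

  dynamodb = boto3.client("dynamodb")

  # Add a GSI whose partition and sort keys differ from the base table's.
  dynamodb.update_table(
      TableName="GameScores",                               # hypothetical table
      AttributeDefinitions=[
          {"AttributeName": "GameId", "AttributeType": "S"},
          {"AttributeName": "HighScore", "AttributeType": "N"},
      ],
      GlobalSecondaryIndexUpdates=[{
          "Create": {
              "IndexName": "GameId-HighScore-index",
              "KeySchema": [
                  {"AttributeName": "GameId", "KeyType": "HASH"},
                  {"AttributeName": "HighScore", "KeyType": "RANGE"},
              ],
              # Queries on the index return only the projected attributes.
              "Projection": {"ProjectionType": "KEYS_ONLY"},
          }
      }],
  )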

DynamoDB Secondary Indexes - GSI vs LSI

DynamoDB Advanced Topics

  • DynamoDB Secondary indexes on a table allow efficient access to data with attributes other than the primary key.
  • DynamoDB Time to Live – TTL enables a per-item timestamp to determine when an item is no longer needed.
  • DynamoDB cross-region replication allows identical copies (called replicas) of a DynamoDB table (called master table) to be maintained in one or more AWS regions.
  • DynamoDB Global Tables is a multi-master, cross-region replication capability of DynamoDB to support data access locality and regional fault tolerance for database workloads.
  • DynamoDB Streams provides a time-ordered sequence of item-level changes made to data in a table.
  • DynamoDB Triggers (just like database triggers) are a feature that allows the execution of custom actions based on item-level updates on a table.
  • DynamoDB Accelerator – DAX is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from ms to µs – even at millions of requests per second.
  • VPC Gateway Endpoints provide private access to DynamoDB from within a VPC without the need for an internet gateway or NAT gateway.
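
As an example of TTL from the list above, expiry is enabled per table by naming the attribute that holds an epoch-seconds timestamp; a sketch against the hypothetical SensorData table:

  import time
  import boto3

  dynamodb = boto3.client("dynamodb")

  # Point TTL at the attribute carrying the epoch-seconds expiry time.
  dynamodb.update_time_to_live(
      TableName="SensorData",
      TimeToLiveSpecification={"Enabled": True, "AttributeName": "ExpiresAt"},
  )

  # Items with ExpiresAt in the past are deleted without consuming write throughput.
  now = int(time.time())
  dynamodb.put_item(
      TableName="SensorData",
      Item={
          "SensorId": {"S": "sensor-42"},
          "Timestamp": {"N": str(now)},
          "ExpiresAt": {"N": str(now + 4 * 7 * 24 * 3600)},  # expire in 4 weeks
      },
  )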

DynamoDB Performance

  • Automatically scales horizontally
  • runs exclusively on Solid State Drives (SSDs).
    • SSDs help achieve the design goals of predictable low-latency response times for storing and accessing data at any scale.
    • SSDs’ high I/O performance enables DynamoDB to serve high-scale request workloads cost-efficiently and to pass this efficiency along in low request pricing.
  • allows provisioned table reads and writes
    • Scale up throughput when needed
    • Scale down throughput four times per UTC calendar day
  • automatically partitions, reallocates and re-partitions the data and provisions additional server capacity as the
    • table size grows or
    • provisioned throughput is increased
  • Global Secondary indexes (GSI)
    • can be created upfront or added later

DynamoDB Security

  • AWS handles basic security tasks like guest operating system (OS) and database patching, firewall configuration, and disaster recovery.
  • DynamoDB protects user data stored at rest and in transit between on-premises clients and DynamoDB, and between DynamoDB and other AWS resources within the same AWS Region.
  • Encryption at rest is enabled on all DynamoDB table data and cannot be disabled.
  • Encryption at rest includes the base tables, primary key, local and global secondary indexes, streams, global tables, backups, and DynamoDB Accelerator (DAX) clusters.
  • Fine-Grained Access Control (FGAC) gives a high degree of control over data in the table and helps control who (caller) can access which items or attributes of the table and perform what actions (read/write capability).
  • VPC Endpoints allow private connectivity from within a VPC only to DynamoDB.

Refer blog post @ DynamoDB Security

DynamoDB Costs

  • Index Storage
    • DynamoDB is an indexed data store
      • Billable Data = Raw byte data size + 100 byte per-item storage indexing overhead
  • Provisioned throughput
    • Pay a flat, hourly rate based on the capacity reserved as the throughput provisioned for the table
    • one Write Capacity Unit provides one write per second for items < 1KB in size.
    • one Read Capacity Unit provides one strongly consistent read (or two eventually consistent reads) per second for items < 4KB in size.
    • Provisioned throughput charges for every 10 units of Write Capacity and every 50 units of Read Capacity.
  • Reserved capacity
    • Significant savings over the normal price
    • Pay a one-time upfront fee
  • DynamoDB also charges for storage, backup, replication, streams, caching, data transfer out.

DynamoDB Best Practices

Refer blog post @ DynamoDB Best Practices

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Which of the following are use cases for Amazon DynamoDB? Choose 3 answers
    1. Storing BLOB data.
    2. Managing web sessions
    3. Storing JSON documents
    4. Storing metadata for Amazon S3 objects
    5. Running relational joins and complex updates.
    6. Storing large amounts of infrequently accessed data.
  2. You are configuring your company’s application to use Auto Scaling and need to move user state information. Which of the following AWS services provides a shared data store with durability and low latency?
    1. AWS ElastiCache Memcached (does not allow writes)
    2. Amazon Simple Storage Service (does not provide low latency)
    3. Amazon EC2 instance storage (not durable)
    4. Amazon DynamoDB
  3. Does DynamoDB support in-place atomic updates?
    1. It is not defined
    2. No
    3. Yes
    4. It does support in-place non-atomic updates
  4. What is the maximum write throughput I can provision for a single DynamoDB table?
    1. 1,000 write capacity units
    2. 100,000 write capacity units
    3. DynamoDB is designed to scale without limits, but if you go beyond 10,000 you have to contact AWS first
    4. 10,000 write capacity units
  5. For a DynamoDB table, what happens if the application performs more reads or writes than your provisioned capacity?
    1. Nothing
    2. requests above the provisioned capacity will be performed but you will receive 400 error codes.
    3. requests above the provisioned capacity will be performed but you will receive 200 error codes.
    4. requests above the provisioned capacity will be throttled and you will receive 400 error codes.
  6. In which of the following situations might you benefit from using DynamoDB? (Choose 2 answers)
    1. You need fully managed database to handle highly complex queries
    2. You need to deal with massive amount of “hot” data and require very low latency
    3. You need a rapid ingestion of clickstream in order to collect data about user behavior
    4. Your on-premises data center runs Oracle database, and you need to host a backup in AWS cloud
  7. You are designing a file-sharing service. This service will have millions of files in it. Revenue for the service will come from fees based on how much storage a user is using. You also want to store metadata on each file, such as title, description and whether the object is public or private. How do you achieve all of these goals in a way that is economical and can scale to millions of users? [PROFESSIONAL]
    1. Store all files in Amazon Simple Storage Service (S3). Create a bucket for each user. Store metadata in the filename of each object, and access it with LIST commands against the S3 API. (expensive and slow as it returns only 1000 items at a time)
    2. Store all files in Amazon S3. Create Amazon DynamoDB tables for the corresponding key-value pairs on the associated metadata, when objects are uploaded.
    3. Create a striped set of 4000 IOPS Elastic Block Store (EBS) volumes to store the data. Use a database running in Amazon Relational Database Service (RDS) to store the metadata. (not economical with volumes)
    4. Create a striped set of 4000 IOPS Elastic Block Store (EBS) volumes to store the data. Create Amazon DynamoDB tables for the corresponding key-value pairs on the associated metadata, when objects are uploaded. (not economical with volumes)
  8. A utility company is building an application that stores data coming from more than 10,000 sensors. Each sensor has a unique ID and will send a datapoint (approximately 1KB) every 10 minutes throughout the day. Each datapoint contains the information coming from the sensor as well as a timestamp. This company would like to query information coming from a particular sensor for the past week very rapidly and want to delete all the data that is older than 4 weeks. Using Amazon DynamoDB for its scalability and rapidity, how do you implement this in the most cost effective way? [PROFESSIONAL]
    1. One table, with a primary key that is the sensor ID and a hash key that is the timestamp (Single table impacts performance)
    2. One table, with a primary key that is the concatenation of the sensor ID and timestamp (Single table and concatenation impacts performance)
    3. One table for each week, with a primary key that is the concatenation of the sensor ID and timestamp (Concatenation will cause queries would be slower, if at all)
    4. One table for each week, with a primary key that is the sensor ID and a hash key that is the timestamp (Composite key with Sensor ID and timestamp would help for faster queries)
  9. You have recently joined a startup company building sensors to measure street noise and air quality in urban areas. The company has been running a pilot deployment of around 100 sensors for 3 months. Each sensor uploads 1KB of sensor data every minute to a backend hosted on AWS. During the pilot, you measured a peak of 10 IOPS on the database, and you stored an average of 3GB of sensor data per month in the database. The current deployment consists of a load-balanced, auto-scaled ingestion layer using EC2 instances and a PostgreSQL RDS database with 500GB standard storage. The pilot is considered a success and your CEO has managed to get the attention of some potential investors. The business plan requires a deployment of at least 100K sensors, which needs to be supported by the backend. You also need to store sensor data for at least two years to be able to compare year-over-year improvements. To secure funding, you have to make sure that the platform meets these requirements and leaves room for further scaling. Which setup will meet the requirements? [PROFESSIONAL]
    1. Add an SQS queue to the ingestion layer to buffer writes to the RDS instance (RDS instance will not support data for 2 years)
    2. Ingest data into a DynamoDB table and move old data to a Redshift cluster (Handle 10K IOPS ingestion and store data into Redshift for analysis)
    3. Replace the RDS instance with a 6 node Redshift cluster with 96TB of storage (Does not handle the ingestion issue)
    4. Keep the current architecture but upgrade RDS storage to 3TB and 10K provisioned IOPS (RDS instance will not support data for 2 years)
  10. Does Amazon DynamoDB support both increment and decrement atomic operations?
    1. No, neither increment nor decrement operations.
    2. Only increment, since decrement are inherently impossible with DynamoDB’s data model.
    3. Only decrement, since increment are inherently impossible with DynamoDB’s data model.
    4. Yes, both increment and decrement operations.
  11. What is the data model of DynamoDB?
    1. “Items”, with Keys and one or more Attribute; and “Attribute”, with Name and Value.
    2. “Database”, which is a set of “Tables”, which is a set of “Items”, which is a set of “Attributes”.
    3. “Table”, a collection of Items; “Items”, with Keys and one or more Attribute; and “Attribute”, with Name and Value.
    4. “Database”, a collection of Tables; “Tables”, with Keys and one or more Attribute; and “Attribute”, with Name and Value.
  12. In regard to DynamoDB, for which one of the following parameters does Amazon not charge you?
    1. Cost per provisioned write units
    2. Cost per provisioned read units
    3. Storage cost
    4. I/O usage within the same Region
  13. Which statements about DynamoDB are true? Choose 2 answers.
    1. DynamoDB uses a pessimistic locking model
    2. DynamoDB uses optimistic concurrency control
    3. DynamoDB uses conditional writes for consistency
    4. DynamoDB restricts item access during reads
    5. DynamoDB restricts item access during writes
  14. Which of the following is an example of a good DynamoDB hash key schema for provisioned throughput efficiency?
    1. User ID, where the application has many different users.
    2. Status Code where most status codes are the same.
    3. Device ID, where one is by far more popular than all the others.
    4. Game Type, where there are three possible game types.
  15. You are inserting 1000 new items every second in a DynamoDB table. Once an hour these items are analyzed and then are no longer needed. You need to minimize provisioned throughput, storage, and API calls. Given these requirements, what is the most efficient way to manage these Items after the analysis?
    1. Retain the items in a single table
    2. Delete items individually over a 24 hour period
    3. Delete the table and create a new table per hour
    4. Create a new table per hour
  16. When using a large Scan operation in DynamoDB, what technique can be used to minimize the impact of a scan on a table’s provisioned throughput?
    1. Set a smaller page size for the scan (Refer link)
    2. Use parallel scans
    3. Define a range index on the table
    4. Prewarm the table by updating all items
  17. In regard to DynamoDB, which of the following statements is correct?
    1. An Item should have at least two value sets, a primary key and another attribute.
    2. An Item can have more than one attribute
    3. A primary key should be single-valued.
    4. An attribute can have one or several other attributes.
  18. Which one of the following statements is NOT an advantage of DynamoDB being built on Solid State Drives?
    1. serve high-scale request workloads
    2. low request pricing
    3. high I/O performance of WebApp on EC2 instance (Not related to DynamoDB)
    4. low-latency response times
  19. Which one of the following operations is NOT a DynamoDB operation?
    1. BatchWriteItem
    2. DescribeTable
    3. BatchGetItem
    4. BatchDeleteItem (DeleteItem deletes a single item in a table by primary key, but BatchDeleteItem doesn’t exist)
  20. What item operation allows the retrieval of multiple items from a DynamoDB table in a single API call?
    1. GetItem
    2. BatchGetItem
    3. GetMultipleItems
    4. GetItemRange
  21. An application stores payroll information nightly in DynamoDB for a large number of employees across hundreds of offices. Item attributes consist of individual name, office identifier, and cumulative daily hours. Managers run reports for ranges of names working in their office. One query is: “Return all Items in this office for names starting with A through E”. Which table configuration will result in the lowest impact on provisioned throughput for this query? [PROFESSIONAL]
    1. Configure the table to have a hash index on the name attribute, and a range index on the office identifier
    2. Configure the table to have a range index on the name attribute, and a hash index on the office identifier
    3. Configure a hash index on the name attribute and no range index
    4. Configure a hash index on the office Identifier attribute and no range index
  22. You need to migrate 10 million records in one hour into DynamoDB. All records are 1.5KB in size. The data is evenly distributed across the partition key. How many write capacity units should you provision during this batch load?
    1. 6667
    2. 4166
    3. 5556 ( 2 write units (1 for each 1KB) * 10 million/3600 secs, refer link)
    4. 2778
  23. A meteorological system monitors 600 temperature gauges, obtaining temperature samples every minute and saving each sample to a DynamoDB table. Each sample involves writing 1K of data and the writes are evenly distributed over time. How much write throughput is required for the target table?
    1. 1 write capacity unit
    2. 10 write capacity units ( 1 write unit for 1K * 600 gauges/60 secs)
    3. 60 write capacity units
    4. 600 write capacity units
    5. 3600 write capacity units
  24. You are building a game high score table in DynamoDB. You will store each user’s highest score for each game, with many games, all of which have relatively similar usage levels and numbers of players. You need to be able to look up the highest score for any game. What’s the best DynamoDB key structure?
    1. HighestScore as the hash / only key.
    2. GameID as the hash key, HighestScore as the range key. (hash (partition) key should be the GameID, and there should be a range key for ordering HighestScore. Refer link)
    3. GameID as the hash / only key.
    4. GameID as the range / only key.
  25. You are experiencing performance issues writing to a DynamoDB table. Your system tracks high scores for video games on a marketplace. Your most popular game experiences all of the performance issues. What is the most likely problem?
    1. DynamoDB’s vector clock is out of sync, because of the rapid growth in request for the most popular game.
    2. You selected the Game ID or equivalent identifier as the primary partition key for the table. (Refer link)
    3. Users of the most popular video game each perform more read and write requests than average.
    4. You did not provision enough read or write throughput to the table.
  26. You are writing to a DynamoDB table and receive the following exception: “ProvisionedThroughputExceededException”. Though according to your CloudWatch metrics for the table, you are not exceeding your provisioned throughput. What could be an explanation for this?
    1. You haven’t provisioned enough DynamoDB storage instances
    2. You’re exceeding your capacity on a particular Range Key
    3. You’re exceeding your capacity on a particular Hash Key (Hash key determines the partition and hence the performance)
    4. You’re exceeding your capacity on a particular Sort Key
    5. You haven’t configured DynamoDB Auto Scaling triggers
  27. Your company sells consumer devices and needs to record the first activation of all sold devices. Devices are not activated until the information is written on a persistent database. Activation data is very important for your company and must be analyzed daily with a MapReduce job. The execution time of the data analysis process must be less than three hours per day. Devices are usually sold evenly during the year, but when a new device model is out, there is a predictable peak in activations, that is, for a few days there are 10 times or even 100 times more activations than on an average day. Which of the following databases and analysis framework would you implement to better optimize costs and performance for this workload? [PROFESSIONAL]
    1. Amazon RDS and Amazon Elastic MapReduce with Spot instances.
    2. Amazon DynamoDB and Amazon Elastic MapReduce with Spot instances.
    3. Amazon RDS and Amazon Elastic MapReduce with Reserved instances.
    4. Amazon DynamoDB and Amazon Elastic MapReduce with Reserved instances

AWS DynamoDB Throughput Capacity

  • AWS DynamoDB throughput capacity depends on the read/write capacity modes for processing reads and writes on the tables.
  • DynamoDB supports two types of read/write capacity modes:
    • Provisioned – maximum amount of capacity in terms of reads/writes per second that an application can consume from a table or index
    • On-demand – serves thousands of requests per second without capacity planning.
  • DynamoDB Auto Scaling helps dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns.
  • DynamoDB Burst capacity provides some flexibility in the per-partition throughput provisioning by providing burst capacity.
  • DynamoDB Adaptive capacity is a feature that enables DynamoDB to run imbalanced workloads indefinitely.

NOTE – Provisioned mode is covered in the AWS Certified Developer – Associate exam (DVA-C01), especially the calculations. On-demand capacity mode is a later enhancement and does not yet feature in the exams.

Provisioned Mode

  • Provisioned mode requires you to specify the number of reads and writes per second as required by the application
  • Provisioned throughput is the maximum amount of capacity that an application can consume from a table or index
  • If the provisioned throughput capacity on a table or index is exceeded, it is subject to request throttling
  • Provisioned mode provides the following capacity units 
    • Read Capacity Units (RCU)
      • Total number of read capacity units required depends on the item size, and the consistent read model (eventually or strongly)
      • one RCU represents
        • two eventually consistent reads per second, for an item up to 4 KB in size, i.e. 8 KB/sec
        • one strongly consistent read per second for an item up to 4 KB in size, i.e. 2x the cost of eventually consistent reads
        • Transactional read requests require two read capacity units to perform one read per second for items up to 4 KB, i.e. 2x the cost of strongly consistent reads
      • DynamoDB must consume additional read capacity units for items greater than 4 KB, e.g. an 8 KB item would require 2 read capacity units to sustain one strongly consistent read per second, 1 read capacity unit for an eventually consistent read, or 4 read capacity units for a transactional read request
      • Item size is rounded up to the next 4 KB equivalent, e.g. a 6 KB and an 8 KB item would require the same RCUs
    • Write Capacity Units (WCU)
      • Total number of write capacity units required depends on the item size only
      • one write per second for an item up to 1 KB in size
      • Transactional write requests require 2 write capacity units to perform one write per second for items up to 1 KB, i.e. 2x the cost of a standard write
      • DynamoDB must consume additional write capacity units for items greater than 1 KB, e.g. for a 2 KB item, 2 write capacity units would be required to sustain one write request per second, or 4 write capacity units for a transactional write request
      • Item size is rounded up to the next 1 KB equivalent, e.g. a 0.5 KB and a 1 KB item would need the same WCU
  • Provisioned capacity mode might be best for use cases where you
    • Have predictable application traffic
    • Run applications whose traffic is consistent or ramps gradually
    • Can forecast capacity requirements to control costs

Provisioned Mode Examples

  • DynamoDB table with provisioned capacity of 10 RCUs and 10 WCUs can support
    • Read throughput
      • Eventual consistency = 4KB * 10 * 2 = 80KB/sec
      • Strong consistency = 4KB * 10 = 40KB/sec
      • Transactional consistency = 4KB * 10 * 1/2 = 20KB/sec
    • Write throughput
      • Eventual and Strong consistency = 10 * 1KB = 10KB/sec
      • Transactional consistency = 10 * 1KB * 1/2 = 5KB/sec
  • Capacity units required for reading and writing 15KB item
    • Read capacity units – 15KB rounded to 4 blocks of 4KB = 4 RCUs
      • Eventual consistency 4 RCUs * 1/2 = 2 RCUs
      • Strong consistency 4 RCUs * 1 = 4 RCUs
      • Transactional consistency 4 RCUs * 2 = 8 RCUs
    • Write capacity units 15KB = 15 WCUs
      • Eventual and Strong consistency 15 WCUs * 1 = 15 WCUs
      • Transactional consistency 15 WCUs * 2 = 30 WCUs
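
The rounding rules behind these numbers can be captured in a small helper; this is a sketch of the arithmetic only, not an AWS API:

  import math

  def rcus_per_read(item_size_kb, consistency="strong"):
      # Reads are billed in 4 KB blocks, rounded up.
      blocks = math.ceil(item_size_kb / 4)
      factor = {"eventual": 0.5, "strong": 1, "transactional": 2}[consistency]
      return math.ceil(blocks * factor)

  def wcus_per_write(item_size_kb, transactional=False):
      # Writes are billed in 1 KB blocks, rounded up.
      blocks = math.ceil(item_size_kb)
      return blocks * (2 if transactional else 1)

  # 15 KB item: eventual = 2, strong = 4, transactional = 8 RCUs
  print([rcus_per_read(15, c) for c in ("eventual", "strong", "transactional")])
  # 15 KB item: standard = 15, transactional = 30 WCUs
  print(wcus_per_write(15), wcus_per_write(15, transactional=True))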

On-demand Mode

  • On-demand mode provides a flexible billing option capable of serving thousands of requests per second without capacity planning.
  • No need to specify the expected read and write throughput.
  • Charged for only the reads and writes that the application performs on the tables in terms of read request units and write request units.
  • Offers pay-per-request pricing for read and write requests so that you pay only for what you use.
  • DynamoDB adapts rapidly to accommodate the changing load.
  • DynamoDB on-demand uses request units, which are similar to provisioned capacity units.
  • On-demand mode does not support reserved capacity.
  • On-demand capacity mode might be best for use cases where you
    • Create new tables with unknown workloads
    • Have unpredictable application traffic
    • Prefer the ease of paying for only what you use
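
A sketch of switching an existing table to on-demand mode (AWS currently limits how often the billing mode can be switched, roughly once per 24 hours):

  import boto3

  dynamodb = boto3.client("dynamodb")

  # Switch a hypothetical table from provisioned to on-demand billing.
  dynamodb.update_table(
      TableName="SensorData",
      BillingMode="PAY_PER_REQUEST",
  )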

DynamoDB Throttling

  • DynamoDB distributes the data across partitions, and the provisioned throughput capacity is distributed equally across these partitions; these are physical partitions, not logical partitions based on the primary key.
  • Each partition on a DynamoDB table is subject to a hard limit of 1,000 write capacity units and 3,000 read capacity units.
  • DynamoDB would throttle requests
    • If the workload is unevenly distributed across partitions, or if the workload relies on short periods of time with high usage (a burst of read or write activity), the table might be throttled.
    • When data access is imbalanced, a hot partition can receive a higher volume of read and write traffic compared to other partitions leading to throttling errors on that partition.
    • If the write throughput capacity on a GSI is not sufficient, it would lead to throttling on the GSI, and this would also affect the base table.
  • To avoid and handle throttling issues, you can
    • Distribute read and write operations as evenly as possible across your table. A hot partition can degrade the overall performance of your table.
    • Implement a caching solution. If the workload is mostly read access to static data, then query results can be delivered much faster if the data is in a well‑designed cache rather than in a database. DynamoDB Accelerator (DAX) is a caching service that offers fast in‑memory performance for your application. ElastiCache can be used as well.
    • Implement error retries and exponential backoff. Exponential backoff can improve an application’s reliability by using progressively longer waits between retries. If using an AWS SDK, this logic is built‑in.
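
The retry-with-exponential-backoff advice above is built into the AWS SDKs and can be tuned through the client configuration; a boto3 sketch:

  import boto3
  from botocore.config import Config

  # "adaptive" retry mode adds client-side rate limiting on top of the
  # standard exponential backoff with jitter.
  config = Config(retries={"max_attempts": 10, "mode": "adaptive"})
  dynamodb = boto3.client("dynamodb", config=config)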

DynamoDB Burst Capacity

  • DynamoDB provides some flexibility in the per-partition throughput provisioning by providing burst capacity.
  • If a partition’s throughput is not fully used, DynamoDB reserves a portion of that unused capacity for later bursts of throughput to handle usage spikes.
  • DynamoDB currently retains up to 5 minutes (300 seconds) of unused read and write capacity.
  • During an occasional burst of read or write activity, these extra capacity units can be consumed quickly – even faster than the per-second provisioned throughput capacity that you’ve defined for your table.
  • DynamoDB can also consume burst capacity for background maintenance and other tasks without prior notice.

DynamoDB Adaptive Capacity

  • DynamoDB Adaptive capacity is a feature that enables DynamoDB to run imbalanced workloads indefinitely.
  • Adaptive capacity enables the application to continue read/write to hot partitions without being throttled, provided that traffic does not exceed the table’s total provisioned capacity or the partition’s maximum capacity.
  • It minimizes throttling due to throughput exceptions.
  • It also helps reduce costs by enabling the provisioning of only the needed throughput capacity.
  • Adaptive capacity is enabled automatically for every DynamoDB table, at no additional cost.

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. You need to migrate 10 million records in one hour into DynamoDB. All records are 1.5KB in size. The data is evenly distributed across the partition key. How many write capacity units should you provision during this batch load?
    1. 6667
    2. 4166
    3. 5556 ( 2 write units (1 for each 1KB) * 10 million/3600 secs)
    4. 2778
  2. A meteorological system monitors 600 temperature gauges, obtaining temperature samples every minute and saving each sample to a DynamoDB table. Each sample involves writing 1K of data and the writes are evenly distributed over time. How much write throughput is required for the target table?
    1. 1 write capacity unit
    2. 10 write capacity units ( 1 write unit for 1K * 600 gauges/60 secs)
    3. 60 write capacity units
    4. 600 write capacity units
    5. 3600 write capacity units
  3. A company is building a system to collect sensor data from its 36000 trucks, which is stored in DynamoDB. The trucks emit 1KB of data once every hour. How much write throughput is required for the target table. Choose an answer from the options below
    1. 10
    2. 60
    3. 600
    4. 150
  4. A company is using DynamoDB to design storage for their IOT project to store sensor data. Which combination would give the highest throughput?
    1. 5 Eventual Consistent reads capacity with Item Size of 4KB (40KB/s)
    2. 15 Eventual Consistent reads capacity with Item Size of 1KB (30KB/s)
    3. 5 Strongly Consistent reads capacity with Item Size of 4KB (20KB/s)
    4. 15 Strongly Consistent reads capacity with Item Size of 1KB (15KB/s)
  5. If your table item’s size is 3KB and you want to have 90 strongly consistent reads per second, how many read capacity units will you need to provision on the table? Choose the correct answer from the options below
    1. 90
    2. 45
    3. 10
    4. 19

AWS EC2 Dedicated Host vs Dedicated Instances

EC2 Dedicated Host vs Dedicated Instances

  • Each instance launched into a VPC has a tenancy attribute.
    • default
      • is the default option
      • instances run on shared hardware.
      • all instances launched would be shared, unless you explicitly specify a different tenancy during the instance launch.
    • dedicated
      • instance runs on single-tenant hardware.
      • all instances launched would be dedicated
      • can’t be changed to default after creation
    • host
      • instance runs on a Dedicated Host, which is an isolated server with configurations that you can control.
  • default tenancy can’t be changed to dedicated or host, and vice versa
  • dedicated tenancy can be changed to host and vice versa, but only for a stopped instance after launch; the change takes effect the next time the instance starts
  • Dedicated Hosts and Dedicated Instances can both be used to launch EC2 instances onto physical servers that are dedicated for your use.
  • There are no performance, security, or physical differences between Dedicated Instances and instances on Dedicated Hosts.

Dedicated Host vs Dedicated Instances

Dedicated Hosts

  • EC2 Dedicated Host is a physical server with EC2 instance capacity fully dedicated to your use.
  • provides Affinity that allows you to specify which Dedicated Host an instance will run on after it has been stopped and restarted.
  • Dedicated Hosts provide visibility and the option to control how you place your instances on a specific, physical server. This enables you to deploy instances using configurations that help address corporate compliance and regulatory requirements.
  • Dedicated Hosts allow using existing per-socket, per-core, or per-VM software licenses, including Windows Server, Microsoft SQL Server, SUSE, and Linux Enterprise Server.
  • Dedicated Host is also integrated with AWS License Manager, a service that helps you manage your software licenses, including Microsoft Windows Server and Microsoft SQL Server licenses.
  • RDS instances are not supported.
  • Dedicated Hosts cannot be launched in placement groups

Dedicated Instances

  • Dedicated Instances are EC2 instances that run in a VPC on hardware that’s dedicated to a single customer
  • Dedicated Instances are physically isolated at the host hardware level from the instances that aren’t Dedicated Instances and from instances that belong to other AWS accounts.
  • Dedicated Instances can be launched using
    • Create the VPC with the instance tenancy set to dedicated; all instances launched into this VPC are Dedicated Instances even if you mark the tenancy as shared.
    • Create the VPC with the instance tenancy set to default, and specify dedicated tenancy for any instances that should be Dedicated Instances when launched.
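
A sketch of the second option above, specifying dedicated tenancy for a single instance at launch (the AMI ID is a hypothetical placeholder):

  import boto3

  ec2 = boto3.client("ec2")

  ec2.run_instances(
      ImageId="ami-0123456789abcdef0",      # hypothetical AMI
      InstanceType="m5.large",
      MinCount=1,
      MaxCount=1,
      Placement={"Tenancy": "dedicated"},   # "host" (plus HostId) targets a Dedicated Host
  )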

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. A company wants its instances to run on single-tenant hardware with dedicated hardware for compliance reasons. Which value should they have to set the instance’s tenancy attribute to?
    1. Dedicated
    2. Isolated
    3. Default
    4. Reserved
  2. A company is performing migration from on-premises to AWS cloud. They have a compliance requirement for application hosting on physical servers to be able to use existing server-bound software licenses. Which AWS EC2 purchase type would help fulfill the requirement?
    1. Spot instances
    2. Reserved instances
    3. On-demand instances
    4. Dedicated Hosts

References

EC2_Dedicated_Hosts_vs_Dedicated_Instances

AWS DynamoDB Security

DynamoDB Security

  • DynamoDB provides a highly durable storage infrastructure for mission-critical and primary data storage.
  • Data is redundantly stored on multiple devices across multiple facilities in a DynamoDB Region.
  • AWS handles basic security tasks like guest operating system (OS) and database patching, firewall configuration, and disaster recovery.
  • DynamoDB protects user data stored at rest and in transit between on-premises clients and DynamoDB, and between DynamoDB and other AWS resources within the same AWS Region.
  • Fine-Grained Access Control (FGAC) gives a high degree of control over data in the table.
  • FGAC helps control who (caller) can access which items or attributes of the table and perform what actions (read/write capability).
  • FGAC is integrated with IAM, which manages the security credentials and the associated permissions.
  • VPC Endpoints allow private connectivity from within a VPC only to DynamoDB.
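
A sketch of an FGAC-style IAM policy, expressed here as a Python dict; the table ARN, account ID, and identity variable are illustrative assumptions:

  # Allow callers to access only items whose partition key equals their own
  # Cognito identity (row-level access via the dynamodb:LeadingKeys condition).
  fgac_policy = {
      "Version": "2012-10-17",
      "Statement": [{
          "Effect": "Allow",
          "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:PutItem"],
          "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/GameScores",
          "Condition": {
              "ForAllValues:StringEquals": {
                  "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
              }
          },
      }],
  }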

DynamoDB Encryption

  • DynamoDB Security supports both encryption at rest and in transit.

Encryption in Transit

  • DynamoDB Data in Transit encryption can be done by encrypting sensitive data on the client side or using encrypted connections (TLS).
  • DAX supports encryption in transit, ensuring that all requests and responses between the application and the cluster are encrypted by transport level security (TLS), and connections to the cluster can be authenticated by verification of a cluster x509 certificate.
  • All the data in DynamoDB is encrypted in transit
  • communications to and from DynamoDB use the HTTPS protocol, which protects network traffic using SSL/TLS encryption
  • Data can also be protected using client-side encryption

Encryption at Rest

  • Encryption at rest enables encryption for the data persisted (data at rest) in the DynamoDB tables.
  • Encryption at rest includes the base tables, primary key, local and global secondary indexes, streams, global tables, backups, and DynamoDB Accelerator (DAX) clusters.
  • Encryption at rest is enabled on all DynamoDB table data and cannot be disabled.
  • Encryption at rest automatically integrates with AWS KMS for managing the keys used for encrypting the tables.
  • Encryption at rest also supports the following KMS keys
    • AWS owned CMK – Default encryption type. The key is owned by DynamoDB (no additional charge).
    • AWS managed CMK – the key is stored in your account and is managed by AWS KMS (AWS KMS charges apply).
    • Customer managed CMK – the key is stored in your account and is created, owned, and managed by you. You have full control over the KMS key (AWS KMS charges apply).
  • The encryption key type is chosen when creating a new table, and the encryption keys can be switched for an existing table (see the sketch after this list).
  • DynamoDB streams can be used with encrypted tables and are always encrypted with a table-level encryption key.
  • On-Demand Backups of encrypted DynamoDB tables are encrypted using S3’s Server-Side Encryption
  • Encryption at rest encrypts the data using 256-bit AES encryption.
  • DAX clusters cannot use customer-managed key encryption.
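
As a sketch of choosing a customer managed key at creation and switching it later with boto3 (the table name and key aliases are placeholders):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Create a table encrypted with a customer managed KMS key.
dynamodb.create_table(
    TableName="Orders",
    AttributeDefinitions=[{"AttributeName": "OrderId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "OrderId", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    SSESpecification={
        "Enabled": True,
        "SSEType": "KMS",
        "KMSMasterKeyId": "alias/orders-table-key",  # customer managed key
    },
)

# Switch the existing table to a different key without recreating it.
dynamodb.update_table(
    TableName="Orders",
    SSESpecification={
        "Enabled": True,
        "SSEType": "KMS",
        "KMSMasterKeyId": "alias/orders-table-key-v2",
    },
)
```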

DynamoDB Encryption Client

  • DynamoDB Encryption Client is a software library that helps protect the table data before sending it to DynamoDB.
  • Encrypting the sensitive data in transit and at rest helps ensure that the plaintext data isn’t available to any third party, including AWS.
  • helps achieve end-to-end data encryption.
  • encrypts the attribute values (the set of encrypted attributes can be controlled) but does not encrypt the entire table, the attribute names, or the primary key; a usage sketch follows this list.
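
A rough usage sketch, assuming the aws-dynamodb-encryption-python library (dynamodb_encryption_sdk); the table name, key alias, and attribute names are placeholders, and the exact API should be verified against the library’s documentation.

```python
import boto3
from dynamodb_encryption_sdk.encrypted.table import EncryptedTable
from dynamodb_encryption_sdk.identifiers import CryptoAction
from dynamodb_encryption_sdk.material_providers.aws_kms import (
    AwsKmsCryptographicMaterialsProvider,
)
from dynamodb_encryption_sdk.structures import AttributeActions

table = boto3.resource("dynamodb").Table("Patients")  # placeholder table

# Encryption materials are derived from a KMS key (placeholder alias).
materials = AwsKmsCryptographicMaterialsProvider(key_id="alias/patients-key")

# Encrypt and sign every attribute by default; leave one attribute in
# plaintext. The library signs, but never encrypts, the primary key.
actions = AttributeActions(
    default_action=CryptoAction.ENCRYPT_AND_SIGN,
    attribute_actions={"clinic_id": CryptoAction.DO_NOTHING},
)

encrypted_table = EncryptedTable(
    table=table, materials_provider=materials, attribute_actions=actions
)

# The item is encrypted client side before it leaves the application.
encrypted_table.put_item(Item={"patient_id": "p-100", "diagnosis": "..."})
```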

VPC Endpoints

  • By default, communications to and from DynamoDB use the HTTPS protocol, which protects network traffic by using SSL/TLS encryption.
  • VPC endpoint for DynamoDB enables EC2 instances in the VPC to use their private IP addresses to access DynamoDB with no exposure to the public internet.
  • Traffic between the VPC and the AWS service does not leave the Amazon network.
  • EC2 instances do not require public IP addresses, an internet gateway, a NAT device, or a virtual private gateway in the VPC.
  • VPC endpoint policies can be used to control access to DynamoDB (a creation sketch follows this list).
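
A boto3 sketch of creating the DynamoDB gateway endpoint with a restrictive endpoint policy; the region, VPC ID, route table ID, account ID, and table name are placeholders.

```python
import json

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Endpoint policy limiting the endpoint to read-only access on one table.
endpoint_policy = {
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:Scan"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Orders",
    }]
}

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",                      # DynamoDB uses gateway endpoints
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0abc1234"],                 # route DynamoDB traffic privately
    PolicyDocument=json.dumps(endpoint_policy),
)
```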

DynamoDB VPC Endpoint

DynamoDB Security Best Practices

  • DynamoDB encrypts at rest all user data stored in tables, indexes, streams, and backups using encryption keys stored in KMS.
  • DynamoDB can be configured to use an AWS owned key (default encryption type), an AWS managed key, or a customer managed key to encrypt user data.
  • Use IAM Roles to authenticate access to DynamoDB
  • Use VPC endpoints and endpoint policies to access DynamoDB
  • DynamoDB Encryption Client is a software library that helps in client-side encryption and protects the table data before you send it to DynamoDB.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion, and correction.
  1. What are the services supported by VPC endpoints, using the Gateway endpoint type?
    1. Amazon EFS
    2. Amazon DynamoDB
    3. Amazon Glacier
    4. Amazon SQS

References

AWS_DynamoDB_Security

AWS RDS Proxy

RDS Proxy

  • fully managed, highly available database proxy for RDS that makes applications more secure, more scalable, and more resilient to database failures.
  • allows apps to pool and share DB connections established with the database
  • improves database efficiency by reducing stress on the database resources (e.g. CPU, RAM) by minimizing open connections and creation of new connections.
  • is serverless and scales automatically to accommodate your workload.
  • is highly available and deployed across multiple Availability Zones.
  • increases resiliency to database failures by automatically connecting to a standby DB instance while preserving application connections.
  • reduces RDS and Aurora failover time by up to 66%.
  • protects the database against oversubscription by providing control over the number of database connections that are created.
  • queues or throttles application connections that can’t be served immediately from the pool of connections.
  • supports RDS (MySQL, PostgreSQL, MariaDB) and Aurora
  • is fully managed and there is no need to provision or manage any additional infrastructure.
  • requires no code changes for most apps; just point the application to the RDS Proxy endpoint instead of the RDS endpoint.
  • can enforce IAM authentication for the database and securely store credentials in AWS Secrets Manager (a provisioning sketch follows this list).
  • is never publicly accessible (must be accessed from within the VPC).
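
A provisioning sketch with boto3; every name and ARN below is a placeholder.

```python
import boto3

rds = boto3.client("rds")

# Create the proxy: credentials come from Secrets Manager, IAM auth is
# enforced, and TLS is required for client connections.
rds.create_db_proxy(
    DBProxyName="app-proxy",
    EngineFamily="MYSQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:db-creds",
        "IAMAuth": "REQUIRED",
    }],
    RoleArn="arn:aws:iam::123456789012:role/rds-proxy-role",  # allows reading the secret
    VpcSubnetIds=["subnet-0abc1234", "subnet-0def5678"],
    RequireTLS=True,
)

# Register the RDS instance behind the proxy's default target group; the
# application then connects to the proxy endpoint instead of the DB endpoint.
rds.register_db_proxy_targets(
    DBProxyName="app-proxy",
    DBInstanceIdentifiers=["app-mysql-primary"],
)
```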

RDS Proxy

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion, and correction.
  1. A company migrated one of its business-critical database workloads to an Amazon Aurora Multi-AZ DB cluster. The company requires a very low RTO and needs to improve the application recovery time after database failover. Which approach meets these requirements?
    1. Set the max_connections parameter to 16,000 in the instance-level parameter group.
    2. Modify the client connection timeout to 300 seconds.
    3. Create an Amazon RDS Proxy database proxy and update client connections to point to the proxy endpoint.
    4. Enable the query cache at the instance level.
  2. A company is running a serverless application on AWS Lambda that stores data in an Amazon RDS for MySQL DB instance. Usage has steadily increased, and recently there have been numerous “too many connections” errors when the Lambda function attempts to connect to the database. The company already has configured the database to use the maximum max_connections value that is possible. What should a SysOps administrator do to resolve these errors?
    1. Create a read replica of the database. Use Amazon Route 53 to create a weighted DNS record that contains both databases.
    2. Use Amazon RDS Proxy to create a proxy. Update the connection string in the Lambda function.
    3. Increase the value in the max_connect_errors parameter in the parameter group that the database uses.
    4. Update the Lambda function’s reserved concurrency to a higher value.

References

Amazon_RDS_Proxy

AWS EC2 Troubleshooting

AWS EC2 Troubleshooting

An Instance Immediately Terminates

  • EBS volume limit was reached. It’s a soft limit and can be increased by submitting a support request.
  • EBS snapshot is corrupt.
  • Root EBS volume is encrypted and you do not have permission to access the KMS key for decryption.
  • Instance store-backed AMI used to launch the instance is missing a required part
  • Resolution (the instance’s state transition reason, queried in the sketch after this list, identifies the exact cause)
    • Delete unused volumes
    • Ensure proper permissions to access the KMS key used to decrypt the root volume
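
A quick boto3 sketch for reading the state transition reason of a terminated instance; the instance ID is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2")

resp = ec2.describe_instances(InstanceIds=["i-0abc123456789def0"])
instance = resp["Reservations"][0]["Instances"][0]

# Codes such as Client.VolumeLimitExceeded, or Client.InternalError when the
# encrypted root volume's KMS key is inaccessible, pinpoint the cause.
print(instance["State"]["Name"], instance.get("StateReason"))
```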

EC2 Instance Connectivity Issues

  • Error connecting to your instance: Connection timed out
    • Route table for the subnet does not have a route that sends all traffic destined outside the VPC to the Internet gateway for the VPC.
    • Security group does not allow inbound traffic from the public IP address on the proper port (a fix is sketched after this list).
    • ACL does not allow inbound traffic from, and outbound traffic to, the public IP address on the proper port.
    • Private key used to connect does not match the key that corresponds to the key pair selected for the instance during the launch.
    • Appropriate user name for the AMI is not used e.g. the user name for the Amazon Linux AMI is ec2-user, for the Ubuntu AMI it is ubuntu, for RHEL5 & SUSE Linux AMIs it can be either root or ec2-user, and for the Fedora AMI it can be fedora or ec2-user.
    • If connecting from a corporate network, the internal firewall does not allow inbound and outbound traffic on port 22 (for Linux instances) or port 3389 (for Windows instances).
    • Instance does not have the same public IP address, as it changes during stop/start; associate an Elastic IP address with the instance for a stable address.
    • CPU load on the instance is high; the server may be overloaded.
  • User key not recognized by the server
    • private key file used to connect has not been converted to the format as required by the server
  • Host key not found, Permission denied (publickey), or Authentication failed, permission denied
    • appropriate user name for the AMI is not used for connecting
    • proper private key file for the instance is not used
  • Unprotected Private Key File
    • private key file is not protected from read and write operations by other users; restrict its permissions (e.g. chmod 400 on Linux)
  • Server refused our key or No supported authentication methods available
    • appropriate user name for the AMI is not used for connecting
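
For the common security group cause above, a boto3 sketch that opens SSH from a single admin IP; the security group ID and CIDR are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Allow inbound SSH (TCP 22) only from one admin address.
ec2.authorize_security_group_ingress(
    GroupId="sg-0abc1234",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.10/32",
                      "Description": "admin SSH access"}],
    }],
)
```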

Failed Status Checks

  • System Status Check – Checks Physical Hosts
    • Lost Network connectivity
    • Loss of System power
    • Software issues on the physical host
    • Hardware issues on the physical host
    • Resolution
      • Amazon EBS-backed AMI instance – stop and restart the instance
      • Instance-store backed AMI – terminate the instance and launch a replacement.
  • Instance Status Check – Checks Instance or VM
    • Possible reasons
      • Misconfigured networking or startup configuration
      • Exhausted memory
      • Corrupted file system
      • Failed Amazon EBS volume or Physical drive
      • Incompatible kernel
    • Resolution
      • Reboot the instance, or make modifications to the operating system or volumes (a status-check remediation sketch follows this list)
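
A boto3 sketch that distinguishes the two checks and applies the matching remediation for an EBS-backed instance; the instance ID is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0abc123456789def0"  # placeholder

resp = ec2.describe_instance_status(
    InstanceIds=[instance_id],
    IncludeAllInstances=True,  # report status even for stopped instances
)
status = resp["InstanceStatuses"][0]
system_ok = status["SystemStatus"]["Status"] == "ok"      # physical host check
instance_ok = status["InstanceStatus"]["Status"] == "ok"  # VM-level check

if not system_ok:
    # EBS-backed: a stop/start migrates the instance to healthy hardware.
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
    ec2.start_instances(InstanceIds=[instance_id])
elif not instance_ok:
    # VM-level failures are often fixed by a reboot or OS/volume changes.
    ec2.reboot_instances(InstanceIds=[instance_id])
```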

Instance Capacity Issues

  • InsufficientInstanceCapacity
    • AWS does not currently have enough available capacity to service the request.
    • There is a limit to the number of instances of instance type that can be launched within a region.
    • Issue is mainly from the AWS side and it can be resolved by
      • reducing the request for the number of instances
      • changing the instance type
      • submitting a request without specifying the Availability Zone.
  • InstanceLimitExceeded
    • Concurrent running instance limit, default is 20, has been reached in a region.
    • Request an instance limit increase on a per-region basis (handling for both errors is sketched after this list)
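
A boto3 sketch of handling both errors at launch time; the AMI ID, instance types, and AZ are placeholders.

```python
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2")

def launch(instance_type, az=None):
    params = {
        "ImageId": "ami-0abc1234",  # placeholder AMI
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
    }
    if az:
        params["Placement"] = {"AvailabilityZone": az}
    return ec2.run_instances(**params)

try:
    launch("c5.24xlarge", az="us-east-1a")
except ClientError as err:
    code = err.response["Error"]["Code"]
    if code == "InsufficientInstanceCapacity":
        # Retry without pinning the AZ, or fall back to another type.
        launch("c5.18xlarge")
    elif code == "InstanceLimitExceeded":
        # Account limit reached; request a limit increase for the Region.
        raise
```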

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion, and correction.
  1. A user has launched an EC2 instance. The instance got terminated as soon as it was launched. Which of the below mentioned options is not a possible reason for this?
    1. The user account has reached the maximum EC2 instance limit (Refer link)
    2. The snapshot is corrupt
    3. The AMI is missing. It is the required part
    4. The user account has reached the maximum volume limit
  2. If you’re unable to connect via SSH to your EC2 instance, which of the following should you check and possibly correct to restore connectivity?
    1. Adjust Security Group to permit egress traffic over TCP port 443 from your IP.
    2. Configure the IAM role to permit changes to security group settings.
    3. Modify the instance security group to allow ingress of ICMP packets from your IP.
    4. Adjust the instance’s Security Group to permit ingress traffic over port 22 from your IP
    5. Apply the most recently released Operating System security patches.
  3. You try to connect via SSH to a newly created Amazon EC2 instance and get one of the following error messages: “Network error: Connection timed out” or “Error connecting to [instance], reason: -> Connection timed out: connect,” You have confirmed that the network and security group rules are configured correctly and the instance is passing status checks. What steps should you take to identify the source of the behavior? Choose 2 answers
    1. Verify that the private key file corresponds to the Amazon EC2 key pair assigned at launch.
    2. Verify that your IAM user policy has permission to launch Amazon EC2 instances. (there is no need for an IAM user; you just need the SSH keys)
    3. Verify that you are connecting with the appropriate user name for your AMI. (although it gives a different error, this seems the only other logical choice)
    4. Verify that the Amazon EC2 Instance was launched with the proper IAM role. (the role assigned to EC2 is irrelevant for SSH and only controls what AWS resources the EC2 instance can access)
    5. Verify that your federation trust to AWS has been established. (federation is for authenticating the user)
  4. A user has launched an EBS backed EC2 instance in the us-east-1a region. The user stopped the instance and started it back after 20 days. AWS throws up an ‘Insufficient Instance Capacity’ error. What can be the possible reason for this?
    1. AWS does not have sufficient capacity in that availability zone
    2. AWS zone mapping is changed for that user account
    3. There is some issue with the host capacity on which the instance is launched
    4. The user account has reached the maximum EC2 instance limit
  5. A user is trying to connect to a running EC2 instance using SSH. However, the user gets an Unprotected Private Key File error. Which of the below mentioned options can be a possible reason for rejection?
    1. The private key file has the wrong file permission
    2. The ppk file used for SSH is read only
    3. The public key file has the wrong permission
    4. The user has provided the wrong user name for the OS login
  6. A user has launched an EC2 instance. However, due to some reason the instance was terminated. If the user wants to find out the reason for termination, where can he find the details?
    1. It is not possible to find the details after the instance is terminated
    2. The user can get information from the AWS console, by checking the Instance description under the State transition reason label
    3. The user can get information from the AWS console, by checking the Instance description under the Instance Status Change reason label
    4. The user can get information from the AWS console, by checking the Instance description under the Instance Termination reason label
  7. You have a Linux EC2 web server instance running inside a VPC. The instance is in a public subnet and has an EIP associated with it so you can connect to it over the Internet via HTTP or SSH. The instance was also fully accessible when you last logged in via SSH and was also serving web requests on port 80. Now you are not able to SSH into the host nor does it respond to web requests on port 80, which were working fine the last time you checked. You have double-checked that all networking configuration parameters (security groups, route tables, IGW, EIP, NACLs, etc.) are properly configured (and you haven’t made any changes to those since you were last able to reach the instance). You look at the EC2 console and notice that the system status check shows “impaired.” Which should be your next step in troubleshooting and attempting to get the instance back to a healthy state so that you can log in again?
    1. Stop and start the instance so that it will be able to be redeployed on a healthy host system that most likely will fix the “impaired” system status (for an impaired system status check, you need stop/start for EBS-backed and terminate & relaunch for instance store-backed instances)
    2. Reboot your instance so that the operating system will have a chance to boot in a clean healthy state that most likely will fix the ‘impaired” system status
    3. Add another dynamic private IP address to the instance and try to connect via that new path, since the networking stack of the OS may be locked up causing the “impaired” system status.
    4. Add another Elastic Network Interface to the instance and try to connect via that new path since the networking stack of the OS may be locked up causing the “impaired” system status
    5. un-map and then re-map the EIP to the instance, since the IGW/NAT gateway may not be working properly, causing the “impaired” system status
  8. A user is trying to connect to a running EC2 instance using SSH. However, the user gets a connection time out error. Which of the below mentioned options is not a possible reason for rejection?
    1. The access key to connect to the instance is wrong (access key is different from ssh private key)
    2. The security group is not configured properly
    3. The private key used to launch the instance is not correct
    4. The instance CPU is heavily loaded
  9. A user is trying to connect to a running EC2 instance using SSH. However, the user gets a Host key not found error. Which of the below mentioned options is a possible reason for rejection?
    1. The user has provided the wrong user name for the OS login
    2. The instance CPU is heavily loaded
    3. The security group is not configured properly
    4. The access key to connect to the instance is wrong (access key is different from ssh private key)

AWS EC2 – Placement Groups

EC2 Placement Groups

  • EC2 Placement groups determine how the instances are placed on the underlying hardware.
  • AWS now provides three types of placement groups
    • Cluster – clusters instances into a low-latency group in a single AZ
    • Partition – spreads instances across logical partitions, ensuring that instances in one partition do not share underlying hardware with instances in other partitions
    • Spread – strictly places a small group of instances across distinct underlying hardware to reduce correlated failures

Cluster Placement Groups

  • is a logical grouping of instances within a single Availability Zone
  • don’t span across Availability Zones
  • can span peered VPCs in the same Region
  • impacts High Availability as it is susceptible to hardware failures for the application
  • recommended for
    • applications that benefit from low network latency, high network throughput, or both.
    • when the majority of the network traffic is between the instances in the group.
  • To provide the lowest latency, and the highest packet-per-second network performance for the placement group, choose an instance type that supports enhanced networking
  • recommended to launch all group instances with the same instance type at the same time to ensure enough capacity
  • instances can be added later, but there are chances of encountering an insufficient capacity error
  • for moving an instance into the placement group,
    • create an AMI from the existing instance,
    • and then launch a new instance from the AMI into a placement group.
  • an instance still runs in the same placement group if stopped and started within the placement group.
  • in case of a capacity error, stop and start all of the instances in the placement group, and try the launch again. Restarting the instances may migrate them to hardware that has capacity for all requested instances
  • is only available within a single AZ either in the same VPC or peered VPCs
  • is more of a hint to AWS that the instances need to be launched physically close to each other (a creation sketch follows this list)
  • enables applications to participate in a low-latency, 10 Gbps network.
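
A boto3 sketch of creating a cluster placement group and launching a homogeneous set of instances into it in a single request; the group name, AMI, and instance type are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

# Launch all instances of the same enhanced-networking type at once so
# that enough co-located capacity is acquired up front.
ec2.run_instances(
    ImageId="ami-0abc1234",
    InstanceType="c5n.18xlarge",
    MinCount=8,
    MaxCount=8,
    Placement={"GroupName": "hpc-cluster"},
)
```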

AWS EC2 Placement Group

Partition Placement Groups

  • is a group of instances spread across partitions i.e. groups of instances spread across racks.
  • Partitions are logical groupings of instances, where contained instances do not share the same underlying hardware across different partitions.
  • EC2 divides each group into logical segments called partitions.
  • EC2 ensures that each partition within a placement group has its own set of racks. Each rack has its own network and power source.
  • No two partitions within a placement group share the same racks, allowing isolating the impact of a hardware failure within the application.
  • reduces the likelihood of correlated hardware failures for the application.
  • can have partitions in multiple Availability Zones in the same region
  • can have a maximum of seven partitions per Availability Zone
  • number of instances that can be launched into a partition placement group is limited only by the limits of the account.
  • can be used to spread deployment of large distributed and replicated workloads, such as HDFS, HBase, and Cassandra, across distinct hardware.
  • offers visibility into the partitions, and the instance-to-partition mapping can be seen (a sketch follows this list). This information can be shared with topology-aware applications, such as HDFS, HBase, and Cassandra, which use it to make intelligent data replication decisions for increasing data availability and durability.
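
A boto3 sketch of creating a partition placement group and reading the instance-to-partition mapping; the group name is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2")

# Up to seven partitions per Availability Zone are supported.
ec2.create_placement_group(
    GroupName="cassandra-ring",
    Strategy="partition",
    PartitionCount=7,
)

# The partition number appears in each instance's placement metadata,
# which topology-aware stores such as Cassandra or HDFS can consume.
resp = ec2.describe_instances(
    Filters=[{"Name": "placement-group-name", "Values": ["cassandra-ring"]}]
)
for reservation in resp["Reservations"]:
    for inst in reservation["Instances"]:
        print(inst["InstanceId"], inst["Placement"].get("PartitionNumber"))
```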

Spread Placement Groups

  • is a group of instances that are each placed on distinct underlying hardware i.e. each instance on a distinct rack with each rack having its own network and power source.
  • recommended for applications that have a small number of critical instances that should be kept separate from each other.
  • reduces the risk of simultaneous failures that might occur when instances share the same underlying hardware.
  • provide access to distinct hardware, and are therefore suitable for mixing instance types or launching instances over time.
  • can span multiple Availability Zones in the same region.
  • can have a maximum of seven running instances per AZ per group
  • maximum number of instances = 1 instance per rack * 7 racks * number of AZs e.g. in a Region with three AZs, a total of 21 instances in the group (seven per zone) can be launched
  • If the start or launch of an instance in a spread placement group fails because of insufficient unique hardware to fulfill the request, the request can be retried later as EC2 makes more distinct hardware available over time (a creation sketch follows this list)
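
A boto3 sketch of a spread placement group; the group name and AMI are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_placement_group(GroupName="critical-spread", Strategy="spread")

# Each instance lands on a distinct rack; at most seven running
# instances per AZ are allowed in a spread group.
ec2.run_instances(
    ImageId="ami-0abc1234",
    InstanceType="m5.large",
    MinCount=3,
    MaxCount=3,
    Placement={"GroupName": "critical-spread"},
)
```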

Placement Group Rules and Limitations

  • Placement group names must be unique within the AWS account for the Region.
  • Placement groups cannot be merged.
  • Instances cannot span multiple placement groups.
  • Instances with a tenancy of host (Dedicated Hosts) cannot be launched in placement groups.
  • Cluster Placement groups
    • can’t span multiple Availability Zones.
    • supported by specific instance types which support a 10 Gigabit network
    • maximum network throughput speed of traffic between two instances in a cluster placement group is limited by the slower of the two instances, so choose the instance type properly.
    • can use up to 10 Gbps for single-flow traffic.
    • Traffic to and from S3 buckets within the same region over the public IP address space or through a VPC endpoint can use all available instance aggregate bandwidth.
    • recommended to use the same instance type i.e. homogeneous instance types; although multiple instance types can be launched into a cluster placement group, this reduces the likelihood that the required capacity will be available for the launch to succeed.
    • Network traffic to the internet and over an AWS Direct Connect connection to on-premises resources is limited to 5 Gbps.
  • Partition placement groups
    • supports a maximum of seven partitions per Availability Zone
    • Dedicated Instances can have a maximum of two partitions
    • are not supported for Dedicated Hosts
    • are currently only available through the API or AWS CLI.
  • Spread placement groups
    • supports a maximum of seven running instances per Availability Zone for e.g., in a region that has three AZs, then a total of 21 running instances in the group (seven per zone).
    • are not supported for Dedicated Instances or Dedicated Hosts.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion, and correction.
  1. What is a cluster placement group?
    • A collection of Auto Scaling groups in the same Region
    • Feature that enables EC2 instances to interact with each other via high bandwidth, low latency connections
    • A collection of Elastic Load Balancers in the same Region or Availability Zone
    • A collection of authorized Cloud Front edge locations for a distribution
  2. In order to optimize performance for a compute cluster that requires low inter-node latency, which feature in the following list should you use?
    • AWS Direct Connect
    • Cluster Placement Groups
    • VPC private subnets
    • EC2 Dedicated Instances
    • Multiple Availability Zones
  3. What is required to achieve gigabit network throughput on EC2? You already selected cluster-compute, 10GB instances with enhanced networking, and your workload is already network-bound, but you are not seeing 10 gigabit speeds.
    1. Enable biplex networking on your servers, so packets are non-blocking in both directions and there’s no switching overhead.
    2. Ensure the instances are in different VPCs so you don’t saturate the Internet Gateway on any one VPC.
    3. Select PIOPS for your drives and mount several, so you can provision sufficient disk throughput
    4. Use a Cluster placement group for your instances so the instances are physically near each other in the same Availability Zone. (You are not guaranteed 10 gigabit performance, except within a placement group. Using placement groups enables applications to participate in a low-latency, 10 Gbps network)
  4. You need the absolute highest possible network performance for a cluster computing application. You already selected homogeneous instance types supporting 10 gigabit enhanced networking, made sure that your workload was network bound, and put the instances in a placement group. What is the last optimization you can make?
    1. Use 9001 MTU instead of 1500 for Jumbo Frames, to raise packet body to packet overhead ratios. (For instances that are collocated inside a placement group, jumbo frames help to achieve the maximum network throughput possible, and they are recommended in this case)
    2. Segregate the instances into different peered VPCs while keeping them all in a placement group, so each one has its own Internet Gateway.
    3. Bake an AMI for the instances and relaunch, so the instances are fresh in the placement group and do not have noisy neighbors
    4. Turn off SYN/ACK on your TCP stack or begin using UDP for higher throughput.

References

EC2_User_Guide – Placement_Groups

AWS EC2 Storage

EC2 Storage Overview

EC2 Storage Options - EBS, S3 & Instance Store

Storage Types

Elastic Block Store – EBS

  • Elastic Block Store – EBS provides highly available, reliable, durable, block-level storage volumes that can be attached to an EC2 instance.
  • persists independently from the running life of an instance.
  • behaves like a raw, unformatted, external block device that can be attached to a single EC2 instance at a time.
  • is recommended for data that requires frequent and granular updates e.g. running a database or filesystem.
  • is Zonal and can be attached to any instance within the same Availability Zone and can be used like any other physical hard drive.
  • is particularly well-suited for use as the primary storage for file systems, databases, or any applications that require fine granular updates and access to raw, unformatted, block-level storage.

Instance Store Storage

  • Instance store provides temporary or Ephemeral block-level storage
  • is located on the disks that are physically attached to the host computer.
  • consists of one or more instance store volumes exposed as block devices.
  • The size of an instance store varies by instance type.
  • Virtual devices for instance store volumes are named ephemeral[0-23], starting with ephemeral0 and so on.
  • While an instance store is dedicated to a particular instance, the disk subsystem is shared among instances on a host computer.
  • is ideal for temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers.
  • delivers very high random I/O performance and is a good option for storage with very low latency requirements where you don’t need the data to persist when the instance terminates, or where you can take advantage of fault-tolerant architectures.

Amazon EBS vs Instance Store

More details @ Comparison of EBS vs Instance Store

Simple Storage Service – S3

More details @ AWS S3

Elastic File Store – EFS

  • Elastic File Store – EFS provides a simple, fully managed, easy-to-set-up, scalable, serverless, and cost-optimized file storage
  • can automatically scale from gigabytes to petabytes of data without needing to provision storage.
  • provides managed NFS (network file system) that can be mounted on and accessed by multiple EC2 instances in multiple AZs simultaneously.
  • is highly durable, highly scalable, and highly available.
    • stores data redundantly across multiple AZs in the same region
    • grows and shrinks automatically as files are added and removed, so there is no need to manage storage procurement or provisioning.
  • supports the Network File System version 4 (NFSv4.1 and NFSv4.0) protocol.
  • provides file system access semantics, such as strong data consistency and file locking.
  • is compatible with all Linux-based AMIs for EC2; it is a POSIX-compliant (~Linux) file system with a standard file API.
  • is a shared POSIX system for Linux systems and does not work for Windows.
  • offers the ability to encrypt data at rest using KMS and in transit.
  • can be accessed from on-premises using an AWS Direct Connect or AWS VPN connection between the on-premises datacenter and VPC.
  • can be accessed concurrently from servers in the on-premises data center as well as EC2 instances in the VPC.

Block Device Mapping

  • A block device is a storage device that moves data in sequences of bytes or bits (blocks), supports random access, and generally uses buffered I/O e.g. hard disks, CD-ROMs, etc.
  • Block devices can be physically attached to a computer (like an instance store volume) or can be accessed remotely as if they were attached (like an EBS volume)
  • Block device mapping defines the block devices to be attached to an instance, which can be done either while creating an AMI or when an instance is launched
  • A block device must be mounted on the instance, after being attached, to be accessible
  • When a block device is detached from an instance, it is unmounted by the operating system and you can no longer access the storage device.
  • Additional instance store volumes can be attached only when the instance is launched, while EBS volumes can be attached to a running instance.
  • Viewing the block device mapping for an instance only shows the EBS volumes and not the instance store volumes. Instance metadata can be used to query the complete block device mapping (a query sketch follows this list).
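
A sketch of querying the complete mapping from the instance metadata service (IMDSv2); this only works from within a running EC2 instance.

```python
import urllib.request

BASE = "http://169.254.169.254/latest"

# IMDSv2: fetch a session token first, then query the metadata paths.
token_req = urllib.request.Request(
    f"{BASE}/api/token", method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req).read().decode()

# Lists every mapped device, including the instance store volumes that
# the console's block device mapping view does not show.
req = urllib.request.Request(
    f"{BASE}/meta-data/block-device-mapping/",
    headers={"X-aws-ec2-metadata-token": token},
)
for device in urllib.request.urlopen(req).read().decode().splitlines():
    print(device)  # e.g. ami, ephemeral0, root
```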

Public Data Sets

  • Amazon Web Services provides a repository of public data sets that can be seamlessly integrated into AWS cloud-based applications.
  • Amazon stores the data sets at no charge to the community and, as with all AWS services, you pay only for the compute and storage you use for your own applications.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion, and correction.
  1. When you view the block device mapping for your instance, you can see only the EBS volumes, not the instance store volumes.
    1. Depends on the instance type
    2. FALSE
    3. Depends on whether you use API call
    4. TRUE
  2. Amazon EC2 provides a repository of public data sets that can be seamlessly integrated into AWS cloud-based applications. What is the monthly charge for using the public data sets?
    1. A 1 time charge of 10$ for all the datasets.
    2. 1$ per dataset per month
    3. 10$ per month for all the datasets
    4. There is no charge for using the public data sets
  3. How many types of block devices does Amazon EC2 support?
    1. 2
    2. 4
    3. 3
    4. 1

AWS EC2 EBS Monitoring

EBS Monitoring

AWS supports EBS monitoring by automatically providing data, such as CloudWatch metrics and volume status checks, to help monitor EBS volumes.

CloudWatch Monitoring

  • CloudWatch metrics are statistical data that you can use to view, analyze, and set alarms on the operational behaviour of the EBS volumes
  • CloudWatch provides the below by default
    • Basic – data in 5-minute periods at no charge, which includes data from the root device volumes of EBS-backed instances
    • Detailed – Provisioned IOPS (SSD) volumes send one-minute metrics
  • EBS Metrics (these can be retrieved programmatically, as sketched after this list)
    • VolumeReadBytes & VolumeWriteBytes
      • Provides information on the I/O operations in a specified period of time, in bytes
    • VolumeReadOps & VolumeWriteOps
      • Total number (count) of I/O operations in a specified period of time
    • VolumeTotalReadTime & VolumeTotalWriteTime
      • Total number of seconds spent by all operations that were completed in a specified period of time
    • VolumeIdleTime
      • Total number of seconds, in a specific period, when the volume was idle (no read and write operations)
    • VolumeQueueLength
      • Number of read and write operations, in a specific period, waiting to be completed
    • VolumeThroughputPercentage (Provisioned IOPS (SSD) volumes only)
      • Percentage of I/O operations per second (IOPS) delivered of the total IOPS provisioned
    • VolumeConsumedReadWriteOps (Provisioned IOPS (SSD) volumes only)
      • Total amount of read and write operations (normalized to 256K capacity units) consumed in a specified period of time
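
A boto3 sketch of retrieving one of these metrics from CloudWatch; the volume ID is a placeholder, and the 300-second period matches the basic 5-minute granularity.

```python
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch")

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EBS",
    MetricName="VolumeReadOps",
    Dimensions=[{"Name": "VolumeId", "Value": "vol-0abc123456789def0"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,                # 5-minute buckets
    Statistics=["Sum"],
)

for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"])
```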

Volume Status Checks Monitoring

EC2 EBS Volume Status Check Monitoring

  • Volume status checks are automated tests that run every 5 minutes and return a pass or fail status.
  • Volume check status
    • Ok – all the status checks passed
    • Impaired – if the status checks failed
    • Insufficient-Data – checks are still in progress
    • Warning – the I/O performance of the volume is below expectations
  • When EBS determines that a volume’s data is potentially inconsistent, it disables I/O to the EBS volume from the attached EC2 instance to prevent any data corruption. This causes the status check to fail and the volume status to be impaired. EBS then waits for the I/O to be enabled, giving you an opportunity to perform consistency checks.
  • If the automatic disabling of I/O is not needed, it can be overridden by enabling the Auto-Enabled IO flag, which makes the EBS volume available automatically immediately after the impaired status (see the sketch after this list).
  • Events are fired for notification whenever the I/O for an EBS volume is disabled
  • I/O performance status checks, applicable only to Provisioned IOPS (SSD) volumes, compare actual volume performance with the expected performance and alert if the volume is performing below expectations. The status check is performed every 1 minute; however, it is collected by CloudWatch every 5 minutes.
  • While initializing Provisioned IOPS (SSD) volumes that were restored from snapshots, the performance of the volume may drop below 50 percent of its expected level, which causes the volume to display a warning state in the I/O Performance status check. This is expected and can be ignored.
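
A boto3 sketch of reading the volume status and its events, and of setting the Auto-Enabled IO flag; the volume ID is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2")
volume_id = "vol-0abc123456789def0"  # placeholder

resp = ec2.describe_volume_status(VolumeIds=[volume_id])
for vol in resp["VolumeStatuses"]:
    # ok | impaired | insufficient-data | warning
    print(vol["VolumeStatus"]["Status"])
    for event in vol.get("Events", []):
        print(event["EventType"], event["Description"])

# Opt out of the automatic I/O disablement so the volume becomes
# available immediately after an impaired status.
ec2.modify_volume_attribute(
    VolumeId=volume_id,
    AutoEnableIO={"Value": True},
)
```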

EC2 EBS Volume Status

Volume Events Monitoring

  • EBS generates events for volume status checks
  • Each event includes a start time that indicates the time at which the event occurred and a duration that indicates how long I/O for the volume was disabled
  • The event description can be Awaiting Action (to enable I/O), IO enabled, or IO Auto-Enabled, or indicate whether the status check resulted in Normal, Degraded, Severely Degraded, or Stalled status

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion, and correction.
  1. A user has configured CloudWatch monitoring on an EBS backed EC2 instance. If the user has not attached any additional device, which of the below mentioned metrics will always show a 0 value?
    1. DiskReadBytes
    2. NetworkIn
    3. NetworkOut
    4. CPUUtilization
  2. What does it mean if you have zero IOPS and a non-empty I/O queue for all EBS volumes attached to a running EC2 instance?
    1. The I/O queue is buffer flushing.
    2. Your EBS disk head(s) is/are seeking magnetic stripes.
    3. The EBS volume is unavailable. (EBS volumes are unavailable when all of the attached volumes perform zero read write IO, with pending IO in the queue Refer link)
    4. You need to re-mount the EBS volume in the OS.
  3. While performing the volume status checks, if the status is insufficient-data, what does it mean?
    1. checks may still be in progress on the volume
    2. check has passed
    3. check has failed

References