AWS DynamoDB

  • Amazon DynamoDB is a fully managed NoSQL database service that
    • makes it simple and cost-effective to store and retrieve any amount of data and serve any level of request traffic.
    • provides fast and predictable performance with seamless scalability
  • DynamoDB enables customers to offload the administrative burdens of operating and scaling distributed databases to AWS, without having to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling.
  • DynamoDB tables do not have a fixed schema; a table consists of items, and each item may have a different number of attributes.
  • DynamoDB synchronously replicates data across three facilities in an AWS Region, giving high availability and data durability.
  • DynamoDB supports fast in-place updates. A numeric attribute can be incremented or decremented in an item using a single API call.
  • DynamoDB uses proven cryptographic methods to securely authenticate users and prevent unauthorized data access.
  • Durability, performance, reliability, and security are built in, with SSD (solid state drive) storage and automatic 3-way replication.
  • DynamoDB supports two different kinds of primary keys:
    • Partition Key (previously called the Hash key)
      • A simple primary key, composed of one attribute
      • The partition key value is used as input to an internal hash function; the output from the hash function determines the partition where the item will be stored.
      • No two items in a table can have the same partition key value.
    • Partition Key and Sort Key (previously called the Hash and Range key)
      • A composite primary key is composed of two attributes. The first attribute is the partition key, and the second attribute is the sort key.
      • The partition key value is used as input to an internal hash function; the output from the hash function determines the partition where the item will be stored.
      • All items with the same partition key are stored together, in sorted order by sort key value.
      • The combination of the partition key and sort key must be unique.
      • It is possible for two items to have the same partition key value, but those two items must have different sort key values.
  • DynamoDB Table classes currently support
    • DynamoDB Standard table class is the default and is recommended for the vast majority of workloads.
    • DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA) table class which is optimized for tables where storage is the dominant cost.
  • DynamoDB Throughput Capacity determines the read/write capacity for processing reads and writes on the tables and it currently supports
    • Provisioned – maximum amount of capacity in terms of reads/writes per second that an application can consume from a table or index
    • On-demand – serves thousands of requests per second without capacity planning.
  • DynamoDB Secondary indexes
    • add flexibility to the queries, without impacting performance.
    • are automatically maintained as sparse indexes: an item appears in an index only if the index key attributes are present in the item, which makes queries against an index very efficient
  • DynamoDB throughput and single-digit millisecond latency make it a great fit for gaming, ad tech, mobile, and many other applications
  • ElastiCache or DAX can be used in front of DynamoDB to offload a high volume of reads for infrequently changed data
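A minimal boto3 sketch of creating a table with the composite primary key described above; the table and attribute names are hypothetical, and credentials/region are assumed to be configured:

  import boto3

  dynamodb = boto3.client("dynamodb")

  # Composite primary key: UserId is the partition (hash) key, GameTitle the sort (range) key
  dynamodb.create_table(
      TableName="GameScores",
      AttributeDefinitions=[
          {"AttributeName": "UserId", "AttributeType": "S"},
          {"AttributeName": "GameTitle", "AttributeType": "S"},
      ],
      KeySchema=[
          {"AttributeName": "UserId", "KeyType": "HASH"},      # partition key
          {"AttributeName": "GameTitle", "KeyType": "RANGE"},  # sort key
      ],
      ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
  )

Items only need to carry the key attributes; any other attributes can vary from item to item since the table has no fixed schema.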

DynamoDB Consistency

  • Each DynamoDB table is automatically stored across three geographically distributed facilities within an AWS Region for durability.
  • Read consistency represents the manner and timing in which the successful write or update of a data item is reflected in a subsequent read operation of that same item.
  • DynamoDB allows the user to specify whether the read should be eventually consistent or strongly consistent at the time of the request
    • Eventually Consistent Reads (Default)
      • Eventual consistency option maximizes the read throughput.
      • Consistency across all copies is usually reached within a second
      • However, an eventually consistent read might not reflect the results of a recently completed write.
      • Repeating a read after a short time should return the updated data.
      • DynamoDB uses eventually consistent reads, by default.
    • Strongly Consistent Reads
      • Strongly consistent read returns a result that reflects all writes that received a successful response prior to the read
      • Strongly consistent reads are 2x the cost of Eventually consistent reads
      • Strongly Consistent Reads come with disadvantages
        • A strongly consistent read might not be available if there is a network delay or outage. In this case, DynamoDB may return a server error (HTTP 500).
        • Strongly consistent reads may have higher latency than eventually consistent reads.
        • Strongly consistent reads are not supported on global secondary indexes.
        • Strongly consistent reads use more throughput capacity than eventually consistent reads.
  • Read operations (such as GetItem, Query, and Scan) provide a ConsistentRead parameter; if set to true, DynamoDB uses strongly consistent reads during the operation, as the sketch after this list shows.
  • Query, GetItem, and BatchGetItem operations perform eventually consistent reads by default.
    • Query and GetItem operations can be forced to be strongly consistent
    • Query operations cannot perform strongly consistent reads on Global Secondary Indexes
    • BatchGetItem operations can be forced to be strongly consistent on a per-table basis
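A minimal boto3 sketch of the ConsistentRead parameter mentioned above, using the hypothetical GameScores table; the first call is eventually consistent (the default), the second forces a strongly consistent read (not supported on GSIs):

  import boto3

  dynamodb = boto3.client("dynamodb")
  key = {"UserId": {"S": "u1"}, "GameTitle": {"S": "Alien Adventure"}}

  # Eventually consistent read (default behaviour)
  dynamodb.get_item(TableName="GameScores", Key=key)

  # Strongly consistent read - uses twice as much read capacity
  dynamodb.get_item(TableName="GameScores", Key=key, ConsistentRead=True)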

DynamoDB Throughput Capacity

  • DynamoDB throughput capacity depends on the read/write capacity modes for processing reads and writes on the tables.
  • DynamoDB supports two types of read/write capacity modes:
    • Provisioned – maximum amount of capacity in terms of reads/writes per second that an application can consume from a table or index
    • On-demand – serves thousands of requests per second without capacity planning.
  • DynamoDB Auto Scaling helps dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns.
  • DynamoDB Adaptive capacity is a feature that enables DynamoDB to run imbalanced workloads indefinitely.

DynamoDB Secondary Indexes

  • DynamoDB Secondary indexes
    • add flexibility to the queries, without impacting performance.
    • are automatically maintained as sparse indexes: an item appears in an index only if the index key attributes are present in the item, which makes queries against an index very efficient
  • DynamoDB Secondary indexes on a table allow efficient access to data with attributes other than the primary key.
  • DynamoDB Secondary indexes support two types
    • Global secondary index – an index with a partition key and a sort key that can be different from those on the base table.
    • Local secondary index – an index that has the same partition key as the base table, but a different sort key.

DynamoDB Secondary Indexes - GSI vs LSI

DynamoDB Advanced Topics

  • DynamoDB Secondary indexes on a table allow efficient access to data with attributes other than the primary key.
  • DynamoDB Time to Live – TTL enables a per-item timestamp to determine when an item is no longer needed.
  • DynamoDB cross-region replication allows identical copies (called replicas) of a DynamoDB table (called master table) to be maintained in one or more AWS regions.
  • DynamoDB Global Tables is a new multi-master, cross-region replication capability of DynamoDB to support data access locality and regional fault tolerance for database workloads.
  • DynamoDB Streams provides a time-ordered sequence of item-level changes made to data in a table.
  • DynamoDB Triggers (just like database triggers) are a feature that allows the execution of custom actions based on item-level updates on a table.
  • DynamoDB Accelerator – DAX is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from ms to µs – even at millions of requests per second.
  • VPC Gateway Endpoints provide private access to DynamoDB from within a VPC without the need for an internet gateway or NAT gateway.

DynamoDB Performance

  • Automatically scales horizontally
  • runs exclusively on Solid State Drives (SSDs).
    • SSDs help achieve the design goals of predictable low-latency response times for storing and accessing data at any scale.
    • SSDs High I/O performance enables them to serve high-scale request workloads cost-efficiently and to pass this efficiency along in low request pricing.
  • allows provisioned table reads and writes
    • Scale up throughput when needed
    • Scale down throughput up to four times per UTC calendar day
  • automatically partitions, reallocates and re-partitions the data and provisions additional server capacity as the
    • table size grows or
    • provisioned throughput is increased
  • Global Secondary indexes (GSI)
    • can be created upfront or added later

DynamoDB Security

  • AWS handles basic security tasks like guest operating system (OS) and database patching, firewall configuration, and disaster recovery.
  • DynamoDB protects user data stored at rest and in transit between on-premises clients and DynamoDB, and between DynamoDB and other AWS resources within the same AWS Region.
  • Encryption at rest is enabled on all DynamoDB table data and cannot be disabled.
  • Encryption at rest includes the base tables, primary key, local and global secondary indexes, streams, global tables, backups, and DynamoDB Accelerator (DAX) clusters.
  • Fine-Grained Access Control (FGAC) gives a high degree of control over data in the table and helps control who (caller) can access which items or attributes of the table and perform what actions (read/write capability).
  • VPC Endpoints allow private connectivity from within a VPC only to DynamoDB.
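A minimal sketch of a fine-grained access control policy, expressed as a boto3 IAM call; the table ARN, policy name, and attribute list are hypothetical. The dynamodb:LeadingKeys condition restricts a caller to items whose partition key matches their own Cognito identity:

  import json
  import boto3

  iam = boto3.client("iam")

  policy_document = {
      "Version": "2012-10-17",
      "Statement": [{
          "Effect": "Allow",
          "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:PutItem"],
          "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/GameScores",
          "Condition": {
              "ForAllValues:StringEquals": {
                  # Items: only those whose partition key equals the caller's identity id
                  "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"],
                  # Attributes: only the listed ones may be read or written
                  "dynamodb:Attributes": ["UserId", "GameTitle", "TopScore"],
              }
          },
      }],
  }

  iam.create_policy(
      PolicyName="GameScoresPerUserAccess",
      PolicyDocument=json.dumps(policy_document),
  )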

Refer blog post @ DynamoDB Security

DynamoDB Costs

  • Index Storage
    • DynamoDB is an indexed data store
      • Billable Data = Raw byte data size + 100 byte per-item storage indexing overhead
  • Provisioned throughput
    • Pay flat, hourly rate based on the capacity reserved as the throughput provisioned for the table
    • one Write Capacity Unit provides one write per second for items < 1KB in size.
    • one Read Capacity Unit provides one strongly consistent read (or two eventually consistent reads) per second for items < 4KB in size.
    • Provisioned throughput charges for every 10 units of Write Capacity and every 50 units of Read Capacity.
  • Reserved capacity
    • Significant savings over the normal price
    • Pay a one-time upfront fee
  • DynamoDB also charges for storage, backup, replication, streams, caching, and data transfer out.

DynamoDB Best Practices

Refer blog post @ DynamoDB Best Practices

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. Which of the following are use cases for Amazon DynamoDB? Choose 3 answers
    1. Storing BLOB data.
    2. Managing web sessions
    3. Storing JSON documents
    4. Storing metadata for Amazon S3 objects
    5. Running relational joins and complex updates.
    6. Storing large amounts of infrequently accessed data.
  2. You are configuring your company’s application to use Auto Scaling and need to move user state information. Which of the following AWS services provides a shared data store with durability and low latency?
    1. AWS ElastiCache Memcached (in-memory only, not durable)
    2. Amazon Simple Storage Service (does not provide low latency)
    3. Amazon EC2 instance storage (not durable)
    4. Amazon DynamoDB
  3. Does DynamoDB support in-place atomic updates?
    1. It is not defined
    2. No
    3. Yes
    4. It does support in-place non-atomic updates
  4. What is the maximum write throughput I can provision for a single DynamoDB table?
    1. 1,000 write capacity units
    2. 100,000 write capacity units
    3. DynamoDB is designed to scale without limits, but if you go beyond 10,000 you have to contact AWS first
    4. 10,000 write capacity units
  5. For a DynamoDB table, what happens if the application performs more reads or writes than your provisioned capacity?
    1. Nothing
    2. requests above the provisioned capacity will be performed but you will receive 400 error codes.
    3. requests above the provisioned capacity will be performed but you will receive 200 error codes.
    4. requests above the provisioned capacity will be throttled and you will receive 400 error codes.
  6. In which of the following situations might you benefit from using DynamoDB? (Choose 2 answers)
    1. You need fully managed database to handle highly complex queries
    2. You need to deal with massive amount of “hot” data and require very low latency
    3. You need a rapid ingestion of clickstream in order to collect data about user behavior
    4. Your on-premises data center runs Oracle database, and you need to host a backup in AWS cloud
  7. You are designing a file-sharing service. This service will have millions of files in it. Revenue for the service will come from fees based on how much storage a user is using. You also want to store metadata on each file, such as title, description and whether the object is public or private. How do you achieve all of these goals in a way that is economical and can scale to millions of users? [PROFESSIONAL]
    1. Store all files in Amazon Simple Storage Service (S3). Create a bucket for each user. Store metadata in the filename of each object, and access it with LIST commands against the S3 API. (expensive and slow as it returns only 1000 items at a time)
    2. Store all files in Amazon S3. Create Amazon DynamoDB tables for the corresponding key-value pairs on the associated metadata, when objects are uploaded.
    3. Create a striped set of 4000 IOPS Elastic Block Store (EBS) volumes to store the data. Use a database running in Amazon Relational Database Service (RDS) to store the metadata. (not economical with volumes)
    4. Create a striped set of 4000 IOPS Elastic Block Store (EBS) volumes to store the data. Create Amazon DynamoDB tables for the corresponding key-value pairs on the associated metadata, when objects are uploaded. (not economical with volumes)
  8. A utility company is building an application that stores data coming from more than 10,000 sensors. Each sensor has a unique ID and will send a datapoint (approximately 1KB) every 10 minutes throughout the day. Each datapoint contains the information coming from the sensor as well as a timestamp. This company would like to query information coming from a particular sensor for the past week very rapidly and want to delete all the data that is older than 4 weeks. Using Amazon DynamoDB for its scalability and rapidity, how do you implement this in the most cost effective way? [PROFESSIONAL]
    1. One table, with a primary key that is the sensor ID and a hash key that is the timestamp (Single table impacts performance)
    2. One table, with a primary key that is the concatenation of the sensor ID and timestamp (Single table and concatenation impacts performance)
    3. One table for each week, with a primary key that is the concatenation of the sensor ID and timestamp (Concatenation will cause queries would be slower, if at all)
    4. One table for each week, with a primary key that is the sensor ID and a hash key that is the timestamp (Composite key with Sensor ID and timestamp would help for faster queries)
  9. You have recently joined a startup company building sensors to measure street noise and air quality in urban areas. The company has been running a pilot deployment of around 100 sensors for 3 months. Each sensor uploads 1KB of sensor data every minute to a backend hosted on AWS. During the pilot, you measured a peak of 10 IOPS on the database, and you stored an average of 3GB of sensor data per month in the database. The current deployment consists of a load-balanced, auto scaled ingestion layer using EC2 instances and a PostgreSQL RDS database with 500GB standard storage. The pilot is considered a success and your CEO has managed to get the attention of some potential investors. The business plan requires a deployment of at least 100K sensors, which needs to be supported by the backend. You also need to store sensor data for at least two years to be able to compare year over year improvements. To secure funding, you have to make sure that the platform meets these requirements and leaves room for further scaling. Which setup will meet the requirements? [PROFESSIONAL]
    1. Add an SQS queue to the ingestion layer to buffer writes to the RDS instance (RDS instance will not support data for 2 years)
    2. Ingest data into a DynamoDB table and move old data to a Redshift cluster (Handle 10K IOPS ingestion and store data into Redshift for analysis)
    3. Replace the RDS instance with a 6 node Redshift cluster with 96TB of storage (Does not handle the ingestion issue)
    4. Keep the current architecture but upgrade RDS storage to 3TB and 10K provisioned IOPS (RDS instance will not support data for 2 years)
  10. Does Amazon DynamoDB support both increment and decrement atomic operations?
    1. No, neither increment nor decrement operations.
    2. Only increment, since decrement are inherently impossible with DynamoDB’s data model.
    3. Only decrement, since increment are inherently impossible with DynamoDB’s data model.
    4. Yes, both increment and decrement operations.
  11. What is the data model of DynamoDB?
    1. “Items”, with Keys and one or more Attribute; and “Attribute”, with Name and Value.
    2. “Database”, which is a set of “Tables”, which is a set of “Items”, which is a set of “Attributes”.
    3. “Table”, a collection of Items; “Items”, with Keys and one or more Attribute; and “Attribute”, with Name and Value.
    4. “Database”, a collection of Tables; “Tables”, with Keys and one or more Attribute; and “Attribute”, with Name and Value.
  12. In regard to DynamoDB, for which one of the following parameters does Amazon not charge you?
    1. Cost per provisioned write units
    2. Cost per provisioned read units
    3. Storage cost
    4. I/O usage within the same Region
  13. Which statements about DynamoDB are true? Choose 2 answers.
    1. DynamoDB uses a pessimistic locking model
    2. DynamoDB uses optimistic concurrency control
    3. DynamoDB uses conditional writes for consistency
    4. DynamoDB restricts item access during reads
    5. DynamoDB restricts item access during writes
  14. Which of the following is an example of a good DynamoDB hash key schema for provisioned throughput efficiency?
    1. User ID, where the application has many different users.
    2. Status Code, where most status codes are the same.
    3. Device ID, where one is by far more popular than all the others.
    4. Game Type, where there are three possible game types.
  15. You are inserting 1000 new items every second in a DynamoDB table. Once an hour these items are analyzed and then are no longer needed. You need to minimize provisioned throughput, storage, and API calls. Given these requirements, what is the most efficient way to manage these Items after the analysis?
    1. Retain the items in a single table
    2. Delete items individually over a 24 hour period
    3. Delete the table and create a new table per hour
    4. Create a new table per hour
  16. When using a large Scan operation in DynamoDB, what technique can be used to minimize the impact of a scan on a table’s provisioned throughput?
    1. Set a smaller page size for the scan (Refer link)
    2. Use parallel scans
    3. Define a range index on the table
    4. Prewarm the table by updating all items
  17. In regard to DynamoDB, which of the following statements is correct?
    1. An Item should have at least two value sets, a primary key and another attribute.
    2. An Item can have more than one attributes
    3. A primary key should be single-valued.
    4. An attribute can have one or several other attributes.
  18. Which one of the following statements is NOT an advantage of DynamoDB being built on Solid State Drives?
    1. serve high-scale request workloads
    2. low request pricing
    3. high I/O performance of WebApp on EC2 instance (Not related to DynamoDB)
    4. low-latency response times
  19. Which one of the following operations is NOT a DynamoDB operation?
    1. BatchWriteItem
    2. DescribeTable
    3. BatchGetItem
    4. BatchDeleteItem (DeleteItem deletes a single item in a table by primary key, but BatchDeleteItem doesn’t exist)
  20. What item operation allows the retrieval of multiple items from a DynamoDB table in a single API call?
    1. GetItem
    2. BatchGetItem
    3. GetMultipleItems
    4. GetItemRange
  21. An application stores payroll information nightly in DynamoDB for a large number of employees across hundreds of offices. Item attributes consist of individual name, office identifier, and cumulative daily hours. Managers run reports for ranges of names working in their office. One query is: “Return all Items in this office for names starting with A through E”. Which table configuration will result in the lowest impact on provisioned throughput for this query? [PROFESSIONAL]
    1. Configure the table to have a hash index on the name attribute, and a range index on the office identifier
    2. Configure the table to have a range index on the name attribute, and a hash index on the office identifier
    3. Configure a hash index on the name attribute and no range index
    4. Configure a hash index on the office Identifier attribute and no range index
  22. You need to migrate 10 million records in one hour into DynamoDB. All records are 1.5KB in size. The data is evenly distributed across the partition key. How many write capacity units should you provision during this batch load?
    1. 6667
    2. 4166
    3. 5556 (1.5KB rounds up to 2KB = 2 write units per item; 2 * 10 million / 3600 secs ≈ 5556, refer link)
    4. 2778
  23. A meteorological system monitors 600 temperature gauges, obtaining temperature samples every minute and saving each sample to a DynamoDB table. Each sample involves writing 1K of data and the writes are evenly distributed over time. How much write throughput is required for the target table?
    1. 1 write capacity unit
    2. 10 write capacity units ( 1 write unit for 1K * 600 gauges/60 secs)
    3. 60 write capacity units
    4. 600 write capacity units
    5. 3600 write capacity units
  24. You are building a game high score table in DynamoDB. You will store each user’s highest score for each game, with many games, all of which have relatively similar usage levels and numbers of players. You need to be able to look up the highest score for any game. What’s the best DynamoDB key structure?
    1. HighestScore as the hash / only key.
    2. GameID as the hash key, HighestScore as the range key. (hash (partition) key should be the GameID, and there should be a range key for ordering HighestScore. Refer link)
    3. GameID as the hash / only key.
    4. GameID as the range / only key.
  25. You are experiencing performance issues writing to a DynamoDB table. Your system tracks high scores for video games on a marketplace. Your most popular game experiences all of the performance issues. What is the most likely problem?
    1. DynamoDB’s vector clock is out of sync, because of the rapid growth in request for the most popular game.
    2. You selected the Game ID or equivalent identifier as the primary partition key for the table. (Refer link)
    3. Users of the most popular video game each perform more read and write requests than average.
    4. You did not provision enough read or write throughput to the table.
  26. You are writing to a DynamoDB table and receive the following exception: “ProvisionedThroughputExceededException”. Though according to your CloudWatch metrics for the table, you are not exceeding your provisioned throughput. What could be an explanation for this?
    1. You haven’t provisioned enough DynamoDB storage instances
    2. You’re exceeding your capacity on a particular Range Key
    3. You’re exceeding your capacity on a particular Hash Key (Hash key determines the partition and hence the performance)
    4. You’re exceeding your capacity on a particular Sort Key
    5. You haven’t configured DynamoDB Auto Scaling triggers
  27. Your company sells consumer devices and needs to record the first activation of all sold devices. Devices are not activated until the information is written on a persistent database. Activation data is very important for your company and must be analyzed daily with a MapReduce job. The execution time of the data analysis process must be less than three hours per day. Devices are usually sold evenly during the year, but when a new device model is out, there is a predictable peak in activations, that is, for a few days there are 10 times or even 100 times more activations than on an average day. Which of the following databases and analysis framework would you implement to better optimize costs and performance for this workload? [PROFESSIONAL]
    1. Amazon RDS and Amazon Elastic MapReduce with Spot instances.
    2. Amazon DynamoDB and Amazon Elastic MapReduce with Spot instances.
    3. Amazon RDS and Amazon Elastic MapReduce with Reserved instances.
    4. Amazon DynamoDB and Amazon Elastic MapReduce with Reserved instances

AWS DynamoDB Throughput Capacity

  • AWS DynamoDB throughput capacity depends on the read/write capacity modes for processing reads and writes on the tables.
  • DynamoDB supports two types of read/write capacity modes:
    • Provisioned – maximum amount of capacity in terms of reads/writes per second that an application can consume from a table or index
    • On-demand – serves thousands of requests per second without capacity planning.
  • DynamoDB Auto Scaling helps dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns.
  • DynamoDB Burst capacity provides some flexibility in the per-partition throughput provisioning by providing burst capacity.
  • DynamoDB Adaptive capacity is a feature that enables DynamoDB to run imbalanced workloads indefinitely.
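A minimal boto3 sketch of the DynamoDB Auto Scaling mentioned above, scaling the read capacity of a hypothetical provisioned table through the Application Auto Scaling APIs with a target-tracking policy:

  import boto3

  autoscaling = boto3.client("application-autoscaling")

  # Allow the table's read capacity to move between 5 and 100 units
  autoscaling.register_scalable_target(
      ServiceNamespace="dynamodb",
      ResourceId="table/GameScores",
      ScalableDimension="dynamodb:table:ReadCapacityUnits",
      MinCapacity=5,
      MaxCapacity=100,
  )

  # Track ~70% consumed-to-provisioned read capacity utilization
  autoscaling.put_scaling_policy(
      ServiceNamespace="dynamodb",
      ResourceId="table/GameScores",
      ScalableDimension="dynamodb:table:ReadCapacityUnits",
      PolicyName="GameScoresReadScaling",
      PolicyType="TargetTrackingScaling",
      TargetTrackingScalingPolicyConfiguration={
          "TargetValue": 70.0,
          "PredefinedMetricSpecification": {
              "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
          },
      },
  )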

NOTE – Provisioned mode, especially the capacity calculations, is covered in the AWS Certified Developer – Associate exam (DVA-C01). On-demand capacity mode is a more recent enhancement and does not yet feature in the exams.

Provisioned Mode

  • Provisioned mode requires you to specify the number of reads and writes per second as required by the application
  • Provisioned throughput is the maximum amount of capacity that an application can consume from a table or index
  • If the provisioned throughput capacity on a table or index is exceeded, it is subject to request throttling
  • Provisioned mode provides the following capacity units 
    • Read Capacity Units (RCU)
      • Total number of read capacity units required depends on the item size, and the consistent read model (eventually or strongly)
      • one RCU represents
        • two eventually consistent reads per second for an item up to 4 KB in size (i.e. up to 8 KB read per second)
        • one strongly consistent read per second for an item up to 4 KB in size, i.e. 2x the cost of eventually consistent reads
        • Transactional read requests require two read capacity units to perform one read per second for items up to 4 KB, i.e. 2x the cost of strongly consistent reads
      • DynamoDB must consume additional read capacity units for items greater than 4 KB e.g. an 8 KB item requires 2 read capacity units to sustain one strongly consistent read per second, 1 read capacity unit for an eventually consistent read, or 4 read capacity units for a transactional read request
      • Item size is rounded up to the next 4 KB boundary e.g. a 6 KB and an 8 KB item require the same RCUs (both count as two 4 KB blocks)
    • Write Capacity Units (WCU)
      • Total number of write capacity units required depends on the item size only
      • one WCU represents one write per second for an item up to 1 KB in size
      • Transactional write requests require 2 write capacity units to perform one write per second for items up to 1 KB, i.e. 2x the cost of a standard write
      • DynamoDB must consume additional write capacity units for items greater than 1 KB e.g. a 2 KB item requires 2 write capacity units to sustain one standard write request per second, or 4 write capacity units for a transactional write request
      • Item size is rounded up to the next 1 KB boundary e.g. a 0.5 KB and a 1 KB item need the same WCU
  • Provisioned capacity mode might be best for use cases where you
    • Have predictable application traffic
    • Run applications whose traffic is consistent or ramps gradually
    • Can forecast capacity requirements to control costs

Provisioned Mode Examples

  • DynamoDB table with provisioned capacity of 10 RCUs and 10 WCUs can support
    • Read throughput
      • Eventual consistency = 4KB * 10 * 2 = 80KB/sec
      • Strong consistency = 4KB * 10 = 40KB/sec
      • Transactional consistency = 4KB * 10 * 1/2 = 20KB/sec
    • Write throughput
      • Standard (non-transactional) writes = 10 * 1KB = 10KB/sec
      • Transactional writes = 10 * 1KB * 1/2 = 5KB/sec
  • Capacity units required for reading and writing 15KB item
    • Read capacity units – 15KB rounded to 4 blocks of 4KB = 4 RCUs
      • Eventual consistency 4 RCUs * 1/2 = 2 RCUs
      • Strong consistency 4 RCUs * 1 = 4 RCUs
      • Transactional consistency 4 RCUs * 2 = 8 RCUs
    • Write capacity units 15KB = 15 WCUs
      • Standard (non-transactional) write 15 WCUs * 1 = 15 WCUs
      • Transactional write 15 WCUs * 2 = 30 WCUs
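A small Python helper, a sketch of the rounding and consistency rules above, that reproduces the 15 KB worked example; the function names are illustrative only:

  import math

  def read_capacity_units(item_size_kb, reads_per_sec, mode="strong"):
      # RCUs: item size rounds up to 4 KB blocks; eventually consistent reads
      # cost half, transactional reads cost double
      blocks = math.ceil(item_size_kb / 4)
      factor = {"eventual": 0.5, "strong": 1, "transactional": 2}[mode]
      return blocks * factor * reads_per_sec

  def write_capacity_units(item_size_kb, writes_per_sec, transactional=False):
      # WCUs: item size rounds up to 1 KB blocks; transactional writes cost double
      blocks = math.ceil(item_size_kb)
      return blocks * (2 if transactional else 1) * writes_per_sec

  # 15 KB item, one read or write per second
  assert read_capacity_units(15, 1, "eventual") == 2
  assert read_capacity_units(15, 1, "strong") == 4
  assert read_capacity_units(15, 1, "transactional") == 8
  assert write_capacity_units(15, 1) == 15
  assert write_capacity_units(15, 1, transactional=True) == 30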

On-demand Mode

  • On-demand mode provides a flexible billing option capable of serving thousands of requests per second without capacity planning.
  • No need to specify the expected read and write throughput.
  • Charged for only the reads and writes that the application performs on the tables in terms of read request units and write request units.
  • Offers pay-per-request pricing for read and write requests so that you pay only for what you use.
  • DynamoDB adapts rapidly to accommodate the changing load.
  • DynamoDB on-demand uses request units, which are similar to provisioned capacity units.
  • On-demand mode does not support reserved capacity.
  • On-demand capacity mode might be best for use cases where you
    • Create new tables with unknown workloads
    • Have unpredictable application traffic
    • Prefer the ease of paying for only what you use
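A minimal boto3 sketch of creating an on-demand table (the name is hypothetical); no ProvisionedThroughput is specified, only the pay-per-request billing mode:

  import boto3

  dynamodb = boto3.client("dynamodb")

  dynamodb.create_table(
      TableName="ClickStream",
      AttributeDefinitions=[{"AttributeName": "SessionId", "AttributeType": "S"}],
      KeySchema=[{"AttributeName": "SessionId", "KeyType": "HASH"}],
      BillingMode="PAY_PER_REQUEST",  # on-demand capacity mode
  )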

DynamoDB Throttling

  • DynamoDB distributes the data across partitions, and the provisioned throughput capacity is distributed equally across these partitions; these are physical partitions, not the logical partitions based on the primary key.
  • Each partition on a DynamoDB table is subject to a hard limit of 1,000 write capacity units and 3,000 read capacity units.
  • DynamoDB would throttle requests
    • If the workload is unevenly distributed across partitions, or if the workload relies on short periods of time with high usage (a burst of read or write activity), the table might be throttled.
    • When data access is imbalanced, a hot partition can receive a higher volume of read and write traffic compared to other partitions leading to throttling errors on that partition.
    • If the write throughput capacity on a GSI is not sufficient, it leads to throttling on the GSI, which also affects the base table.
  • To avoid and handle throttling issues, you can
    • Distribute read and write operations as evenly as possible across your table. A hot partition can degrade the overall performance of your table.
    • Implement a caching solution. If the workload is mostly read access to static data, then query results can be delivered much faster if the data is in a well‑designed cache rather than in a database. DynamoDB Accelerator (DAX) is a caching service that offers fast in‑memory performance for your application. ElastiCache can be used as well.
    • Implement error retries and exponential backoff. Exponential backoff can improve an application’s reliability by using progressively longer waits between retries. If using an AWS SDK, this logic is built‑in.
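The AWS SDKs already retry throttled requests with exponential backoff; a minimal boto3 sketch of tuning that behaviour through the botocore client configuration:

  import boto3
  from botocore.config import Config

  # Up to 10 attempts with adaptive (client-side rate-limited) retries
  config = Config(retries={"max_attempts": 10, "mode": "adaptive"})
  dynamodb = boto3.client("dynamodb", config=config)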

DynamoDB Burst Capacity

  • DynamoDB provides some flexibility in the per-partition throughput provisioning by providing burst capacity.
  • If a partition’s throughput is not fully used, DynamoDB reserves a portion of that unused capacity for later bursts of throughput to handle usage spikes.
  • DynamoDB currently retains up to 5 minutes (300 seconds) of unused read and write capacity.
  • During an occasional burst of read or write activity, these extra capacity units can be consumed quickly – even faster than the per-second provisioned throughput capacity that you’ve defined for your table.
  • DynamoDB can also consume burst capacity for background maintenance and other tasks without prior notice.

DynamoDB Adaptive Capacity

  • DynamoDB Adaptive capacity is a feature that enables DynamoDB to run imbalanced workloads indefinitely.
  • Adaptive capacity enables the application to continue read/write to hot partitions without being throttled, provided that traffic does not exceed the table’s total provisioned capacity or the partition’s maximum capacity.
  • It minimizes throttling due to throughput exceptions.
  • It also helps reduce costs by enabling the provisioning of only the needed throughput capacity.
  • Adaptive capacity is enabled automatically for every DynamoDB table, at no additional cost.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. You need to migrate 10 million records in one hour into DynamoDB. All records are 1.5KB in size. The data is evenly distributed across the partition key. How many write capacity units should you provision during this batch load?
    1. 6667
    2. 4166
    3. 5556 (1.5KB rounds up to 2KB = 2 write units per item; 2 * 10 million / 3600 secs ≈ 5556)
    4. 2778
  2. A meteorological system monitors 600 temperature gauges, obtaining temperature samples every minute and saving each sample to a DynamoDB table. Each sample involves writing 1K of data and the writes are evenly distributed over time. How much write throughput is required for the target table?
    1. 1 write capacity unit
    2. 10 write capacity units ( 1 write unit for 1K * 600 gauges/60 secs)
    3. 60 write capacity units
    4. 600 write capacity units
    5. 3600 write capacity units
  3. A company is building a system to collect sensor data from its 36000 trucks, which is stored in DynamoDB. The trucks emit 1KB of data once every hour. How much write throughput is required for the target table. Choose an answer from the options below
    1. 10
    2. 60
    3. 600
    4. 150
  4. A company is using DynamoDB to design storage for their IoT project to store sensor data. Which combination would give the highest throughput?
    1. 5 Eventual Consistent reads capacity with Item Size of 4KB (40KB/s)
    2. 15 Eventual Consistent reads capacity with Item Size of 1KB (30KB/s)
    3. 5 Strongly Consistent reads capacity with Item Size of 4KB (20KB/s)
    4. 15 Strongly Consistent reads capacity with Item Size of 1KB (15KB/s)
  5. If your table item’s size is 3KB and you want to have 90 strongly consistent reads per second, how many read capacity units will you need to provision on the table? Choose the correct answer from the options below
    1. 90
    2. 45
    3. 10
    4. 19

AWS DynamoDB Security

DynamoDB Security

  • DynamoDB provides a highly durable storage infrastructure for mission-critical and primary data storage.
  • Data is redundantly stored on multiple devices across multiple facilities within an AWS Region.
  • AWS handles basic security tasks like guest operating system (OS) and database patching, firewall configuration, and disaster recovery.
  • DynamoDB protects user data stored at rest and in transit between on-premises clients and DynamoDB, and between DynamoDB and other AWS resources within the same AWS Region.
  • Fine-Grained Access Control (FGAC) gives a high degree of control over data in the table.
  • FGAC helps control who (caller) can access which items or attributes of the table and perform what actions (read/write capability).
  • FGAC is integrated with IAM, which manages the security credentials and the associated permissions.
  • VPC Endpoints allow private connectivity from within a VPC only to DynamoDB.

DynamoDB Encryption

  • DynamoDB Security supports both encryption at rest and in transit.

Encryption in Transit

  • DynamoDB Data in Transit encryption can be done by encrypting sensitive data on the client side or using encrypted connections (TLS).
  • DAX supports encryption in transit, ensuring that all requests and responses between the application and the cluster are encrypted by Transport Layer Security (TLS), and connections to the cluster can be authenticated by verification of a cluster X.509 certificate.
  • All the data in DynamoDB is encrypted in transit
  • Communications to and from DynamoDB use the HTTPS protocol, which protects network traffic with SSL/TLS encryption.
  • Data can also be protected using client-side encryption

Encryption at Rest

  • Encryption at rest enables encryption for the data persisted (data at rest) in the DynamoDB tables.
  • Encryption at rest includes the base tables, primary key, local and global secondary indexes, streams, global tables, backups, and DynamoDB Accelerator (DAX) clusters.
  • Encryption at rest is enabled on all DynamoDB table data and cannot be disabled.
  • Encryption at rest automatically integrates with AWS KMS for managing the keys used for encrypting the tables.
  • Encryption at rest also supports the following KMS keys
    • AWS owned CMK – Default encryption type. The key is owned by DynamoDB (no additional charge).
    • AWS managed CMK – the key is stored in your account and is managed by AWS KMS (AWS KMS charges apply).
    • Customer managed CMK – the key is stored in your account and is created, owned, and managed by you. You have full control over the KMS key (AWS KMS charges apply).
  • Encryption at rest is enabled when a table is created, and the encryption keys can be switched for an existing table.
  • DynamoDB streams can be used with encrypted tables and are always encrypted with a table-level encryption key.
  • On-Demand Backups of encrypted DynamoDB tables are encrypted using S3’s Server-Side Encryption
  • Encryption at rest encrypts the data using 256-bit AES encryption.
  • DAX clusters cannot use customer-managed key encryption.
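A minimal boto3 sketch of creating a table encrypted at rest with a customer managed KMS key; the table name and key ARN are placeholders, and omitting SSESpecification keeps the default AWS owned key:

  import boto3

  dynamodb = boto3.client("dynamodb")

  dynamodb.create_table(
      TableName="Payroll",
      AttributeDefinitions=[{"AttributeName": "EmployeeId", "AttributeType": "S"}],
      KeySchema=[{"AttributeName": "EmployeeId", "KeyType": "HASH"}],
      BillingMode="PAY_PER_REQUEST",
      SSESpecification={
          "Enabled": True,
          "SSEType": "KMS",
          # Placeholder ARN for a customer managed key
          "KMSMasterKeyId": "arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",
      },
  )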

DynamoDB Encryption Client

  • DynamoDB Encryption Client is a software library that helps protect the table data before sending it to DynamoDB.
  • Encrypting the sensitive data in transit and at rest helps ensure that the plaintext data isn’t available to any third party, including AWS.
  • helps in end-to-end data encryption.
  • encrypts the attribute values you choose, but does not encrypt the entire item, the attribute names, or the primary key values.

VPC Endpoints

  • By default, communications to and from DynamoDB use the HTTPS protocol, which protects network traffic by using SSL/TLS encryption.
  • VPC endpoint for DynamoDB enables EC2 instances in the VPC to use their private IP addresses to access DynamoDB with no exposure to the public internet.
  • Traffic between the VPC and the AWS service does not leave the Amazon network.
  • EC2 instances do not require public IP addresses, an internet gateway, a NAT device, or a virtual private gateway in the VPC.
  • VPC endpoint policies can be used to control access to DynamoDB.
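A minimal boto3 sketch of creating a DynamoDB gateway endpoint; the VPC ID, route table ID, and region are placeholders:

  import boto3

  ec2 = boto3.client("ec2", region_name="us-east-1")

  ec2.create_vpc_endpoint(
      VpcEndpointType="Gateway",
      VpcId="vpc-0abc1234",                              # placeholder VPC
      ServiceName="com.amazonaws.us-east-1.dynamodb",
      RouteTableIds=["rtb-0def5678"],                    # placeholder route table
  )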

DynamoDB VPC Endpoint

DynamoDB Security Best Practices

  • DynamoDB encrypts at rest all user data stored in tables, indexes, streams, and backups using encryption keys stored in KMS.
  • DynamoDB can be configured to use an AWS owned key (default encryption type), an AWS managed key, or a customer managed key to encrypt user data.
  • Use IAM Roles to authenticate access to DynamoDB
  • Use VPC endpoints and endpoint policies to access DynamoDB
  • DynamoDB Encryption Client is a software library that helps in client-side encryption and protects the table data before you send it to DynamoDB.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. What are the services supported by VPC endpoints, using the Gateway endpoint type?
    1. Amazon EFS
    2. Amazon DynamoDB
    3. Amazon Glacier
    4. Amazon SQS

AWS DynamoDB Secondary Indexes

DynamoDB Secondary Indexes - GSI vs LSI

  • DynamoDB provides fast access to items in a table by specifying primary key values
  • DynamoDB Secondary indexes on a table allow efficient access to data with attributes other than the primary key.
  • DynamoDB Secondary indexes
    • is a data structure that contains a subset of attributes from a table.
    • is associated with exactly one table, from which it obtains its data.
    • requires an alternate key for the index partition key and sort key.
    • additionally can define projected attributes that are copied from the base table into the index along with the primary key attributes.
    • is automatically maintained by DynamoDB.
    • indexes on that table are also updated for any addition, modification, or deletion of items in the base table.
    • helps reduce the size of the data as compared to the main table, depending upon the projected attributes, and hence helps improve provisioned throughput performance
    • are automatically maintained as sparse indexes: an item appears in an index only if the index key attributes are present in the item, making queries against an index very efficient
  • DynamoDB Secondary indexes support two types
    • Global secondary index – an index with a partition key and a sort key that can be different from those on the base table.
    • Local secondary index – an index that has the same partition key as the base table, but a different sort key.

Global Secondary Indexes – GSI

  • DynamoDB creates and maintains indexes for the primary key attributes for efficient access to data in the table, which allows applications to quickly retrieve data by specifying primary key values.
  • Global Secondary Indexes – GSI are indexes that contain partition or composite partition-and-sort keys that can be different from the keys in the table on which the index is based.
  • Global secondary index is considered “global” because queries on the index can span all items in a table, across all partitions.
  • Multiple secondary indexes can be created on a table, and queries issued against these indexes.
  • Applications benefit from having one or more secondary keys available to allow efficient access to data with attributes other than the primary key.
  • GSIs support non-unique attributes, which increases query flexibility by enabling queries against any non-key attribute in the table
  • GSIs support only eventual consistency. DynamoDB automatically and asynchronously propagates item additions, updates, and deletes to a GSI when the corresponding changes are made to the base table.
  • Data in a GSI consists of the GSI alternate key, the table’s primary key, and the attributes that are projected, or copied, from the table into the index.
  • Attributes that are part of an item in the table, but not part of the GSI key, the table’s primary key, or the projected attributes, are not returned when querying the GSI.
  • GSIs manage throughput independently of the table they are based on and the provisioned throughput for the table and each associated GSI needs to be specified at the creation time.
    • Read provisioned throughput
      • one Read Capacity Unit provides two eventually consistent reads per second for items < 4KB in size.
    • Write provisioned throughput
      • one Write Capacity Unit provides one write per second for items < 1KB in size.
      • consumes 1 write capacity unit if,
        • a new item is inserted into the table
        • existing item is deleted from the table
        • existing items are updated for projected attributes
      • consumes 2 write capacity units if
        • existing item is updated for key attributes, which results in deletion and addition of the new item into the index
  • Throttling on a GSI affects the base table depending on whether the throttling is for read or write activity:
    • When a GSI has insufficient read capacity, the base table isn’t affected.
    • When a GSI has insufficient write capacity, write operations won’t succeed on the base table or any of its GSIs.
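A minimal boto3 sketch of adding a GSI to an existing (hypothetical) table and querying it; queries against a GSI are eventually consistent only:

  import boto3

  dynamodb = boto3.client("dynamodb")

  # GSIs can be added after the table exists
  dynamodb.update_table(
      TableName="GameScores",
      AttributeDefinitions=[
          {"AttributeName": "GameTitle", "AttributeType": "S"},
          {"AttributeName": "TopScore", "AttributeType": "N"},
      ],
      GlobalSecondaryIndexUpdates=[{
          "Create": {
              "IndexName": "GameTitleIndex",
              "KeySchema": [
                  {"AttributeName": "GameTitle", "KeyType": "HASH"},
                  {"AttributeName": "TopScore", "KeyType": "RANGE"},
              ],
              "Projection": {"ProjectionType": "KEYS_ONLY"},
              "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
          }
      }],
  )

  # Query the index by specifying IndexName
  dynamodb.query(
      TableName="GameScores",
      IndexName="GameTitleIndex",
      KeyConditionExpression="GameTitle = :t",
      ExpressionAttributeValues={":t": {"S": "Alien Adventure"}},
  )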

Local Secondary Indexes (LSI)

  • Local secondary indexes are indexes that have the same partition key as the table, but a different sort key.
  • Local secondary index is “local” because every partition of a local secondary index is scoped to a table partition that has the same partition key.
  • LSI allows searching using an alternate sort key in addition to the table’s sort key, thus expanding the number of attributes over which queries can be conducted efficiently
  • LSIs are updated automatically when the base table is updated, and reads support strongly consistent, eventually consistent, and transactional consistency options.
  • LSIs can only be queried via the Query API
  • LSIs cannot be added to existing tables at this time
  • LSIs cannot be modified once created at this time
  • LSIs cannot be removed from a table once created at this time
  • LSI consumes provisioned throughput capacity as part of the table with which it is associated
    • Read Provisioned throughput
      • if the data read involves only indexed and projected attributes
        • provides one Read Capacity Unit with one strongly consistent read (or two eventually consistent reads) per second for items < 4KB
        • data size includes the index and projected attributes only
      • if the data read involves non-projected attributes
        • consumes double the read capacity, as one read fetches from the index and a second read fetches the entire item from the table (not just the non-projected attribute)
    • Write provisioned throughput
      • consumes 1 write capacity unit if,
        • a new item is inserted into the table
        • existing item is deleted from the table
        • existing items are updated for projected attributes
      • consumes 2 write capacity units if
        • existing item is updated for key attributes, which results in deletion and addition of the new item into the index
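A minimal boto3 sketch of defining an LSI; because LSIs cannot be added later, the index is declared as part of create_table (names are hypothetical). The LSI shares the table's partition key and adds an alternate sort key:

  import boto3

  dynamodb = boto3.client("dynamodb")

  dynamodb.create_table(
      TableName="Payroll",
      AttributeDefinitions=[
          {"AttributeName": "OfficeId", "AttributeType": "S"},
          {"AttributeName": "EmployeeId", "AttributeType": "S"},
          {"AttributeName": "Name", "AttributeType": "S"},
      ],
      KeySchema=[
          {"AttributeName": "OfficeId", "KeyType": "HASH"},
          {"AttributeName": "EmployeeId", "KeyType": "RANGE"},
      ],
      LocalSecondaryIndexes=[{
          "IndexName": "NameIndex",
          "KeySchema": [
              {"AttributeName": "OfficeId", "KeyType": "HASH"},  # same partition key as the table
              {"AttributeName": "Name", "KeyType": "RANGE"},     # alternate sort key
          ],
          "Projection": {"ProjectionType": "ALL"},
      }],
      ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
  )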

Global Secondary Index vs Local Secondary Index

DynamoDB Secondary Indexes - GSI vs LSI

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. In DynamoDB, a secondary index is a data structure that contains a subset of attributes from a table, along with an alternate key to support ____ operations.
    1. None of the above
    2. Both
    3. Query
    4. Scan
  2. In regard to DynamoDB, what is the Global secondary index?
    1. An index with a partition and sort key that can be different from those on the table
    2. An index that has the same sort key as the table, but a different partition key
    3. An index that has the same partition key and sort key as the table
    4. An index that has the same partition key as the table, but a different sort key
  3. In regard to DynamoDB, can I modify the index once it is created?
    1. Yes, if it is a primary hash key index
    2. Yes, if it is a Global secondary index (AWS now allows you to modify global secondary indexes after creation)
    3. No
    4. Yes, if it is a local secondary index
  4. When thinking of DynamoDB, what is true of Global Secondary Key properties?
    1. Both the partition key and sort key can be different from the table.
    2. Only the partition key can be different from the table.
    3. Either the partition key or the sort key can be different from the table, but not both.
    4. Only the sort key can be different from the table.

AWS DynamoDB Advanced Features

  • DynamoDB Secondary indexes on a table allow efficient access to data with attributes other than the primary key.
  • DynamoDB Time to Live – TTL enables a per-item timestamp to determine when an item is no longer needed.
  • DynamoDB cross-region replication allows identical copies (called replicas) of a DynamoDB table (called master table) to be maintained in one or more AWS regions.
  • DynamoDB Global Tables is a new multi-master, cross-region replication capability of DynamoDB to support data access locality and regional fault tolerance for database workloads.
  • DynamoDB Streams provides a time-ordered sequence of item-level changes made to data in a table.
  • DynamoDB Triggers (just like database triggers) are a feature that allows the execution of custom actions based on item-level updates on a table.
  • DynamoDB Accelerator – DAX is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from ms to µs – even at millions of requests per second.
  • VPC Gateway Endpoints provide private access to DynamoDB from within a VPC without the need for an internet gateway or NAT gateway.

DynamoDB Secondary Indexes

  • DynamoDB Secondary indexes on a table allow efficient access to data with attributes other than the primary key.
  • Global secondary index – an index with a partition key and a sort key that can be different from those on the base table.
  • Local secondary index – an index that has the same partition key as the base table, but a different sort key.

DynamoDB TTL

  • DynamoDB Time to Live (TTL) enables a per-item timestamp to determine when an item is no longer needed.
  • After the date and time of the specified timestamp, DynamoDB deletes the item from the table without consuming any write throughput.
  • DynamoDB TTL is provided at no extra cost and can help reduce data storage by retaining only required data.
  • Items that are deleted from the table are also removed from any local secondary index and global secondary index in the same way as a DeleteItem operation.
  • Expired items get removed from the table and indexes within about 48 hours.
  • DynamoDB Stream tracks the delete operation as a system delete and not a regular delete.
  • TTL is useful if the stored items lose relevance after a specific time, e.g.
    • Remove user or sensor data after a year of inactivity in an application
    • Archive expired items to an S3 data lake via DynamoDB Streams and AWS Lambda.
    • Retain sensitive data for a certain amount of time according to contractual or regulatory obligations.
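A minimal boto3 sketch of enabling TTL on a hypothetical sessions table and writing an item that expires in a week; the TTL attribute holds an epoch-seconds timestamp:

  import time
  import boto3

  dynamodb = boto3.client("dynamodb")

  dynamodb.update_time_to_live(
      TableName="UserSessions",
      TimeToLiveSpecification={"Enabled": True, "AttributeName": "ExpiresAt"},
  )

  dynamodb.put_item(
      TableName="UserSessions",
      Item={
          "SessionId": {"S": "abc123"},
          "ExpiresAt": {"N": str(int(time.time()) + 7 * 24 * 3600)},  # expire one week from now
      },
  )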

DynamoDB Cross-region Replication

  • DynamoDB cross-region replication allows identical copies (called replicas) of a DynamoDB table (called master table) to be maintained in one or more AWS regions.
  • Writes to the table will be automatically propagated to all replicas.
  • Cross-region replication currently supports a single master mode. A single master has one master table and one or more replica tables.
  • Read replicas are updated asynchronously as DynamoDB acknowledges a write operation as successful once it has been accepted by the master table. The write will then be propagated to each replica with a slight delay.
  • Cross-region replication can be helpful in scenarios
    • Efficient disaster recovery, in case a data center failure occurs.
    • Faster reads, for customers in multiple regions by delivering data faster by reading a DynamoDB table from the closest AWS data center.
    • Easier traffic management, to distribute the read workload across tables and thereby consume less read capacity in the master table.
    • Easy regional migration, by promoting a read replica to master
    • Live data migration, to replicate data and when the tables are in sync, switch the application to write to the destination region
  • Cross-region replication costing depends on
    • Provisioned throughput (Writes and Reads)
    • Storage for the replica tables.
    • Data Transfer across regions
    • Reading data from DynamoDB Streams to keep the tables in sync.
    • Cost of EC2 instances provisioned, depending upon the instance types and region, to host the replication process.
  • NOTE: Before DynamoDB Streams and the out-of-the-box cross-region replication support, cross-region replication was performed by defining an AWS Data Pipeline job that used EMR internally to transfer the data.

DynamoDB Global Tables

  • DynamoDB Global Tables is a multi-master, active-active, cross-region replication capability of DynamoDB to support data access locality and regional fault tolerance for database workloads.
  • Applications can now perform reads and writes to DynamoDB in AWS regions around the world, with changes in any region propagated to every region where a table is replicated.
  • Global Tables help in building applications that take advantage of data locality to reduce overall latency.
  • Global Tables support eventual consistency and strong consistency for same-region reads, but only eventual consistency for cross-region reads.
  • Global Tables replicates data among regions within a single AWS account and currently does not support cross-account access.
  • Global Tables uses the Last Write Wins approach for conflict resolution.
  • Global Tables requires DynamoDB streams enabled with New and Old image settings.
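A minimal boto3 sketch using the original (2017.11.29) global tables API; it assumes a table with the same name already exists in each listed region with streams enabled (New and Old images). Newer table versions manage replicas through UpdateTable instead:

  import boto3

  dynamodb = boto3.client("dynamodb", region_name="us-east-1")

  dynamodb.create_global_table(
      GlobalTableName="GameScores",
      ReplicationGroup=[
          {"RegionName": "us-east-1"},
          {"RegionName": "eu-west-1"},
      ],
  )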

DynamoDB Streams

  • DynamoDB Streams provides a time-ordered sequence of item-level changes made to data in a table.
  • DynamoDB Streams stores the data for the last 24 hours, after which they are erased.
  • DynamoDB Streams maintains an ordered sequence of the events per item; however, the sequence across items is not maintained.
  • Example
    • For example, suppose that you have a DynamoDB table tracking high scores for a game and that each item in the table represents an individual player. If you make the following three updates in this order:
      • Update 1: Change Player 1’s high score to 100 points
      • Update 2: Change Player 2’s high score to 50 points
      • Update 3: Change Player 1’s high score to 125 points
    • DynamoDB Streams will maintain the order of the Player 1 score events. However, it would not maintain order across players, so the position of the Player 2 event relative to the two Player 1 events is not guaranteed.
  • DynamoDB Streams APIs help developers consume updates and receive the item-level data before and after items are changed.
  • DynamoDB Streams allow reads at up to twice the rate of the provisioned write capacity of the DynamoDB table.
  • DynamoDB Streams have to be enabled on a per-table basis.
  • DynamoDB streams support Encryption at rest to encrypt the data.
  • DynamoDB Streams is designed for no duplicates, so that every update made to the table will be represented exactly once in the stream.
  • DynamoDB Streams writes stream records in near-real time so that applications can consume these streams and take action based on the contents.
  • DynamoDB streams can be used for multi-region replication to keep other data stores up-to-date with the latest changes to DynamoDB or to take actions based on the changes made to the table
  • DynamoDB stream records can be processed using Kinesis Data Streams, Lambda, or a KCL application (a direct polling sketch follows below).
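
A hedged sketch of polling a table's stream directly with the low-level boto3 dynamodbstreams client; the table name Scores is an illustrative assumption carried over from the sketch above:

```python
import boto3

ddb = boto3.client("dynamodb", region_name="us-east-1")
streams = boto3.client("dynamodbstreams", region_name="us-east-1")

stream_arn = ddb.describe_table(TableName="Scores")["Table"]["LatestStreamArn"]
shards = streams.describe_stream(StreamArn=stream_arn)["StreamDescription"]["Shards"]

for shard in shards:
    iterator = streams.get_shard_iterator(
        StreamArn=stream_arn,
        ShardId=shard["ShardId"],
        ShardIteratorType="TRIM_HORIZON",  # stream records are retained for 24 hours
    )["ShardIterator"]
    for record in streams.get_records(ShardIterator=iterator)["Records"]:
        # eventName is INSERT, MODIFY, or REMOVE; "dynamodb" holds keys and images
        print(record["eventName"], record["dynamodb"].get("Keys"))
```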

DynamoDB Triggers

  • DynamoDB Triggers (just like database triggers) are a feature that allows the execution of custom actions based on item-level updates on a table.
  • DynamoDB triggers can be used in scenarios like sending notifications, updating an aggregate table, and connecting DynamoDB tables to other data sources.
  • DynamoDB Trigger flow
    • Custom logic for a DynamoDB trigger is stored in an AWS Lambda function as code.
    • A trigger for a given table can be created by associating an AWS Lambda function to the stream (via DynamoDB Streams) on a table.
    • When the table is updated, the updates are published to DynamoDB Streams.
    • In turn, AWS Lambda reads the updates from the associated stream and executes the code in the function (see the handler sketch below).
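
A minimal sketch of the Lambda side of such a trigger; the record structure is the standard DynamoDB Streams event format, while the handler logic itself is only illustrative:

```python
def lambda_handler(event, context):
    """Invoked by the DynamoDB stream event source mapping with a batch of records."""
    for record in event["Records"]:
        event_name = record["eventName"]                 # INSERT, MODIFY, or REMOVE
        keys = record["dynamodb"]["Keys"]
        new_image = record["dynamodb"].get("NewImage")   # present for INSERT and MODIFY
        old_image = record["dynamodb"].get("OldImage")   # present for MODIFY and REMOVE
        # Custom action, e.g. send a notification or update an aggregate table.
        print(event_name, keys, new_image, old_image)
```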

DynamoDB Backup and Restore

  • DynamoDB on-demand backup helps create full backups of tables for long-term retention and archiving for regulatory compliance needs (a minimal backup and restore sketch follows this list).
  • Backup and restore actions run with no impact on table performance or availability.
  • Backups are preserved regardless of table deletion and retained until they are explicitly deleted.
  • On-demand backups are cataloged, and discoverable.
  • On-demand backups can be created using
    • DynamoDB
      • DynamoDB on-demand backups cannot be copied to a different account or Region.
    • AWS Backup (Recommended)
      • is a fully managed data protection service that makes it easy to centralize and automate backups across AWS services, in the cloud and on-premises
      • provides enhanced backup features
      • can configure backup schedules and policies and monitor backup activity for AWS resources and on-premises workloads in one place
      • can copy on-demand backups across AWS accounts and Regions
      • can encrypt backups using an AWS KMS key that is independent of the DynamoDB table encryption key
      • can apply a write-once-read-many (WORM) setting to backups using the AWS Backup Vault Lock policy
      • can add cost allocation tags to on-demand backups
      • can transition on-demand backups to cold storage for lower costs
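
A hedged sketch, assuming boto3 and illustrative table and backup names, of the native DynamoDB on-demand backup and restore path mentioned above:

```python
import boto3

ddb = boto3.client("dynamodb")

# Full on-demand backup; runs without impacting table performance or availability.
backup = ddb.create_backup(TableName="Scores", BackupName="Scores-2024-01-01")
backup_arn = backup["BackupDetails"]["BackupArn"]

# Restore always creates a brand-new table from the backup.
ddb.restore_table_from_backup(
    TargetTableName="Scores-restored",
    BackupArn=backup_arn,
)
```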

DynamoDB PITR – Point-In-Time Recovery

  • DynamoDB point-in-time recovery – PITR enables automatic, continuous, incremental backup of the table with per-second granularity.
  • Tables with PITR enabled that were deleted within the preceding 35 days can be recovered and restored to their state just before deletion.
  • PITR helps protect against accidental writes and deletes.
  • PITR can back up tables with hundreds of terabytes of data with no impact on the performance or availability of production applications (a minimal enable-and-restore sketch follows).
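
A minimal sketch, with illustrative table names and timestamp, of enabling PITR and then restoring to an earlier point in time as a new table:

```python
from datetime import datetime, timedelta, timezone
import boto3

ddb = boto3.client("dynamodb")

# Turn on continuous, incremental backups for the table.
ddb.update_continuous_backups(
    TableName="Scores",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# Restore to a new table; the restore point must fall within the PITR window
# (up to the preceding 35 days).
ddb.restore_table_to_point_in_time(
    SourceTableName="Scores",
    TargetTableName="Scores-before-bad-deploy",
    RestoreDateTime=datetime.now(timezone.utc) - timedelta(hours=1),
)
```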

DynamoDB Accelerator – DAX

  • DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from milliseconds to microseconds – even at millions of requests per second.
  • DAX is intended for high-performance read applications. As a write-through cache, DAX writes through to DynamoDB and updates the item cache, so writes are immediately reflected.
  • DAX, as a managed service, handles cache invalidation, data population, and cluster management.
  • DAX provides an API compatible with DynamoDB, so it requires only minimal functional changes to use with an existing application.
  • DAX saves costs by reducing the read load (RCU) on DynamoDB.
  • DAX helps prevent hot partitions.
  • DAX only supports eventual consistency, and strong consistency requests are passed-through to DynamoDB.
  • DAX is fault-tolerant and scalable.
  • DAX cluster has a primary node and zero or more read-replica nodes. Upon a failure for a primary node, DAX will automatically failover and elect a new primary. For scaling, add or remove read replicas.
  • DAX supports server-side encryption.
  • DAX also supports encryption in transit, ensuring that all requests and responses between the application and the cluster are encrypted by TLS, and connections to the cluster can be authenticated by verification of a cluster x509 certificate (a client sketch follows).
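
A hedged sketch using the amazon-dax-client Python package; the cluster endpoint and table name are placeholders, and constructor details may differ slightly between package versions:

```python
import amazondax

# Drop-in replacement for the boto3 DynamoDB resource, pointed at the DAX cluster
# (placeholder endpoint; "daxs://" denotes an encryption-in-transit endpoint).
dax = amazondax.AmazonDaxClient.resource(
    endpoint_url="daxs://my-dax-cluster.xxxxx.dax-clusters.us-east-1.amazonaws.com"
)
table = dax.Table("Scores")

# Eventually consistent reads are served from the DAX item cache;
# strongly consistent reads (ConsistentRead=True) pass through to DynamoDB.
item = table.get_item(Key={"PlayerId": "player-1"}).get("Item")
```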


VPC Endpoints

  • VPC endpoints for DynamoDB (Gateway endpoints, like those for S3) enable private access to DynamoDB from within a VPC without the need for an internet gateway or NAT gateway, which improves privacy and security, especially for sensitive workloads with compliance and audit requirements (a creation sketch follows this list).
  • VPC endpoints for DynamoDB support IAM policies to simplify DynamoDB access control, where access can be restricted to a specific VPC endpoint.
  • VPC endpoints can be created only for Amazon DynamoDB tables in the same AWS Region as the VPC
  • DynamoDB Streams cannot be accessed using VPC endpoints for DynamoDB.
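
A hedged sketch of creating such a Gateway endpoint with boto3; the VPC and route table IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for DynamoDB in the same Region as the VPC; traffic is routed
# privately via entries added to the listed route tables.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```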


AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. What are the services supported by VPC endpoints, using Gateway endpoint type? Choose 2 answers
    1. Amazon S3
    2. Amazon EFS
    3. Amazon DynamoDB
    4. Amazon Glacier
    5. Amazon SQS
  2. A company has setup an application in AWS that interacts with DynamoDB. DynamoDB is currently responding in milliseconds, but the application response guidelines require it to respond within microseconds. How can the performance of DynamoDB be further improved? [SAA-C01]
    1. Use ElastiCache in front of DynamoDB
    2. Use DynamoDB inbuilt caching
    3. Use DynamoDB Accelerator
    4. Use RDS with ElastiCache instead


AWS Aurora Global Database vs. DynamoDB Global Tables

AWS Aurora Global Database vs DynamoDB Global Tables


Aurora Global Database

  • Aurora Global Database provides a relational database supporting MySQL and PostgreSQL
  • Aurora Global Database consists of one primary AWS Region where the data is mastered, and up to five read-only, secondary AWS Regions.
  • Aurora cluster in the primary AWS Region where the data is mastered performs both read and write operations. The clusters in the secondary Regions enable low-latency reads.
  • Aurora replicates data to the secondary AWS Regions with a typical latency of under a second.
  • Secondary clusters can be scaled independently by adding one or more DB instances (Aurora Replicas) to serve read-only workloads.
  • Aurora Global Database uses dedicated infrastructure to replicate the data, leaving database resources available entirely to serve applications.
  • Applications with a worldwide footprint can use reader instances in the secondary AWS Regions for low-latency reads.
  • Typical cross-region replication takes less than 1 second.
  • In case of a disaster or an outage, one of the clusters in a secondary AWS Region can be promoted to take full read/write workloads in under a minute.
  • However, the process is not automatic. If the primary region becomes unavailable, you can manually remove a secondary region from an Aurora Global Database and promote it to take full reads and writes. You will also need to point the application to the newly promoted region.

DynamoDB Global Tables

  • DynamoDB Global Tables provide a NoSQL database.
  • DynamoDB Global tables provide a fully managed, multi-Region, and multi-active database that delivers fast, local, read and write performance for massively scaled, global applications.
  • Global tables replicate the DynamoDB tables automatically across the choice of AWS Regions and enable reads and writes on all instances.
  • DynamoDB global table consists of multiple replica tables (one per AWS Region). Every replica has the same table name and the same primary key schema. When an application writes data to a replica table in one Region, DynamoDB propagates the write to the other replica tables in the other AWS Regions automatically.
  • Global tables enable the read and write of data locally providing single-digit-millisecond latency for the globally distributed application at any scale. It provides asynchronous replication with approximately 1-second replication latency for tables between two or more Regions.
  • DynamoDB Global tables are designed for 99.999% availability.
  • DynamoDB Global Tables enable applications to stay highly available even in the unlikely event of isolation or degradation of an entire Region; the application can redirect to a different Region and perform reads and writes against a different replica table.


AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. A company needs to implement a relational database with a multi-region disaster recovery Recovery Point Objective (RPO) of 1 second and a Recovery Time Objective (RTO) of 1 minute. Which AWS solution can achieve this?
    1. Amazon Aurora Global Database
    2. Amazon DynamoDB global tables
    3. Amazon RDS for MySQL with Multi-AZ enabled
    4. Amazon RDS for MySQL with a cross-Region snapshot copy


AWS DynamoDB Best Practices

AWS DynamoDB Best Practices

Primary Key Design

  • Primary key uniquely identifies each item in a DynamoDB table and can be simple (a partition key only) or composite (a partition key combined with a sort key).
  • Partition key portion of a table’s primary key determines the logical partitions in which a table’s data is stored, which in turn affects the underlying physical partitions.
  • Partition key should have many unique values.
  • Distribute reads / writes uniformly across partitions to avoid hot partitions
  • Store hot and cold data in separate tables
  • Consider all possible query patterns to eliminate the use of scans and filters.
  • Choose a sort key depending on the application’s needs.
  • Avoid hot keys and hot partitions – a partition key design that doesn’t distribute I/O requests evenly can create “hot” partitions that result in throttling and inefficient use of the provisioned I/O capacity (a table-design sketch follows this list).
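
A minimal sketch, with illustrative table and attribute names, of a composite-key table whose high-cardinality partition key spreads I/O evenly while the sort key keeps related items together in sorted order:

```python
import boto3

ddb = boto3.client("dynamodb")

ddb.create_table(
    TableName="Orders",
    AttributeDefinitions=[
        {"AttributeName": "CustomerId", "AttributeType": "S"},
        {"AttributeName": "OrderDate", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "CustomerId", "KeyType": "HASH"},   # partition key: many unique values
        {"AttributeName": "OrderDate", "KeyType": "RANGE"},   # sort key: orders sorted per customer
    ],
    BillingMode="PAY_PER_REQUEST",
)
```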

Secondary Indexes

  • Use indexes based on the application’s query patterns.
  • Local Secondary Indexes – LSIs
    • Use primary key or LSIs when strong consistency is desired
    • Watch for expanding item collections (10 GB size limit!)
  • Global Secondary Indexes – GSIs
    • Use GSIs for finer control over throughput or when your application needs to query using a different partition key.
    • Can be used for eventually consistent read replicas – set up a global secondary index that has the same key schema as the parent table, with some or all of the non-key attributes projected into it.
  • Project fewer attributes – secondary indexes consume storage and provisioned throughput, so keep the index size as small as possible by projecting only the required attributes; this also provides greater performance.
  • Keep the number of indexes to a minimum – don’t create secondary indexes on attributes that aren’t queried often. Indexes that are seldom used contribute to increased storage and I/O costs without improving application performance.
  • Sparse indexes – DynamoDB indexes are sparse, and an index entry is written only if the index sort key value is present in the item. If the sort key doesn’t appear in every table item, the index will not contain that item (a sparse GSI sketch follows this list).
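
A hedged sketch of adding a sparse, KEYS_ONLY GSI to the illustrative Orders table above; the index and attribute names are assumptions:

```python
import boto3

ddb = boto3.client("dynamodb")

# Only items that carry the PendingSince attribute appear in the index,
# and KEYS_ONLY projection keeps the index storage and throughput small.
ddb.update_table(
    TableName="Orders",
    AttributeDefinitions=[
        {"AttributeName": "CustomerId", "AttributeType": "S"},
        {"AttributeName": "PendingSince", "AttributeType": "S"},
    ],
    GlobalSecondaryIndexUpdates=[{
        "Create": {
            "IndexName": "PendingOrders",
            "KeySchema": [
                {"AttributeName": "CustomerId", "KeyType": "HASH"},
                {"AttributeName": "PendingSince", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "KEYS_ONLY"},
        }
    }],
)
```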

Large Items and Attributes

  • DynamoDB currently limits the size of each item stored in a table to 400 KB, which includes both the attribute names and the attribute values (binary length).
  • Use shorter (yet intuitive!) attribute names
  • Keep item size small.
  • Use compression (GZIP or LZO).
  • Split large attributes across multiple items.
  • Store metadata in DynamoDB and large BLOBs or attributes in S3 (see the sketch below).
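
A minimal sketch of that pattern; the bucket, table, and attribute names are illustrative assumptions:

```python
import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("Videos")

video_id = "video-42"
video_bytes = b"..."  # the large media payload, read from wherever it originates

# The large object goes to S3...
s3.put_object(Bucket="my-media-bucket", Key=f"videos/{video_id}.mp4", Body=video_bytes)

# ...and only small metadata plus an S3 pointer is stored in DynamoDB.
table.put_item(Item={
    "VideoId": video_id,
    "Title": "Launch demo",
    "ContentType": "video/mp4",
    "S3Key": f"videos/{video_id}.mp4",
})
```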

Querying and Scanning Data

  • Avoid scans and filters – Scan operations are less efficient than other operations in DynamoDB. A Scan operation always scans the entire table or secondary index and then filters out values to produce the result, essentially adding the extra step of removing data from the result set (contrast the Query and Scan calls in the sketch after this list).
  • Use eventual consistency for reads.
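
A minimal sketch contrasting a targeted Query with a full-table Scan, reusing the illustrative Orders table from above:

```python
import boto3
from boto3.dynamodb.conditions import Attr, Key

table = boto3.resource("dynamodb").Table("Orders")

# Query reads only one customer's items, in sort-key order
# (eventually consistent by default).
orders = table.query(
    KeyConditionExpression=Key("CustomerId").eq("cust-1")
    & Key("OrderDate").begins_with("2024-")
)["Items"]

# Scan reads the whole table and filters afterwards; read capacity is consumed
# for every item examined, not just the ones returned.
pending = table.scan(FilterExpression=Attr("Status").eq("PENDING"))["Items"]
```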

Time Series Data

  • Use a table per day, week, month, etc. for storing time series data – create one table per period, provisioned with the required read and write capacity and the required indexes (see the sketch after this list).
  • Before the end of each period, prebuild the table for the next period. Just as the current period ends, direct event traffic to the new table. Assign names to the tables that specify the periods they have recorded.
  • As soon as a table is no longer being written to, reduce its provisioned write capacity to a lower value (for example, 1 WCU), and provision whatever read capacity is appropriate. Reduce the provisioned read capacity of earlier tables as they age.
  • Archive or drop the tables whose contents are rarely or never needed.
  • Dropping tables is the fastest, simplest and cost-effective method if all the items are to be deleted from the table, without spending time in scanning and deleting each item.
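
A hedged sketch of the table-per-period pattern, with illustrative table names and capacity values, assuming the older tables use provisioned capacity mode:

```python
from datetime import date
import boto3

ddb = boto3.client("dynamodb")

# Route all new writes to the current period's table, e.g. Events-2024-06.
current_table = f"Events-{date.today():%Y-%m}"
print("writing to", current_table)

# Once the previous period's table stops receiving writes, dial its write
# capacity down to the minimum and keep only the read capacity that is needed.
ddb.update_table(
    TableName="Events-2024-05",  # illustrative name of last period's table
    ProvisionedThroughput={"ReadCapacityUnits": 100, "WriteCapacityUnits": 1},
)
```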

Other Best Practices

  • Burst capacity retains up to 5 minutes (300 seconds) of unused read and write capacity, which can be consumed in later bursts of throughput to handle usage spikes.
  • Adaptive capacity helps run imbalanced workloads indefinitely. It minimizes throttling due to throughput exceptions and reduces cost by enabling you to provision only the needed throughput capacity.
  • Deletion protection can keep tables from being accidentally deleted (see the sketch below).
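
A minimal sketch of enabling deletion protection on an existing table; the table name is illustrative, and the same parameter is available on table creation:

```python
import boto3

ddb = boto3.client("dynamodb")

# A table with deletion protection enabled cannot be deleted until the
# setting is turned off again.
ddb.update_table(TableName="Orders", DeletionProtectionEnabled=True)
```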


AWS Storage Options – RDS, DynamoDB & Database on EC2

AWS Storage Options Whitepaper with RDS, DynamoDB & Database on EC2 Cont.

Provides a brief summary for the Ideal Use cases, Anti-Patterns and other factors for Amazon RDS, DynamoDB & Databases on EC2 storage options

Amazon RDS

  • RDS is a web service that provides the capabilities of a MySQL, Oracle, MariaDB, PostgreSQL, or Microsoft SQL Server relational database as a managed, cloud-based service
  • RDS eliminates much of the administrative overhead associated with launching, managing, and scaling your own relational database on Amazon EC2 or in another computing environment.

Ideal Usage Patterns

  • RDS is a great solution for cloud-based fully-managed relational database
  • RDS is also optimal for new applications with structured data that requires more sophisticated querying and joining capabilities than that provided by Amazon’s NoSQL database offering, DynamoDB.
  • RDS provides full compatibility with the databases supported and direct access to native database engines, code and libraries and is ideal for existing applications that rely on these databases

Anti-Patterns

  • Index and query-focused data
    • If the application doesn’t require advanced features such as joins and complex transactions and is more oriented toward indexing and querying data, DynamoDB would be more appropriate for these needs
  • Numerous BLOBs
    • If the application makes heavy use of files (audio files, videos, images, etc.), it is a better choice to use S3 to store the objects instead of the database engine’s BLOB feature, and use RDS or DynamoDB only to save the metadata
  • Automated scalability
    • RDS provides push-button scaling, but it only scales up and has limited scale-out ability. If fully-automated scaling is needed, DynamoDB may be a better choice.
  • Complete control
    • RDS does not provide admin access and does not enable the full feature set of the database engines.
    • So if the application requires complete, OS-level control of the database server with full root or admin login privileges, a self-managed database on EC2 may be a better match.
  • Other database platforms
    • RDS, at this time, provides MySQL, Oracle, MariaDB, PostgreSQL, and SQL Server databases.
    • If any other database platform (such as IBM DB2, Informix, or Sybase) is needed, it should be deployed on a self-managed database on an EC2 instance by using a relational database AMI, or by installing database software on an EC2 instance.

Performance

  • RDS Provisioned IOPS, where the IOPS can be specified when the instance is launched and is guaranteed over the life of the instance, provides a high-performance storage option designed to deliver fast, predictable, and consistent performance for I/O-intensive transactional database workloads

Durability and Availability

  • RDS leverages Amazon EBS volumes as its data store
  • RDS provides database backups, for enhanced durability, which are replicated across multiple AZ’s
    • Automated backups
      • If enabled, RDS will automatically perform a full daily backup of your data during the specified backup window, and will also capture DB transaction logs
    • User initiated backups
      • Users can initiate backups at any time, and they are retained until explicitly deleted by the user
  • RDS Multi-AZ feature enhances both the durability and the availability of the database by synchronously replicating the data between a primary RDS DB instance and a standby instance in another Availability Zone, which prevents data loss.
  • RDS provides a DNS endpoint and, in case of a failure on the primary, it automatically fails over to the standby instance
  • RDS also allows Read replicas for the supported databases, which are replicated asynchronously

Cost Model

  • RDS offers a tiered pricing structure, based on the size of the database instance, the deployment type (Single-AZ/Multi-AZ), and the AWS region.
  • Pricing for RDS is based on several factors: the DB instance hours (per hour), the amount of provisioned database storage (per GB-month and per million I/O requests), additional backup storage (per GB-month), and data transfer in/out (per GB per month)

Scalability and Elasticity

  • RDS resources can be scaled elastically in several dimensions: database storage size, database storage IOPS rate, database instance compute capacity, and the number of read replicas
  • RDS supports “pushbutton scaling” of both database storage and compute resources. Additional storage can either be added immediately or during the next maintenance cycle
  • RDS for MySQL also enables you to scale out beyond the capacity of a single database deployment for read-heavy database workloads by creating one or more read replicas.
  • Multiple RDS instances can also be configured to leverage database partitioning or sharding to spread the workload over multiple DB instances, achieving even greater database scalability and elasticity.

Interfaces

  • RDS APIs and the AWS Management Console provide a management interface that allows you to create, delete, modify, and terminate RDS DB instances; to create DB snapshots; and to perform point-in-time restores
  • There is no AWS data API for Amazon RDS.
  • Once a database is created, RDS provides a DNS endpoint for the database which can be used to connect to the database.
  • Endpoint does not change over the lifetime of the instance even during the failover in case of Multi-AZ configuration

Amazon DynamoDB

  • Amazon DynamoDB is a fast, fully-managed NoSQL database service that makes it simple and cost-effective to store and retrieve any amount of data, and serve any level of request traffic.
  • DynamoDB being a managed service helps offload the administrative burden of operating and scaling a highly-available distributed database cluster.
  • DynamoDB helps meet the latency and throughput requirements of highly demanding applications by providing extremely fast and predictable performance with seamless throughput and storage scalability.
  • DynamoDB provides both eventually-consistent reads (by default) and strongly-consistent reads (optional), as well as implicit item-level transactions for item put, update, delete, conditional operations, and increment/decrement (see the sketch after this list).
  • Amazon DynamoDB handles data as follows:
    • DynamoDB stores structured data in tables, indexed by primary key, and allows low-latency read and write access to items.
    • DynamoDB supports three data types: number, string, and binary, in both scalar and multi-valued sets.
    • Tables do not have a fixed schema, so each data item can have a different number of attributes.
    • Primary key can either be a single-attribute hash key or a composite hash-range key.
    • Local secondary indexes provide additional flexibility for querying against attributes other than the primary key.
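
A hedged sketch of such an implicit item-level "transaction": an atomic increment combined with a condition, so the update only applies if the item already exists. The table and attribute names are illustrative:

```python
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("Scores")

try:
    table.update_item(
        Key={"PlayerId": "player-1"},
        UpdateExpression="ADD HighScore :points",          # atomic in-place increment
        ConditionExpression="attribute_exists(PlayerId)",  # only if the item exists
        ExpressionAttributeValues={":points": 25},
    )
except ClientError as err:
    if err.response["Error"]["Code"] != "ConditionalCheckFailedException":
        raise
```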

Ideal Usage Patterns

  • DynamoDB is ideal for existing or new applications that need a flexible NoSQL database with low read and write latencies, and the ability to scale storage and throughput up or down as needed without code changes or downtime.
  • DynamoDB suits use cases that require a highly available and scalable database, where downtime or performance degradation has an immediate negative impact on an organization’s business, e.g. mobile apps, gaming, digital ad serving, live voting and audience interaction for live events, sensor networks, log ingestion, access control for web-based content, metadata storage for S3 objects, e-commerce shopping carts, and web session management

Anti-Patterns

  • Structured data with Join and/or Complex Transactions
    • If the application uses structured data and requires joins, complex transactions, or other relational features provided by traditional database platforms, it is better to use RDS or a database installed on an EC2 instance
  • Large Blob data
    • If the application uses large blob data for e.g. media, files, videos etc., it is better to use S3 to store the objects and use DynamoDB to store metadata for e.g. name, size, content-type etc
  • Large Objects with Low I/O rate
    • DynamoDB uses SSD drives and is optimized for workloads with a high I/O rate per GB stored. If the application stores very large amounts of data that are infrequently accessed, S3 might be a better choice
  • Prewritten application with databases
    • For porting an existing application that uses a relational database, RDS or a database installed on an EC2 instance would be a better and more seamless solution

Performance

  • SSDs and limited indexing on attributes provide high throughput and low latency and drastically reduce the cost of read and write operations.
  • Predictable performance can be achieved by defining the provisioned throughput capacity required for a given table.
  • DynamoDB handles the provisioning of resources to achieve the requested throughput rate, taking away the burden to think about instances, hardware, memory, and other factors that can affect an application’s throughput rate.
  • Provisioned throughput capacity reservations are elastic and can be increased or decreased on demand.

Durability and Availability

  • DynamoDB has built-in fault tolerance that automatically and synchronously replicates data across three AZ’s in a region for high availability and to help protect data against individual machine, or even facility failures.

Cost Model

  • DynamoDB has three pricing components: provisioned throughput capacity (per hour), indexed data storage (per GB per month), data transfer in or out (per GB per month)

Scalability and Elasticity

  • DynamoDB is both highly-scalable and elastic.
  • DynamoDB provides unlimited storage capacity, and the service automatically allocates more storage as the demand increases
  • Data is automatically partitioned and re-partitioned as needed, while the use of SSDs provides predictable low-latency response times at any scale.
  • DynamoDB is also elastic, in that you can simply “dial-up” or “dial-down” the read and write capacity of a table as your needs change.

Interfaces

  • DynamoDB provides a low-level REST API, as well as higher-level SDKs in different languages
  • The APIs provide both a management and a data interface for Amazon DynamoDB, enabling table management (creating, listing, deleting, and obtaining metadata) and working with attributes (getting, writing, and deleting attributes; querying using an index; and full scans) (see the sketch below).
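
A minimal boto3 sketch of both sides of the API, with illustrative table and key names; the management calls read table metadata while the data call reads a single item:

```python
import boto3

ddb = boto3.client("dynamodb")

# Management interface: list tables and obtain a table's metadata.
print(ddb.list_tables()["TableNames"])
print(ddb.describe_table(TableName="Orders")["Table"]["ItemCount"])

# Data interface: read a single item by its primary key.
response = ddb.get_item(
    TableName="Orders",
    Key={"CustomerId": {"S": "cust-1"}, "OrderDate": {"S": "2024-06-01"}},
)
print(response.get("Item"))
```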

Databases on EC2

  • EC2 with EBS volumes allows hosting a self managed relational database
  • Ready to use, prebuilt AMIs are also available from leading database solutions

Ideal Usage Patterns

  • Self managed database on EC2 is an ideal scenario for users whose application requires a specific traditional relational database not supported by Amazon RDS for e.g. IBM DB2, Informix, or Sybase
  • Users or applications that require a maximum level of administrative control and configurability which is not provided by RDS

Anti-Patterns

  • Index and query-focused data
    • If the application doesn’t require advanced features such as joins and complex transactions and is more oriented toward indexing and querying data, DynamoDB would be more appropriate for these needs
  • Numerous BLOBs
    • If the application makes heavy use of files (audio files, videos, images, and so on), it is a better choice to use S3 to store the objects instead of the database engine’s BLOB feature, and use RDS or DynamoDB only to save the metadata
  • Automated scalability
    • Relational databases on EC2 leverages the scalability and elasticity of the underlying AWS platform, but this requires system administrators or DBAs to perform a manual or scripted task. If you need pushbutton scaling or fully-automated scaling, DynamoDB or RDS may be a better choice.
  • RDS supported database platforms
    • If the application uses an RDS-supported database engine and all the required features are available, RDS would be a better choice than a self-managed relational database on EC2

Performance

  • Performance depends on the size of the underlying EC2 instance, the number and configuration of the EBS volumes and the database itself
  • Performance can be increased by scaling up memory and compute resources by choosing a larger Amazon EC2 instance size.
  • For database storage, it is usually best to use EBS Provisioned IOPS volumes. To scale up I/O performance, the Provisioned IOPS can be increased, the number of EBS volumes changed, or use software RAID 0 (disk striping) across multiple EBS volumes, which will aggregate total IOPS and bandwidth.

Durability & Availability

  • As the database on EC2 uses EBS as storage, it has the same durability and availability provided by EBS and can be further enhanced by using EBS snapshots or by using third-party database backup utilities (such as Oracle’s RMAN) to store database backups in Amazon S3

Cost Model

  • Cost for running a database on EC2 instance is mainly determined by the size and the number of EC2 instance running, the size of the EBS volume used for database storage and any third party licensing cost for the database

Scalability & Elasticity

  • Users of traditional relational database solutions on Amazon EC2 can take advantage of the scalability and elasticity of the underlying AWS platform by creating AMI and spawning multiple instances

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Which of the following are use cases for Amazon DynamoDB? Choose 3 answers
    1. Storing BLOB data.
    2. Managing web sessions
    3. Storing JSON documents
    4. Storing metadata for Amazon S3 objects
    5. Running relational joins and complex updates.
    6. Storing large amounts of infrequently accessed data.
  2. A client application requires operating system privileges on a relational database server. What is an appropriate configuration for highly available database architecture?
    1. A standalone Amazon EC2 instance
    2. Amazon RDS in a Multi-AZ configuration
    3. Amazon EC2 instances in a replication configuration utilizing a single Availability Zone
    4. Amazon EC2 instances in a replication configuration utilizing two different Availability Zones
  3. You are developing a new mobile application and are considering storing user preferences in AWS, which would provide a more uniform cross-device experience to users using multiple mobile devices to access the application. The preference data for each user is estimated to be 50KB in size. Additionally 5 million customers are expected to use the application on a regular basis. The solution needs to be cost-effective, highly available, scalable and secure, how would you design a solution to meet the above requirements?
    1. Setup an RDS MySQL instance in 2 availability zones to store the user preference data. Deploy a public facing application on a server in front of the database to manage security and access credentials
    2. Setup a DynamoDB table with an item for each user having the necessary attributes to hold the user preferences. The mobile application will query the user preferences directly from the DynamoDB table. Utilize STS, Web Identity Federation, and DynamoDB Fine Grained Access Control to authenticate and authorize access (DynamoDB provides high availability as it synchronously replicates data across three facilities within an AWS Region and scalability as it is designed to scale its provisioned throughput up or down while still remaining available. It is also suitable for storing user preference data)
    3. Setup an RDS MySQL instance with multiple read replicas in 2 availability zones to store the user preference data .The mobile application will query the user preferences from the read replicas. Leverage the MySQL user management and access privilege system to manage security and access credentials.
    4. Store the user preference data in S3. Setup a DynamoDB table with an item for each user and an item attribute pointing to the user’s S3 object. The mobile application will retrieve the S3 URL from DynamoDB and then access the S3 object directly. Utilize STS, Web Identity Federation, and S3 ACLs to authenticate and authorize access.
  4. A customer is running an application in US-West (Northern California) region and wants to setup disaster recovery failover to the Asian Pacific (Singapore) region. The customer is interested in achieving a low Recovery Point Objective (RPO) for an Amazon RDS multi-AZ MySQL database instance. Which approach is best suited to this need?
    1. Synchronous replication
    2. Asynchronous replication
    3. Route53 health checks
    4. Copying of RDS incremental snapshots
  5. You are designing a file -sharing service. This service will have millions of files in it. Revenue for the service will come from fees based on how much storage a user is using. You also want to store metadata on each file, such as title, description and whether the object is public or private. How do you achieve all of these goals in a way that is economical and can scale to millions of users?
    1. Store all files in Amazon Simple Storage Service (S3). Create a bucket for each user. Store metadata in the filename of each object, and access it with LIST commands against the S3 API.
    2. Store all files in Amazon S3. Create Amazon DynamoDB tables for the corresponding key-value pairs on the associated metadata, when objects are uploaded.
    3. Create a striped set of 4000 IOPS Elastic Load Balancing volumes to store the data. Use a database running in Amazon Relational Database Service (RDS) to store the metadata.
    4. Create a striped set of 4000 IOPS Elastic Load Balancing volumes to store the data. Create Amazon DynamoDB tables for the corresponding key-value pairs on the associated metadata, when objects are uploaded.
  6. Company ABCD has recently launched an online commerce site for bicycles on AWS. They have a “Product” DynamoDB table that stores details for each bicycle, such as, manufacturer, color, price, quantity and size to display in the online store. Due to customer demand, they want to include an image for each bicycle along with the existing details. Which approach below provides the least impact to provisioned throughput on the “Product” table?
    1. Serialize the image and store it in multiple DynamoDB tables
    2. Create an “Images” DynamoDB table to store the Image with a foreign key constraint to the “Product” table
    3. Add an image data type to the “Product” table to store the images in binary format
    4. Store the images in Amazon S3 and add an S3 URL pointer to the “Product” table item for each image