Amazon DynamoDB is a fully managed NoSQL database service that makes it simple and cost-effective to store and retrieve any amount of data and serve any level of request traffic.
DynamoDB provides fast and predictable performance with seamless scalability.
DynamoDB enables customers to offload the administrative burdens of operating and scaling distributed databases to AWS, without having to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling.
DynamoDB tables do not have fixed schemas; a table consists of items, and each item may have a different number of attributes.
DynamoDB synchronously replicates data across three facilities in an AWS Region, giving high availability and data durability.
DynamoDB supports fast in-place atomic updates; a numeric attribute in an item can be incremented or decremented with a single API call (see the sketch after this list)
DynamoDB uses proven cryptographic methods to securely authenticate users and prevent unauthorized data access
Durability, performance, reliability, and security are built in, with SSD (solid state drive) storage and automatic 3-way replication.
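A minimal boto3 sketch of such an atomic in-place update; the table, key, and attribute names here are hypothetical:

```python
import boto3

# Hypothetical table 'GameScores' with partition key 'player_id' and a
# numeric 'score' attribute.
table = boto3.resource("dynamodb").Table("GameScores")

# ADD increments the numeric attribute atomically on the server side,
# avoiding a read-modify-write round trip.
response = table.update_item(
    Key={"player_id": "player-123"},
    UpdateExpression="ADD score :inc",
    ExpressionAttributeValues={":inc": 10},
    ReturnValues="UPDATED_NEW",
)
print(response["Attributes"])  # e.g. {'score': Decimal('10')}
```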
DynamoDB Secondary indexes
add flexibility to the queries, without impacting performance.
are automatically maintained as sparse objects; an item appears in an index only if the index key attributes are present in the item, which makes queries against an index very efficient
DynamoDB's throughput and single-digit millisecond latency make it a great fit for gaming, ad tech, mobile, and many other applications
ElastiCache can be used in front of DynamoDB to offload a high volume of reads of infrequently changing data (a cache-aside sketch follows)
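A minimal cache-aside sketch, assuming an ElastiCache Redis endpoint and a hypothetical Products table keyed by product_id:

```python
import json

import boto3
import redis

# Endpoint and table name are hypothetical placeholders.
cache = redis.Redis(host="my-cluster.xxxxxx.use1.cache.amazonaws.com", port=6379)
table = boto3.resource("dynamodb").Table("Products")

def get_product(product_id: str) -> dict:
    cached = cache.get(product_id)
    if cached is not None:
        return json.loads(cached)  # cache hit: no DynamoDB read consumed
    item = table.get_item(Key={"product_id": product_id}).get("Item", {})
    cache.setex(product_id, 300, json.dumps(item, default=str))  # 5-minute TTL
    return item
```

The 300-second TTL bounds how stale cached reads can get; tune it to how frequently the underlying data actually changes.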
Automatically scales horizontally
runs exclusively on Solid State Drives (SSDs).
SSDs help achieve the design goals of predictable low-latency response times for storing and accessing data at any scale.
SSDs' high I/O performance enables DynamoDB to serve high-scale request workloads cost-efficiently, and to pass this efficiency along in low request pricing
allows read and write throughput to be provisioned per table (a scaling sketch follows these bullets)
Scale up throughput when needed
Scale down throughput up to four times per UTC calendar day
automatically partitions, reallocates and re-partitions the data and provisions additional server capacity as the
table size grows or
provisioned throughput is increased
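A sketch of adjusting provisioned throughput on an existing table (table name and capacity values are hypothetical):

```python
import boto3

client = boto3.client("dynamodb")

# Scale the table's provisioned throughput up (or down, subject to the
# scale-down limit noted above).
client.update_table(
    TableName="GameScores",
    ProvisionedThroughput={
        "ReadCapacityUnits": 200,
        "WriteCapacityUnits": 100,
    },
)
```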
Global Secondary indexes (GSI)
can be created upfront or added later
Each DynamoDB table is automatically stored in three geographically distributed locations for durability
Read consistency represents the manner and timing in which a successful write or update of a data item is reflected in a subsequent read operation of that same item
DynamoDB allows users to specify, at request time, whether a read should be eventually consistent or strongly consistent
Eventually Consistent Reads (Default)
The eventual consistency option maximizes read throughput.
Consistency across all copies is usually reached within a second
However, an eventually consistent read might not reflect the results of a recently completed write.
Repeating a read after a short time should return the updated data.
Strongly Consistent Reads
A strongly consistent read returns a result that reflects all writes that received a successful response prior to the read
Query, GetItem, and BatchGetItem operations perform eventually consistent reads by default
Query and GetItem operations can be forced to be strongly consistent
Query operations cannot perform strongly consistent reads on Global Secondary Indexes
BatchGetItem operations can be forced to be strongly consistent on a per-table basis
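A brief sketch of choosing consistency per request (table and key names are hypothetical):

```python
import boto3

table = boto3.resource("dynamodb").Table("GameScores")

# GetItem is eventually consistent by default; ConsistentRead=True forces
# a strongly consistent read, which consumes twice the read capacity.
response = table.get_item(
    Key={"player_id": "player-123"},
    ConsistentRead=True,
)
print(response.get("Item"))
```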
Global Secondary Indexes
DynamoDB creates and maintains indexes for the primary key attributes for efficient access to data in the table, allowing applications to quickly retrieve data by specifying primary key values.
Global Secondary Indexes (GSI) are indexes that contain partition or composite partition-and-sort keys that can be different from the keys in the table on which the index is based.
Global secondary index is considered “global” because queries on the index can span all items in a table, across all partitions.
Multiple secondary indexes can be created on a table, and queries issued against these indexes.
Applications benefit from having one or more secondary keys available to allow efficient access to data with attributes other than the primary key.
GSIs support non-unique attributes, which increases query flexibility by enabling queries against any non-key attribute in the table
GSIs support eventual consistency only; DynamoDB automatically and asynchronously propagates item additions, updates, and deletes to a GSI when the corresponding changes are made to the table
Data in a secondary index consists of the GSI alternate key, the table's primary key, and the attributes that are projected (copied) from the table into the index.
Attributes that are part of an item in the table, but not part of the GSI key, the table's primary key, or the projected attributes, are not returned when querying the GSI
GSIs manage throughput independently of the table they are based on; provisioned throughput for the table and for each associated GSI must be specified at creation time (see the sketch below)
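A sketch of defining a GSI, with its own throughput, at table creation time (the table and index here are hypothetical, loosely following the classic GameScores example):

```python
import boto3

client = boto3.client("dynamodb")

client.create_table(
    TableName="GameScores",
    AttributeDefinitions=[
        {"AttributeName": "player_id", "AttributeType": "S"},
        {"AttributeName": "game_title", "AttributeType": "S"},
        {"AttributeName": "top_score", "AttributeType": "N"},
    ],
    # Table key: player_id (partition) + game_title (sort)
    KeySchema=[
        {"AttributeName": "player_id", "KeyType": "HASH"},
        {"AttributeName": "game_title", "KeyType": "RANGE"},
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 10, "WriteCapacityUnits": 10},
    GlobalSecondaryIndexes=[
        {
            # GSI key differs from the table key: game_title + top_score
            "IndexName": "GameTitleIndex",
            "KeySchema": [
                {"AttributeName": "game_title", "KeyType": "HASH"},
                {"AttributeName": "top_score", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "KEYS_ONLY"},
            # GSI throughput is provisioned separately from the table's
            "ProvisionedThroughput": {
                "ReadCapacityUnits": 5,
                "WriteCapacityUnits": 5,
            },
        }
    ],
)
```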
Local Secondary Indexes
Local secondary indexes (LSI) are indexes that have the same partition key as the table, but a different sort key.
A local secondary index is "local" because every partition of the index is scoped to a table partition with the same partition key.
An LSI allows searching using the index's alternate sort key in place of the table's sort key, expanding the number of attributes that can be queried efficiently
LSIs are updated automatically when the table is updated, and reads support both strong and eventual consistency options
LSIs can only be queried via the Query API
LSIs cannot be added to existing tables at this time
LSIs cannot be modified once created at this time
LSIs cannot be removed from a table once created at this time
An LSI consumes provisioned throughput capacity as part of the table with which it is associated
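Querying either index type goes through the Query API with an IndexName; a sketch against the hypothetical GameTitleIndex defined earlier (an LSI query looks the same, and would additionally accept ConsistentRead=True, which a GSI does not):

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("GameScores")

# Query the index by name; only projected attributes come back.
response = table.query(
    IndexName="GameTitleIndex",
    KeyConditionExpression=Key("game_title").eq("Meteor Blasters")
    & Key("top_score").gte(1000),
)
for item in response["Items"]:
    print(item)
```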
DynamoDB Cross-region Replication
DynamoDB cross-region replication allows identical copies (called replicas) of a DynamoDB table (called the master table) to be maintained in one or more AWS Regions
Writes to the master table are automatically propagated to all replicas
Cross-region replication currently supports single-master mode: a single master has one master table and one or more replica tables
Read replicas are updated asynchronously; DynamoDB acknowledges a write as successful once it has been accepted by the master table, and the write is then propagated to each replica with a slight delay.
Cross-region replication can be helpful in scenarios such as
Efficient disaster recovery, in case a data center failure occurs.
Faster reads, for customers in multiple regions, by serving data from the AWS data center closest to them.
Easier traffic management, to distribute the read workload across tables and thereby consume less read capacity in the master table.
Easy regional migration, by promoting a read replica to master
Live data migration, to replicate data and when the tables are in sync, switch the application to write to the destination region
Cross-region replication costs depend on
Provisioned throughput (Writes and Reads)
Storage for the replica tables.
Data Transfer across regions
Reading data from DynamoDB Streams to keep the tables in sync.
Cost of EC2 instances provisioned, depending upon the instance types and region, to host the replication process.
NOTE: Before DynamoDB Streams and the out-of-the-box cross-region replication support, cross-region replication was performed by defining an AWS Data Pipeline job that used EMR internally to transfer the data
DynamoDB Streams provides a time-ordered sequence of item-level changes made to data in a table over the last 24 hours, after which the records are erased; the ordered sequence of events is maintained per item, but ordering across items is not guaranteed
DynamoDB Streams have to be enabled on a per-table basis
DynamoDB streams can be used for multi-region replication to keep other data stores up-to-date with the latest changes to DynamoDB or to take actions based on the changes made to the table
DynamoDB Streams APIs help developers consume updates and receive the item-level data before and after items are changed
DynamoDB Streams allows reads at up to twice the rate of the provisioned write capacity of the DynamoDB table
DynamoDB Streams is designed so that every update made to the table will be represented exactly once in the stream
DynamoDB Triggers is a feature that allows the execution of custom actions based on item-level updates on a DynamoDB table
DynamoDB Triggers can be used in scenarios like sending notifications, updating an aggregate table, and connecting DynamoDB tables to other data sources (a handler sketch follows)
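A minimal sketch of such a trigger: an AWS Lambda handler invoked with batches of stream records (the action taken is hypothetical):

```python
import json

def lambda_handler(event, context):
    # Each record describes one item-level change from the stream.
    for record in event["Records"]:
        event_name = record["eventName"]  # INSERT | MODIFY | REMOVE
        keys = record["dynamodb"]["Keys"]
        new_image = record["dynamodb"].get("NewImage")  # INSERT/MODIFY only
        old_image = record["dynamodb"].get("OldImage")  # MODIFY/REMOVE only
        # Hypothetical custom action: log the change; a real trigger might
        # publish an SNS notification or update an aggregate table instead.
        print(json.dumps({"event": event_name, "keys": keys}))
```

Whether NewImage and OldImage are present depends on the stream view type configured on the table.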
DynamoDB is an indexed data store
Billable Data = Raw byte data size + 100 bytes of storage indexing overhead per item
Pay a flat, hourly rate based on the capacity reserved as the throughput provisioned for the table
One Write Capacity Unit provides one write per second for items up to 1KB in size
One Read Capacity Unit provides one strongly consistent read (or two eventually consistent reads) per second for items up to 4KB in size
Provisioned throughput is charged for every 10 units of Write Capacity and every 50 units of Read Capacity (see the worked example below)
Reserved Capacity offers significant savings over the normal price
Pay a one-time upfront fee
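A worked example of the capacity-unit arithmetic above (item size and request rates are hypothetical):

```python
import math

def required_capacity(item_size_kb, reads_per_sec, writes_per_sec,
                      strongly_consistent=True):
    # 1 RCU = one strongly consistent read/sec (or two eventually
    # consistent reads/sec) of an item up to 4KB.
    rcu_per_read = math.ceil(item_size_kb / 4)
    if not strongly_consistent:
        rcu_per_read /= 2  # eventually consistent reads cost half
    # 1 WCU = one write/sec of an item up to 1KB.
    wcu_per_write = math.ceil(item_size_kb / 1)
    return (math.ceil(reads_per_sec * rcu_per_read),
            writes_per_sec * wcu_per_write)

# 3KB items, 100 strongly consistent reads/sec, 10 writes/sec
print(required_capacity(3, 100, 10))  # -> (100, 30)
```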
DynamoDB Best Practices
Keep item size small
Store metadata in DynamoDB and large BLOBs in Amazon S3
Use a table per day, week, month, etc. for storing time-series data
Use conditional writes or Optimistic Concurrency Control (OCC) updates (a sketch follows this list)
Optimistic Concurrency Control is like optimistic locking in an RDBMS
OCC is generally used in environments with low data contention, where conflicts are rare and transactions can be completed without the expense of managing locks
OCC assumes that multiple transactions can frequently be completed without interfering with each other.
Transactions are executed using data resources without acquiring locks on those resources and without waiting for other transactions' locks to clear
Before a transaction is committed, it is verified that no other transaction has modified the data; if it has, the transaction is rolled back and must be restarted with the updated data
OCC leads to higher throughput as compared to other concurrency control methods like pessimistic locking, as locking can drastically limit effective concurrency even when deadlocks are avoided
Avoid hot keys and hot partitions
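A minimal OCC sketch using a conditional write on a hypothetical version attribute; if another writer bumped the version since our read, the write fails and can be retried:

```python
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("Documents")  # hypothetical table

def occ_update(doc_id, new_body):
    # Read the current item and remember its version (the optimistic "lock").
    item = table.get_item(Key={"doc_id": doc_id})["Item"]
    current = item["version"]
    try:
        # The write succeeds only if no one changed the version meanwhile.
        table.update_item(
            Key={"doc_id": doc_id},
            UpdateExpression="SET #b = :b, #v = :new",
            ConditionExpression="#v = :cur",
            ExpressionAttributeNames={"#b": "body", "#v": "version"},
            ExpressionAttributeValues={
                ":b": new_body, ":new": current + 1, ":cur": current,
            },
        )
        return True
    except ClientError as e:
        if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # conflict detected: caller re-reads and retries
        raise
```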
AWS Certification Exam Practice Questions
Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
Open to further feedback, discussion and correction.
Which of the following are use cases for Amazon DynamoDB? Choose 3 answers
Storing BLOB data.
Managing web sessions
Storing JSON documents
Storing metadata for Amazon S3 objects
Running relational joins and complex updates.
Storing large amounts of infrequently accessed data.
You are configuring your company’s application to use Auto Scaling and need to move user state information. Which of the following AWS services provides a shared data store with durability and low latency?
Amazon DynamoDB
AWS ElastiCache Memcached (not durable)
Amazon Simple Storage Service (does not provide low latency)
Amazon EC2 instance storage (not durable)
Does DynamoDB support in-place atomic updates?
Yes
No
It is not defined
It does support in-place non-atomic updates
What is the maximum write throughput I can provision for a single DynamoDB table?
1,000 write capacity units
100,000 write capacity units
DynamoDB is designed to scale without limits, but if you go beyond 10,000 you have to contact AWS first
10,000 write capacity units
In which of the following situations might you benefit from using DynamoDB? (Choose 2 answers)
You need a fully managed database to handle highly complex queries
You need to deal with massive amounts of “hot” data and require very low latency
You need rapid ingestion of clickstream data in order to collect data about user behavior
Your on-premises data center runs Oracle database, and you need to host a backup in AWS cloud
You are designing a file-sharing service. This service will have millions of files in it. Revenue for the service will come from fees based on how much storage a user is using. You also want to store metadata on each file, such as title, description and whether the object is public or private. How do you achieve all of these goals in a way that is economical and can scale to millions of users?
Store all files in Amazon Simple Storage Service (S3). Create a bucket for each user. Store metadata in the filename of each object, and access it with LIST commands against the S3 API. (expensive and slow as it returns only 1000 items at a time)
Store all files in Amazon S3. Create Amazon DynamoDB tables for the corresponding key-value pairs on the associated metadata, when objects are uploaded.
Create a striped set of 4,000 IOPS Amazon Elastic Block Store (EBS) volumes to store the data. Use a database running in Amazon Relational Database Service (RDS) to store the metadata. (not economical with volumes)
Create a striped set of 4,000 IOPS Amazon Elastic Block Store (EBS) volumes to store the data. Create Amazon DynamoDB tables for the corresponding key-value pairs on the associated metadata, when objects are uploaded. (not economical with volumes)
A utility company is building an application that stores data coming from more than 10,000 sensors. Each sensor has a unique ID and will send a datapoint (approximately 1KB) every 10 minutes throughout the day. Each datapoint contains the information coming from the sensor as well as a timestamp. The company would like to query information coming from a particular sensor for the past week very rapidly, and wants to delete all data that is older than 4 weeks. Using Amazon DynamoDB for its scalability and rapidity, how do you implement this in the most cost-effective way?
One table, with a partition key that is the sensor ID and a sort key that is the timestamp (Single table impacts performance)
One table, with a primary key that is the concatenation of the sensor ID and timestamp (Single table and concatenation impact performance)
One table for each week, with a primary key that is the concatenation of the sensor ID and timestamp (Concatenation would make queries slower, if possible at all)
One table for each week, with a partition key that is the sensor ID and a sort key that is the timestamp (Composite key with sensor ID and timestamp allows fast queries)
You have recently joined a startup company building sensors to measure street noise and air quality in urban areas. The company has been running a pilot deployment of around 100 sensors for 3 months. Each sensor uploads 1KB of sensor data every minute to a backend hosted on AWS. During the pilot, you measured a peak of 10 IOPS on the database, and you stored an average of 3GB of sensor data per month in the database. The current deployment consists of a load-balanced, auto-scaled ingestion layer using EC2 instances and a PostgreSQL RDS database with 500GB standard storage. The pilot is considered a success and your CEO has managed to get the attention of some potential investors. The business plan requires a deployment of at least 100K sensors, which needs to be supported by the backend. You also need to store sensor data for at least two years to be able to compare year-over-year improvements. To secure funding, you have to make sure that the platform meets these requirements and leaves room for further scaling. Which setup will meet the requirements?
Add an SQS queue to the ingestion layer to buffer writes to the RDS instance (RDS instance will not support data for 2 years)
Ingest data into a DynamoDB table and move old data to a Redshift cluster (Handles the 10K IOPS ingestion and stores historical data in Redshift for analysis)
Replace the RDS instance with a 6 node Redshift cluster with 96TB of storage (Does not handle the ingestion issue)
Keep the current architecture but upgrade RDS storage to 3TB and 10K provisioned IOPS (RDS instance will not support data for 2 years)