DynamoDB Time to Live – TTL
- DynamoDB Time to Live (TTL) lets you define a per-item timestamp that determines when an item is no longer needed.
- Once the specified timestamp has passed, DynamoDB deletes the item from the table without consuming any write throughput.
- DynamoDB TTL is provided at no extra cost and can help reduce data storage by retaining only required data.
- Items that are deleted from the table are also removed from any local secondary index and global secondary index in the same way as a DeleteItem operation.
- DynamoDB Streams records the deletion as a system delete rather than a regular user delete.
How TTL Works
- TTL allows defining a per-item expiration timestamp that indicates when an item is no longer needed.
- DynamoDB automatically deletes expired items within a few days of their expiration time, without consuming write throughput.
- Deletion Timeline: DynamoDB typically deletes expired items within 48 hours of expiration, but deletion timing is not guaranteed.
- The exact duration depends on the workload nature and table size.
- Deletion rate is proportional to the total number of TTL-expired items.
- Items pending deletion: Expired items that haven’t been deleted yet will still appear in reads, queries, and scans.
- Use filter expressions to remove expired items from Scan and Query results.
- Expired items can still be updated, including changing or removing their TTL attributes.
- When updating expired items, use a condition expression to ensure the item hasn’t been subsequently deleted.
- TTL process runs in the background as a low-priority task to avoid impacting table performance.
- TTL deletions do not consume Write Capacity Units (WCU) in provisioned mode or Write Request Units in on-demand mode.
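The mechanics above can be sketched with a small helper that computes a TTL value as Unix epoch seconds (the helper name and the retention period are illustrative, not part of the DynamoDB API):

```python
import time

def ttl_in_days(days, now=None):
    """Return an expiration timestamp `days` from `now`, in epoch *seconds*,
    which is the format the TTL process expects."""
    base = int(now if now is not None else time.time())
    return base + days * 24 * 60 * 60

# Example: an item written at a known time should expire 30 days later.
created_at = 1700000000          # epoch seconds at write time
expires_at = ttl_in_days(30, created_at)
print(expires_at)                # 1702592000
```

Storing this value in the table's designated TTL attribute makes the item eligible for background deletion once the timestamp passes.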
TTL Requirements and Limitations
- Data Type: TTL attributes must use the Number data type. Other data types, such as String, are not supported.
- Time Format: TTL attributes must use the Unix epoch time format (seconds since January 1, 1970, 00:00:00 UTC).
- Be sure that the timestamp is in seconds, not milliseconds.
- Items with TTL attributes that are not a Number type are ignored by the TTL process.
- Five-Year Past Limitation: To be considered for expiry and deletion, the TTL timestamp cannot be more than five years in the past.
- This prevents accidental deletion of historical data with very old timestamps.
- Items with TTL values older than five years in the past are ignored by the TTL process.
- Future Expiration: No limit on how far in the future the TTL timestamp can be set.
- Attribute Selection: Only one attribute per table can be designated as the TTL attribute.
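To satisfy these constraints, the TTL attribute must be serialized as a DynamoDB Number holding epoch seconds. A minimal sketch, assuming hypothetical table and attribute names (`sessions`, `expireAt`):

```python
def build_session_item(session_id, created_at, retention_days=30):
    """Build a low-level DynamoDB item dict with a TTL attribute.

    The TTL value must use the Number type ("N") and hold epoch *seconds*;
    a String-typed or millisecond value would be silently ignored by TTL.
    """
    expire_at = created_at + retention_days * 86400
    return {
        "sessionId": {"S": session_id},
        "createdAt": {"N": str(created_at)},
        "expireAt": {"N": str(expire_at)},   # designated TTL attribute
    }

item = build_session_item("abc123", 1700000000)
# Could then be written with, e.g.:
# boto3.client("dynamodb").put_item(TableName="sessions", Item=item)
```

Note that although the value must be the Number type, the low-level wire format still carries it as a string inside the `"N"` wrapper.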
TTL with Global Tables
- When using Global Tables version 2019.11.21 (Current), DynamoDB replicates TTL deletes to all replica tables.
- Write Capacity Consumption:
- The initial TTL delete does not consume WCU in the region where the TTL expiry occurs.
- The replicated TTL delete to the replica table(s) consumes replicated write capacity units (provisioned mode) or replicated write request units (on-demand mode) in each replica Region.
- Applicable charges apply for replicated TTL deletes.
TTL and DynamoDB Streams
- Deleted items are sent to DynamoDB Streams as system deletions (not user deletions).
- Stream records for TTL deletions include a userIdentity field (type Service, principalId dynamodb.amazonaws.com) that identifies them as TTL-triggered deletions.
- Can be used to trigger downstream actions via AWS Lambda, such as:
- Archiving expired items to Amazon S3 or S3 Glacier.
- Sending notifications when items expire.
- Maintaining audit logs of deleted items.
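The Lambda pattern above can be sketched as follows (the handler and archive logic are illustrative): TTL deletions are distinguished in a Streams batch by the record's userIdentity field.

```python
def is_ttl_delete(record):
    """True if a DynamoDB Streams record is a TTL (system) deletion.

    TTL deletes carry userIdentity with type "Service" and principalId
    "dynamodb.amazonaws.com"; regular user deletes have no such identity.
    """
    identity = record.get("userIdentity") or {}
    return (
        record.get("eventName") == "REMOVE"
        and identity.get("type") == "Service"
        and identity.get("principalId") == "dynamodb.amazonaws.com"
    )

def handler(event, context=None):
    """Collect old images of TTL-expired items, e.g. for archiving to S3."""
    return [
        r["dynamodb"].get("OldImage", {})
        for r in event.get("Records", [])
        if is_ttl_delete(r)
    ]
```

The stream view must include old images (NEW_AND_OLD_IMAGES or OLD_IMAGE) for the expired item's contents to be available for archiving.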
Common Use Cases
- TTL is useful if the stored items lose relevance after a specific time. For example:
- Session Management: Remove user session data after an inactivity period (e.g., 30 days).
- IoT and Sensor Data: Remove sensor readings after a retention period (e.g., one year after ingestion).
- Temporary Data: Delete temporary records like shopping carts, draft documents, or cache entries.
- Compliance and Data Retention: Retain sensitive data only as long as contractual or regulatory obligations require (e.g., GDPR, HIPAA).
- Event Data: Remove event logs, audit trails, or metrics after a retention period.
- Archive to S3: Archive expired items to an S3 data lake via DynamoDB Streams and AWS Lambda before deletion.
Best Practices
- Calculate TTL on Write: Compute the expiration timestamp when creating or updating items.
- For new items: TTL = createdAt + retention_period
- For updated items: TTL = updatedAt + retention_period
- Use Filter Expressions: Filter out expired items in application queries to avoid processing items pending deletion.
- Archive Before Deletion: Use DynamoDB Streams with Lambda to archive important data to S3 before TTL deletion.
- Monitor TTL Deletions: Track TTL deletion metrics using CloudWatch to ensure the deletion rate meets expectations.
- Test TTL Behavior: Use the TTL preview feature in the DynamoDB console to simulate deletions before enabling TTL.
- Avoid Very Old Timestamps: Ensure TTL values are not more than five years in the past to prevent them from being ignored.
- Consider Global Table Costs: Account for replicated write costs when using TTL with Global Tables.
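The filter-expression practice above can be sketched as Scan parameters (table and attribute names are assumptions); the filter is applied server side after items are read, so it hides expired-but-undeleted items without reducing read cost:

```python
import time

now = int(time.time())

# "#ttl" aliases the attribute name in case it collides with a reserved word.
scan_params = {
    "TableName": "sessions",
    "FilterExpression": "#ttl > :now",
    "ExpressionAttributeNames": {"#ttl": "expireAt"},
    "ExpressionAttributeValues": {":now": {"N": str(now)}},
}
# Used as: boto3.client("dynamodb").scan(**scan_params)
```

The same FilterExpression works for Query requests against the table or an index.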
Enabling TTL
- TTL can be enabled on a table through:
- AWS Management Console
- AWS CLI
- AWS SDKs
- AWS CloudFormation
- Specify the attribute name that will store the TTL timestamp.
- TTL can be enabled or disabled at any time without impacting table performance.
- Changing the TTL attribute requires disabling TTL first, then re-enabling with the new attribute.
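With an SDK such as boto3, enabling TTL is a single UpdateTimeToLive call; a minimal sketch, assuming hypothetical table and attribute names:

```python
# Parameters for UpdateTimeToLive; run against a real table as:
# boto3.client("dynamodb").update_time_to_live(**ttl_params)
ttl_params = {
    "TableName": "sessions",
    "TimeToLiveSpecification": {
        "Enabled": True,              # set False to disable TTL
        "AttributeName": "expireAt",  # attribute holding epoch seconds
    },
}
```

Disabling TTL reuses the same call with "Enabled": False and the currently configured attribute name.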
AWS Certification Exam Practice Questions
- Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
- AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
- AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
- Open to further feedback, discussion and correction.
- A company developed an application by using AWS Lambda and Amazon DynamoDB. The Lambda function periodically pulls data from the company’s S3 bucket based on date and time tags and inserts specific values into a DynamoDB table for further processing. The company must remove data that is older than 30 days from the DynamoDB table. Which solution will meet this requirement with the MOST operational efficiency?
- Update the Lambda function to add the Version attribute in the DynamoDB table. Enable TTL on the DynamoDB table to expire entries that are older than 30 days based on the TTL attribute.
- Update the Lambda function to add the TTL attribute in the DynamoDB table. Enable TTL on the DynamoDB table to expire entries that are older than 30 days based on the TTL attribute. (Correct)
- Use AWS Step Functions to delete entries that are older than 30 days.
- Use EventBridge to schedule the Lambda function to delete entries that are older than 30 days.
- A company stores IoT sensor data in a DynamoDB table. The data must be retained for 90 days for analysis and then automatically deleted. The solution must minimize costs. What should a solutions architect recommend?
- Create a Lambda function to scan and delete items older than 90 days, triggered daily by EventBridge.
- Enable TTL on the DynamoDB table with an expiration attribute set to 90 days from the item creation time. (Correct)
- Use DynamoDB Streams with Lambda to move data to S3 Glacier after 90 days and delete from DynamoDB.
- Create a scheduled AWS Batch job to delete items older than 90 days.
- A DynamoDB table has TTL enabled. A developer notices that some items with expired TTL timestamps are still appearing in query results. What is the MOST likely explanation?
- TTL is not working correctly and needs to be disabled and re-enabled.
- The TTL attribute is using the wrong data type.
- Items are expired but have not yet been deleted by the background TTL process, which can take up to 48 hours. (Correct)
- The TTL timestamp is more than five years in the past.
- A company uses DynamoDB Global Tables across three regions. TTL is enabled on the table. How are write capacity units consumed for TTL deletions?
- TTL deletions consume WCU in all regions including the region where expiration occurs.
- TTL deletions do not consume WCU in the region where expiration occurs, but consume replicated write units in replica regions. (Correct)
- TTL deletions do not consume any WCU in any region.
- TTL deletions consume double WCU in the region where expiration occurs.
- What is the correct format for a DynamoDB TTL attribute value to expire an item on January 1, 2025, at 00:00:00 UTC?
- 2027-01-01T00:00:00Z (ISO 8601 format)
- 1735689600000 (milliseconds since epoch)
- 1735689600 (seconds since epoch) (Correct)
- “1735689600” (string representation of seconds)
- A company wants to archive expired DynamoDB items to S3 before they are deleted by TTL. What is the BEST approach?
- Create a Lambda function that scans the table for expired items and copies them to S3 before TTL deletes them.
- Enable DynamoDB Streams and use a Lambda function to detect TTL deletions and archive items to S3. (Correct)
- Disable TTL and use a scheduled Lambda function to manually delete items after archiving to S3.
- Use AWS Backup to automatically archive items before TTL deletion.
- Which of the following statements about DynamoDB TTL are correct? (Select TWO)
- TTL deletions consume write capacity units in the source region.
- TTL timestamps must be in Unix epoch time format in seconds. (Correct)
- TTL can use String data type for the expiration attribute.
- TTL timestamps cannot be more than five years in the past to be considered for deletion. (Correct)
- TTL guarantees deletion within exactly 48 hours of expiration.