Amazon DynamoDB Time to Live – TTL

DynamoDB Time to Live – TTL

  • DynamoDB Time to Live – TTL enables a per-item timestamp to determine when an item is no longer needed.
  • After the date and time of the specified timestamp, DynamoDB deletes the item from the table without consuming any write throughput.

  • DynamoDB TTL is provided at no extra cost and can help reduce data storage by retaining only required data.
  • Items that are deleted from the table are also removed from any local secondary index and global secondary index in the same way as a DeleteItem operation.
  • Expired items get removed from the table and indexes within about 48 hours.
  • DynamoDB Stream tracks the delete operation as a system delete, not a regular one.
  • TTL requirements
    • TTL attributes must use the Number data type. Other data types, such as String, aren’t supported.
    • TTL attributes must use the epoch time format. Be sure that the timestamp is in seconds, not milliseconds (see the sketch after this list).
  • TTL is useful when stored items lose relevance after a specific time, for example:
    • Remove user or sensor data after a year of inactivity in an application
    • Archive expired items to an S3 data lake via DynamoDB Streams and AWS Lambda.
    • Retain sensitive data for a certain amount of time according to contractual or regulatory obligations.
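
A minimal sketch of the flow with boto3, assuming an illustrative GameSessions table and an expires_at TTL attribute:

```python
import time

import boto3

dynamodb = boto3.client("dynamodb")

# Enable TTL on the table, pointing at the attribute holding the
# expiry timestamp (table and attribute names are hypothetical).
dynamodb.update_time_to_live(
    TableName="GameSessions",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# Write an item that expires in 30 days. The TTL attribute must be a
# Number holding epoch time in SECONDS; time.time() already returns
# seconds, so no millisecond conversion is needed.
expires_at = int(time.time()) + 30 * 24 * 60 * 60
dynamodb.put_item(
    TableName="GameSessions",
    Item={
        "session_id": {"S": "session-123"},
        "expires_at": {"N": str(expires_at)},
    },
)
```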

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A company developed an application by using AWS Lambda and Amazon DynamoDB. The Lambda function periodically pulls data from the company’s S3 bucket based on date and time tags and inserts specific values into a DynamoDB table for further processing. The company must remove data that is older than 30 days from the DynamoDB table. Which solution will meet this requirement with the MOST operational efficiency?

    1. Update the Lambda function to add the Version attribute in the DynamoDB table. Enable TTL on the DynamoDB table to expire entries that are older than 30 days based on the TTL attribute.
    2. Update the Lambda function to add the TTL attribute in the DynamoDB table. Enable TTL on the DynamoDB table to expire entries that are older than 30 days based on the TTL attribute.
    3. Use AWS Step Functions to delete entries that are older than 30 days.
    4. Use EventBridge to schedule the Lambda function to delete entries that are older than 30 days.

References

DynamoDB_TTL_Time_to_Live

Amazon DynamoDB Backup and Restore

DynamoDB Backup and Restore

  • DynamoDB Backup and Restore provides fully automated on-demand backup, restore, and point-in-time recovery for data protection and archiving.
  • On-demand backup allows the creation of full backups of DynamoDB tables for data archiving, helping you meet corporate and governmental regulatory requirements.
  • Point-in-time recovery (PITR) provides continuous backups of your DynamoDB table data. When enabled, DynamoDB maintains incremental backups of your table for the last 35 days until you explicitly turn it off.

On-demand Backups

  • DynamoDB on-demand backup helps create full backups of the tables for long-term retention, and archiving for regulatory compliance needs.
  • Backup and restore actions run with no impact on table performance or availability.
  • Backups are preserved regardless of table deletion and retained until they are explicitly deleted.
  • On-demand backups are cataloged and discoverable.
  • On-demand backups can be created using either of the following (see the sketch after this list):
    • DynamoDB
      • built-in backup and restore can be used to back up and restore DynamoDB tables.
      • DynamoDB on-demand backups cannot be copied to a different account or Region.
    • AWS Backup (Recommended)
      • is a fully managed data protection service that makes it easy to centralize and automate backups across AWS services, in the cloud, and on-premises
      • provides enhanced backup features
      • can configure backup schedules & policies and monitor activity for the AWS resources and on-premises workloads in one place.
      • can copy the on-demand backups across AWS accounts and Regions,
      • can encrypt backups using an AWS KMS key that is independent of the DynamoDB table encryption key,
      • can apply a write-once-read-many (WORM) setting for the backups using the AWS Backup Vault Lock policy,
      • can add cost allocation tags to on-demand backups, and
      • can transition on-demand backups to cold storage for lower costs.
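
A hedged sketch of the native on-demand backup flow with boto3; the Orders table and backup names are illustrative:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Create a full on-demand backup of the table; this does not consume
# provisioned throughput or affect table performance.
backup = dynamodb.create_backup(
    TableName="Orders",                   # hypothetical table name
    BackupName="orders-pre-migration",    # hypothetical backup name
)
backup_arn = backup["BackupDetails"]["BackupArn"]

# Restore the backup into a NEW table; restores never overwrite an
# existing table in place.
dynamodb.restore_table_from_backup(
    TargetTableName="Orders-restored",
    BackupArn=backup_arn,
)
```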

PITR – Point-In-Time Recovery

  • DynamoDB point-in-time recovery – PITR enables automatic, continuous, incremental backup of the table with per-second granularity.
  • A PITR-enabled table that is deleted can be recovered for 35 days after deletion and restored to its state just before it was deleted.
  • PITR helps protect against accidental writes and deletes.
  • PITR can back up tables with hundreds of terabytes of data with no impact on the performance or availability of the production applications.
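
A minimal boto3 sketch of enabling PITR and restoring to a point in time; table names and the timestamp are illustrative:

```python
from datetime import datetime, timezone

import boto3

dynamodb = boto3.client("dynamodb")

# Turn on continuous backups / point-in-time recovery for the table.
dynamodb.update_continuous_backups(
    TableName="Orders",  # hypothetical table name
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# Restore the table to a specific second within the last 35 days.
# PITR restores always create a new table.
dynamodb.restore_table_to_point_in_time(
    SourceTableName="Orders",
    TargetTableName="Orders-2024-01-15",
    RestoreDateTime=datetime(2024, 1, 15, 12, 0, 0, tzinfo=timezone.utc),
)
```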

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A sysops engineer must create nightly backups of an Amazon DynamoDB table. Which backup methodology should the database specialist use to MINIMIZE management overhead?
    1. Install the AWS CLI on an Amazon EC2 instance. Write a CLI command that creates a backup of the DynamoDB table. Create a scheduled job or task that runs the command on a nightly basis.
    2. Create an AWS Lambda function that creates a backup of the DynamoDB table. Create an Amazon CloudWatch Events rule that runs the Lambda function on a nightly basis.
    3. Create a backup plan using AWS Backup, specify a backup frequency of every 24 hours, and give the plan a nightly backup window.
    4. Configure DynamoDB backup and restore for an on-demand backup frequency of every 24 hours.

References

DynamoDB_Backup_Restore

Amazon DynamoDB Global Tables

Amazon DynamoDB Global Tables

  • DynamoDB Global Tables is a fully managed, serverless, multi-master, active-active database.
  • Global tables provide 99.999% availability, increased application resiliency, and improved business continuity.
  • Global table’s automatic cross-region replication capability helps achieve fast, local read and write performance and regional fault tolerance for database workloads.
  • Applications can now perform reads and writes to DynamoDB in AWS regions around the world, with changes in any region propagated to every region where a table is replicated.
  • Global Tables help in building applications that take advantage of data locality to reduce overall latency.
  • Global Tables support eventual consistency & strong consistency for same-Region reads, but only eventual consistency for cross-Region reads.
  • Global Tables replicates data among regions within a single AWS account and currently does not support cross-account access.
  • Global Tables uses the Last Write Wins approach for conflict resolution.

Global Tables Working

  • Global Table is a collection of one or more replica tables, all owned by a single AWS account.
  • A single Amazon DynamoDB global table can only have one replica table per AWS Region.
  • Each replica table stores the same set of data items, has the same table name, and the same primary key schema.
  • When an application writes data to a replica table in one Region, DynamoDB automatically replicates the writes to other replica tables in the other AWS Regions.
  • Global Tables require DynamoDB Streams to be enabled with the New and old images setting (see the sketch after this list).
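
A minimal boto3 sketch of adding a replica Region to an existing table under the current (2019.11.21) global tables version, where DynamoDB manages the underlying replication; the Scores table is an illustrative name:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Adding a Region replica turns the table into a global table; each
# replica shares the same name and primary key schema.
dynamodb.update_table(
    TableName="Scores",
    ReplicaUpdates=[
        {"Create": {"RegionName": "eu-west-1"}},  # add a replica Region
    ],
)
```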

DynamoDB Global Tables vs. Aurora Global Databases

AWS Aurora Global Database vs DynamoDB Global Tables

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A company is building a web application on AWS. The application requires the database to support read and write operations in multiple AWS Regions simultaneously. The database also needs to propagate data changes between Regions as the changes occur. The application must be highly available and must provide a latency of single-digit milliseconds. Which solution meets these requirements?
    1. Amazon DynamoDB global tables
    2. Amazon DynamoDB streams with AWS Lambda to replicate the data
    3. An Amazon ElastiCache for Redis cluster with cluster mode enabled and multiple shards
    4. An Amazon Aurora global database

References

Amazon_DynamoDB_Global_Tables

Amazon DynamoDB Streams

Amazon DynamoDB Streams

  • DynamoDB Streams provides a time-ordered sequence of item-level changes made to data in a table.
  • DynamoDB Streams stores stream records for 24 hours, after which they are erased.
  • DynamoDB Streams maintains an ordered sequence of the events per item; however, ordering across items is not maintained.
  • Example
    • For example, suppose that you have a DynamoDB table tracking high scores for a game and that each item in the table represents an individual player. If you make the following three updates in this order:
      • Update 1: Change Player 1’s high score to 100 points
      • Update 2: Change Player 2’s high score to 50 points
      • Update 3: Change Player 1’s high score to 125 points
    • DynamoDB Streams will maintain the order of the Player 1 score events. However, it does not maintain order across players, so the Player 2 event is not guaranteed to appear between the two Player 1 events.
  • Applications can access this log and view the data items as they appeared before and after they were modified, in near-real time.
  • DynamoDB Streams APIs help developers consume updates and receive the item-level data before and after items are changed.
  • Streams allow reads at up to twice the rate of the provisioned write capacity of the DynamoDB table.
  • Streams have to be enabled on a per-table basis. When enabled on a table, DynamoDB captures information about every modification to data items in the table.
  • Streams support Encryption at rest to encrypt the data.
  • Streams are designed for No Duplicates so that every update made to the table will be represented exactly once in the stream.
  • Streams write stream records in near-real time so that applications can consume these streams and take action based on the contents.
  • Streams can be used for multi-region replication to keep other data stores up-to-date with the latest changes to DynamoDB or to take actions based on the changes made to the table
  • Stream records can be processed using Kinesis Data Streams, AWS Lambda, or a KCL application (see the sketch after this list).
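
A minimal boto3 sketch of enabling a stream with the New and old images view type, plus the shape of a Lambda handler consuming the stream; the table name and handler logic are illustrative:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Enable a stream on an existing table, capturing both the before and
# after images of each modified item.
dynamodb.update_table(
    TableName="GameScores",  # hypothetical table name
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
)

# A Lambda function wired to the stream receives batches of records,
# each carrying the item as it appeared before and after the change.
def handler(event, context):
    for record in event["Records"]:
        if record["eventName"] == "MODIFY":
            before = record["dynamodb"].get("OldImage")
            after = record["dynamodb"].get("NewImage")
            print(f"item changed: {before} -> {after}")
```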

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. An application currently writes a large number of records to a DynamoDB table in one region. There is a requirement for a secondary application to retrieve new records written to the DynamoDB table every 2 hours and process the updates accordingly. Which of the following is an ideal way to ensure that the secondary application gets the relevant changes from the DynamoDB table?
    1. Insert a timestamp for each record and then scan the entire table for the timestamp as per the last 2 hours.
    2. Create another DynamoDB table with the records modified in the last 2 hours.
    3. Use DynamoDB Streams to monitor the changes in the DynamoDB table.
    4. Transfer records to S3 which were modified in the last 2 hours.

References

DynamoDB_Streams

Amazon DynamoDB Consistency

DynamoDB Consistency

  • An AWS Region is a physical location around the world where AWS clusters data centers; each Region consists of one or more Availability Zones, which are discrete data centers with redundant power, networking, and connectivity.
  • DynamoDB automatically stores each table across multiple geographically distributed Availability Zones for durability.
  • DynamoDB consistency represents the manner and timing in which the successful write or update of a data item is reflected in a subsequent read operation of that same item.

DynamoDB Consistency Modes

Eventually Consistent Reads (Default)

  • The eventual consistency option maximizes read throughput.
  • Consistency across all copies is usually reached within a second.
  • However, an eventually consistent read might not reflect the results of a recently completed write.
  • Repeating a read after a short time should return the updated data.
  • DynamoDB uses eventually consistent reads, by default.

Strongly Consistent Reads

  • A strongly consistent read returns a result that reflects all writes that received a successful response prior to the read.
  • Strongly consistent reads are 2x the cost of eventually consistent reads.
  • Strongly Consistent Reads come with disadvantages
    • A strongly consistent read might not be available if there is a network delay or outage. In this case, DynamoDB may return a server error (HTTP 500).
    • Strongly consistent reads may have higher latency than eventually consistent reads.
    • Strongly consistent reads are not supported on global secondary indexes.
    • Strongly consistent reads use more throughput capacity than eventually consistent reads.
  • DynamoDB allows the user to specify whether the read should be eventually consistent or strongly consistent at the time of the request.
  • Read operations (such as GetItem, Query, and Scan) provide a ConsistentRead parameter; if set to true, DynamoDB uses strongly consistent reads during the operation (see the sketch after this list).
  • Query, GetItem, and BatchGetItem operations perform eventually consistent reads by default.
    • Query and GetItem operations can be forced to be strongly consistent
    • Query operations cannot perform strongly consistent reads on Global Secondary Indexes
    • BatchGetItem operations can be forced to be strongly consistent on a per-table basis
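
A minimal boto3 sketch contrasting the two read modes on an illustrative GameScores table:

```python
import boto3

table = boto3.resource("dynamodb").Table("GameScores")  # hypothetical table

# Default: eventually consistent read (may not reflect a very recent write).
eventual = table.get_item(Key={"player_id": "p1"})

# Strongly consistent read: reflects all prior successful writes, but
# consumes twice the read capacity and is not supported on GSIs.
strong = table.get_item(Key={"player_id": "p1"}, ConsistentRead=True)
```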

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. Which of the following statements is true about DynamoDB?
    1. Requests are eventually consistent unless otherwise specified.
    2. Requests are strongly consistent.
    3. Tables do not contain primary keys.
    4. None of the above
  2. How is provisioned throughput affected by the chosen consistency model when reading data from a DynamoDB table?
    1. Strongly consistent reads use the same amount of throughput as eventually consistent reads
    2. Strongly consistent reads use variable throughput depending on read activity
    3. Strongly consistent reads use more throughput than eventually consistent reads.
    4. Strongly consistent reads use less throughput than eventually consistent reads

References

DynamoDB_Read_Consistency

Amazon DynamoDB Auto Scaling

DynamoDB Auto Scaling

  • DynamoDB Auto Scaling uses the AWS Application Auto Scaling service to dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns.
  • Application Auto Scaling enables a DynamoDB table or a global secondary index to increase its provisioned read and write capacity to handle sudden increases in traffic, without throttling.
  • When the workload decreases, Application Auto Scaling decreases the throughput so that you don’t pay for unused provisioned capacity.

DynamoDB Auto Scaling Process

  1. An Application Auto Scaling policy is created on the DynamoDB table.
  2. DynamoDB publishes consumed capacity metrics to CloudWatch.
  3. If the table’s consumed capacity exceeds the target utilization (or falls below the target) for a specific length of time, CloudWatch triggers an alarm. You can view the alarm on the console and receive notifications using Simple Notification Service – SNS.
    1. The upper threshold alarm is triggered when consumed reads or writes breach the target utilization percent for two consecutive minutes.
    2. The lower threshold alarm is triggered after traffic falls below the target utilization minus 20 percent for 15 consecutive minutes.
  4. CloudWatch alarm invokes Application Auto Scaling to evaluate the scaling policy.
  5. Application Auto Scaling issues an UpdateTable request to adjust the table’s provisioned throughput.
  6. DynamoDB processes the UpdateTable request, dynamically increasing (or decreasing) the table’s provisioned throughput capacity so that it approaches your target utilization.
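
A minimal boto3 sketch of the process above – registering the table’s read capacity as a scalable target and attaching a target-tracking policy; the table name, capacity bounds, and 70% target are illustrative assumptions:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the table's read capacity as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/Orders",  # hypothetical table name
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)

# Attach a target-tracking policy that keeps consumed capacity at
# roughly 70% of provisioned capacity.
autoscaling.put_scaling_policy(
    ServiceNamespace="dynamodb",
    ResourceId="table/Orders",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyName="orders-read-scaling",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```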

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. An application running on Amazon EC2 instances writes data synchronously to an Amazon DynamoDB table configured for 60 write capacity units. During normal operation, the application writes 50KB/s to the table but can scale up to 500 KB/s during peak hours. The application is currently getting throttling errors from the DynamoDB table during peak hours. What is the MOST cost-effective change to support the increased traffic with minimal changes to the application?
    1. Use Amazon SNS to manage the write operations to the DynamoDB table
    2. Change DynamoDB table configuration to 600 write capacity units
    3. Increase the number of Amazon EC2 instances to support the traffic
    4. Configure Amazon DynamoDB Auto Scaling to handle the extra demand

References

DynamoDB_Auto_Scaling

AWS Certified Security – Specialty (SCS-C02) Exam Learning Path

AWS Certified Security – Specialty (SCS-C02) Exam Learning Path

I recently re-certified the updated AWS Certified Security – Specialty (SCS-C02) certification exam. The format and domains are pretty much the same as SCS-C01; however, it has been enhanced to cover all the latest services.

AWS Certified Security – Specialty (SCS-C02) Exam Content

  • AWS Certified Security – Specialty (SCS-C02) exam focuses on the AWS Security and Compliance concepts. It basically validates
    • An understanding of specialized data classifications and AWS data protection mechanisms.
    • An understanding of data-encryption methods and AWS mechanisms to implement them.
    • An understanding of secure Internet protocols and AWS mechanisms to implement them.
    • The ability to make tradeoff decisions with regard to cost, security, and deployment complexity to meet a set of application requirements.
    • An understanding of security operations and risks

Refer to the AWS Certified Security – Specialty Exam Guide

AWS Security - Specialty SCS-C02 Domains

  • SCS-C02 has added a new domain
    • Domain 6: Management and Security Governance with 14% coverage.
  • SCS-C02 has reduced the % coverage of some domains
    • Domain 3: Infrastructure Security (⬇︎ 6%),
    • Domain 4: Identity and Access Management (⬇︎ 2%),
    • Domain 5: Data Protection (⬇︎ 6%)

AWS Certified Security – Specialty (SCS-C02) Exam Summary

  • Specialty exams are tough, lengthy, and tiresome. Most of the questions and answers options have a lot of prose and a lot of reading that needs to be done, so be sure you are prepared and manage your time well.
  • SCS-C02 exam has 65 questions to be solved in 170 minutes, which gives you roughly 2 1/2 minutes to attempt each question.
  • SCS-C02 exam includes two types of questions, multiple-choice and multiple-response.
  • SCS-C02 has a scaled score between 100 and 1,000. The scaled score needed to pass the exam is 750.
  • Specialty exams currently cost $300 + tax.
  • You can get an additional 30 minutes if English is your second language by requesting Exam Accommodations. It might not be needed for Associate exams but is helpful for Professional and Specialty ones.
  • As always, mark the questions for review, move on, and come back to them after you are done with all.
  • As always, having a rough architecture or mental picture of the setup helps focus on the areas that you need to improve. Trust me, you will be able to eliminate 2 answers for sure and then need to focus on only the other two. Read the other 2 answers to check the difference area and that would help you reach the right answer or at least have a 50% chance of getting it right.
  • AWS exams can be taken either at a testing center or online. I prefer to take them online as it provides a lot of flexibility. Just make sure you have a proper place to take the exam with no disturbance and nothing around you.
  • Also, if you are taking the AWS Online exam for the first time, try to join at least 30 minutes before the actual time as I have had issues with both PSI and Pearson with long wait times.

AWS Certified Security – Specialty (SCS-C02) Exam Resources

AWS Certified Security – Specialty (SCS-C02) Exam Topics

  • AWS Certified Security – Specialty (SCS-C02) exam focuses a lot on Security and compliance concepts involving Data Encryption at rest or in transit, Data protection, Auditing, Compliance and regulatory requirements, and automated remediation.

Security, Identity & Compliance

  • Identity and Access Management (IAM)
    • IAM Roles to grant services and users temporary access to AWS services.
      • IAM Role can be used to give cross-account access and usually involves creating a role within the trusting account with a trust and permission policy and granting the user in the trusted account permissions to assume the trusting account role.
    • Identity Providers & Federation to grant external user identity (SAML or Open ID compatible IdPs) permissions to AWS resources without having to be created within the AWS account.
    • IAM Policies help define who has access & what actions they can perform.
  • Deep dive into Key Management Service (KMS). There would be quite a few questions on this.
    • is a managed encryption service that allows the creation and control of encryption keys to enable data encryption.
    • uses Envelope Encryption, where a master key encrypts the data key, and the data key encrypts the data (see the envelope-encryption sketch at the end of this list).
    • Understand how KMS works
    • Understand IAM Policies, Key Policies, Grants to grant access.
      • Key policies are the primary way to control access to KMS keys. Unless the key policy explicitly allows it, you cannot use IAM policies to allow access to a KMS key.
    • are regional; however, KMS supports multi-Region keys.
    • KMS Multi-Region keys
      • are KMS keys in different AWS Regions that can be used interchangeably – as though having the same key in multiple Regions.
      • are not global; each multi-Region key needs to be replicated and managed independently.
    • Understand the difference between CMKs with generated vs. imported key material, esp. for rotating keys.
    • KMS usage with VPC Endpoint which ensures the communication between the VPC and KMS is conducted entirely within the AWS network.
    • KMS ViaService condition
  • AWS CloudHSM
    • is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud
  • AWS Certificate Manager (ACM)
    • helps provision, manage, and deploy public and private SSL/TLS certificates for use with AWS services
    • to use an ACM Certificate with CloudFront, the certificate must be imported into the US East (N. Virginia) region.
    • is regional; certificates need to be requested in each region used and associated individually.
    • cannot be used directly on EC2 instances, and private keys cannot be exported.
  • AWS Secrets Manager
    • protects secrets needed to access applications, services, etc.
    • enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle
    • supports automatic rotation of credentials for RDS, DocumentDB, etc.
  • Secrets Manager vs. Systems Manager Parameter Store
    • Secrets Manager supports automatic rotation while SSM Parameter Store does not
    • Parameter Store is cost-effective as compared to Secrets Manager.
  • Amazon GuardDuty
    • is a threat detection service that continuously monitors the AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation.
    • supports CloudTrail S3 data events and management event logs, DNS logs, EKS audit logs, and VPC flow logs.
  • Amazon Inspector
    • is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS.
  • Amazon Macie
    • is a security service that uses machine learning to automatically discover, classify, and protect sensitive data in S3.
  • AWS Artifact is a central resource for compliance-related information that provides on-demand access to AWS’ security and compliance reports and select online agreements
  • AWS Shield & Shield Advanced
    • for DDoS protection and integrates with Route 53, CloudFront, ALB, and Global Accelerator.
  • AWS WAF
    • protects from common attack techniques like SQL injection and XSS; conditions can be based on IP addresses, HTTP headers, HTTP body, and URI strings.
    • integrates with CloudFront, ALB, and API Gateway.
    • supports Web ACLs and can block traffic based on IPs, Rate limits, and specific countries as well
    • allows IP match set rules to allow/deny specific IP addresses and rate-based rules to limit the number of requests.
    • logs can be sent to the CloudWatch Logs log group, an S3 bucket, or Kinesis Data Firehose.
  • AWS Security Hub is a cloud security posture management service that performs security best practice checks, aggregates alerts, and enables automated remediation.
  • AWS Network Firewall is a stateful, fully managed, network firewall and intrusion detection and prevention service (IDS/IPS) for VPCs.
  • AWS Resource Access Manager helps you securely share your resources across AWS accounts, within your organization or organizational units (OUs), and with IAM roles and users for supported resource types.
  • AWS Signer is a fully managed code-signing service to ensure the trust and integrity of your code.
  • AWS Audit Manager to map your compliance requirements to AWS usage data with prebuilt and custom frameworks and automated evidence collection.
  • Amazon Cognito, esp. User Pools
  • Firewall Manager helps centrally configure and manage firewall rules across the accounts and applications in AWS Organizations which includes a variety of protections, including WAF, Shield Advanced, VPC security groups, Network Firewall, and Route 53 Resolver DNS Firewall.
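
A minimal boto3 sketch of the envelope encryption mentioned in the KMS bullets above; the key alias is an illustrative assumption:

```python
import boto3

kms = boto3.client("kms")

# Ask KMS for a fresh data key under a KMS key. KMS returns the key
# both in plaintext and encrypted under the KMS key.
resp = kms.generate_data_key(
    KeyId="alias/my-app-key",  # hypothetical key alias
    KeySpec="AES_256",
)
plaintext_key = resp["Plaintext"]       # use locally to encrypt data, then discard
encrypted_key = resp["CiphertextBlob"]  # store alongside the encrypted data

# Later, decrypt the stored data key to recover the plaintext key.
decrypted = kms.decrypt(CiphertextBlob=encrypted_key)
assert decrypted["Plaintext"] == plaintext_key
```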

Networking & Content Delivery

  • Virtual Private Cloud – VPC
    • Security Groups, NACLs
      • NACLs are stateless, Security groups are stateful
      • NACLs at the subnet level, Security groups at the instance level
      • NACLs need to open ephemeral ports for response traffic.
    • VPC Gateway Endpoints to provide access to S3 and DynamoDB
    • VPC Interface Endpoints or PrivateLink provide access to a variety of services like SQS, Kinesis, or Private APIs exposed through NLB.
    • VPC Peering
      • to enable communication between VPCs within the same or different regions.
      • Route tables need to be configured on either VPC for them to be able to communicate.
      • does not allow cross-region security group reference.
    • VPC Flow Logs help capture information about the IP traffic going to and from network interfaces in the VPC
    • NAT Gateway provides managed NAT service that provides better availability, higher bandwidth and requires less administrative effort.
  • Virtual Private Network – VPN & Direct Connect to establish secure, low-latency connectivity between an on-premises data center and a VPC.
    • IPSec VPN over Direct Connect provides secure connectivity, as Direct Connect alone does not encrypt traffic in transit.
  • CloudFront
    • integrates with S3 to improve latency and performance.
    • provides multiple security features
    • supports encryption at rest and end-to-end encryption
      • Viewer Protocol Policy and Origin Protocol Policy to enforce HTTPS – can be configured to require that viewers use HTTPS to request the files so that connections are encrypted when CloudFront communicates with viewers.
      • Integrates with ACM and requires certs to be in the us-east-1 region
      • Underlying origin can be applied certs from ACM or issued by a third party.
    • CloudFront Origin Shield
      • helps improve the cache hit ratio and reduce the load on the origin.
      • requests from other regional caches would hit the Origin shield rather than the Origin.
      • should be placed in the regional cache and not in the edge cache
      • should be deployed to the region closer to the origin server
    • CloudFront provides Encryption at Rest
      • uses SSDs which are encrypted for edge location points of presence (POPs), and encrypted EBS volumes for Regional Edge Caches (RECs).
      • Function code and configuration are always stored in an encrypted format on the encrypted SSDs on the edge location POPs, and in other storage locations used by CloudFront.
    • Restricting access to content
  • Route 53
    • is a highly available and scalable DNS web service.
    • Resolver Query logging
      • logs the queries that originate in specified VPCs and from on-premises resources that use inbound or outbound Resolver endpoints, as well as the responses to those DNS queries.
      • can be logged to CloudWatch logs, S3, and Kinesis Data Firehose
    • Route 53 DNSSEC secures DNS traffic and helps protect a domain from DNS spoofing and man-in-the-middle attacks.
  • Elastic Load Balancer
    • End to End encryption
      • can be done with an NLB with a TCP listener as pass-through, terminating SSL on the EC2 instances
      • can be done with an ALB terminating SSL and using HTTPS between the ALB and EC2 instances
  • Gateway Load Balancer – GWLB
    • helps deploy, scale, and manage virtual appliances, such as firewalls, IDS/IPS systems, and deep packet inspection systems.

Management & Governance Tools

  • CloudWatch
  • CloudTrail for audit and governance
    • CloudTrail can be enabled for all regions in one go and supports log file integrity validation
    • With Organizations, the trail can be configured to log CloudTrail from all accounts to a central account.
  • AWS Config
    • AWS Config rules can be used to alert on any changes, and Config can be used to check the history of changes. AWS Config can also help check compliance with approved AMIs.
    • allows you to remediate noncompliant resources using AWS Systems Manager Automation documents.
    • AWS Config -> EventBridge -> Lambda/SNS
  • CloudTrail vs Config
    • CloudTrail provides the WHO and Config provides the WHAT.
  • Systems Manager
    • Parameter Store provides secure, scalable, centralized, hierarchical storage for configuration data and secret management. It does not support secrets rotation; use Secrets Manager instead.
    • Systems Manager Patch Manager helps select and deploy the operating system and software patches automatically across large groups of EC2 or on-premises instances
    • Systems Manager Run Command provides safe, secure remote management of your instances at scale without logging into the servers, replacing the need for bastion hosts, SSH, or remote PowerShell
    • Session Manager provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys.
  • AWS Organizations
    • is an account management service that enables consolidating multiple AWS accounts into an organization that can be managed centrally.
    • can configure Organization Trail to centrally log all CloudTrail logs.
    • Service Control Policies
      • acts as guardrails and specifies the services and actions that users and roles can use in the accounts that the SCP affects.
      • are similar to IAM permission policies except that they don’t grant any permissions.
  • AWS Trusted Advisor
    • inspects the AWS environment to make recommendations for system performance, saving money, availability, and closing security gaps
  • CloudFormation
    • Deletion Policy to prevent, retain, or backup RDS, EBS Volumes
    • Stack policy can prevent stack resources from being unintentionally updated or deleted during a stack update. Stack Policy only applies for Stack updates and not stack deletion.
    • CloudFormation Guard provides an open-source, general-purpose, policy-as-code evaluation tool.
  • Control Tower
    • to setup, govern, and secure a multi-account environment
    • strongly recommended guardrails cover EBS encryption

Storage & Databases

  • Simple Storage Service – S3
    • Understand S3 Security in detail
    • S3 Encryption supports both data at rest and data in transit encryption.
      • Data in transit encryption can be provided by enabling communication via SSL or using client-side encryption
      • Data at rest encryption can be provided using Server Side or Client Side encryption
      • Enforce S3 Encryption at Rest using default encryption of bucket policies
      • Enforce S3 encryption in transit using the aws:SecureTransport condition in the S3 bucket policy (see the bucket-policy sketch at the end of this list)
    • S3 permissions can be handled using IAM policies, bucket policies, and access control lists (ACLs).
    • S3 Object Lock helps to store objects using a WORM model and can help prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely.
    • S3 Block Public Access provides controls across an entire AWS Account or at the individual S3 bucket level to ensure that objects never have public access, now and in the future.
    • S3 Access Points simplify data access for any AWS service or customer application that stores data in S3.
    • S3 Versioning with MFA Delete can be enabled on a bucket to ensure that data in the bucket cannot be accidentally overwritten or deleted.
    • S3 Access Analyzer monitors the access policies, ensuring that the policies provide only the intended access to your S3 resources.
  • Glacier Vault Lock helps deploy and enforce compliance controls for individual S3 Glacier vaults with a vault lock policy.
  • EBS Encryption
  • Relational Database Services – RDS
    • is a web service that makes it easier to set up, operate, and scale a relational database in the cloud.
    • supports the same encryption at rest methods as EBS
    • does not support enabling encryption after creation. Need to create a snapshot, copy the snapshot to an encrypted snapshot, and restore it as an encrypted DB.
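
A minimal boto3 sketch of the aws:SecureTransport bucket policy mentioned in the S3 bullets above; the bucket name is illustrative:

```python
import json

import boto3

s3 = boto3.client("s3")

# Deny any request to the bucket that does not arrive over TLS,
# using the aws:SecureTransport condition key.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my-secure-bucket",      # hypothetical bucket
                "arn:aws:s3:::my-secure-bucket/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}
s3.put_bucket_policy(Bucket="my-secure-bucket", Policy=json.dumps(policy))
```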

Compute

Integration Tools

  • Know how CloudWatch integration with SNS and Lambda can help with notifications (these topics are not required in detail)

Whitepapers and articles

On the Exam Day

  • Make sure you are relaxed and get a good night’s sleep. The exam is not tough if you are well-prepared.
  • If you are taking the AWS Online exam
    • Try to join at least 30 minutes before the actual time as I have had issues with both PSI and Pearson with long wait times.
    • The online verification process does take some time and usually, there are glitches.
    • Remember, you would not be allowed to take the exam if you are late by more than 30 minutes.
    • Make sure your desk is clear, with no watches or external monitors; keep your phone away and ensure nobody can enter the room.

Finally, All the Best 🙂

AWS DynamoDB Accelerator – DAX

AWS DynamoDB Accelerator DAX

  • DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from ms to µs – even at millions of requests per second.
  • DAX as a managed service handles cache invalidation, data population, and cluster management.
  • DAX provides API compatibility with DynamoDB. Therefore, it requires only minimal functional changes to use with an existing application (see the sketch after this list).
  • DAX saves costs by reducing the read load (RCU) on DynamoDB.
  • DAX helps prevent hot partitions.
  • DAX is intended for high-performance read applications. As a write-through cache, DAX writes through to DynamoDB so that writes are immediately reflected in the item cache.
  • DAX only supports eventual consistency and strong consistency requests are passed through to DynamoDB.
  • DAX is fault-tolerant and scalable.
  • DAX cluster has a primary node and zero or more read-replica nodes. Upon failure of the primary node, DAX automatically fails over and elects a new primary. For scaling, add or remove read replicas.
  • DAX supports server-side encryption.
  • DAX supports encryption in transit, ensuring that all requests and responses between the application and the cluster are encrypted by TLS, and connections to the cluster can be authenticated by verification of a cluster x509 certificate.
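
A minimal sketch using the amazon-dax-client Python package; the constructor arguments follow the AWS TryDax samples but should be verified against the installed client version, and the endpoint and table names are illustrative:

```python
import amazondax  # pip install amazon-dax-client
import botocore.session

session = botocore.session.get_session()

# Point the DAX client at the cluster endpoint; aside from construction,
# it exposes the same low-level API surface as the DynamoDB client, so
# eventually consistent reads transparently go through the cache.
dax = amazondax.AmazonDaxClient(
    session,
    region_name="us-east-1",
    endpoints=["my-cluster.abc123.dax-clusters.us-east-1.amazonaws.com:8111"],
)

# Eventually consistent GetItem: served from the item cache on a hit.
resp = dax.get_item(
    TableName="GameScores",  # hypothetical table name
    Key={"player_id": {"S": "p1"}},
)

# ConsistentRead=True is passed through to DynamoDB and never cached.
resp = dax.get_item(
    TableName="GameScores",
    Key={"player_id": {"S": "p1"}},
    ConsistentRead=True,
)
```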

DAX Cluster

  • DAX cluster is a logical grouping of one or more nodes that DAX manages as a unit.
  • One of the nodes in the cluster is designated as the primary node, and the other nodes (if any) are read replicas.
  • Primary Node is responsible for
    • Fulfilling application requests for cached data.
    • Handling write operations to DynamoDB.
    • Evicting data from the cache according to the cluster’s eviction policy.
  • Read replicas are responsible for
    • Fulfilling application requests for cached data.
    • Evicting data from the cache according to the cluster’s eviction policy.
  • Only the primary node writes to DynamoDB, read replicas don’t write to DynamoDB.
  • For production, it is recommended to have DAX with at least three nodes with each node placed in different Availability Zones.
  • Three nodes are required for a DAX cluster to be fault-tolerant.
  • A DAX cluster in an AWS Region can only interact with DynamoDB tables that are in the same Region.

DynamoDB Accelerator Operations

  • Eventual Read operations
    • If DAX has the item available (a cache hit), DAX returns the item without accessing DynamoDB.
    • If DAX does not have the item available (a cache miss), DAX passes the request through to DynamoDB. When it receives the response from DynamoDB, DAX returns the results to the application. But it also writes the results to the cache on the primary node.
  • Strongly Consistent Read operations
    • DAX passes the request through to DynamoDB. The results from DynamoDB are not cached in DAX but simply returned.
    • DAX is not ideal for applications that require strongly consistent reads (or that cannot tolerate eventually consistent reads).
  • For Write operations
    • Data is first written to the DynamoDB table, and then to the DAX cluster.
    • Operation is successful only if the data is successfully written to both the table and to DAX.
    • DAX is not ideal for applications that are write-intensive or that do not perform much read activity.

DynamoDB Accelerator Caches

  • DAX cluster has two distinct caches – Item cache and Query cache
  • Item cache
    • Item cache stores the results from GetItem and BatchGetItem operations.
    • Items remain in the DAX item cache, subject to the Time to Live (TTL) setting and the least recently used (LRU) algorithm for the cache.
    • DAX provides a write-through cache, keeping the DAX item cache consistent with the underlying DynamoDB tables.
  • Query cache
    • DAX caches the results from Query and Scan requests in its query cache.
    • Query and Scan results don’t affect the item cache at all, as the result set is saved in the query cache – not in the item cache.
    • Writes to the Item cache don’t affect the Query cache
  • The item and query caches have a default TTL setting of 5 minutes.
  • DAX assigns a timestamp to every entry it writes to the cache. The entry expires if it has remained in the cache for longer than the TTL setting.
  • DAX maintains an LRU list for both the item and query caches. The LRU list tracks when an item was added and last read. If the cache becomes full, DAX evicts older items (even if they haven’t expired yet) to make room for new entries.
  • LRU algorithm is always enabled for both the item and query cache and is not user-configurable.

DynamoDB Accelerator Write Strategies

Write-Through

  • DAX item cache implements a write-through policy
  • For write operations, DAX ensures that the cached item is synchronized with the item as it exists in DynamoDB.

Write-Around

  • Write-around strategy writes directly to DynamoDB, bypassing DAX, which reduces write latency.
  • Ideal for bulk uploads or writing large quantities of data.
  • The item cache doesn’t remain in sync with the data in DynamoDB.

DynamoDB Accelerator Scenarios

  • As an in-memory cache, DAX increases performance and reduces the response times of eventually consistent read workloads by an order of magnitude from single-digit milliseconds to microseconds.
  • DAX reduces operational and application complexity by providing a managed service that is API-compatible with DynamoDB. It requires only minimal functional changes to use with an existing application.
  • For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to overprovision read capacity units.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A company has setup an application in AWS that interacts with DynamoDB. DynamoDB is currently responding in milliseconds, but the application response guidelines require it to respond within microseconds. How can the performance of DynamoDB be further improved?
    1. Use ElastiCache in front of DynamoDB
    2. Use DynamoDB inbuilt caching
    3. Use DynamoDB Accelerator
    4. Use RDS with ElastiCache instead

References

AWS_DynamoDB_Accelerator

AWS Service Catalog

AWS Service Catalog

  • AWS Service Catalog helps centrally manage cloud resources to achieve governance at scale of the infrastructure as code (IaC) templates, written in CloudFormation or Terraform.
  • allows IT administrators to create, manage, and distribute catalogs of approved products to end users, who can then access the products they need in a personalized portal.
  • can help control which users have access to each product to enforce compliance with organizational business policies while making sure the customers can quickly deploy the cloud resources they need.
  • increases agility and reduces costs as end users can find and launch only the products they need from a controlled catalog.
  • is a regional service; portfolios and products are regional constructs that need to be created per region and are only visible/usable in the regions in which they were created.
  • supports VPC Endpoints to privately access Service Catalog APIs from VPC without the need for an Internet gateway, NAT gateway, or VPN connection.


Service Catalog Portfolios and Products

  • Service Catalog portfolio is a collection of products, with configuration information that determines who can use those products and how they can use them.
  • Each Service Catalog product is based on an infrastructure-as-code (IaC) template using CloudFormation or Terraform.
  • Customized portfolios can be created for each type of user in an organization and selectively granted access to the appropriate portfolio.
  • When an administrator adds a new version of a product to a portfolio, that version is automatically available to all current portfolio users.
  • Same product can be included in multiple portfolios.
  • Portfolios can be shared with other AWS accounts and extended by applying additional constraints.

Service Catalog Access Control

  • Launch Constraint
    • provides AWS Service Catalog with the capability to perform actions on behalf of users even when those users do not have the necessary IAM permissions to perform those actions directly.
    • is an IAM Role that AWS Service Catalog assumes when an end user launches a product (see the sketch after this list).
    • Service Catalog products without a launch constraint will launch and manage products using the end user’s IAM credentials; if the end user credentials are not sufficient for those activities, errors will result either in provisioning or in management activities.
  • Template Constraint
    • defines rules that limit the parameter values that a user enters when launching a product
    • is applied when provisioning a new product or updating a product that is already in use.
    • applies the most restrictive constraint among all constraints applied to the portfolio and the product.
    • is not supported for Terraform configurations
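
A minimal boto3 sketch of an end user provisioning a catalog product; with a launch constraint attached, Service Catalog assumes the constraint’s IAM role to create the underlying resources. All IDs and parameters are illustrative:

```python
import boto3

sc = boto3.client("servicecatalog")

# Launch an approved product from the catalog. The product and
# provisioning-artifact (version) IDs are hypothetical.
sc.provision_product(
    ProductId="prod-abc123",
    ProvisioningArtifactId="pa-def456",   # the product version to launch
    ProvisionedProductName="team-a-ec2",
    ProvisioningParameters=[
        {"Key": "InstanceType", "Value": "t3.micro"},
    ],
)
```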

Service Catalog AppRegistry

  • Service Catalog AppRegistry allows organizations to understand the application context of their AWS resources.
  • AppRegistry provides a repository for the information that describes the applications and associated resources that you use within your enterprise.
  • AppRegistry provides a single, up-to-date, definition of applications within their AWS environment.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A company has several business units that want to use Amazon EC2. The company wants to require all business units to provision their EC2 instances by using only approved EC2 instance configurations. What should a SysOps administrator do to implement this requirement?
    1. Create an EC2 instance launch configuration. Allow the business units to launch EC2 instances by specifying this launch configuration in the AWS Management Console.
    2. Develop an IAM policy that limits the business units to provision EC2 instances only. Instruct the business units to launch instances by using an AWS CloudFormation template.
    3. Publish a product and launch constraint role for EC2 instances by using AWS Service Catalog. Allow the business units to perform actions in AWS Service Catalog only.
    4. Share an AWS CloudFormation template with the business units. Instruct the business units to pass a role to AWS CloudFormation to allow the service to manage EC2 instances.