AWS DynamoDB – Certification

AWS DynamoDB

  • Amazon DynamoDB is a fully managed NoSQL database service that
    • makes it simple and cost-effective to store and retrieve any amount of data and serve any level of request traffic.
    • provides fast and predictable performance with seamless scalability
  • DynamoDB enables customers to offload the administrative burdens of operating and scaling distributed databases to AWS, so they don’t have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling.
  • DynamoDB tables do not have fixed schemas, and each item may have a different number of attributes.
  • DynamoDB synchronously replicates data across three facilities in an AWS Region, giving high availability and data durability.
  • DynamoDB supports fast in-place atomic updates. A numeric attribute can be incremented or decremented within an item using a single API call (see the sketch after this list)
  • DynamoDB uses proven cryptographic methods to securely authenticate users and prevent unauthorized data access
  • Durability, performance, reliability, and security are built in, with SSD (solid state drive) storage and automatic 3-way replication.
  • DynamoDB Secondary indexes
    • add flexibility to the queries, without impacting performance.
    • are automatically maintained as sparse objects; items appear in an index only if they exist in the table on which the index is defined, making queries against an index very efficient
  • DynamoDB throughput and single-digit millisecond latency makes it a great fit for gaming, ad tech, mobile, and many other applications
  • ElastiCache can be used in front of DynamoDB to offload a high volume of reads for infrequently changed data
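
A minimal sketch of such an atomic, in-place update using boto3; the table name ("GameScores"), key, and attribute here are hypothetical:

```python
import boto3

table = boto3.resource("dynamodb").Table("GameScores")  # hypothetical table

# ADD atomically increments the numeric "score" attribute in a single call,
# with no client-side read-modify-write cycle
response = table.update_item(
    Key={"player_id": "p-1001"},
    UpdateExpression="ADD score :inc",
    ExpressionAttributeValues={":inc": 1},
    ReturnValues="UPDATED_NEW",
)
print(response["Attributes"])  # e.g. {'score': Decimal('42')}
```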

DynamoDB Performance

  • Automatically scales horizontally
  • DynamoDB runs exclusively on Solid State Drives (SSDs).
    • SSDs help achieve the design goals of predictable low-latency response times for storing and accessing data at any scale.
    • SSDs High I/O performance enables it to serve high-scale request workloads cost efficiently, and to pass this efficiency along in low request pricing
  • Provisioned table reads and writes
    • Scale up throughput when needed
    • Scale down throughput four times per UTC calendar day
  • DynamoDB automatically partitions, reallocates and re-partitions the data and provisions additional server capacity as the
    • table size grows or
    • provisioned throughput is increased
  • Global Secondary indexes (GSI)
    • can be created upfront or added later

DynamoDB Consistency

  • Each DynamoDB table is automatically stored in three geographically distributed locations for durability
  • Read consistency represents the manner and timing in which the successful write or update of a data item is reflected in a subsequent read operation of that same item
  • DynamoDB allows the user to specify whether a read should be eventually consistent or strongly consistent at the time of the request
    • Eventually Consistent Reads (Default)
      • The eventual consistency option maximizes the read throughput.
      • Consistency across all copies is usually reached within a second
      • However, an eventually consistent read might not reflect the results of a recently completed write.
      • Repeating a read after a short time should return the updated data.
    • Strongly Consistent Reads
      • A strongly consistent read returns a result that reflects all writes that received a successful response prior to the read
  • Query, GetItem, and BatchGetItem operations perform eventually consistent reads by default
    • Query and GetItem operations can be forced to be strongly consistent
    • Query operations cannot perform strongly consistent reads on Global Secondary Indexes
    • BatchGetItem operations can be forced to be strongly consistent on a per-table basis
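
A small sketch of choosing the consistency mode per request with boto3 (the "Users" table and key are assumptions for illustration):

```python
import boto3

table = boto3.resource("dynamodb").Table("Users")  # hypothetical table

# Default: eventually consistent read; may not reflect a very recent write
eventual = table.get_item(Key={"user_id": "u-1"})

# Strongly consistent read: reflects all writes that were acknowledged
# before the read was issued (not supported on GSIs)
strong = table.get_item(Key={"user_id": "u-1"}, ConsistentRead=True)
```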

Global Secondary Indexes

  • DynamoDB creates and maintains indexes for the primary key attributes for efficient access of data in the table, which allows applications to quickly retrieve data by specifying primary key values.
  • Global Secondary Indexes (GSI) are indexes that contain partition or composite partition-and-sort keys that can be different from the keys in the table on which the index is based.
  • Global secondary index is considered “global” because queries on the index can span all items in a table, across all partitions.
  • Multiple secondary indexes can be created on a table, and queries issued against these indexes.
  • Applications benefit from having one or more secondary keys available to allow efficient access to data with attributes other than the primary key.
  • GSIs support non-unique attributes, which increases query flexibility by enabling queries against table attributes that are not part of the primary key
  • GSIs support eventual consistency only; DynamoDB automatically and asynchronously propagates item additions, updates, and deletes to a GSI when the corresponding changes are made to the table
  • Data in a secondary index consists of the GSI alternate key, the table’s primary key, and the attributes that are projected, or copied, from the table into the index
  • Attributes that are part of an item in the table, but not part of the GSI key, the table’s primary key, or the projected attributes, are not returned when querying the GSI
  • GSIs manage throughput independently of the table they are based on; the provisioned throughput for the table and for each associated GSI needs to be specified at creation time
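
A sketch of defining a GSI whose keys differ from the table’s primary key; the table ("Orders"), attribute names, and capacity numbers are illustrative assumptions:

```python
import boto3

boto3.client("dynamodb").create_table(
    TableName="Orders",
    AttributeDefinitions=[
        {"AttributeName": "order_id", "AttributeType": "S"},
        {"AttributeName": "customer_id", "AttributeType": "S"},
        {"AttributeName": "order_date", "AttributeType": "S"},
    ],
    KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    GlobalSecondaryIndexes=[
        {
            "IndexName": "customer-date-index",
            # GSI keys can be entirely different from the table's keys
            "KeySchema": [
                {"AttributeName": "customer_id", "KeyType": "HASH"},
                {"AttributeName": "order_date", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "KEYS_ONLY"},
            # a GSI carries its own provisioned throughput, independent of the table
            "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
        }
    ],
)
```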

Local Secondary Indexes

  • Local secondary indexes (LSI) are indexes that have the same partition key as the table, but a different sort key.
  • A local secondary index is “local” because every partition of the index is scoped to a table partition with the same partition key.
  • An LSI allows searching on an alternate sort key, expanding the number of attributes that can be queried efficiently
  • LSIs are updated automatically when the primary index is updated, and reads support both strongly and eventually consistent options
  • Local secondary indexes can only be queried via the Query API
  • Local secondary indexes cannot be added to existing tables at this time
  • A local secondary index cannot be modified once it is created at this time
  • LSIs cannot be removed from a table once they are created at this time
  • An LSI consumes provisioned throughput capacity as part of the table with which it is associated
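
For contrast, a sketch of an LSI, which must be declared at table creation, shares the table’s partition key, and has no throughput of its own (table and attribute names are illustrative):

```python
import boto3

boto3.client("dynamodb").create_table(
    TableName="Forum",
    AttributeDefinitions=[
        {"AttributeName": "forum_id", "AttributeType": "S"},
        {"AttributeName": "post_time", "AttributeType": "S"},
        {"AttributeName": "author", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "forum_id", "KeyType": "HASH"},
        {"AttributeName": "post_time", "KeyType": "RANGE"},
    ],
    LocalSecondaryIndexes=[
        {
            "IndexName": "author-index",
            "KeySchema": [
                {"AttributeName": "forum_id", "KeyType": "HASH"},  # same partition key
                {"AttributeName": "author", "KeyType": "RANGE"},   # alternate sort key
            ],
            "Projection": {"ProjectionType": "ALL"},
        }
    ],
    # the LSI shares this throughput; none is specified per index
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
```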

DynamoDB GSI vs LSI
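
  • Key schema: a GSI can have a partition or partition-and-sort key different from the table’s; an LSI shares the table’s partition key and differs only in the sort key
  • Creation: a GSI can be created upfront or added later; an LSI can only be created at table creation time and cannot be modified or removed
  • Consistency: GSI queries support only eventually consistent reads; LSI reads support both strongly and eventually consistent options
  • Throughput: a GSI has its own provisioned throughput, specified independently; an LSI consumes the provisioned throughput of the table it belongs to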

DynamoDB Cross-region Replication

  • DynamoDB cross-region replication allows identical copies (called replicas) of a DynamoDB table (called master table) to be maintained in one or more AWS regions
  • Writes to the table will be automatically propagated to all replicas
  • Cross-region replication currently supports single master mode. A single master has one master table and one or more replica tables
  • Read replicas are updated asynchronously as DynamoDB acknowledges a write operation as successful once it has been accepted by the master table. The write will then be propagated to each replica with a slight delay.
  • Cross-region replication can be helpful in scenarios
    • Efficient disaster recovery, in case a data center failure occurs.
    • Faster reads, for customers in multiple regions, by reading the DynamoDB table from the closest AWS data center.
    • Easier traffic management, to distribute the read workload across tables and thereby consume less read capacity in the master table.
    • Easy regional migration, by promoting a read replica to master
    • Live data migration, to replicate data and when the tables are in sync, switch the application to write to the destination region
  • Cross-region replication costing depends on
    • Provisioned throughput (Writes and Reads)
    • Storage for the replica tables.
    • Data Transfer across regions
    • Reading data from DynamoDB Streams to keep the tables in sync.
    • Cost of EC2 instances provisioned, depending upon the instance types and region, to host the replication process.

DynamoDB Streams

  • DynamoDB Streams provides a time-ordered sequence of item-level changes made to data in a table. Changes are retained for 24 hours, after which they are erased. The ordered sequence of events is maintained per item; ordering across items is not guaranteed
  • DynamoDB Streams have to be enabled on a per-table basis
  • DynamoDB streams can be used for multi-region replication to keep other data stores up-to-date with the latest changes to DynamoDB or to take actions based on the changes made to the table
  • DynamoDB Streams APIs help developers consume updates and receive the item-level data before and after items are changed
  • DynamoDB Streams allows read at up to twice the rate of the provisioned write capacity of the DynamoDB table
  • DynamoDB Streams is designed so that every update made to the table will be represented exactly once in the stream
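
A rough sketch of consuming a stream with the low-level Streams API; the stream ARN below is a placeholder (in practice it is returned by DescribeTable once streams are enabled):

```python
import boto3

streams = boto3.client("dynamodbstreams")
stream_arn = "arn:aws:dynamodb:us-east-1:123456789012:table/Orders/stream/LABEL"  # placeholder

desc = streams.describe_stream(StreamArn=stream_arn)
for shard in desc["StreamDescription"]["Shards"]:
    iterator = streams.get_shard_iterator(
        StreamArn=stream_arn,
        ShardId=shard["ShardId"],
        ShardIteratorType="TRIM_HORIZON",  # start from the oldest available record
    )["ShardIterator"]
    for record in streams.get_records(ShardIterator=iterator)["Records"]:
        # each record is an item-level change; NewImage/OldImage availability
        # depends on the table's StreamViewType
        print(record["eventName"], record["dynamodb"].get("Keys"))
```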

DynamoDB Triggers

  • DynamoDB Triggers is a feature that allows execution of custom actions based on item-level updates to a DynamoDB table, by connecting DynamoDB Streams to AWS Lambda functions
  • DynamoDB triggers can be used in scenarios like sending notifications, updating an aggregate table, and connecting DynamoDB tables to other data sources
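
An illustrative AWS Lambda handler for such a trigger; Lambda polls the table’s stream and invokes the function with batches of change records (the notification/aggregation logic is left as a stub):

```python
def lambda_handler(event, context):
    # 'event' carries a batch of DynamoDB Streams records
    for record in event["Records"]:
        if record["eventName"] == "INSERT":
            new_image = record["dynamodb"].get("NewImage", {})
            # e.g. send a notification or update an aggregate table here
            print("New item:", new_image)
    return {"processed": len(event["Records"])}
```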

DynamoDB Costs

  • Index Storage
    • DynamoDB is an indexed data store
      • Billable Data = Raw byte data size + 100 byte per-item storage indexing overhead
  • Provisioned throughput
    • Pay flat, hourly rate based on the capacity reserved as the throughput provisioned for the table
    • With one Write Capacity Unit, you can perform one write per second for items up to 1KB in size.
    • With one Read Capacity Unit, you can perform one strongly consistent read (or two eventually consistent reads) per second for items up to 4KB in size. Larger items consume multiple capacity units (a worked example follows this list).
    • Provisioned throughput is charged for every 10 units of Write Capacity and every 50 units of Read Capacity.
  • Reserved capacity
    • Significant savings over the normal price
    • Pay a one-time upfront fee
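
A worked example of the capacity arithmetic above, for an assumed 3KB item (capacity units round up to whole KB blocks):

```python
import math

item_kb = 3              # assumed item size
writes_per_sec = 100
reads_per_sec = 80

wcu = writes_per_sec * math.ceil(item_kb / 1)        # 1KB per write unit -> 300 WCU
rcu_strong = reads_per_sec * math.ceil(item_kb / 4)  # 4KB per strong read -> 80 RCU
rcu_eventual = math.ceil(rcu_strong / 2)             # eventual reads cost half -> 40 RCU

print(wcu, rcu_strong, rcu_eventual)
```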

DynamoDB Best Practices

  • Keep item size small
  • Store metadata in DynamoDB and large BLOBs in Amazon S3
  • Use a table per day, week, month, etc. for storing time-series data
  • Use conditional or Optimistic Concurrency Control (OCC) updates (a sketch follows this list)
    • Optimistic Concurrency Control is like optimistic locking in an RDBMS
    • OCC is generally used in environments with low data contention, conflicts are rare and transactions can be completed without the expense of managing locks and transactions
    • OCC assumes that multiple transactions can frequently be completed without interfering with each other.
    • Transactions are executed using data resources without acquiring locks on those resources and waiting for other transaction locks to be cleared
    • Before a transaction is committed, it is verified that the data was not modified by any other transaction. If it was, the transaction is rolled back and needs to be restarted with the updated data
    • OCC leads to higher throughput as compared to other concurrency control methods like pessimistic locking, as locking can drastically limit effective concurrency even when deadlocks are avoided
  • Avoid hot keys and hot partitions
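
A minimal OCC sketch with a conditional write, assuming a hypothetical "Accounts" table that carries a numeric "version" attribute:

```python
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("Accounts")  # hypothetical table

def update_balance(account_id, new_balance, expected_version):
    try:
        table.update_item(
            Key={"account_id": account_id},
            UpdateExpression="SET balance = :b, version = :v",
            # the write succeeds only if no one changed the item since we read it
            ConditionExpression="version = :expected",
            ExpressionAttributeValues={
                ":b": new_balance,
                ":v": expected_version + 1,
                ":expected": expected_version,
            },
        )
        return True
    except ClientError as e:
        if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # conflict: re-read the item and retry
        raise
```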

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Which of the following are use cases for Amazon DynamoDB? Choose 3 answers
    1. Storing BLOB data.
    2. Managing web sessions
    3. Storing JSON documents
    4. Storing metadata for Amazon S3 objects
    5. Running relational joins and complex updates.
    6. Storing large amounts of infrequently accessed data.
  2. You are configuring your company’s application to use Auto Scaling and need to move user state information. Which of the following AWS services provides a shared data store with durability and low latency?
    1. AWS ElastiCache Memcached (does not allow writes)
    2. Amazon Simple Storage Service (does not provide low latency)
    3. Amazon EC2 instance storage (not durable)
    4. Amazon DynamoDB
  3. Does DynamoDB support in-place atomic updates?
    1. It is not defined
    2. No
    3. Yes
    4. It does support in-place non-atomic updates
  4. What is the maximum write throughput I can provision for a single DynamoDB table?
    1. 1,000 write capacity units
    2. 100,000 write capacity units
    3. DynamoDB is designed to scale without limits, but if you go beyond 10,000 you have to contact AWS first
    4. 10,000 write capacity units
  5. In which of the following situations might you benefit from using DynamoDB? (Choose 2 answers)
    1. You need fully managed database to handle highly complex queries
    2. You need to deal with massive amount of “hot” data and require very low latency
    3. You need a rapid ingestion of clickstream in order to collect data about user behavior
    4. Your on-premises data center runs Oracle database, and you need to host a backup in AWS cloud
  6. You are designing a file-sharing service. This service will have millions of files in it. Revenue for the service will come from fees based on how much storage a user is using. You also want to store metadata on each file, such as title, description and whether the object is public or private. How do you achieve all of these goals in a way that is economical and can scale to millions of users?
    1. Store all files in Amazon Simple Storage Service (S3). Create a bucket for each user. Store metadata in the filename of each object, and access it with LIST commands against the S3 API. (expensive and slow as it returns only 1000 items at a time)
    2. Store all files in Amazon S3. Create Amazon DynamoDB tables for the corresponding key-value pairs on the associated metadata, when objects are uploaded.
    3. Create a striped set of 4000 IOPS Elastic Load Balancing volumes to store the data. Use a database running in Amazon Relational Database Service (RDS) to store the metadata.(not economical with volumes)
    4. Create a striped set of 4000 IOPS Elastic Load Balancing volumes to store the data. Create Amazon DynamoDB tables for the corresponding key-value pairs on the associated metadata, when objects are uploaded. (not economical with volumes)
  7. A utility company is building an application that stores data coming from more than 10,000 sensors. Each sensor has a unique ID and will send a datapoint (approximately 1KB) every 10 minutes throughout the day. Each datapoint contains the information coming from the sensor as well as a timestamp. This company would like to query information coming from a particular sensor for the past week very rapidly and want to delete all the data that is older than 4 weeks. Using Amazon DynamoDB for its scalability and rapidity, how do you implement this in the most cost effective way
    1. One table, with a primary key that is the sensor ID and a hash key that is the timestamp
    2. One table, with a primary key that is the concatenation of the sensor ID and timestamp
    3. One table for each week, with a primary key that is the concatenation of the sensor ID and timestamp (partition + sort key)
    4. One table for each week, with a primary key that is the sensor ID and a hash key that is the timestamp

References

AWS Storage Options – RDS, DynamoDB & Database on EC2

AWS Storage Options Whitepaper with RDS, DynamoDB & Database on EC2 Cont.

Provides a brief summary for the Ideal Use cases, Anti-Patterns and other factors for Amazon RDS, DynamoDB & Databases on EC2 storage options

Amazon RDS

  • RDS is a web service that provides the capabilities of a MySQL, Oracle, MariaDB, PostgreSQL, or Microsoft SQL Server relational database as a managed, cloud-based service
  • RDS eliminates much of the administrative overhead associated with launching, managing, and scaling your own relational database on Amazon EC2 or in another computing environment.

Ideal Usage Patterns

  • RDS is a great solution for a cloud-based, fully-managed relational database
  • RDS is also optimal for new applications with structured data that requires more sophisticated querying and joining capabilities than that provided by Amazon’s NoSQL database offering, DynamoDB.
  • RDS provides full compatibility with the databases supported and direct access to native database engines, code and libraries and is ideal for existing applications that rely on these databases

Anti-Patterns

  • Index and query-focused data
    • If the application doesn’t require advanced features such as joins and complex transactions and is more oriented toward indexing and querying data, DynamoDB would be more appropriate for these needs
  • Numerous BLOBs
    • If the application makes heavy use of files (audio files, videos, images, etc.), it is a better choice to use S3 to store the objects instead of the database engine’s BLOB feature, and use RDS or DynamoDB only to save the metadata
  • Automated scalability
    • RDS provides pushbutton scaling, but it only scales up and has limited scale-out ability. If fully-automated scaling is needed, DynamoDB may be a better choice.
  • Complete control
    • RDS does not provide admin access and does not enable the full feature set of the database engines.
    • So if the application requires complete, OS-level control of the database server with full root or admin login privileges, a self-managed database on EC2 may be a better match.
  • Other database platforms
    • RDS, at this time, provides MySQL, Oracle, MariaDB, PostgreSQL, and SQL Server databases.
    • If any other database platform (such as IBM DB2, Informix, or Sybase) is needed, it should be deployed on a self-managed database on an EC2 instance by using a relational database AMI, or by installing database software on an EC2 instance.

Performance

  • RDS Provisioned IOPS provides a high-performance storage option designed to deliver fast, predictable, and consistent performance for I/O-intensive transactional database workloads; the IOPS rate can be specified when the instance is launched and is guaranteed over the life of the instance

Durability and Availability

  • RDS leverages Amazon EBS volumes as its data store
  • RDS provides database backups, for enhanced durability, which are replicated across multiple AZ’s
    • Automated backups
      • If enabled, RDS will automatically perform a full daily backup of your data during the specified backup window, and will also capture DB transaction logs
    • User initiated backups
      • Users can initiate backups at any time, and they are not deleted unless explicitly deleted by the user
  • RDS Multi-AZ feature enhances both the durability and the availability of the database by synchronously replicating the data between a primary RDS DB instance and a standby instance in another Availability Zone, which prevents data loss
  • RDS provides a DNS endpoint and, in case of a failure on the primary, it automatically fails over to the standby instance
  • RDS also allows Read replicas for the supported databases, which are replicated asynchronously

Cost Model

  • RDS offers a tiered pricing structure, based on the size of the database instance, the deployment type (Single-AZ/Multi-AZ), and the AWS region.
  • Pricing for RDS is based on several factors: the DB instance hours (per hour), the amount of provisioned database storage (per GB-month and per million I/O requests), additional backup storage (per GB-month), and data transfer in/out (per GB per month)

Scalability and Elasticity

  • RDS resources can be scaled elastically in several dimensions: database storage size, database storage IOPS rate, database instance compute capacity, and the number of read replicas
  • RDS supports “pushbutton scaling” of both database storage and compute resources. Additional storage can either be added immediately or during the next maintenance cycle
  • RDS for MySQL also enables you to scale out beyond the capacity of a single database deployment for read-heavy database workloads by creating one or more read replicas.
  • Multiple RDS instances can also be configured to leverage database partitioning or sharding to spread the workload over multiple DB instances, achieving even greater database scalability and elasticity.

Interfaces

  • RDS APIs and the AWS Management Console provide a management interface that allows you to create, delete, modify, and terminate RDS DB instances; to create DB snapshots; and to perform point-in-time restores
  • There is no AWS data API for Amazon RDS.
  • Once a database is created, RDS provides a DNS endpoint for the database which can be used to connect to the database.
  • The endpoint does not change over the lifetime of the instance, even during failover in a Multi-AZ configuration
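
A short sketch of the management interface and DNS endpoint mentioned above, using boto3 (the identifiers are placeholders):

```python
import boto3

rds = boto3.client("rds")

# user-initiated snapshot: retained until explicitly deleted
rds.create_db_snapshot(
    DBSnapshotIdentifier="mydb-manual-snapshot",
    DBInstanceIdentifier="mydb-instance",
)

# the DNS endpoint used by applications to connect to the database
desc = rds.describe_db_instances(DBInstanceIdentifier="mydb-instance")
print(desc["DBInstances"][0]["Endpoint"]["Address"])
```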

Amazon DynamoDB

  • Amazon DynamoDB is a fast, fully-managed NoSQL database service that makes it simple and cost-effective to store and retrieve any amount of data, and serve any level of request traffic.
  • DynamoDB being a managed service helps offload the administrative burden of operating and scaling a highly-available distributed database cluster.
  • DynamoDB helps meet the latency and throughput requirements of highly demanding applications by providing extremely fast and predictable performance with seamless throughput and storage scalability.
  • DynamoDB provides both eventually-consistent reads (by default), and strongly-consistent reads (optional), as well as implicit item-level transactions for item put, update, delete, conditional operations, and increment/decrement.
  • Amazon DynamoDB handles data as follows:
    • DynamoDB stores structured data in tables, indexed by primary key, and allows low-latency read and write access to items.
    • DynamoDB supports three data types: number, string, and binary, in both scalar and multi-valued sets.
    • Tables do not have a fixed schema, so each data item can have a different number of attributes.
    • Primary key can either be a single-attribute hash key or a composite hash-range key.
    • Local secondary indexes provide additional flexibility for querying against attributes other than the primary key.
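
A small sketch of the schemaless model: two items in the same hypothetical table carry different attribute sets and are indexed only by the primary key:

```python
import boto3

table = boto3.resource("dynamodb").Table("Products")  # hypothetical table

table.put_item(Item={"product_id": "p-1", "name": "book", "pages": 320})
table.put_item(Item={"product_id": "p-2", "name": "song",
                     "duration_sec": 215, "artist": "someone"})  # different attributes, same table
```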

Ideal Usage Patterns

  • DynamoDB is ideal for existing or new applications that need a flexible NoSQL database with low read and write latencies, and the ability to scale storage and throughput up or down as needed without code changes or downtime.
  • Ideal use cases require a highly available and scalable database, because downtime or performance degradation has an immediate negative impact on an organization’s business, e.g. mobile apps, gaming, digital ad serving, live voting and audience interaction for live events, sensor networks, log ingestion, access control for web-based content, metadata storage for S3 objects, e-commerce shopping carts, and web session management

Anti-Patterns

  • Structured data with Join and/or Complex Transactions
    • If the application uses structured data and requires joins, complex transactions, or other relational infrastructure provided by traditional database platforms, it is better to use RDS or a database installed on an EC2 instance
  • Large Blob data
    • If the application uses large BLOB data, e.g. media, files, videos, etc., it is better to use S3 to store the objects and use DynamoDB to store their metadata, e.g. name, size, content-type, etc.
  • Large Objects with Low I/O rate
    • DynamoDB uses SSD drives and is optimized for workloads with a high I/O rate per GB stored. If the application stores very large amounts of data that are infrequently accessed, S3 might be a better choice
  • Prewritten application with databases
    • For porting an existing application that uses databases, RDS or a database installed on an EC2 instance would be a better and more seamless solution

Performance

  • SSDs and limited indexing on attributes provide high throughput and low latency and drastically reduce the cost of read and write operations.
  • Predictable performance can be achieved by defining the provisioned throughput capacity required for a given table.
  • DynamoDB handles the provisioning of resources to achieve the requested throughput rate, taking away the burden to think about instances, hardware, memory, and other factors that can affect an application’s throughput rate.
  • Provisioned throughput capacity reservations are elastic and can be increased or decreased on demand.
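
A sketch of “dialing” a table’s provisioned throughput up or down on demand; the table name and capacity numbers are illustrative:

```python
import boto3

boto3.client("dynamodb").update_table(
    TableName="Orders",
    ProvisionedThroughput={"ReadCapacityUnits": 50, "WriteCapacityUnits": 20},
)
```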

Durability and Availability

  • DynamoDB has built-in fault tolerance that automatically and synchronously replicates data across three AZ’s in a region for high availability and to help protect data against individual machine, or even facility failures.

Cost Model

  • DynamoDB has three pricing components: provisioned throughput capacity (per hour), indexed data storage (per GB per month), data transfer in or out (per GB per month)

Scalability and Elasticity

  • DynamoDB is both highly-scalable and elastic.
  • DynamoDB provides unlimited storage capacity, and the service automatically allocates more storage as the demand increases
  • Data is automatically partitioned and re-partitioned as needed, while the use of SSDs provides predictable low-latency response times at any scale.
  • DynamoDB is also elastic, in that you can simply “dial-up” or “dial-down” the read and write capacity of a table as your needs change.

Interfaces

  • DynamoDB provides a low-level REST API, as well as higher-level SDKs in different languages
  • APIs provide both a management and data interface for Amazon DynamoDB, that enable table management (creating, listing, deleting, and obtaining metadata) and working with attributes (getting, writing, and deleting attributes; query using an index, and full scan).

Databases on EC2

  • EC2 with EBS volumes allows hosting a self-managed relational database
  • Ready to use, prebuilt AMIs are also available from leading database solutions

Ideal Usage Patterns

  • A self-managed database on EC2 is ideal for users whose application requires a specific traditional relational database not supported by Amazon RDS, e.g. IBM DB2, Informix, or Sybase
  • Users or applications that require a maximum level of administrative control and configurability which is not provided by RDS

Anti-Patterns

  • Index and query-focused data
    • If the application doesn’t require advanced features such as joins and complex transactions and is more oriented toward indexing and querying data, DynamoDB would be more appropriate for these needs
  • Numerous BLOBs
    • If the application makes heavy use of files (audio files, videos, images, and so on), it is a better choice to use S3 to store the objects instead of the database engine’s BLOB feature, and use RDS or DynamoDB only to save the metadata
  • Automated scalability
    • Relational databases on EC2 leverage the scalability and elasticity of the underlying AWS platform, but this requires system administrators or DBAs to perform manual or scripted tasks. If pushbutton scaling or fully-automated scaling is needed, DynamoDB or RDS may be a better choice.
  • RDS supported database platforms
    • If the application uses an RDS-supported database engine and all the needed features are available, RDS would be a better choice than a self-managed relational database on EC2

Performance

  • Performance depends on the size of the underlying EC2 instance, the number and configuration of the EBS volumes and the database itself
  • Performance can be increased by scaling up memory and compute resources by choosing a larger Amazon EC2 instance size.
  • For database storage, it is usually best to use EBS Provisioned IOPS volumes. To scale up I/O performance, the Provisioned IOPS can be increased, the number of EBS volumes changed, or use software RAID 0 (disk striping) across multiple EBS volumes, which will aggregate total IOPS and bandwidth.

Durability & Availability

  • As a database on EC2 uses EBS as storage, it has the same durability and availability provided by EBS, which can be further enhanced by using EBS snapshots or by using third-party database backup utilities (such as Oracle’s RMAN) to store database backups in Amazon S3

Cost Model

  • Cost for running a database on an EC2 instance is mainly determined by the size and number of EC2 instances running, the size of the EBS volumes used for database storage, and any third-party licensing cost for the database

Scalability & Elasticity

  • Users of traditional relational database solutions on Amazon EC2 can take advantage of the scalability and elasticity of the underlying AWS platform by creating AMI and spawning multiple instances

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Which of the following are use cases for Amazon DynamoDB? Choose 3 answers
    1. Storing BLOB data.
    2. Managing web sessions
    3. Storing JSON documents
    4. Storing metadata for Amazon S3 objects
    5. Running relational joins and complex updates.
    6. Storing large amounts of infrequently accessed data.
  2. A client application requires operating system privileges on a relational database server. What is an appropriate configuration for highly available database architecture?
    1. A standalone Amazon EC2 instance
    2. Amazon RDS in a Multi-AZ configuration
    3. Amazon EC2 instances in a replication configuration utilizing a single Availability Zone
    4. Amazon EC2 instances in a replication configuration utilizing two different Availability Zones
  3. You are developing a new mobile application and are considering storing user preferences in AWS, which would provide a more uniform cross-device experience to users using multiple mobile devices to access the application. The preference data for each user is estimated to be 50KB in size. Additionally 5 million customers are expected to use the application on a regular basis. The solution needs to be cost-effective, highly available, scalable and secure, how would you design a solution to meet the above requirements?
    1. Setup an RDS MySQL instance in 2 availability zones to store the user preference data. Deploy a public facing application on a server in front of the database to manage security and access credentials
    2. Setup a DynamoDB table with an item for each user having the necessary attributes to hold the user preferences. The mobile application will query the user preferences directly from the DynamoDB table. Utilize STS, Web Identity Federation, and DynamoDB Fine Grained Access Control to authenticate and authorize access (DynamoDB provides high availability as it synchronously replicates data across three facilities within an AWS Region and scalability as it is designed to scale its provisioned throughput up or down while still remaining available. Also suitable for storing user preference data)
    3. Setup an RDS MySQL instance with multiple read replicas in 2 availability zones to store the user preference data .The mobile application will query the user preferences from the read replicas. Leverage the MySQL user management and access privilege system to manage security and access credentials.
    4. Store the user preference data in S3. Setup a DynamoDB table with an item for each user and an item attribute pointing to the user’s S3 object. The mobile application will retrieve the S3 URL from DynamoDB and then access the S3 object directly. Utilize STS, Web Identity Federation, and S3 ACLs to authenticate and authorize access.
  4. A customer is running an application in US-West (Northern California) region and wants to setup disaster recovery failover to the Asia Pacific (Singapore) region. The customer is interested in achieving a low Recovery Point Objective (RPO) for an Amazon RDS multi-AZ MySQL database instance. Which approach is best suited to this need?
    1. Synchronous replication
    2. Asynchronous replication
    3. Route53 health checks
    4. Copying of RDS incremental snapshots
  5. You are designing a file-sharing service. This service will have millions of files in it. Revenue for the service will come from fees based on how much storage a user is using. You also want to store metadata on each file, such as title, description and whether the object is public or private. How do you achieve all of these goals in a way that is economical and can scale to millions of users?
    1. Store all files in Amazon Simple Storage Service (S3). Create a bucket for each user. Store metadata in the filename of each object, and access it with LIST commands against the S3 API.
    2. Store all files in Amazon S3. Create Amazon DynamoDB tables for the corresponding key-value pairs on the associated metadata, when objects are uploaded.
    3. Create a striped set of 4000 IOPS Elastic Load Balancing volumes to store the data. Use a database running in Amazon Relational Database Service (RDS) to store the metadata.
    4. Create a striped set of 4000 IOPS Elastic Load Balancing volumes to store the data. Create Amazon DynamoDB tables for the corresponding key-value pairs on the associated metadata, when objects are uploaded.