AWS Redshift Advanced

  • Redshift Distribution Style determines how data is distributed across compute nodes and helps minimize the impact of the redistribution step by locating the data where it needs to be before the query is executed.
  • Redshift enhanced VPC routing forces all COPY and UNLOAD traffic between the cluster and the data repositories through the VPC.
  • Redshift workload management (WLM) enables users to flexibly manage priorities within workloads so that short, fast-running queries won’t get stuck in queues behind long-running queries.
  • Redshift Spectrum helps query and retrieve structured and semistructured data from files in S3 without having to load the data into Redshift tables.
  • Redshift Federated Query feature allows querying and analyzing data across operational databases, data warehouses, and data lakes.

Distribution Styles

  • Table distribution style determines how data is distributed across compute nodes and helps minimize the impact of the redistribution step by locating the data where it needs to be before the query is executed.
  • Redshift supports four distribution styles: AUTO, EVEN, KEY, and ALL.

KEY distribution

  • A single column acts as a distribution key (DISTKEY) and helps place matching values on the same node slice.
  • As a rule of thumb, choose a column that:
    • Is uniformly distributed – otherwise, skewed data will unbalance the volume of data stored on each compute node, so some slices process larger amounts of data than others and become bottlenecks.
    • Acts as a JOIN column – for fact tables related to dimension tables (star schema), choose as DISTKEY the column that joins with the largest dimension table, so that matching values from the common columns are physically stored together, reducing the amount of data that needs to be broadcast over the network.
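
As an illustration of KEY distribution, the sketch below (hypothetical table and column names, shown as SQL strings that could be run through any Redshift client) distributes a fact table and its largest dimension table on their common join column so matching rows land on the same slice:

```python
# Hypothetical star schema: fact and dimension share customer_id as the
# join column, so both tables declare it as the DISTKEY.
create_customer_dim = """
CREATE TABLE customer_dim (
    customer_id   INTEGER NOT NULL DISTKEY,   -- join column as distribution key
    customer_name VARCHAR(100)
);
"""

create_sales_fact = """
CREATE TABLE sales_fact (
    sale_id     BIGINT NOT NULL,
    customer_id INTEGER NOT NULL DISTKEY,     -- same join column on the fact table
    sale_date   DATE,
    amount      DECIMAL(12,2)
);
"""

# Joins on customer_id can now be resolved locally on each slice,
# reducing the data redistributed over the network at query time.
```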

EVEN distribution

  • distributes the rows across the slices in a round-robin fashion, regardless of the values in any particular column
  • Choose EVEN distribution
    • when the table does not participate in joins
    • when there is not a clear choice between KEY and ALL distribution.

ALL distribution

  • Whole table is replicated in every compute node.
  • ensures that every row is collocated for every join that the table participates in.
  • ideal for relatively slow-moving tables, tables that are not updated frequently or extensively.
  • Small dimension tables DO NOT benefit significantly from ALL distribution, because the cost of redistribution is low.

AUTO distribution

  • Redshift assigns an optimal distribution style based on the size of the table data; for example, it applies ALL distribution to a small table and changes it to EVEN distribution as the table grows.
  • Amazon Redshift applies AUTO distribution by default.
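
A sketch of inspecting and changing distribution styles after the fact (hypothetical table names; the ALTER DISTSTYLE forms shown follow the documented syntax, but verify against the current Redshift release):

```python
# Check the current distribution style of existing tables.
inspect_sql = """
SELECT "table", diststyle
FROM svv_table_info
WHERE "table" IN ('sales_fact', 'customer_dim');
"""

# Switch styles in place as the data profile changes.
alter_sql = """
ALTER TABLE customer_dim ALTER DISTSTYLE ALL;                      -- replicate a small, slow-moving table
ALTER TABLE sales_fact   ALTER DISTSTYLE KEY DISTKEY customer_id;  -- collocate on the join column
ALTER TABLE staging_raw  ALTER DISTSTYLE EVEN;                     -- round-robin when there is no clear key
"""
```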

Sort Key

  • Sort keys define the order in which the data will be stored.
  • Sorting enables efficient handling of range-restricted predicates.
  • Only one sort key per table can be defined, but it can be composed of one or more columns.
  • Redshift stores columnar data in 1 MB disk blocks. The min and max values for each block are stored as part of the metadata. If the query uses a range-restricted predicate, the query processor can use the min and max values to rapidly skip over large numbers of blocks during table scans
  • There are two kinds of sort keys in Redshift: Compound and Interleaved.

Compound Keys

  • A compound key is made up of all of the columns listed in the sort key definition, in the order they are listed.
  • A compound sort key is more efficient when query predicates use a prefix of the sort key, i.e. when conditions such as filters and joins reference a subset of the sort key columns in order, starting with the first.
  • Compound sort keys might speed up joins, GROUP BY and ORDER BY operations, and window functions that use PARTITION BY and ORDER BY.

Interleaved Sort Keys

  • An interleaved sort key gives equal weight to each column in the sort key, so query predicates can use any subset of the columns that make up the sort key, in any order.
  • An interleaved sort key is more efficient when multiple queries use different columns for filters.
  • Don’t use an interleaved sort key on columns with monotonically increasing attributes, such as identity columns, dates, or timestamps.
  • Use cases involve performing ad-hoc multi-dimensional analytics, which often requires pivoting, filtering, and grouping data using different columns as query dimensions.
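
The contrast between the two sort key types in DDL form (hypothetical tables; note the interleaved key deliberately avoids the monotonically increasing date column):

```python
compound_ddl = """
CREATE TABLE events_compound (
    event_date  DATE,
    customer_id INTEGER,
    event_type  VARCHAR(20)
)
COMPOUND SORTKEY (event_date, customer_id);   -- efficient when filters use a prefix (event_date, then customer_id)
"""

interleaved_ddl = """
CREATE TABLE events_interleaved (
    customer_id INTEGER,
    event_type  VARCHAR(20),
    event_date  DATE
)
INTERLEAVED SORTKEY (customer_id, event_type); -- equal weight; filter on either column, in any order
"""
```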

Constraints

  • Redshift does not support Indexes.
  • Redshift supports UNIQUE, PRIMARY KEY, and FOREIGN KEY constraints, however, they are only for informational purposes.
  • Redshift does not perform integrity checks for these constraints; they are used by the query planner as hints to optimize query plans.
  • Redshift does enforce NOT NULL column constraints.
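
A small sketch (hypothetical tables) of what “informational only” means in practice: the PRIMARY KEY and FOREIGN KEY declarations are accepted and used as planner hints, but duplicates would not be rejected, while NOT NULL is enforced:

```python
orders_ddl = """
CREATE TABLE orders (
    order_id    BIGINT NOT NULL,    -- NOT NULL is enforced
    customer_id INTEGER NOT NULL,
    PRIMARY KEY (order_id),         -- informational only; duplicate order_id values are NOT rejected
    FOREIGN KEY (customer_id) REFERENCES customer_dim (customer_id)  -- planner hint only
);
"""
# Deduplication and referential integrity must be handled in the ETL process.
```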

Redshift Enhanced VPC Routing

  • Redshift enhanced VPC routing forces all COPY and UNLOAD traffic between the cluster and the data repositories through the VPC.
  • Without enhanced VPC routing, Redshift would route traffic through the internet, including traffic to other services within the AWS network.

Redshift Workload Management

  • Redshift workload management (WLM) enables users to flexibly manage priorities within workloads so that short, fast-running queries won’t get stuck in queues behind long-running queries.
  • Redshift provides query queues, in order to manage concurrency and resource planning. Each queue can be configured with the following parameters:
    • Slots: number of concurrent queries that can be executed in this queue.
    • Working memory: percentage of memory assigned to this queue.
    • Max. Execution Time: the amount of time a query is allowed to run before it is terminated.
  • Queries can be routed to different queues using Query Groups and User Groups.
  • As a rule of thumb, it is considered a best practice to have separate queues for long-running, resource-intensive queries and for fast queries that don’t require large amounts of memory and CPU.
  • By default, Redshift configures one queue with a concurrency level of five, which enables up to five queries to run concurrently, plus one predefined Superuser queue, with a concurrency level of one.
  • A maximum of eight queues can be defined, with each queue configured with a maximum concurrency level of 50. The maximum total concurrency level for all user-defined queues (not including the Superuser queue) is 50.
  • Redshift WLM supports two modes – Manual and Automatic
    • Automatic WLM supports queue priorities.
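
A sketch of routing a session’s queries to a specific WLM queue via a query group (the group name and table are hypothetical and would have to match the WLM configuration):

```python
routed_sql = """
SET query_group TO 'reports';        -- route subsequent queries to the queue whose query group is 'reports'
SELECT COUNT(*) FROM sales_fact;     -- runs under that queue's slots and memory
RESET query_group;                   -- revert to default queue routing
"""
```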

Redshift Short Query Acceleration – SQA

  • Short query acceleration (SQA) prioritizes selected short-running queries ahead of longer-running queries.
  • SQA runs short-running queries in a dedicated space, so that SQA queries aren’t forced to wait in queues behind longer queries.
  • SQA only prioritizes queries that are short-running and are in a user-defined queue.

Redshift Loading Data

  • A COPY command is the most efficient way to load a table.
    • COPY command is able to read from multiple data files or multiple data streams simultaneously.
    • Redshift allocates the workload to the cluster nodes and performs the load operations in parallel, including sorting the rows and distributing data across node slices.
    • COPY command supports loading data from S3, EMR, DynamoDB, and remote hosts such as EC2 instances using SSH.
    • COPY supports decryption and can decrypt the data as it performs the load if the data is encrypted.
    • If the data is compressed, COPY can uncompress the files as they are read, speeding up the load process.
    • COPY command can be used with COMPUPDATE set to ON to analyze and apply compression automatically based on sample data.
    • Optimize storage for narrow tables (many rows, few columns) by loading with a single COPY command instead of multiple COPY commands, as multiple loads do not work well due to the hidden per-row metadata fields and compression overhead they add.
  • Auto Copy
    • Auto-copy provides the ability to automate copy statements by tracking S3 folders and ingesting new files without customer intervention
    • Without Auto-copy, a copy statement immediately starts the file ingestion process for existing files.
    • Auto-copy extends the existing copy command and provides the ability to
      • Automate file ingestion process by monitoring specified S3 paths for new files
      • Re-use copy configurations, reducing the need to create and run new copy statements for repetitive ingestion tasks and
      • Keep track of loaded files to avoid data duplication.
  • INSERT command
    • Clients can connect to Amazon Redshift using ODBC or JDBC and issue ‘insert’ SQL commands to insert the data.
    • INSERT commands are much less efficient than using COPY as they are routed through the single leader node.
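
A minimal COPY sketch using the Redshift Data API from boto3; the cluster, database, IAM role, and bucket names are hypothetical:

```python
import boto3

copy_sql = """
COPY sales_fact
FROM 's3://example-bucket/sales/'            -- all files under the prefix are loaded in parallel
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
GZIP                                         -- files are uncompressed as they are read
COMPUPDATE ON                                -- analyze and apply compression encodings automatically
FORMAT AS CSV;
"""

client = boto3.client("redshift-data")
client.execute_statement(
    ClusterIdentifier="example-cluster",
    Database="dev",
    DbUser="awsuser",
    Sql=copy_sql,
)
```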

Redshift Resizing Cluster

  • Elastic resize
    • Use elastic resize to change the node type, number of nodes, or both. (Changing the node type via elastic resize became available around April 2020 and was not supported before.)
    • If only the number of nodes is changed, then queries are temporarily paused and connections are held open if possible.
    • During the resize operation, the cluster is read-only.
    • Elastic resize takes 10–15 minutes.
  • Classic resize
    • Use classic resize to change the node type, number of nodes, or both.
    • During the resize operation, data is copied to a new cluster and the source cluster is read-only
    • Classic resize takes 2 hours – 2 days or longer, depending on the data’s size
  • Snapshot and restore with classic resize
    • To keep the cluster available during a classic resize, create a snapshot, restore it to a new cluster (a copy of the existing cluster), and then resize the new cluster.
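
A sketch of triggering a resize from boto3 (hypothetical cluster identifier); leaving Classic as False requests an elastic resize, while True requests a classic resize:

```python
import boto3

redshift = boto3.client("redshift")

# Elastic resize: change only the node count of a hypothetical cluster.
redshift.resize_cluster(
    ClusterIdentifier="example-cluster",
    NumberOfNodes=4,
    Classic=False,   # True would force a classic resize instead
)
```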

Redshift Spectrum

  • Redshift Spectrum helps query and retrieve structured and semistructured data from files in S3 without having to load the data into Redshift tables.
  • Redshift Spectrum queries employ massive parallelism to execute very fast against large datasets. Much of the processing occurs in the Redshift Spectrum layer, and most of the data remains in S3.
  • Multiple clusters can concurrently query the same dataset in S3 without the need to make copies of the data for each cluster.
  • Redshift Spectrum resides on dedicated Redshift servers that are independent of the existing cluster.
  • Redshift Spectrum pushes many compute-intensive tasks, such as predicate filtering and aggregation, down to the Redshift Spectrum layer.
  • Redshift Spectrum also scales automatically, based on the demands of the queries, and can potentially use thousands of instances to take advantage of massively parallel processing.
  • Supports external data catalog using Glue, Athena, or Hive metastore
  • Redshift cluster and the S3 bucket must be in the same AWS Region.
  • Redshift Spectrum external tables are read-only. You can’t COPY or INSERT to an external table.
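
A sketch of exposing S3 data to Redshift Spectrum (hypothetical Glue database, role, and S3 location): an external schema points at the data catalog, and a read-only external table maps the files under an S3 prefix:

```python
external_schema_sql = """
CREATE EXTERNAL SCHEMA spectrum
FROM DATA CATALOG
DATABASE 'spectrumdb'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftSpectrumRole'
CREATE EXTERNAL DATABASE IF NOT EXISTS;
"""

external_table_sql = """
CREATE EXTERNAL TABLE spectrum.sales (
    sale_id   BIGINT,
    sale_date DATE,
    amount    DECIMAL(12,2)
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION 's3://example-bucket/spectrum/sales/';
"""
# The external table can be joined with local Redshift tables in queries,
# but COPY/INSERT into it are not allowed.
```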

Redshift Federated Query

  • Redshift Federated Query feature allows querying and analyzing data across operational databases, warehouses, and lakes.
  • Redshift Federated Query allows integrating queries on live data in RDS for PostgreSQL and Aurora PostgreSQL with queries across Redshift and S3.
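
A sketch of setting up a federated schema over a hypothetical Aurora PostgreSQL endpoint (the endpoint, role ARN, and secret ARN are placeholders); once created, its tables can be queried alongside local Redshift and Spectrum tables:

```python
federated_schema_sql = """
CREATE EXTERNAL SCHEMA postgres_live
FROM POSTGRES
DATABASE 'orders' SCHEMA 'public'
URI 'example-aurora.cluster-abc123.us-east-1.rds.amazonaws.com' PORT 5432
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftFederatedRole'
SECRET_ARN 'arn:aws:secretsmanager:us-east-1:123456789012:secret:aurora-creds';
"""
```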

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A Redshift data warehouse has different user teams that need to query the same table with very different query types. These user teams are experiencing poor performance. Which action improves performance for the user teams in this situation?
    1. Create custom table views.
    2. Add interleaved sort keys per team.
    3. Maintain team-specific copies of the table.
    4. Add support for workload management queue hopping.

AWS DynamoDB

  • Amazon DynamoDB is a fully managed NoSQL database service that
    • makes it simple and cost-effective to store and retrieve any amount of data and serve any level of request traffic.
    • provides fast and predictable performance with seamless scalability
  • DynamoDB enables customers to offload the administrative burdens of operating and scaling distributed databases to AWS, without having to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling.
  • DynamoDB tables do not have fixed schemas, and the table consists of items and each item may have a different number of attributes.
  • DynamoDB synchronously replicates data across three facilities in an AWS Region, giving high availability and data durability.
  • DynamoDB supports fast in-place updates. A numeric attribute can be incremented or decremented in a row using a single API call.
  • DynamoDB uses proven cryptographic methods to securely authenticate users and prevent unauthorized data access.
  • Durability, performance, reliability, and security are built in, with SSD (solid state drive) storage and automatic 3-way replication.
  • DynamoDB supports two different kinds of primary keys (see the sketch after this list):
    • Partition Key (previously called the Hash key)
      • A simple primary key, composed of one attribute
      • The partition key value is used as input to an internal hash function; the output from the hash function determines the partition where the item will be stored.
      • No two items in a table can have the same partition key value.
    • Partition Key and Sort Key (previously called the Hash and Range key)
      • A composite primary key is composed of two attributes. The first attribute is the partition key, and the second attribute is the sort key.
      • The partition key value is used as input to an internal hash function; the output from the hash function determines the partition where the item will be stored.
      • All items with the same partition key are stored together, in sorted order by sort key value.
      • The combination of the partition key and sort key must be unique.
      • It is possible for two items to have the same partition key value, but those two items must have different sort key values.
  • DynamoDB Table classes currently support
    • DynamoDB Standard table class is the default and is recommended for the vast majority of workloads.
    • DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA) table class, which is optimized for tables where storage is the dominant cost.
  • DynamoDB Throughput Capacity determines the read/write capacity for processing reads and writes on the tables and it currently supports
    • Provisioned – maximum amount of capacity in terms of reads/writes per second that an application can consume from a table or index
    • On-demand – serves thousands of requests per second without capacity planning.
  • DynamoDB Secondary indexes
    • add flexibility to the queries, without impacting performance.
    • are automatically maintained as sparse objects – items appear in an index only if the index key attributes exist in the item, making queries against an index very efficient
  • DynamoDB throughput and single-digit millisecond latency make it a great fit for gaming, ad tech, mobile, and many other applications
  • ElastiCache or DAX can be used in front of DynamoDB in order to offload a high amount of reads for non-frequently changed data
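
A minimal boto3 sketch (hypothetical table and attribute names) of creating a table with a composite primary key, i.e. a partition key plus a sort key:

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="GameScores",
    AttributeDefinitions=[
        {"AttributeName": "GameId", "AttributeType": "S"},
        {"AttributeName": "UserId", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "GameId", "KeyType": "HASH"},    # partition key
        {"AttributeName": "UserId", "KeyType": "RANGE"},   # sort key
    ],
    BillingMode="PAY_PER_REQUEST",  # on-demand capacity, no provisioning required
)
```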

DynamoDB Consistency

  • Each DynamoDB table is automatically stored in three geographically distributed locations for durability.
  • Read consistency represents the manner and timing in which the successful write or update of a data item is reflected in a subsequent read operation of that same item.
  • DynamoDB allows the user to specify whether the read should be eventually consistent or strongly consistent at the time of the request
    • Eventually Consistent Reads (Default)
      • Eventual consistency option maximizes the read throughput.
      • Consistency across all copies is usually reached within a second
      • However, an eventually consistent read might not reflect the results of a recently completed write.
      • Repeating a read after a short time should return the updated data.
      • DynamoDB uses eventually consistent reads, by default.
    • Strongly Consistent Reads
      • Strongly consistent read returns a result that reflects all writes that received a successful response prior to the read
      • Strongly consistent reads are 2x the cost of Eventually consistent reads
      • Strongly Consistent Reads come with disadvantages
        • A strongly consistent read might not be available if there is a network delay or outage. In this case, DynamoDB may return a server error (HTTP 500).
        • Strongly consistent reads may have higher latency than eventually consistent reads.
        • Strongly consistent reads are not supported on global secondary indexes.
        • Strongly consistent reads use more throughput capacity than eventually consistent reads.
  • Read operations (such as GetItem, Query, and Scan) provide a ConsistentRead parameter; if set to true, DynamoDB uses strongly consistent reads during the operation.
  • Query, GetItem, and BatchGetItem operations perform eventually consistent reads by default.
    • Query and GetItem operations can be forced to be strongly consistent
    • Query operations cannot perform strongly consistent reads on Global Secondary Indexes
    • BatchGetItem operations can be forced to be strongly consistent on a per-table basis
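
A boto3 sketch of choosing the read consistency per request against a hypothetical table:

```python
import boto3

table = boto3.resource("dynamodb").Table("GameScores")  # hypothetical table

# Eventually consistent read (the default): cheapest, may briefly lag a recent write.
item = table.get_item(Key={"GameId": "g1", "UserId": "u1"})

# Strongly consistent read: reflects all writes acknowledged before the read,
# costs twice as much, and is not supported on global secondary indexes.
item = table.get_item(
    Key={"GameId": "g1", "UserId": "u1"},
    ConsistentRead=True,
)
```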

DynamoDB Throughput Capacity

  • DynamoDB throughput capacity depends on the read/write capacity modes for processing reads and writes on the tables.
  • DynamoDB supports two types of read/write capacity modes:
    • Provisioned – maximum amount of capacity in terms of reads/writes per second that an application can consume from a table or index
    • On-demand – serves thousands of requests per second without capacity planning.
  • DynamoDB Auto Scaling helps dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns.
  • DynamoDB Adaptive capacity is a feature that enables DynamoDB to run imbalanced workloads indefinitely.
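
A boto3 sketch of the two capacity modes on a hypothetical table (in practice you would pick one; AWS limits how often the billing mode can be switched):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Provisioned mode: declare the read/write capacity the application may consume.
dynamodb.update_table(
    TableName="GameScores",
    BillingMode="PROVISIONED",
    ProvisionedThroughput={"ReadCapacityUnits": 100, "WriteCapacityUnits": 50},
)

# On-demand mode: no capacity planning, pay per request.
dynamodb.update_table(
    TableName="GameScores",
    BillingMode="PAY_PER_REQUEST",
)
```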

DynamoDB Secondary Indexes

  • DynamoDB Secondary indexes
    • add flexibility to the queries, without impacting performance.
    • are automatically maintained as sparse objects – items appear in an index only if the index key attributes exist in the item, making queries against an index very efficient
  • DynamoDB Secondary indexes on a table allow efficient access to data with attributes other than the primary key.
  • DynamoDB Secondary indexes support two types
    • Global secondary index – an index with a partition key and a sort key that can be different from those on the base table.
    • Local secondary index – an index that has the same partition key as the base table, but a different sort key.

DynamoDB Secondary Indexes - GSI vs LSI
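
A boto3 sketch (hypothetical table and attributes) contrasting the two index types: the LSI reuses the table’s partition key with a different sort key and must be created with the table, while the GSI uses entirely different keys and can also be added later:

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="Orders",
    AttributeDefinitions=[
        {"AttributeName": "CustomerId", "AttributeType": "S"},
        {"AttributeName": "OrderId", "AttributeType": "S"},
        {"AttributeName": "OrderDate", "AttributeType": "S"},
        {"AttributeName": "Status", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "CustomerId", "KeyType": "HASH"},
        {"AttributeName": "OrderId", "KeyType": "RANGE"},
    ],
    LocalSecondaryIndexes=[{
        "IndexName": "ByOrderDate",          # same partition key, different sort key
        "KeySchema": [
            {"AttributeName": "CustomerId", "KeyType": "HASH"},
            {"AttributeName": "OrderDate", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "ALL"},
    }],
    GlobalSecondaryIndexes=[{
        "IndexName": "ByStatus",             # completely different partition/sort keys
        "KeySchema": [
            {"AttributeName": "Status", "KeyType": "HASH"},
            {"AttributeName": "OrderDate", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "KEYS_ONLY"},
    }],
    BillingMode="PAY_PER_REQUEST",
)
```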

DynamoDB Advanced Topics

  • DynamoDB Secondary indexes on a table allow efficient access to data with attributes other than the primary key.
  • DynamoDB Time to Live – TTL enables a per-item timestamp to determine when an item is no longer needed.
  • DynamoDB cross-region replication allows identical copies (called replicas) of a DynamoDB table (called master table) to be maintained in one or more AWS regions.
  • DynamoDB Global Tables is a new multi-master, cross-region replication capability of DynamoDB to support data access locality and regional fault tolerance for database workloads.
  • DynamoDB Streams provides a time-ordered sequence of item-level changes made to data in a table.
  • DynamoDB Triggers (just like database triggers) are a feature that allows the execution of custom actions based on item-level updates on a table.
  • DynamoDB Accelerator – DAX is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from ms to µs – even at millions of requests per second.
  • VPC Gateway Endpoints provide private access to DynamoDB from within a VPC without the need for an internet gateway or NAT gateway.
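
As one concrete example from the list above, a boto3 sketch of enabling TTL on a hypothetical table using an epoch-timestamp attribute:

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.update_time_to_live(
    TableName="GameScores",
    TimeToLiveSpecification={
        "Enabled": True,
        "AttributeName": "expires_at",  # item attribute holding an epoch timestamp (seconds)
    },
)
# Items whose expires_at is in the past become eligible for background deletion.
```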

DynamoDB Performance

  • Automatically scales horizontally
  • runs exclusively on Solid State Drives (SSDs).
    • SSDs help achieve the design goals of predictable low-latency response times for storing and accessing data at any scale.
    • SSDs’ high I/O performance enables DynamoDB to serve high-scale request workloads cost-efficiently and to pass this efficiency along in low request pricing.
  • allows provisioned table reads and writes
    • Scale up throughput when needed
    • Scale down throughput four times per UTC calendar day
  • automatically partitions, reallocates and re-partitions the data and provisions additional server capacity as the
    • table size grows or
    • provisioned throughput is increased
  • Global Secondary indexes (GSI)
    • can be created upfront or added later

DynamoDB Security

  • AWS handles basic security tasks like guest operating system (OS) and database patching, firewall configuration, and disaster recovery.
  • DynamoDB protects user data stored at rest and in transit between on-premises clients and DynamoDB, and between DynamoDB and other AWS resources within the same AWS Region.
  • Encryption at rest is enabled on all DynamoDB table data and cannot be disabled.
  • Encryption at rest includes the base tables, primary key, local and global secondary indexes, streams, global tables, backups, and DynamoDB Accelerator (DAX) clusters.
  • Fine-Grained Access Control (FGAC) gives a high degree of control over data in the table and helps control who (caller) can access which items or attributes of the table and perform what actions (read/write capability).
  • VPC Endpoints allow private connectivity from within a VPC only to DynamoDB.
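
A sketch of what an FGAC policy can express, written here as a Python dict (the table ARN and the Cognito identity variable are placeholders): each caller may only touch items whose partition key equals their own identity:

```python
import json

fgac_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:PutItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/GameScores",
        "Condition": {
            "ForAllValues:StringEquals": {
                # restrict access to items whose partition key matches the caller's identity
                "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
            }
        },
    }],
}
print(json.dumps(fgac_policy, indent=2))
```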

Refer blog post @ DynamoDB Security

DynamoDB Costs

  • Index Storage
    • DynamoDB is an indexed data store
      • Billable Data = Raw byte data size + 100 byte per-item storage indexing overhead
  • Provisioned throughput
    • Pay flat, hourly rate based on the capacity reserved as the throughput provisioned for the table
    • one Write Capacity Unit provides one write per second for items up to 1 KB in size.
    • one Read Capacity Unit provides one strongly consistent read (or two eventually consistent reads) per second for items up to 4 KB in size.
    • Provisioned throughput charges for every 10 units of Write Capacity and every 50 units of Read Capacity.
  • Reserved capacity
    • Significant savings over the normal price
    • Pay a one-time upfront fee
  • DynamoDB also charges for storage, backup, replication, streams, caching, data transfer out.
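
A worked example with hypothetical numbers, applying the capacity unit definitions above (writes round up to the next 1 KB, strongly consistent reads to the next 4 KB, eventually consistent reads cost half):

```python
import math

item_write_kb, writes_per_sec = 2.5, 500     # hypothetical write workload
item_read_kb, reads_per_sec = 6, 1_000       # hypothetical read workload

wcu = writes_per_sec * math.ceil(item_write_kb / 1)       # 500 * 3 = 1500 WCU
rcu_strong = reads_per_sec * math.ceil(item_read_kb / 4)  # 1000 * 2 = 2000 RCU
rcu_eventual = rcu_strong / 2                             # 1000 RCU

print(wcu, rcu_strong, rcu_eventual)
```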

DynamoDB Best Practices

Refer blog post @ DynamoDB Best Practices

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. Which of the following are use cases for Amazon DynamoDB? Choose 3 answers
    1. Storing BLOB data.
    2. Managing web sessions
    3. Storing JSON documents
    4. Storing metadata for Amazon S3 objects
    5. Running relational joins and complex updates.
    6. Storing large amounts of infrequently accessed data.
  2. You are configuring your company’s application to use Auto Scaling and need to move user state information. Which of the following AWS services provides a shared data store with durability and low latency?
    1. AWS ElastiCache Memcached (does not allow writes)
    2. Amazon Simple Storage Service (does not provide low latency)
    3. Amazon EC2 instance storage (not durable)
    4. Amazon DynamoDB
  3. Does DynamoDB support in-place atomic updates?
    1. It is not defined
    2. No
    3. Yes
    4. It does support in-place non-atomic updates
  4. What is the maximum write throughput I can provision for a single DynamoDB table?
    1. 1,000 write capacity units
    2. 100,000 write capacity units
    3. DynamoDB is designed to scale without limits, but if you go beyond 10,000 you have to contact AWS first
    4. 10,000 write capacity units
  5. For a DynamoDB table, what happens if the application performs more reads or writes than your provisioned capacity?
    1. Nothing
    2. requests above the provisioned capacity will be performed but you will receive 400 error codes.
    3. requests above the provisioned capacity will be performed but you will receive 200 error codes.
    4. requests above the provisioned capacity will be throttled and you will receive 400 error codes.
  6. In which of the following situations might you benefit from using DynamoDB? (Choose 2 answers)
    1. You need fully managed database to handle highly complex queries
    2. You need to deal with massive amount of “hot” data and require very low latency
    3. You need a rapid ingestion of clickstream in order to collect data about user behavior
    4. Your on-premises data center runs Oracle database, and you need to host a backup in AWS cloud
  7. You are designing a file-sharing service. This service will have millions of files in it. Revenue for the service will come from fees based on how much storage a user is using. You also want to store metadata on each file, such as title, description and whether the object is public or private. How do you achieve all of these goals in a way that is economical and can scale to millions of users? [PROFESSIONAL]
    1. Store all files in Amazon Simple Storage Service (S3). Create a bucket for each user. Store metadata in the filename of each object, and access it with LIST commands against the S3 API. (expensive and slow as it returns only 1000 items at a time)
    2. Store all files in Amazon S3. Create Amazon DynamoDB tables for the corresponding key-value pairs on the associated metadata, when objects are uploaded.
    3. Create a striped set of 4000 IOPS Elastic Load Balancing volumes to store the data. Use a database running in Amazon Relational Database Service (RDS) to store the metadata.(not economical with volumes)
    4. Create a striped set of 4000 IOPS Elastic Load Balancing volumes to store the data. Create Amazon DynamoDB tables for the corresponding key-value pairs on the associated metadata, when objects are uploaded. (not economical with volumes)
  8. A utility company is building an application that stores data coming from more than 10,000 sensors. Each sensor has a unique ID and will send a datapoint (approximately 1KB) every 10 minutes throughout the day. Each datapoint contains the information coming from the sensor as well as a timestamp. This company would like to query information coming from a particular sensor for the past week very rapidly and want to delete all the data that is older than 4 weeks. Using Amazon DynamoDB for its scalability and rapidity, how do you implement this in the most cost effective way? [PROFESSIONAL]
    1. One table, with a primary key that is the sensor ID and a hash key that is the timestamp (Single table impacts performance)
    2. One table, with a primary key that is the concatenation of the sensor ID and timestamp (Single table and concatenation impacts performance)
    3. One table for each week, with a primary key that is the concatenation of the sensor ID and timestamp (Concatenation will cause queries would be slower, if at all)
    4. One table for each week, with a primary key that is the sensor ID and a hash key that is the timestamp (Composite key with Sensor ID and timestamp would help for faster queries)
  9. You have recently joined a startup company building sensors to measure street noise and air quality in urban areas. The company has been running a pilot deployment of around 100 sensors for 3 months. Each sensor uploads 1KB of sensor data every minute to a backend hosted on AWS. During the pilot, you measured a peak of 10 IOPS on the database, and you stored an average of 3GB of sensor data per month in the database. The current deployment consists of a load-balanced auto scaled Ingestion layer using EC2 instances and a PostgreSQL RDS database with 500GB standard storage. The pilot is considered a success and your CEO has managed to get the attention or some potential investors. The business plan requires a deployment of at least 100K sensors, which needs to be supported by the backend. You also need to store sensor data for at least two years to be able to compare year over year Improvements. To secure funding, you have to make sure that the platform meets these requirements and leaves room for further scaling. Which setup will meet the requirements? [PROFESSIONAL]
    1. Add an SQS queue to the ingestion layer to buffer writes to the RDS instance (RDS instance will not support data for 2 years)
    2. Ingest data into a DynamoDB table and move old data to a Redshift cluster (Handle 10K IOPS ingestion and store data into Redshift for analysis)
    3. Replace the RDS instance with a 6 node Redshift cluster with 96TB of storage (Does not handle the ingestion issue)
    4. Keep the current architecture but upgrade RDS storage to 3TB and 10K provisioned IOPS (RDS instance will not support data for 2 years)
  10. Does Amazon DynamoDB support both increment and decrement atomic operations?
    1. No, neither increment nor decrement operations.
    2. Only increment, since decrement are inherently impossible with DynamoDB’s data model.
    3. Only decrement, since increment are inherently impossible with DynamoDB’s data model.
    4. Yes, both increment and decrement operations.
  11. What is the data model of DynamoDB?
    1. “Items”, with Keys and one or more Attribute; and “Attribute”, with Name and Value.
    2. “Database”, which is a set of “Tables”, which is a set of “Items”, which is a set of “Attributes”.
    3. “Table”, a collection of Items; “Items”, with Keys and one or more Attribute; and “Attribute”, with Name and Value.
    4. “Database”, a collection of Tables; “Tables”, with Keys and one or more Attribute; and “Attribute”, with Name and Value.
  12. In regard to DynamoDB, for which one of the following parameters does Amazon not charge you?
    1. Cost per provisioned write units
    2. Cost per provisioned read units
    3. Storage cost
    4. I/O usage within the same Region
  13. Which statements about DynamoDB are true? Choose 2 answers.
    1. DynamoDB uses a pessimistic locking model
    2. DynamoDB uses optimistic concurrency control
    3. DynamoDB uses conditional writes for consistency
    4. DynamoDB restricts item access during reads
    5. DynamoDB restricts item access during writes
  14. Which of the following is an example of a good DynamoDB hash key schema for provisioned throughput efficiency?
    1. User ID, where the application has many different users.
    2. Status Code, where most status codes are the same.
    3. Device ID, where one is by far more popular than all the others.
    4. Game Type, where there are three possible game types.
  15. You are inserting 1000 new items every second in a DynamoDB table. Once an hour these items are analyzed and then are no longer needed. You need to minimize provisioned throughput, storage, and API calls. Given these requirements, what is the most efficient way to manage these Items after the analysis?
    1. Retain the items in a single table
    2. Delete items individually over a 24 hour period
    3. Delete the table and create a new table per hour
    4. Create a new table per hour
  16. When using a large Scan operation in DynamoDB, what technique can be used to minimize the impact of a scan on a table’s provisioned throughput?
    1. Set a smaller page size for the scan (Refer link)
    2. Use parallel scans
    3. Define a range index on the table
    4. Prewarm the table by updating all items
  17. In regard to DynamoDB, which of the following statements is correct?
    1. An Item should have at least two value sets, a primary key and another attribute.
    2. An Item can have more than one attributes
    3. A primary key should be single-valued.
    4. An attribute can have one or several other attributes.
  18. Which one of the following statements is NOT an advantage of DynamoDB being built on Solid State Drives?
    1. serve high-scale request workloads
    2. low request pricing
    3. high I/O performance of WebApp on EC2 instance (Not related to DynamoDB)
    4. low-latency response times
  19. Which one of the following operations is NOT a DynamoDB operation?
    1. BatchWriteItem
    2. DescribeTable
    3. BatchGetItem
    4. BatchDeleteItem (DeleteItem deletes a single item in a table by primary key, but BatchDeleteItem doesn’t exist)
  20. What item operation allows the retrieval of multiple items from a DynamoDB table in a single API call?
    1. GetItem
    2. BatchGetItem
    3. GetMultipleItems
    4. GetItemRange
  21. An application stores payroll information nightly in DynamoDB for a large number of employees across hundreds of offices. Item attributes consist of individual name, office identifier, and cumulative daily hours. Managers run reports for ranges of names working in their office. One query is: “Return all Items in this office for names starting with A through E”. Which table configuration will result in the lowest impact on provisioned throughput for this query? [PROFESSIONAL]
    1. Configure the table to have a hash index on the name attribute, and a range index on the office identifier
    2. Configure the table to have a range index on the name attribute, and a hash index on the office identifier
    3. Configure a hash index on the name attribute and no range index
    4. Configure a hash index on the office Identifier attribute and no range index
  22. You need to migrate 10 million records in one hour into DynamoDB. All records are 1.5KB in size. The data is evenly distributed across the partition key. How many write capacity units should you provision during this batch load?
    1. 6667
    2. 4166
    3. 5556 (1.5KB rounds up to 2 write units per item; 2 * 10 million / 3600 secs ≈ 5556, refer link)
    4. 2778
  23. A meteorological system monitors 600 temperature gauges, obtaining temperature samples every minute and saving each sample to a DynamoDB table. Each sample involves writing 1K of data and the writes are evenly distributed over time. How much write throughput is required for the target table?
    1. 1 write capacity unit
    2. 10 write capacity units ( 1 write unit for 1K * 600 gauges/60 secs)
    3. 60 write capacity units
    4. 600 write capacity units
    5. 3600 write capacity units
  24. You are building a game high score table in DynamoDB. You will store each user’s highest score for each game, with many games, all of which have relatively similar usage levels and numbers of players. You need to be able to look up the highest score for any game. What’s the best DynamoDB key structure?
    1. HighestScore as the hash / only key.
    2. GameID as the hash key, HighestScore as the range key. (hash (partition) key should be the GameID, and there should be a range key for ordering HighestScore. Refer link)
    3. GameID as the hash / only key.
    4. GameID as the range / only key.
  25. You are experiencing performance issues writing to a DynamoDB table. Your system tracks high scores for video games on a marketplace. Your most popular game experiences all of the performance issues. What is the most likely problem?
    1. DynamoDB’s vector clock is out of sync, because of the rapid growth in requests for the most popular game.
    2. You selected the Game ID or equivalent identifier as the primary partition key for the table. (Refer link)
    3. Users of the most popular video game each perform more read and write requests than average.
    4. You did not provision enough read or write throughput to the table.
  26. You are writing to a DynamoDB table and receive the following exception:” ProvisionedThroughputExceededException”. Though according to your Cloudwatch metrics for the table, you are not exceeding your provisioned throughput. What could be an explanation for this?
    1. You haven’t provisioned enough DynamoDB storage instances
    2. You’re exceeding your capacity on a particular Range Key
    3. You’re exceeding your capacity on a particular Hash Key (Hash key determines the partition and hence the performance)
    4. You’re exceeding your capacity on a particular Sort Key
    5. You haven’t configured DynamoDB Auto Scaling triggers
  27. Your company sells consumer devices and needs to record the first activation of all sold devices. Devices are not activated until the information is written on a persistent database. Activation data is very important for your company and must be analyzed daily with a MapReduce job. The execution time of the data analysis process must be less than three hours per day. Devices are usually sold evenly during the year, but when a new device model is out, there is a predictable peak in activations, that is, for a few days there are 10 times or even 100 times more activations than on an average day. Which of the following databases and analysis framework would you implement to better optimize costs and performance for this workload? [PROFESSIONAL]
    1. Amazon RDS and Amazon Elastic MapReduce with Spot instances.
    2. Amazon DynamoDB and Amazon Elastic MapReduce with Spot instances.
    3. Amazon RDS and Amazon Elastic MapReduce with Reserved instances.
    4. Amazon DynamoDB and Amazon Elastic MapReduce with Reserved instances

AWS Redshift Best Practices

Designing Tables

Distribution Style selection

  • Distribute the fact table and one dimension table on their common columns.
    • A fact table can have only one distribution key. Any tables that join on another key aren’t collocated with the fact table.
    • Choose one dimension to collocate based on how frequently it is joined and the size of the joining rows.
    • Designate both the dimension table’s primary key and the fact table’s corresponding foreign key as the DISTKEY.
  • Choose the largest dimension based on the size of the filtered dataset.
    • Only the rows that are used in the join need to be distributed, so consider the size of the dataset after filtering, not the size of the table.
  • Choose a column with high cardinality in the filtered result set.
    • If you distribute a sales table on a date column, for example, you should probably get fairly even data distribution, unless most of the sales are seasonal
    • However, if you commonly use a range-restricted predicate to filter for a narrow date period, most of the filtered rows occur on a limited set of slices and the query workload is skewed.
  • Change some dimension tables to use ALL distribution.
    • If a dimension table cannot be collocated with the fact table or other important joining tables, query performance can be improved significantly by distributing the entire table to all of the nodes.
    • Using ALL distribution multiplies storage space requirements and increases load times and maintenance operations.

Sort Key Selection

  • Redshift stores the data on disk in sorted order according to the sort key, which helps the query optimizer determine optimal query plans.
  • If recent data is queried most frequently, specify the timestamp column as the leading column for the sort key.
    • Queries are more efficient because they can skip entire blocks that fall outside the time range.
  • If you do frequent range filtering or equality filtering on one column, specify that column as the sort key.
    • Redshift can skip reading entire blocks of data for that column.
    • Redshift tracks the minimum and maximum column values stored on each block and can skip blocks that don’t apply to the predicate range.
  • If you frequently join a table, specify the join column as both the sort key and the distribution key.
    • Doing this enables the query optimizer to choose a sort merge join instead of a slower hash join.
    • As the data is already sorted on the join key, the query optimizer can bypass the sort phase of the sort merge join.
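
A sketch of the last point (hypothetical tables): declaring the join column as both DISTKEY and SORTKEY on both sides of a frequent join lets the optimizer pick a merge join and skip the sort phase:

```python
collocated_ddl = """
CREATE TABLE customers (
    customer_id INTEGER NOT NULL,
    name        VARCHAR(100)
)
DISTKEY (customer_id)
SORTKEY (customer_id);

CREATE TABLE orders (
    customer_id INTEGER NOT NULL,
    order_date  DATE,
    amount      DECIMAL(12,2)
)
DISTKEY (customer_id)
SORTKEY (customer_id);
"""
```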

Other Practices

  • Automatic compression produces the best results
  • COPY command analyzes the data and applies compression encodings to an empty table automatically as part of the load operation
  • Define primary key and foreign key constraints between tables wherever appropriate. Even though they are informational only, the query optimizer uses those constraints to generate more efficient query plans.
  • Don’t use the maximum column size for convenience.

Loading Data

  • You can load data into the tables using the following methods:
    • Using Multi-Row INSERT
    • Using Bulk INSERT
    • Using COPY command
    • Staging tables
  • Copy Command
    • COPY command loads data in parallel from S3, EMR, DynamoDB, or multiple data sources on remote hosts.
    • COPY loads large amounts of data much more efficiently than using INSERT statements, and stores the data more effectively as well.
    • Use a Single COPY Command to Load from Multiple Files
    • DON’T use multiple concurrent COPY commands to load one table from multiple files as Redshift is forced to perform a serialized load, which is much slower.
  • Split the Load Data into Multiple Files
    • divide the data into multiple files of equal size (between 1 MB and 1 GB)
    • number of files to be a multiple of the number of slices in the cluster
    • helps to distribute workload uniformly in the cluster.
  • Use a Manifest File
    • S3 provides eventual consistency for some operations, so it is possible that new data will not be available immediately after the upload, which could result in an incomplete data load or loading stale data.
    • Data consistency can be managed using a manifest file to load data.
    • Manifest file helps specify different S3 locations in a more efficient way than with the use of S3 prefixes.
  • Compress Data Files
    • Individually compress the load files using gzip, lzop, bzip2, or Zstandard for large datasets
    • Avoid compression for small amounts of data, because the benefit of compression would be outweighed by the processing cost of decompression.
    • If the priority is to reduce the time spent by COPY commands, use LZO compression. On the other hand, if the priority is to reduce the size of the files in S3 and the network bandwidth, use BZ2 compression.
  • Load Data in Sort Key Order
    • Load the data in sort key order to avoid needing to vacuum.
    • As long as each batch of new data follows the existing rows in the table, the data will be properly stored in sort order, and you will not need to run a vacuum.
    • Presorting rows is not needed in each load because COPY sorts each batch of incoming data as it loads.
  • Load Data using IAM role
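
A sketch of loading via a manifest (hypothetical bucket, files, and role): the manifest pins the exact objects to load, and COPY references it with the MANIFEST option:

```python
import json

manifest = {
    "entries": [
        {"url": "s3://example-bucket/sales/part-0000.gz", "mandatory": True},
        {"url": "s3://example-bucket/sales/part-0001.gz", "mandatory": True},  # COPY fails if a mandatory file is missing
    ]
}
with open("sales.manifest", "w") as f:
    json.dump(manifest, f)

# After uploading sales.manifest to S3, reference it from COPY:
copy_manifest_sql = """
COPY sales_fact
FROM 's3://example-bucket/manifests/sales.manifest'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
GZIP
MANIFEST;
"""
```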

Designing Queries

  • Avoid using select *. Include only the columns you specifically need.
  • Use a CASE Expression to perform complex aggregations instead of selecting from the same table multiple times.
  • Don’t use cross-joins unless absolutely necessary
  • Use subqueries in cases where one table in the query is used only for predicate conditions and the subquery returns a small number of rows (less than about 200).
  • Use predicates to restrict the dataset as much as possible.
  • In the predicate, use the least expensive operators that you can.
  • Avoid using functions in query predicates.
  • If possible, use a WHERE clause to restrict the dataset.
  • Add predicates to filter tables that participate in joins, even if the predicates apply the same filters.
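
A sketch of the CASE-expression advice (hypothetical table): one pass over the table produces several conditional aggregates instead of selecting from the same table once per segment, and the WHERE clause restricts the scanned dataset:

```python
case_aggregation_sql = """
SELECT
    SUM(CASE WHEN region = 'EU' THEN amount ELSE 0 END) AS eu_sales,
    SUM(CASE WHEN region = 'US' THEN amount ELSE 0 END) AS us_sales
FROM sales_fact
WHERE sale_date >= '2020-01-01';
"""
```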

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. An administrator needs to design a strategy for the schema in a Redshift cluster. The administrator needs to determine the optimal distribution style for the tables in the Redshift schema. In which two circumstances would choosing EVEN distribution be most appropriate? (Choose two.)
    1. When the tables are highly denormalized and do NOT participate in frequent joins.
    2. When data must be grouped based on a specific key on a defined slice.
    3. When data transfer between nodes must be eliminated.
    4. When a new table has been loaded and it is unclear how it will be joined to dimension tables.
  2. An administrator has a 500-GB file in Amazon S3. The administrator runs a nightly COPY command into a 10-node Amazon Redshift cluster. The administrator wants to prepare the data to optimize performance of the COPY command. How should the administrator prepare the data?
    1. Compress the file using gz compression.
    2. Split the file into 500 smaller files.
    3. Convert the file format to AVRO.
    4. Split the file into 10 files of equal size.