AWS Redshift Advanced

  • Redshift Distribution Style determines how data is distributed across compute nodes and helps minimize the impact of the redistribution step by locating the data where it needs to be before the query is executed.
  • Redshift enhanced VPC routing forces all COPY and UNLOAD traffic between the cluster and the data repositories through the VPC.
  • Redshift workload management (WLM) enables users to flexibly manage priorities within workloads so that short, fast-running queries won’t get stuck in queues behind long-running queries.
  • Redshift Spectrum helps query and retrieve structured and semistructured data from files in S3 without having to load the data into Redshift tables.
  • Redshift Federated Query feature allows querying and analyzing data across operational databases, data warehouses, and data lakes.

Distribution Styles

  • Table distribution style determines how data is distributed across compute nodes and helps minimize the impact of the redistribution step by locating the data where it needs to be before the query is executed.
  • Redshift supports four distribution styles: AUTO, EVEN, KEY, or ALL.

KEY distribution

  • A single column acts as a distribution key (DISTKEY) and helps place matching values on the same node slice.
  • As a rule of thumb, choose a column that:
    • Is uniformly distributed – Otherwise, skewed data will cause imbalances in the volume of data stored on each compute node, leading to undesired situations where some slices process bigger amounts of data than others, causing bottlenecks.
    • Acts as a JOIN column – for tables related to dimension tables (star schema), it is better to choose as the DISTKEY the field that acts as the JOIN field with the largest dimension table, so that matching values from the common columns are physically stored together, reducing the amount of data that needs to be broadcast through the network (see the sketch below).
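
As a concrete illustration, the following sketch wraps Redshift DDL in Python strings; the table and column names are hypothetical, and the DDL would be run through any SQL client connected to the cluster. The fact table and its largest dimension share a DISTKEY, so joining rows land on the same node slice and no redistribution is needed.

# Sketch: KEY distribution for a star schema (hypothetical names).
dim_customer_ddl = """
CREATE TABLE dim_customer (
    customer_id   INT PRIMARY KEY,
    customer_name VARCHAR(256)
)
DISTSTYLE KEY
DISTKEY (customer_id);
"""

fact_sales_ddl = """
CREATE TABLE fact_sales (
    sale_id     BIGINT,
    customer_id INT REFERENCES dim_customer (customer_id),
    amount      DECIMAL(18,2),
    sale_date   DATE
)
DISTSTYLE KEY
DISTKEY (customer_id);
"""

print(dim_customer_ddl)
print(fact_sales_ddl)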

EVEN distribution

  • distributes the rows across the slices in a round-robin fashion, regardless of the values in any particular column
  • Choose EVEN distribution
    • when the table does not participate in joins
    • when there is not a clear choice between KEY and ALL distribution.

ALL distribution

  • Whole table is replicated in every compute node.
  • ensures that every row is collocated for every join that the table participates in.
  • ideal for relatively slow-moving tables, tables that are not updated frequently or extensively.
  • Small dimension tables DO NOT benefit significantly from ALL distribution, because the cost of redistribution is low.

AUTO distribution

  • Redshift assigns an optimal distribution style based on the size of the table data, e.g. applying ALL distribution for a small table and changing it to EVEN distribution as the table grows.
  • Amazon Redshift applies AUTO distribution by default.

Sort Key

  • Sort keys define the order in which the data will be stored.
  • Sorting enables efficient handling of range-restricted predicates.
  • Only one sort key per table can be defined, but it can be composed of one or more columns.
  • Redshift stores columnar data in 1 MB disk blocks. The min and max values for each block are stored as part of the metadata. If the query uses a range-restricted predicate, the query processor can use the min and max values to rapidly skip over large numbers of blocks during table scans
  • There are two kinds of sort keys in Redshift: Compound and Interleaved.

Compound Keys

  • A compound key is made up of all of the columns listed in the sort key definition, in the order they are listed.
  • A compound sort key is more efficient when query predicates use a prefix of the sort key columns in order, i.e. when conditions such as filters and joins apply to a leading subset of the sort key columns.
  • Compound sort keys might speed up joins, GROUP BY and ORDER BY operations, and window functions that use PARTITION BY and ORDER BY.

Interleaved Sort Keys

  • An interleaved sort key gives equal weight to each column in the sort key, so query predicates can use any subset of the columns that make up the sort key, in any order.
  • An interleaved sort key is more efficient when multiple queries use different columns for filters.
  • Don’t use an interleaved sort key on columns with monotonically increasing attributes, such as identity columns, dates, or timestamps.
  • Use cases involve performing ad-hoc multi-dimensional analytics, which often requires pivoting, filtering, and grouping data using different columns as query dimensions.
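
To make the two variants concrete, here is a hedged sketch (hypothetical table and column names) of DDL for a compound versus an interleaved sort key. The compound key favors prefix filters (event_time, or event_time plus user_id); the interleaved key gives equal weight to each column, so any subset of (region, product_id, channel) can filter efficiently. Note the interleaved example deliberately avoids timestamps and other monotonically increasing columns.

# Sketch: compound vs. interleaved sort keys (hypothetical names).
compound_ddl = """
CREATE TABLE events_compound (
    event_time TIMESTAMP,
    user_id    BIGINT,
    event_type VARCHAR(64)
)
COMPOUND SORTKEY (event_time, user_id);
"""

interleaved_ddl = """
CREATE TABLE lookups_interleaved (
    region     VARCHAR(32),
    product_id BIGINT,
    channel    VARCHAR(32)
)
INTERLEAVED SORTKEY (region, product_id, channel);
"""

print(compound_ddl)
print(interleaved_ddl)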

Constraints

  • Redshift does not support Indexes.
  • Redshift supports UNIQUE, PRIMARY KEY, and FOREIGN KEY constraints, however, they are only for informational purposes.
  • Redshift does not perform integrity checks for these constraints; they are used by the query planner, as hints, to optimize executions.
  • Redshift does enforce NOT NULL column constraints.

Redshift Enhanced VPC Routing

  • Redshift enhanced VPC routing forces all COPY and UNLOAD traffic between the cluster and the data repositories through the VPC.
  • Without enhanced VPC routing, Redshift would route traffic through the internet, including traffic to other services within the AWS network.

Redshift Workload Management

  • Redshift workload management (WLM) enables users to flexibly manage priorities within workloads so that short, fast-running queries won’t get stuck in queues behind long-running queries.
  • Redshift provides query queues, in order to manage concurrency and resource planning; a configuration sketch follows this list. Each queue can be configured with the following parameters:
    • Slots: number of concurrent queries that can be executed in this queue.
    • Working memory: percentage of memory assigned to this queue.
    • Max. Execution Time: the amount of time a query is allowed to run before it is terminated.
  • Queries can be routed to different queues using Query Groups and User Groups.
  • As a rule of thumb, it is considered a best practice to have separate queues for long running resource-intensive queries and fast queries that don’t require big amounts of memory and CPU.
  • By default, Redshift configures one queue with a concurrency level of five, which enables up to five queries to run concurrently, plus one predefined Superuser queue, with a concurrency level of one.
  • A maximum of eight queues can be defined, with each queue configured with a maximum concurrency level of 50. The maximum total concurrency level for all user-defined queues (not including the Superuser queue) is 50.
  • Redshift WLM supports two modes – Manual and Automatic
    • Automatic WLM supports queue priorities.
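
As a rough sketch of what a manual WLM setup might look like, the snippet below builds a wlm_json_configuration parameter value (applied through a cluster parameter group) with two user-defined queues plus the default queue. The queue names and numbers are illustrative assumptions, not prescriptive values.

# Sketch: manual WLM configuration as a wlm_json_configuration value.
import json

wlm_config = [
    {   # fast queue: many slots, modest memory, routed via query group "short"
        "query_group": ["short"],
        "query_concurrency": 10,
        "memory_percent_to_use": 30,
    },
    {   # slow queue: few slots, more memory, kill queries running > 1 hour
        "query_group": ["long"],
        "query_concurrency": 3,
        "memory_percent_to_use": 60,
        "max_execution_time": 3600000,  # milliseconds
    },
    {   # default queue (no query_group / user_group)
        "query_concurrency": 5,
    },
]

print(json.dumps(wlm_config))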

Redshift Concurrency Scaling

  • Concurrency Scaling helps support thousands of concurrent users and concurrent queries, with consistently fast query performance.
  • With Concurrency scaling, Redshift automatically adds additional cluster capacity to process an increase in both read and write queries.
  • Queries run on the main cluster or a concurrency-scaling cluster return the most current data.
  • Queries sent to the concurrency-scaling cluster can be managed by configuring WLM queues.

Redshift Short Query Acceleration – SQA

  • Short query acceleration (SQA) prioritizes selected short-running queries ahead of longer-running queries.
  • SQA runs short-running queries in a dedicated space, so that SQA queries aren’t forced to wait in queues behind longer queries.
  • SQA only prioritizes queries that are short-running and are in a user-defined queue.

Redshift Loading Data

  • A COPY command is the most efficient way to load a table.
    • COPY command is able to read from multiple data files or multiple data streams simultaneously.
    • Redshift allocates the workload to the cluster nodes and performs the load operations in parallel, including sorting the rows and distributing data across node slices.
    • COPY command supports loading data from S3, EMR, DynamoDB, and remote hosts such as EC2 instances using SSH.
    • COPY supports decryption and can decrypt the data as it performs the load if the data is encrypted.
    • COPY can speed up the load process by uncompressing the files as they are read if the data is compressed.
    • COPY command can be used with COMPUPDATE set to ON to analyze and apply compression automatically based on sample data.
    • Optimize storage for narrow tables (many rows, few columns) by using a single COPY command instead of multiple COPY commands, as multiple commands would not work well due to hidden metadata columns and compression issues (see the COPY sketch after this list).
  • Auto Copy
    • Auto-copy provides the ability to automate copy statements by tracking S3 folders and ingesting new files without customer intervention
    • Without Auto-copy, a copy statement immediately starts the file ingestion process for existing files.
    • Auto-copy extends the existing copy command and provides the ability to
      • Automate file ingestion process by monitoring specified S3 paths for new files
      • Re-use copy configurations, reducing the need to create and run new copy statements for repetitive ingestion tasks and
      • Keep track of loaded files to avoid data duplication.
  • INSERT command
    • Clients can connect to Amazon Redshift using ODBC or JDBC and issue ‘insert’ SQL commands to insert the data.
    • INSERT commands are much less efficient than using COPY, as they are routed through the single leader node.
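
A hedged sketch of a typical parallel load follows; the table name, bucket prefix, and IAM role ARN are placeholders. Pointing COPY at a common key prefix lets it read all matching files in parallel.

# Sketch: one COPY loading many gzip'd files in parallel from an S3 prefix,
# with automatic compression analysis (placeholders throughout).
copy_sql = """
COPY sales
FROM 's3://my-bucket/load/sales/part'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
GZIP
COMPUPDATE ON;
"""
print(copy_sql)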

Redshift Resizing Cluster

  • Elastic resize
    • Use elastic resize to change the node type, number of nodes, or both; see the boto3 sketch after this list. (Changing the node type via elastic resize became available circa April 2020 and was not supported before.)
    • If only the number of nodes is changed, then queries are temporarily paused and connections are held open if possible.
    • During the resize operation, the cluster is read-only.
    • Elastic resize takes 10–15 minutes.
  • Classic resize
    • Use classic resize to change the node type, number of nodes, or both.
    • During the resize operation, data is copied to a new cluster and the source cluster is read-only
    • Classic resize takes 2 hours – 2 days or longer, depending on the data’s size
  • Snapshot and restore with classic resize
    • To keep the cluster available during a classic resize, create a snapshot, make a copy of the existing cluster from it, and then resize the new cluster.
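
For illustration, an elastic resize could be triggered with boto3 roughly as below; the cluster identifier and node count are placeholders, and AWS credentials are assumed to be configured.

# Sketch: elastic resize via boto3 (placeholder identifier).
import boto3

redshift = boto3.client("redshift")
redshift.resize_cluster(
    ClusterIdentifier="my-cluster",
    NumberOfNodes=4,
    Classic=False,  # elastic resize; set True to force a classic resize
)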

Redshift Spectrum

  • Redshift Spectrum helps query and retrieve structured and semistructured data from files in S3 without having to load the data into Redshift tables.
  • Redshift Spectrum queries employ massive parallelism to execute very fast against large datasets. Much of the processing occurs in the Redshift Spectrum layer, and most of the data remains in S3.
  • Multiple clusters can concurrently query the same dataset in S3 without the need to make copies of the data for each cluster.
  • Redshift Spectrum resides on dedicated Redshift servers that are independent of the existing cluster.
  • Redshift Spectrum pushes many compute-intensive tasks, such as predicate filtering and aggregation, down to the Redshift Spectrum layer.
  • Redshift Spectrum also scales automatically, based on the demands of the queries, and can potentially use thousands of instances to take advantage of massively parallel processing.
  • Supports external data catalog using Glue, Athena, or Hive metastore
  • Redshift cluster and the S3 bucket must be in the same AWS Region.
  • Redshift Spectrum external tables are read-only. You can’t COPY or INSERT to an external table.
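
A minimal sketch, assuming a Glue Data Catalog and placeholder schema, database, role, and bucket names, of how S3 data is typically exposed to Spectrum:

# Sketch: external schema backed by the Glue Data Catalog, plus a read-only
# external table over Parquet files in S3 (all names are placeholders).
external_schema_sql = """
CREATE EXTERNAL SCHEMA spectrum_schema
FROM DATA CATALOG
DATABASE 'spectrum_db'
IAM_ROLE 'arn:aws:iam::123456789012:role/MySpectrumRole'
CREATE EXTERNAL DATABASE IF NOT EXISTS;
"""

external_table_sql = """
CREATE EXTERNAL TABLE spectrum_schema.sales (
    sale_id   BIGINT,
    amount    DECIMAL(18,2),
    sale_date DATE
)
STORED AS PARQUET
LOCATION 's3://my-bucket/sales/';
"""

print(external_schema_sql)
print(external_table_sql)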

Redshift Federated Query

  • Redshift Federated Query feature allows querying and analyzing data across operational databases, warehouses, and lakes.
  • Redshift Federated Query allows integrating queries on live data in RDS for PostgreSQL and Aurora PostgreSQL with queries across Redshift and S3.
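
A minimal sketch, with placeholder endpoint, database, role, and Secrets Manager ARN, of mapping an Aurora PostgreSQL schema into Redshift for federated queries:

# Sketch: federated external schema over Aurora PostgreSQL (placeholders).
federated_schema_sql = """
CREATE EXTERNAL SCHEMA apg
FROM POSTGRES
DATABASE 'appdb' SCHEMA 'public'
URI 'my-aurora.cluster-abc123.us-east-1.rds.amazonaws.com' PORT 5432
IAM_ROLE 'arn:aws:iam::123456789012:role/MyFederatedRole'
SECRET_ARN 'arn:aws:secretsmanager:us-east-1:123456789012:secret:appdb-creds';
"""
print(federated_schema_sql)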

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. A Redshift data warehouse has different user teams that need to query the same table with very different query types. These user teams are experiencing poor performance. Which action improves performance for the user teams in this situation?
    1. Create custom table views.
    2. Add interleaved sort keys per team.
    3. Maintain team-specific copies of the table.
    4. Add support for workload management queue hopping.

AWS Redshift

  • Amazon Redshift is a fully managed, fast, and powerful, petabyte-scale data warehouse service.
  • Redshift is an OLAP data warehouse solution based on PostgreSQL.
  • Redshift automatically helps
    • set up, operate, and scale a data warehouse, from provisioning the infrastructure capacity.
    • patch and back up the data warehouse, storing the backups for a user-defined retention period.
    • monitor the nodes and drives to help recovery from failures.
    • significantly lower the cost of a data warehouse, while making it easy to analyze large amounts of data very quickly.
    • provide fast querying capabilities over structured and semi-structured data using familiar SQL-based clients and business intelligence (BI) tools using standard ODBC and JDBC connections.
    • use replication and continuous backups to enhance availability and improve data durability, automatically recovering from node and component failures.
    • scale up or down with a few clicks in the AWS Management Console or with a single API call.
    • distribute & parallelize queries across multiple physical resources.
    • support VPC, SSL, AES-256 encryption, and Hardware Security Modules (HSMs) to protect the data in transit and at rest.
  • Redshift earlier supported only Single-AZ deployments, with the nodes available within the same AZ, if the AZ supports Redshift clusters. However, Multi-AZ deployments are now supported for some node types.
  • Redshift provides monitoring using CloudWatch and metrics for compute utilization, storage utilization, and read/write traffic to the cluster are available with the ability to add user-defined custom metrics
  • Redshift provides Audit logging and AWS CloudTrail integration
  • Redshift can be easily enabled to a second region for disaster recovery.

Redshift Performance

  • Massively Parallel Processing (MPP)
    • automatically distributes data and query load across all nodes.
    • makes it easy to add nodes to the data warehouse and enables fast query performance as the data warehouse grows.
  • Columnar Data Storage
    • organizes the data by column, as column-based systems are ideal for data warehousing and analytics, where queries often involve aggregates performed over large data sets
    • columnar data is stored sequentially on the storage media, and require far fewer I/Os, greatly improving query performance
  • Advanced Compression
    • Columnar data stores can be compressed much more than row-based data stores because similar data is stored sequentially on a disk.
    • employs multiple compression techniques and can often achieve significant compression relative to traditional relational data stores.
    • doesn’t require indexes or materialized views and so uses less space than traditional relational database systems.
    • automatically samples the data and selects the most appropriate compression scheme, when the data is loaded into an empty table
  • Query Optimizer
    • Redshift query run engine incorporates a query optimizer that is MPP-aware and also takes advantage of columnar-oriented data storage.
  • Result Caching
    • Redshift caches the results of certain types of queries in memory on the leader node.
    • When a user submits a query, Redshift checks the results cache for a valid, cached copy of the query results. If a match is found in the result cache, Redshift uses the cached results and doesn’t run the query.
    • Result caching is transparent to the user.
  • Compiled Code
    • Leader node distributes fully optimized compiled code across all of the nodes of a cluster. Compiling the query decreases the overhead associated with an interpreter and therefore increases the runtime speed, especially for complex queries.

Redshift Architecture

  • Clusters
    • Core infrastructure component of a Redshift data warehouse
    • Cluster is composed of one or more compute nodes.
    • If a cluster is provisioned with two or more compute nodes, an additional leader node coordinates the compute nodes and handles external communication.
    • Client applications interact directly only with the leader node.
    • Compute nodes are transparent to external applications.
  • Leader node
    • Leader node manages communications with client programs and all communication with compute nodes.
    • It parses and develops execution plans to carry out database operations
    • Based on the execution plan, the leader node compiles code, distributes the compiled code to the compute nodes, and assigns a portion of the data to each compute node.
    • Leader node distributes SQL statements to the compute nodes only when a query references tables that are stored on the compute nodes. All other queries run exclusively on the leader node.
  • Compute nodes
    • Leader node compiles code for individual elements of the execution plan and assigns the code to individual compute nodes.
    • Compute nodes execute the compiled code and send intermediate results back to the leader node for final aggregation.
    • Each compute node has its own dedicated CPU, memory, and attached disk storage, which is determined by the node type.
    • As the workload grows, the compute and storage capacity of a cluster can be increased by increasing the number of nodes, upgrading the node type, or both.
  • Node slices
    • A compute node is partitioned into slices.
    • Each slice is allocated a portion of the node’s memory and disk space, where it processes a portion of the workload assigned to the node.
    • Leader node manages distributing data to the slices and apportions the workload for any queries or other database operations to the slices. The slices then work in parallel to complete the operation.
    • Number of slices per node is determined by the node size of the cluster.
    • When a table is created, one column can optionally be specified as the distribution key. When the table is loaded with data, the rows are distributed to the node slices according to the distribution key that is defined for a table.
    • A good distribution key enables Redshift to use parallel processing to load data and execute queries efficiently.
  • Managed Storage
    • Data warehouse data is stored in a separate storage tier Redshift Managed Storage (RMS).
    • RMS provides the ability to scale the storage to petabytes using S3 storage.
    • RMS enables scale, pay for compute and storage independently so that the cluster can be sized based only on the computing needs.
    • RMS automatically uses high-performance SSD-based local storage as tier-1 cache.
    • It also takes advantage of optimizations, such as data block temperature, data block age, and workload patterns to deliver high performance while scaling storage automatically to S3 when needed without requiring any action.

Redshift Serverless

  • Redshift Serverless is a serverless option of Redshift that makes it more efficient to run and scale analytics in seconds without the need to set up and manage data warehouse infrastructure.
  • Redshift Serverless automatically provisions and intelligently scales data warehouse capacity to deliver high performance for demanding and unpredictable workloads.
  • Redshift Serverless helps any user to get insights from data by simply loading and querying data in the data warehouse.
  • Redshift Serverless supports the Concurrency Scaling feature that can support unlimited concurrent users and concurrent queries, with consistently fast query performance.
  • When concurrency scaling is enabled, Redshift automatically adds cluster capacity when the cluster experiences an increase in query queuing.
  • Redshift Serverless measures data warehouse capacity in Redshift Processing Units (RPUs). RPUs are resources used to handle workloads.
  • Redshift Serverless supports workgroups and namespaces to isolate workloads and manage different resources.

Redshift Single vs Multi-Node Cluster

  • Single Node
    • Single node configuration enables getting started quickly and cost-effectively & scale up to a multi-node configuration as the needs grow
  • Multi-Node
    • Multi-node configuration requires a leader node that manages client connections and receives queries, and two or more compute nodes that store data and perform queries and computations.
    • Leader node
      • provisioned automatically and not charged for
      • receives queries from client applications, parses the queries, and develops execution plans, which are an ordered set of steps to process these queries.
      • coordinates the parallel execution of these plans with the compute nodes, aggregates the intermediate results from these nodes, and finally returns the results back to the client applications.
    • Compute node
      • can contain from 1-128 compute nodes, depending on the node type
      • executes the steps specified in the execution plans and transmits data among themselves to serve these queries.
      • intermediate results are sent back to the leader node for aggregation before being sent back to the client applications.
      • supports Dense Storage (DS) or Dense Compute (DC) instance types
        • Dense Storage (DS) allows the creation of very large data warehouses using hard disk drives (HDDs) for a very low price point
        • Dense Compute (DC) allows the creation of very high-performance data warehouses using fast CPUs, large amounts of RAM and solid-state disks (SSDs)
      • direct access to compute nodes is not allowed

Redshift Multi-AZ

  • Redshift Multi-AZ deployment runs the data warehouse in multiple AWS AZs simultaneously and continues operating in unforeseen failure scenarios.
  • Multi-AZ deployment is managed as a single data warehouse with one endpoint and does not require any application changes.
  • Multi-AZ deployments support high availability requirements and reduce recovery time by guaranteeing capacity to automatically recover and are intended for customers with business-critical analytics applications that require the highest levels of availability and resiliency to AZ failures.
  • Redshift Multi-AZ supports RPO = 0 meaning data is guaranteed to be current and up to date in the event of a failure. RTO is under a minute.

Redshift Availability & Durability

  • Redshift replicates the data within the data warehouse cluster and continuously backs up the data to S3 (11 9’s durability).
  • Redshift mirrors each drive’s data to other nodes within the cluster.
  • Redshift will automatically detect and replace a failed drive or node.
  • RA3 clusters and Redshift serverless are not impacted the same way since the data is stored in S3 and the local drive is just used as a data cache.
  • If a drive fails,
    • cluster will remain available in the event of a drive failure.
    • the queries will continue with a slight latency increase while Redshift rebuilds the drive from the replica of the data on that drive which is stored on other drives within that node.
    • single node clusters do not support data replication and the cluster needs to be restored from a snapshot on S3.
  • In case of node failure(s), Redshift
    • automatically provisions new node(s) and begins restoring data from other drives within the cluster or from S3.
    • prioritizes restoring the most frequently queried data so the most frequently executed queries will become performant quickly.
    • cluster will be unavailable for queries and updates until a replacement node is provisioned and added to the cluster.
  • In case of Redshift cluster AZ goes down, Redshift
    • cluster is unavailable until power and network access to the AZ are restored
    • cluster’s data is preserved and can be used once AZ becomes available
    • cluster can be restored from any existing snapshots to a new AZ within the same region

Redshift Backup & Restore

  • Redshift always attempts to maintain at least three copies of the data – Original, Replica on the compute nodes, and a backup in S3.
  • Redshift replicates all the data within the data warehouse cluster when it is loaded and also continuously backs up the data to S3.
  • Redshift enables automated backups of the data warehouse cluster with a 1-day retention period, by default, which can be extended to max 35 days.
  • Automated backups can be turned off by setting the retention period to 0 (see the boto3 sketch after this list).
  • Redshift can also asynchronously replicate the snapshots to S3 in another region for disaster recovery.
  • Redshift only backs up data that has changed.
  • Restoring the backup will provision a new data warehouse cluster.
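
As a small illustration (the cluster identifier is a placeholder, and AWS credentials are assumed), the retention period can be adjusted with boto3:

# Sketch: extending automated snapshot retention via boto3.
import boto3

redshift = boto3.client("redshift")
redshift.modify_cluster(
    ClusterIdentifier="my-cluster",
    AutomatedSnapshotRetentionPeriod=35,  # days; 0 turns automated backups off
)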

Redshift Scalability

  • Redshift allows scaling of the cluster either by
    • increasing the node instance type (Vertical scaling)
    • increasing the number of nodes (Horizontal scaling)
  • Redshift scaling changes are usually applied during the maintenance window or can be applied immediately
  • Redshift scaling process
    • existing cluster remains available for read operations only, while a new data warehouse cluster gets created during scaling operations
    • data from the compute nodes in the existing data warehouse cluster is moved in parallel to the compute nodes in the new cluster
    • when the new data warehouse cluster is ready, the existing cluster will be temporarily unavailable while the canonical name record of the existing cluster is flipped to point to the new data warehouse cluster

Redshift Security

  • Redshift supports encryption at rest and in transit
  • Redshift provides support for role-based access control – RBAC, which helps assign one or more roles to a user and assign system and object permissions by role.
  • Redshift supports Lambda user-defined functions – UDFs – to enable external tokenization, data masking, and identification or de-identification of data by integrating with vendors like Protegrity, and to protect or unprotect sensitive data based on a user’s permissions and groups, at query time.
  • Redshift supports Single Sign-On SSO and integrates with other third-party corporate or other SAML-compliant identity providers.
  • Redshift supports multi-factor authentication (MFA) for additional security when authenticating to the Redshift cluster.
  • Redshift supports encrypting an unencrypted cluster using KMS. However, you can’t enable hardware security module (HSM) encryption by modifying the cluster. Instead, create a new, HSM-encrypted cluster and migrate your data to the new cluster.
  • Redshift enhanced VPC routing forces all COPY and UNLOAD traffic between the cluster and the data repositories through the VPC.

Redshift Advanced Topics

Redshift Best Practices

Refer to the blog post Redshift Best Practices.

Redshift vs EMR vs RDS

  • RDS is ideal for
    • structured data and running traditional relational databases while offloading database administration
    • for online-transaction processing (OLTP) and for reporting and analysis
  • Redshift is ideal for
    • large volumes of structured data that needs to be persisted and queried using standard SQL and existing BI tools
    • analytic and reporting workloads against very large data sets by harnessing the scale and resources of multiple nodes and using a variety of optimizations to provide improvements over RDS
    • preventing reporting and analytic processing from interfering with the performance of the OLTP workload
  • EMR is ideal for
    • processing and transforming unstructured or semi-structured data to bring in to Amazon Redshift and
    • for data sets that are relatively transitory, not stored for long-term use.

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. With which AWS services CloudHSM can be used (select 2)
    1. S3
    2. DynamoDB
    3. RDS
    4. ElastiCache
    5. Amazon Redshift
  2. You have recently joined a startup company building sensors to measure street noise and air quality in urban areas. The company has been running a pilot deployment of around 100 sensors for 3 months. Each sensor uploads 1KB of sensor data every minute to a backend hosted on AWS. During the pilot, you measured a peak of 10 IOPS on the database, and you stored an average of 3GB of sensor data per month in the database. The current deployment consists of a load-balanced auto scaled Ingestion layer using EC2 instances and a PostgreSQL RDS database with 500GB standard storage. The pilot is considered a success and your CEO has managed to get the attention or some potential investors. The business plan requires a deployment of at least 100K sensors, which needs to be supported by the backend. You also need to store sensor data for at least two years to be able to compare year over year Improvements. To secure funding, you have to make sure that the platform meets these requirements and leaves room for further scaling. Which setup will meet the requirements?
    1. Add an SQS queue to the ingestion layer to buffer writes to the RDS instance (RDS instance will not support data for 2 years)
    2. Ingest data into a DynamoDB table and move old data to a Redshift cluster (Handle 10K IOPS ingestion and store data into Redshift for analysis)
    3. Replace the RDS instance with a 6 node Redshift cluster with 96TB of storage (Does not handle the ingestion issue)
    4. Keep the current architecture but upgrade RDS storage to 3TB and 10K provisioned IOPS (RDS instance will not support data for 2 years)
  3. Which two AWS services provide out-of-the-box user configurable automatic backup-as-a-service and backup rotation options? Choose 2 answers
    1. Amazon S3
    2. Amazon RDS
    3. Amazon EBS
    4. Amazon Redshift
  4. Your department creates regular analytics reports from your company’s log files. All log data is collected in Amazon S3 and processed by daily Amazon Elastic Map Reduce (EMR) jobs that generate daily PDF reports and aggregated tables in CSV format for an Amazon Redshift data warehouse. Your CFO requests that you optimize the cost structure for this system. Which of the following alternatives will lower costs without compromising average performance of the system or data integrity for the raw data?
    1. Use reduced redundancy storage (RRS) for PDF and CSV data in Amazon S3. Add Spot instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift. (Spot instances impacts performance)
    2. Use reduced redundancy storage (RRS) for all data in S3. Use a combination of Spot instances and Reserved Instances for Amazon EMR jobs. Use Reserved instances for Amazon Redshift (Combination of the Spot and reserved with guarantee performance and help reduce cost. Also, RRS would reduce cost and guarantee data integrity, which is different from data durability)
    3. Use reduced redundancy storage (RRS) for all data in Amazon S3. Add Spot Instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift (Spot instances impacts performance)
    4. Use reduced redundancy storage (RRS) for PDF and CSV data in S3. Add Spot Instances to EMR jobs. Use Spot Instances for Amazon Redshift. (Spot instances impacts performance and Spot instance not available for Redshift)

AWS Database Services Cheat Sheet

AWS Database Services

Relational Database Service – RDS

  • provides Relational Database service
  • supports MySQL, MariaDB, PostgreSQL, Oracle, Microsoft SQL Server, and the new, MySQL-compatible Amazon Aurora DB engine
  • as it is a managed service, shell (root ssh) access is not provided
  • manages backups, software patching, automatic failure detection, and recovery
  • supports user-initiated manual backups and snapshots
  • daily automated backups with database transaction logs enable Point-in-Time Recovery up to the last five minutes of database usage
  • snapshots are user-initiated storage volume snapshots of the DB instance, backing up the entire DB instance and not just individual databases, and can be restored as an independent RDS instance
  • RDS Security
    • support encryption at rest using KMS as well as encryption in transit using SSL endpoints
    • supports IAM database authentication, which prevents the need to store static user credentials in the database, because authentication is managed externally using IAM.
    • supports Encryption only during creation of an RDS DB instance
    • an existing unencrypted DB cannot be encrypted directly; you need to create a snapshot, create an encrypted copy of the snapshot, and restore it as an encrypted DB
    • supports Secret Manager for storing and rotating secrets
    • for encrypted database
      • logs, snapshots, backups, read replicas are all encrypted as well
      • cross-region replicas and snapshots do not work across regions (Note – this is possible now with the latest AWS enhancements)
  • Multi-AZ deployment
    • provides high availability and automatic failover support and is NOT a scaling solution
    • maintains a synchronous standby replica in a different AZ
    • transaction success is returned only if the commit is successful both on the primary and the standby DB
    • Oracle, PostgreSQL, MySQL, and MariaDB DB instances use Amazon technology, while SQL Server DB instances use SQL Server Mirroring
    • snapshots and backups are taken from standby & eliminate I/O freezes
    • during automatic failover, it’s seamless and RDS switches to the standby instance and updates the DNS record to point to the standby
    • failover can be forced with the Reboot with failover option
  • Read Replicas
    • uses the PostgreSQL, MySQL, and MariaDB DB engines’ built-in replication functionality to create a separate Read Only instance
    • updates are asynchronously copied to the Read Replica, and data might be stale
    • can help scale applications and reduce read only load
    • requires automatic backups enabled
    • replicates all databases in the source DB instance
    • for disaster recovery, can be promoted to a full fledged database
    • can be created in a different region for disaster recovery, migration and low latency across regions
    • can’t create encrypted read replicas from unencrypted DB or read replica
  • RDS does not support all the features of underlying databases, and if required the database instance can be launched on an EC2 instance
  • RDS Components
    • DB parameter groups contain engine configuration values that can be applied to one or more DB instances of the same instance type, e.g. SSL, max connections, etc.
    • Default DB parameter group cannot be modified; create a custom one and attach it to the DB
    • Supports static and dynamic parameters
      • changes to dynamic parameters are applied immediately (irrespective of apply immediately setting)
      • changes to static parameters are NOT applied immediately and require a manual reboot.
  • RDS Monitoring & Notification
    • integrates with CloudWatch and CloudTrail
    • CloudWatch provides metrics about CPU utilization from the hypervisor for a DB instance, and Enhanced Monitoring gathers its metrics from an agent on the instance
    • Performance Insights is a database performance tuning and monitoring feature that helps illustrate the database’s performance and help analyze any issues that affect it
    • supports RDS Event Notification which uses the SNS to provide notification when an RDS event like creation, deletion or snapshot creation etc occurs

Aurora

  • is a relational database engine that combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases
  • is a managed services and handles time-consuming tasks such as provisioning, patching, backup, recovery, failure detection and repair
  • is a proprietary technology from AWS (not open sourced)
  • provides PostgreSQL and MySQL compatibility
  • is “AWS cloud optimized” and claims 5x performance improvement over MySQL on RDS, and over 3x the performance of PostgreSQL on RDS
  • scales storage automatically in increments of 10GB, up to 64 TB with no impact to database performance. Storage is striped across 100s of volumes.
  • no need to provision storage in advance.
  • provides self-healing storage. Data blocks and disks are continuously scanned for errors and repaired automatically.
  • provides instantaneous failover
  • replicates each chunk of the database volume six ways across three Availability Zones, i.e. 6 copies of the data across 3 AZs
    • requires 4 copies out of 6 for writes
    • requires 3 copies out of 6 for reads
  • costs more than RDS (20% more) – but is more efficient
  • Read Replicas
    • can have 15 replicas while MySQL has 5, and the replication process is faster (sub 10 ms replica lag)
    • share the same data volume as the primary instance in the same AWS Region, so there is virtually no replication lag
    • supports Automated failover for master in less than 30 seconds
    • supports Cross Region Replication using either physical or logical replication.
  • Security
    • supports Encryption at rest using KMS
    • supports Encryption in flight using SSL (same process as MySQL or Postgres)
    • Automated backups, snapshots and replicas are also encrypted
    • Possibility to authenticate using IAM token (same method as RDS)
    • supports protecting the instance with security groups
    • does not support SSH access to the underlying servers
  • Aurora Serverless
    • provides automated database instantiation and on-demand autoscaling based on actual usage
    • provides a relatively simple, cost-effective option for infrequent, intermittent, or unpredictable workloads
    • automatically starts up, shuts down, and scales capacity up or down based on the application’s needs. No capacity planning needed
    • Pay per second, can be more cost-effective
  • Aurora Global Database
    • allows a single Aurora database to span multiple AWS regions.
    • provides Physical replication, which uses dedicated infrastructure that leaves the databases entirely available to serve the application
    • supports 1 Primary Region (read / write)
    • replicates across up to 5 secondary (read-only) regions, replication lag is less than 1 second
    • supports up to 16 Read Replicas per secondary region
    • recommended for low-latency global reads and disaster recovery with an RTO of < 1 minute
    • failover is not automated; if the primary region becomes unavailable, a secondary region can be manually removed from the Aurora Global Database and promoted to take full reads and writes. The application needs to be updated to point to the newly promoted region.
  • Aurora Backtrack
    • Backtracking “rewinds” the DB cluster to the specified time
    • Backtracking performs in place restore and does not create a new instance. There is a minimal downtime associated with it.
  • Aurora Clone feature allows quick and cost-effective creation of Aurora Cluster duplicates
  • supports parallel or distributed query using Aurora Parallel Query, which refers to the ability to push down and distribute the computational load of a single query across thousands of CPUs in Aurora’s storage layer.

DynamoDB

  • fully managed NoSQL database service
  • synchronously replicates data across three facilities in an AWS Region, giving high availability and data durability
  • runs exclusively on SSDs to provide high I/O performance
  • provides provisioned table reads and writes
  • automatically partitions, reallocates, and re-partitions the data and provisions additional server capacity as data or throughput changes
  • creates and maintains indexes for the primary key attributes for efficient access to data in the table
  • DynamoDB Table classes currently support
    • DynamoDB Standard table class is the default and is recommended for the vast majority of workloads.
    • DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA) table class which is optimized for tables where storage is the dominant cost.
  • supports Secondary Indexes
    • allows querying attributes other than the primary key attributes without impacting performance.
    • are automatically maintained as sparse objects
  • Local secondary index vs Global secondary index
    • shares partition key + different sort key vs different partition + sort key
    • search limited to partition vs across all partition
    • unique attributes vs non-unique attributes
    • linked to the base table vs independent separate index
    • only created during the base table creation vs can be created later
    • cannot be deleted after creation vs can be deleted
    • consumes provisioned throughput capacity of the base table vs independent throughput
    • returns all attributes for item vs only projected attributes
    • Eventually or Strongly vs Only Eventually consistent reads
    • size limited to 10GB per partition vs unlimited
  • DynamoDB Consistency
    • provides Eventually consistent (by default) or Strongly Consistent option to be specified during a read operation
    • supports Strongly consistent reads for a few operations like Query, GetItem, and BatchGetItem using the ConsistentRead parameter (see the sketch after this list)
  • DynamoDB Throughput Capacity
    • supports On-demand and Provisioned read/write capacity modes
    • Provisioned mode requires the number of reads and writes per second as required by the application to be specified
    • On-demand mode provides flexible billing option capable of serving thousands of requests per second without capacity planning
  • DynamoDB Auto Scaling helps dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns.
  • DynamoDB Adaptive capacity is a feature that enables DynamoDB to run imbalanced workloads indefinitely.
  • DynamoDB Global Tables provide multi-master, cross-region replication capability of DynamoDB to support data access locality and regional fault tolerance for database workloads.
  • DynamoDB Streams provides a time-ordered sequence of item-level changes made to data in a table
  • DynamoDB Time to Live (TTL)
    • enables a per-item timestamp to determine when an item expires
    • expired items are deleted from the table without consuming any write throughput.
  • DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from milliseconds to microseconds – even at millions of requests per second.
  • DynamoDB cross-region replication
    • allows identical copies (called replicas) of a DynamoDB table (called master table) to be maintained in one or more AWS regions.
    • uses DynamoDB Streams, which leverages Kinesis and provides a time-ordered sequence of item-level changes, and can help achieve lower-RPO, lower-RTO disaster recovery
  • DynamoDB Triggers (just like database triggers) are a feature that allows the execution of custom actions based on item-level updates on a table.
  • VPC Gateway Endpoints provide private access to DynamoDB from within a VPC without the need for an internet gateway or NAT gateway.
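
The consistency option mentioned above is exercised per request; below is a hedged boto3 sketch with placeholder table and key names.

# Sketch: strongly consistent GetItem via boto3 (placeholder names).
import boto3

table = boto3.resource("dynamodb").Table("my-table")
resp = table.get_item(
    Key={"pk": "user#123"},
    ConsistentRead=True,  # default is an eventually consistent read
)
print(resp.get("Item"))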

ElastiCache

  • managed web service that provides in-memory caching to deploy and run Memcached or Redis protocol-compliant cache clusters
  • ElastiCache with Redis,
    • like RDS, supports Multi-AZ, Read Replicas and Snapshots
    • Read Replicas are created across AZ within same region using Redis’s asynchronous replication technology
    • Multi-AZ differs from RDS as there is no standby, but if the primary goes down a Read Replica is promoted as primary
    • Read Replicas cannot span across regions, unlike RDS
    • cannot be scaled out and if scaled up cannot be scaled down
    • allows snapshots for backup and restore
    • AOF can be enabled for recovery scenarios, to recover the data in case the node fails or service crashes. But it does not help in case the underlying hardware fails
    • Enabling Redis Multi-AZ as a Better Approach to Fault Tolerance
  • ElastiCache with Memcached
    • can be scaled up by increasing size and scaled out by adding nodes
    • nodes can span across multiple AZs within the same region
    • cached data is spread across the nodes, and a node failure will always result in some data loss from the cluster
    • supports auto discovery
    • every node should be homogenous and of same instance type
  • ElastiCache Redis vs Memcached
    • complex data objects vs simple key value storage
    • persistent vs non persistent, pure caching
    • automatic failover with Multi-AZ vs Multi-AZ not supported
    • scaling using Read Replicas vs using multiple nodes
    • backup & restore supported vs not supported
  • can be used for state management to keep the web application stateless

Redshift

  • fully managed, fast and powerful, petabyte scale data warehouse service
  • uses replication and continuous backups to enhance availability and improve data durability and can automatically recover from node and component failures
  • provides Massive Parallel Processing (MPP) by distributing & parallelizing queries across multiple physical resources
  • columnar data storage improving query performance and allowing advanced compression techniques
  • supported only Single-AZ deployments earlier, with the nodes available within the same AZ, if the AZ supports Redshift clusters; Multi-AZ deployments are now supported for some node types
  • spot instances are NOT an option

AWS Redshift Best Practices

AWS Redshift Best Practices

Designing Tables

Distribution Style selection

  • Distribute the fact table and one dimension table on their common columns.
    • A fact table can have only one distribution key. Any tables that join on another key aren’t collocated with the fact table.
    • Choose one dimension to collocate based on how frequently it is joined and the size of the joining rows.
    • Designate both the dimension table’s primary key and the fact table’s corresponding foreign key as the DISTKEY.
  • Choose the largest dimension based on the size of the filtered dataset.
    • Only the rows that are used in the join need to be distributed, so consider the size of the dataset after filtering, not the size of the table.
  • Choose a column with high cardinality in the filtered result set.
    • If you distribute a sales table on a date column, for example, you should probably get fairly even data distribution, unless most of the sales are seasonal
    • However, if you commonly use a range-restricted predicate to filter for a narrow date period, most of the filtered rows occur on a limited set of slices and the query workload is skewed.
  • Change some dimension tables to use ALL distribution.
    • If a dimension table cannot be collocated with the fact table or other important joining tables, query performance can be improved significantly by distributing the entire table to all of the nodes.
    • Using ALL distribution multiplies storage space requirements and increases load times and maintenance operations.

Sort Key Selection

  • Redshift stores the data on disk in sorted order according to the sort key, which helps query optimizer to determine optimal query plans.
  • If recent data is queried most frequently, specify the timestamp column as the leading column for the sort key.
    • Queries are more efficient because they can skip entire blocks that fall outside the time range.
  • If you do frequent range filtering or equality filtering on one column, specify that column as the sort key.
    • Redshift can skip reading entire blocks of data for that column.
    • Redshift tracks the minimum and maximum column values stored on each block and can skip blocks that don’t apply to the predicate range.
  • If you frequently join a table, specify the join column as both the sort key and the distribution key.
    • Doing this enables the query optimizer to choose a sort merge join instead of a slower hash join.
    • As the data is already sorted on the join key, the query optimizer can bypass the sort phase of the sort merge join.
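
A minimal sketch (hypothetical table and column names) of designating the join column as both DISTKEY and SORTKEY so the optimizer can pick a sort merge join:

# Sketch: join column as both DISTKEY and SORTKEY (hypothetical names).
orders_ddl = """
CREATE TABLE orders (
    order_id    BIGINT,
    customer_id INT,
    order_date  DATE
)
DISTKEY (customer_id)
SORTKEY (customer_id);
"""
print(orders_ddl)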

Other Practices

  • Automatic compression produces the best results
  • COPY command analyzes the data and applies compression encodings to an empty table automatically as part of the load operation
  • Define primary key and foreign key constraints between tables wherever appropriate. Even though they are informational only, the query optimizer uses those constraints to generate more efficient query plans.
  • Don’t use the maximum column size for convenience.

Loading Data

  • You can load data into the tables using the following methods:
    • Using Multi-Row INSERT
    • Using Bulk INSERT
    • Using COPY command
    • Staging tables
  • Copy Command
    • COPY command loads data in parallel from S3, EMR, DynamoDB, or multiple data sources on remote hosts.
    • COPY loads large amounts of data much more efficiently than using INSERT statements, and stores the data more effectively as well.
    • Use a Single COPY Command to Load from Multiple Files
    • DON’T use multiple concurrent COPY commands to load one table from multiple files as Redshift is forced to perform a serialized load, which is much slower.
  • Split the Load Data into Multiple Files
    • divide the data in multiple files with equal size (between 1MB and 1GB)
    • number of files to be a multiple of the number of slices in the cluster
    • helps to distribute workload uniformly in the cluster.
  • Use a Manifest File
    • S3 provides eventual consistency for some operations, so it is possible that new data will not be available immediately after the upload, which could result in an incomplete data load or loading stale data.
    • Data consistency can be managed using a manifest file to load data.
    • Manifest file helps specify different S3 locations in a more efficient way than with the use of S3 prefixes (see the manifest sketch after this list).
  • Compress Data Files
    • Individually compress the load files using gzip, lzop, bzip2, or Zstandard for large datasets
    • Avoid using compression, if small amount of data because the benefit of compression would be outweighed by the processing cost of decompression
    • If the priority is to reduce the time spent by COPY commands, use LZO compression. On the other hand, if the priority is to reduce the size of the files in S3 and the network bandwidth, use BZ2 compression.
  • Load Data in Sort Key Order
    • Load the data in sort key order to avoid needing to vacuum.
    • As long as each batch of new data follows the existing rows in the table, the data will be properly stored in sort order, and you will not need to run a vacuum.
    • Presorting rows is not needed in each load because COPY sorts each batch of incoming data as it loads.
  • Load Data using IAM role
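
The manifest mentioned above is a small JSON document; here is a hedged sketch with placeholder object URLs.

# Sketch: a COPY manifest enumerating exact S3 objects (placeholder URLs).
import json

manifest = {
    "entries": [
        {"url": "s3://my-bucket/load/part-0000.gz", "mandatory": True},
        {"url": "s3://my-bucket/load/part-0001.gz", "mandatory": True},
    ]
}
print(json.dumps(manifest, indent=2))
# Upload the JSON to S3, then reference it from COPY with the MANIFEST option.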

Designing Queries

  • Avoid using select *. Include only the columns you specifically need.
  • Use a CASE expression to perform complex aggregations instead of selecting from the same table multiple times (see the sketch after this list).
  • Don’t use cross-joins unless absolutely necessary
  • Use subqueries in cases where one table in the query is used only for predicate conditions and the subquery returns a small number of rows (less than about 200).
  • Use predicates to restrict the dataset as much as possible.
  • In the predicate, use the least expensive operators that you can.
  • Avoid using functions in query predicates.
  • If possible, use a WHERE clause to restrict the dataset.
  • Add predicates to filter tables that participate in joins, even if the predicates apply the same filters.
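
For instance, the CASE technique above lets one scan replace several self-selects; a hedged sketch with hypothetical table and column names:

# Sketch: single-pass conditional aggregation instead of multiple scans
# of the same table (hypothetical names).
case_sql = """
SELECT
    SUM(CASE WHEN region = 'US' THEN amount ELSE 0 END) AS us_sales,
    SUM(CASE WHEN region = 'EU' THEN amount ELSE 0 END) AS eu_sales
FROM sales
WHERE sale_date >= '2023-01-01';
"""
print(case_sql)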

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. An administrator needs to design a strategy for the schema in a Redshift cluster. The administrator needs to determine the optimal distribution style for the tables in the Redshift schema. In which two circumstances would choosing EVEN distribution be most appropriate? (Choose two.)
    1. When the tables are highly denormalized and do NOT participate in frequent joins.
    2. When data must be grouped based on a specific key on a defined slice.
    3. When data transfer between nodes must be eliminated.
    4. When a new table has been loaded and it is unclear how it will be joined to dimension.
  2. An administrator has a 500-GB file in Amazon S3. The administrator runs a nightly COPY command into a 10-node Amazon Redshift cluster. The administrator wants to prepare the data to optimize performance of the COPY command. How should the administrator prepare the data?
    1. Compress the file using gz compression.
    2. Split the file into 500 smaller files.
    3. Convert the file format to AVRO.
    4. Split the file into 10 files of equal size.

AWS Storage Options – SQS & Redshift

SQS

  • is a temporary data repository for messages and provides a reliable, highly scalable, hosted message queuing service for temporary storage and delivery of short (up to 256 KB) text-based data messages.
  • supports a virtually unlimited number of queues and supports unordered, at-least-once delivery of messages.

Ideal Usage patterns

  • is ideally suited to any scenario where multiple application components must communicate and coordinate their work in a loosely coupled manner particularly producer consumer scenarios
  • can be used to coordinate a multi-step processing pipeline, where each message is associated with a task that must be processed.
  • enables the number of worker instances to scale up or down, and also enable the processing power of each single worker instance to scale up or down, to suit the total workload, without any application changes.

Anti-Patterns

  • Binary or Large Messages
    • SQS is suited for text messages with a maximum size of 256 KB. If the application requires binary data or messages exceeding this length, it is best to use Amazon S3 or RDS and use SQS to store the pointer
  • Long Term storage
    • SQS stores messages for max 14 days and if application requires storage period longer than 14 days, Amazon S3 or other storage options should be preferred
  • High-speed message queuing or very short tasks
    • If the application requires a very high-speed message send and receive response from a single producer or consumer, use of Amazon DynamoDB or a message-queuing system hosted on Amazon EC2 may be more appropriate.

Performance

  • is a distributed queuing system that is optimized for horizontal scalability, not for single-threaded sending or receiving speeds.
  • A single client can send or receive Amazon SQS messages at a rate of about 5 to 50 messages per second. Higher receive performance can be achieved by requesting multiple messages (up to 10) in a single call.
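
To illustrate the batching point, a hedged boto3 sketch follows; the queue URL is a placeholder and AWS credentials are assumed.

# Sketch: raising per-call throughput by receiving up to 10 messages at once.
import boto3

sqs = boto3.client("sqs")
resp = sqs.receive_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",
    MaxNumberOfMessages=10,
)
for msg in resp.get("Messages", []):
    print(msg["MessageId"], msg["Body"])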

Durability & Availability

  • are highly durable but temporary.
  • stores all messages redundantly across multiple servers and data centers.
  • Message retention time is configurable on a per-queue basis, from a minimum of one minute to a maximum of 14 days.
  • Messages are retained in a queue until they are explicitly deleted, or until they are automatically deleted upon expiration of the retention time.

Cost Model

  • pricing is based on
    • number of requests and
    • the amount of data transferred in and out (priced per GB per month).

Scalability & Elasticity

  • is both highly elastic and massively scalable.
  • is designed to enable a virtually unlimited number of computers to read and write a virtually unlimited number of messages at any time.
  • supports virtually unlimited numbers of queues and messages per queue for any user.

Amazon Redshift

  • is a fast, fully-managed, petabyte-scale data warehouse service that makes it simple and cost-effective to efficiently analyze all your data using your existing business intelligence tools.
  • is optimized for datasets that range from a few hundred gigabytes to a petabyte or more.
  • manages the work needed to set up, operate, and scale a data warehouse, from provisioning the infrastructure capacity to automating ongoing administrative tasks such as backups and patching.

Ideal Usage Pattern

  • is ideal for analyzing large datasets using the existing business intelligence tools
  • Common use cases include
    • Analyze global sales data for multiple products
    • Store historical stock trade data
    • Analyze ad impressions and clicks
    • Aggregate gaming data
    • Analyze social trends
    • Measure clinical quality, operational efficiency, and financial performance in the health care space

Anti-Pattern

  • OLTP workloads
    • Redshift is a column-oriented database and more suited for data warehousing and analytics. If application involves online transaction processing, Amazon RDS would be a better choice.
  • Blob data
    • For Blob storage, Amazon S3 would be a better choice with metadata in other storage as RDS or DynamoDB

Performance

  • Amazon Redshift allows a very high query performance on datasets ranging in size from hundreds of gigabytes to a petabyte or more.
  • It uses columnar storage, data compression, and zone maps to reduce the amount of I/O needed to perform queries.
  • It has a massively parallel processing (MPP) architecture that parallelizes and distributes SQL operations to take advantage of all available resources.
  • Underlying hardware is designed for high performance data processing that uses local attached storage to maximize throughput.

Durability & Availability

  • Amazon Redshift stores three copies of your data—all data written to a node in your cluster is automatically replicated to other nodes within the cluster, and all data is continuously backed up to Amazon S3.
  • Snapshots are automated, incremental, and continuous and stored for a user-defined period (1-35 days)
  • Manual snapshots can be created and are retained until explicitly deleted.
  • Amazon Redshift also continuously monitors the health of the cluster and automatically re-replicates data from failed drives and replaces nodes as necessary.

Cost Model

  • has three pricing components:
    • data warehouse node hours – total number of hours run across all the compute node
    • backup storage – storage cost for automated and manual snapshots
    • data transfer
      • There is no data transfer charge for data transferred to or from Amazon Redshift outside of Amazon VPC
      • Data transfer to or from Amazon Redshift in Amazon VPC accrues standard AWS data transfer charges.

Scalability & Elasticity

  • provides push button scaling and the number of nodes can be easily scaled in the data warehouse cluster as the demand changes.
  • Redshift places the existing cluster in read-only mode, so the existing queries can continue to run, while it provisions a new cluster of the chosen size and copies the data to it. Once the data is copied, it automatically redirects queries to the new cluster