AWS Database Services Cheat Sheet

AWS Database Services

Relational Database Service – RDS

  • provides Relational Database service
  • supports MySQL, MariaDB, PostgreSQL, Oracle, Microsoft SQL Server, and the MySQL- and PostgreSQL-compatible Amazon Aurora DB engine
  • as it is a managed service, shell (root ssh) access is not provided
  • manages backups, software patching, automatic failure detection, and recovery
  • supports user-initiated manual backups and snapshots
  • daily automated backups with database transaction logs enable Point-in-Time Recovery, typically up to the last five minutes of database usage
  • snapshots are user-initiated storage volume snapshots of the DB instance, backing up the entire DB instance and not just individual databases, and can be restored as an independent RDS instance
  • RDS Security
    • supports encryption at rest using KMS as well as encryption in transit using SSL endpoints
    • supports IAM database authentication, which prevents the need to store static user credentials in the database, because authentication is managed externally using IAM.
    • supports encryption only during the creation of an RDS DB instance
    • an existing unencrypted DB cannot be encrypted in place; create a snapshot, create an encrypted copy of the snapshot, and restore the copy as an encrypted DB (see the sketch at the end of this section)
    • supports Secrets Manager for storing and rotating secrets
    • for encrypted database
      • logs, snapshots, backups, read replicas are all encrypted as well
      • cross-region replicas and snapshots do not work across regions (Note – this is now possible with the latest AWS enhancements)
  • Multi-AZ deployment
    • provides high availability and automatic failover support and is NOT a scaling solution
    • maintains a synchronous standby replica in a different AZ
    • transaction success is returned only if the commit is successful both on the primary and the standby DB
    • Oracle, PostgreSQL, MySQL, and MariaDB DB instances use Amazon technology, while SQL Server DB instances use SQL Server Mirroring
    • snapshots and backups are taken from the standby, eliminating I/O freezes on the primary
    • automatic failover is seamless; RDS switches to the standby instance and updates the DNS record to point to the standby
    • failover can be forced with the Reboot with failover option
  • Read Replicas
    • uses the PostgreSQL, MySQL, and MariaDB DB engines’ built-in replication functionality to create a separate Read Only instance
    • updates are asynchronously copied to the Read Replica, and data might be stale
    • can help scale applications and reduce read only load
    • requires automatic backups enabled
    • replicates all databases in the source DB instance
    • for disaster recovery, can be promoted to a full fledged database
    • can be created in a different region for disaster recovery, migration and low latency across regions
    • can’t create encrypted read replicas from unencrypted DB or read replica
  • RDS does not support all the features of underlying databases, and if required the database instance can be launched on an EC2 instance
  • RDS Components
    • DB parameter groups contain engine configuration values that can be applied to one or more DB instances of the same instance type e.g. SSL, max connections, etc.
    • the default DB parameter group cannot be modified; create a custom one and attach it to the DB instance
    • Supports static and dynamic parameters
      • changes to dynamic parameters are applied immediately (irrespective of apply immediately setting)
      • changes to static parameters are NOT applied immediately and require a manual reboot.
  • RDS Monitoring & Notification
    • integrates with CloudWatch and CloudTrail
    • CloudWatch provides metrics about CPU utilization from the hypervisor for a DB instance, and Enhanced Monitoring gathers its metrics from an agent on the instance
    • Performance Insights is a database performance tuning and monitoring feature that helps illustrate the database’s performance and help analyze any issues that affect it
    • supports RDS Event Notifications, which use SNS to provide notifications when an RDS event like creation, deletion, or snapshot creation occurs
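
A minimal boto3 sketch of the snapshot-copy-restore path for encrypting an existing unencrypted DB instance, as described in the RDS Security notes above; the instance and snapshot identifiers, KMS key alias, and instance class are hypothetical placeholders:

    import boto3

    rds = boto3.client("rds")

    # 1. Snapshot the existing unencrypted instance.
    rds.create_db_snapshot(
        DBInstanceIdentifier="mydb",
        DBSnapshotIdentifier="mydb-unencrypted-snap",
    )
    rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="mydb-unencrypted-snap")

    # 2. Copy the snapshot with a KMS key, which produces an encrypted snapshot.
    rds.copy_db_snapshot(
        SourceDBSnapshotIdentifier="mydb-unencrypted-snap",
        TargetDBSnapshotIdentifier="mydb-encrypted-snap",
        KmsKeyId="alias/aws/rds",
    )
    rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="mydb-encrypted-snap")

    # 3. Restore the encrypted snapshot as a new, encrypted DB instance.
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier="mydb-encrypted",
        DBSnapshotIdentifier="mydb-encrypted-snap",
        DBInstanceClass="db.t3.medium",
    )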

Aurora

  • is a relational database engine that combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases
  • is a managed service and handles time-consuming tasks such as provisioning, patching, backup, recovery, failure detection and repair
  • is a proprietary technology from AWS (not open sourced)
  • provides PostgreSQL and MySQL compatibility
  • is “AWS cloud optimized” and claims 5x the performance of MySQL on RDS and over 3x the performance of PostgreSQL on RDS
  • scales storage automatically in increments of 10GB, up to 64 TB with no impact to database performance. Storage is striped across 100s of volumes.
  • no need to provision storage in advance.
  • provides self-healing storage. Data blocks and disks are continuously scanned for errors and repaired automatically.
  • provides instantaneous failover
  • replicates each chunk of the database volume six ways across three Availability Zones i.e. 6 copies of the data across 3 AZs
    • requires 4 out of 6 copies for writes
    • requires 3 out of 6 copies for reads
  • costs more than RDS (20% more) – but is more efficient
  • Read Replicas
    • can have 15 replicas while MySQL has 5, and the replication process is faster (sub 10 ms replica lag)
    • share the same data volume as the primary instance in the same AWS Region, so there is virtually no replication lag
    • supports Automated failover for master in less than 30 seconds
    • supports Cross Region Replication using either physical or logical replication.
  • Security
    • supports Encryption at rest using KMS
    • supports Encryption in flight using SSL (same process as MySQL or Postgres)
    • Automated backups, snapshots and replicas are also encrypted
    • supports authentication using IAM database authentication tokens, same method as RDS (see the sketch at the end of this section)
    • supports protecting the instance with security groups
    • does not support SSH access to the underlying servers
  • Aurora Serverless
    • provides automated database instantiation and on-demand autoscaling based on actual usage
    • provides a relatively simple, cost-effective option for infrequent, intermittent, or unpredictable workloads
    • automatically starts up, shuts down, and scales capacity up or down based on the application’s needs. No capacity planning needed
    • Pay per second, can be more cost-effective
  • Aurora Global Database
    • allows a single Aurora database to span multiple AWS regions.
    • provides Physical replication, which uses dedicated infrastructure that leaves the databases entirely available to serve the application
    • supports 1 Primary Region (read / write)
    • replicates across up to 5 secondary (read-only) regions, replication lag is less than 1 second
    • supports up to 16 Read Replicas per secondary region
    • recommended for low-latency global reads and disaster recovery with an RTO of < 1 minute
    • failover is not automated; if the primary region becomes unavailable, a secondary region can be manually removed from an Aurora Global Database and promoted to take full reads and writes. The application needs to be updated to point to the newly promoted region.
  • Aurora Backtrack
    • Backtracking “rewinds” the DB cluster to the specified time
    • Backtracking performs an in-place restore and does not create a new instance. There is minimal downtime associated with it.
  • Aurora Clone feature allows quick and cost-effective creation of Aurora Cluster duplicates
  • supports parallel or distributed query using Aurora Parallel Query, which refers to the ability to push down and distribute the computational load of a single query across thousands of CPUs in Aurora’s storage layer.
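
A minimal sketch of the IAM database authentication mentioned above, using boto3 to generate a short-lived token in place of a static password; the cluster endpoint and DB user name are hypothetical:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Generate a short-lived (15-minute) authentication token; the calling IAM
    # principal must be allowed the rds-db:connect action for this DB user.
    token = rds.generate_db_auth_token(
        DBHostname="demo-aurora.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com",  # hypothetical endpoint
        Port=3306,
        DBUsername="app_user",
    )

    # The token is then passed as the password by a MySQL/PostgreSQL client over SSL.
    print(token[:60], "...")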

DynamoDB

  • fully managed NoSQL database service
  • synchronously replicates data across three facilities in an AWS Region, giving high availability and data durability
  • runs exclusively on SSDs to provide high I/O performance
  • provides provisioned table reads and writes
  • automatically partitions, reallocates, and re-partitions the data and provisions additional server capacity as data or throughput changes
  • creates and maintains indexes for the primary key attributes for efficient access to data in the table
  • DynamoDB Table classes currently support
    • DynamoDB Standard table class is the default and is recommended for the vast majority of workloads.
    • DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA) table class which is optimized for tables where storage is the dominant cost.
  • supports Secondary Indexes
    • allows querying attributes other than the primary key attributes without impacting performance.
    • are automatically maintained as sparse objects
  • Local secondary index vs Global secondary index
    • shares partition key + different sort key vs different partition + sort key
    • search limited to a single partition vs across all partitions
    • unique attributes vs non-unique attributes
    • linked to the base table vs independent separate index
    • only created during the base table creation vs can be created later
    • cannot be deleted after creation vs can be deleted
    • consumes provisioned throughput capacity of the base table vs independent throughput
    • returns all attributes for item vs only projected attributes
    • Eventually or Strongly vs Only Eventually consistent reads
    • size limited to 10GB per partition key value vs unlimited
  • DynamoDB Consistency
    • provides Eventually consistent (by default) or Strongly Consistent option to be specified during a read operation
    • supports Strongly consistent reads for a few operations like Query, GetItem, and BatchGetItem using the ConsistentRead parameter (see the sketch at the end of this section)
  • DynamoDB Throughput Capacity
    • supports On-demand and Provisioned read/write capacity modes
    • Provisioned mode requires the number of reads and writes per second as required by the application to be specified
    • On-demand mode provides flexible billing option capable of serving thousands of requests per second without capacity planning
  • DynamoDB Auto Scaling helps dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns.
  • DynamoDB Adaptive capacity is a feature that enables DynamoDB to run imbalanced workloads indefinitely.
  • DynamoDB Global Tables provide multi-master, cross-region replication capability of DynamoDB to support data access locality and regional fault tolerance for database workloads.
  • DynamoDB Streams provides a time-ordered sequence of item-level changes made to data in a table
  • DynamoDB Time to Live (TTL)
    • enables a per-item timestamp to determine when an item expires
    • expired items are deleted from the table without consuming any write throughput.
  • DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from milliseconds to microseconds – even at millions of requests per second.
  • DynamoDB cross-region replication
    • allows identical copies (called replicas) of a DynamoDB table (called master table) to be maintained in one or more AWS regions.
    • works using DynamoDB Streams, which leverages Kinesis and provides a time-ordered sequence of item-level changes, and can help achieve a lower RPO and lower RTO for disaster recovery
  • DynamoDB Triggers (just like database triggers) are a feature that allows the execution of custom actions based on item-level updates on a table.
  • VPC Gateway Endpoints provide private access to DynamoDB from within a VPC without the need for an internet gateway or NAT gateway.
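
A minimal boto3 sketch tying together the index and consistency notes above: it creates a hypothetical orders table with a Global Secondary Index and then issues a strongly consistent Query against the base table (GSIs support only eventually consistent reads):

    import boto3

    ddb = boto3.client("dynamodb")

    ddb.create_table(
        TableName="orders",                       # hypothetical table
        AttributeDefinitions=[
            {"AttributeName": "customer_id", "AttributeType": "S"},
            {"AttributeName": "order_id", "AttributeType": "S"},
            {"AttributeName": "order_date", "AttributeType": "S"},
        ],
        KeySchema=[
            {"AttributeName": "customer_id", "KeyType": "HASH"},
            {"AttributeName": "order_id", "KeyType": "RANGE"},
        ],
        GlobalSecondaryIndexes=[{
            "IndexName": "orders-by-date",        # different partition key than the base table
            "KeySchema": [{"AttributeName": "order_date", "KeyType": "HASH"}],
            "Projection": {"ProjectionType": "KEYS_ONLY"},
        }],
        BillingMode="PAY_PER_REQUEST",            # on-demand capacity mode
    )
    ddb.get_waiter("table_exists").wait(TableName="orders")

    # Strongly consistent read; valid against the base table or an LSI, not a GSI.
    resp = ddb.query(
        TableName="orders",
        KeyConditionExpression="customer_id = :c",
        ExpressionAttributeValues={":c": {"S": "cust-123"}},
        ConsistentRead=True,
    )
    print(resp["Items"])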

ElastiCache

  • managed web service that provides in-memory caching to deploy and run Memcached or Redis protocol-compliant cache clusters
  • ElastiCache with Redis,
    • like RDS, supports Multi-AZ, Read Replicas and Snapshots
    • Read Replicas are created across AZ within same region using Redis’s asynchronous replication technology
    • Multi-AZ differs from RDS as there is no standby, but if the primary goes down a Read Replica is promoted as primary
    • Read Replicas cannot span across regions, unlike RDS
    • cannot be scaled out, and if scaled up, cannot be scaled down
    • allows snapshots for backup and restore
    • AOF can be enabled for recovery scenarios, to recover the data in case the node fails or service crashes. But it does not help in case the underlying hardware fails
    • enabling Redis Multi-AZ is a better approach to fault tolerance (see the sketch at the end of this section)
  • ElastiCache with Memcached
    • can be scaled up by increasing size and scaled out by adding nodes
    • nodes can span across multiple AZs within the same region
    • cached data is spread across the nodes, and a node failure will always result in some data loss from the cluster
    • supports auto discovery
    • every node should be homogeneous and of the same instance type
  • ElastiCache Redis vs Memcached
    • complex data objects vs simple key value storage
    • persistent vs non persistent, pure caching
    • automatic failover with Multi-AZ vs Multi-AZ not supported
    • scaling using Read Replicas vs using multiple nodes
    • backup & restore supported vs not supported
  • can be used for state management to keep the web application stateless
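
A minimal boto3 sketch of the Redis Multi-AZ setup referenced above: a replication group with one primary and two read replicas, automatic failover, and snapshot retention; the identifiers and node type are hypothetical:

    import boto3

    elasticache = boto3.client("elasticache")

    elasticache.create_replication_group(
        ReplicationGroupId="demo-redis",
        ReplicationGroupDescription="Redis with Multi-AZ automatic failover",
        Engine="redis",
        CacheNodeType="cache.t3.medium",
        NumCacheClusters=3,               # 1 primary + 2 read replicas across AZs
        AutomaticFailoverEnabled=True,    # a replica is promoted if the primary fails
        MultiAZEnabled=True,
        SnapshotRetentionLimit=7,         # keep daily snapshots for backup and restore
    )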

Redshift

  • fully managed, fast and powerful, petabyte scale data warehouse service
  • uses replication and continuous backups to enhance availability and improve data durability and can automatically recover from node and component failures
  • provides Massive Parallel Processing (MPP) by distributing & parallelizing queries across multiple physical resources
  • columnar data storage improves query performance and allows advanced compression techniques
  • only supports Single-AZ deployments and the nodes are available within the same AZ, if the AZ supports Redshift clusters
  • spot instances are NOT an option

AWS RDS Aurora Serverless

Aurora Serverless

  • Amazon Aurora Serverless is an on-demand, autoscaling configuration for the MySQL-compatible and PostgreSQL-compatible editions of Aurora.
  • An Aurora Serverless DB cluster automatically starts up, shuts down, and scales capacity up or down based on the application’s needs.
  • enables running database in the cloud without managing any database instances.
  • provides a relatively simple, cost-effective option for infrequent, intermittent, or unpredictable workloads.
  • use Cases include
    • Infrequently-Used Applications
    • New Applications – where the needs and instance size is yet to be determined.
    • Variable and Unpredictable Workloads – scale as per the needs
    • Development and Test Databases
    • Multi-tenant Applications
  • DB cluster does not have a public IP address and can be accessed only from within a VPC based on the VPC service.

Aurora Serverless Architecture

  • Aurora Serverless separates Storage and Compute, so it can scale down to zero processing and you pay only for storage.
  • A database endpoint is created without specifying the DB instance class size.
  • Minimum and maximum capacity is set in terms of Aurora capacity units (ACUs). Each ACU is a combination of processing and memory capacity.
  • Database storage automatically scales from 10 GiB to 64 TiB, the same as storage in a standard Aurora DB cluster.
  • The minimum Aurora capacity unit is the lowest ACU to which the DB cluster can scale down. The maximum Aurora capacity unit is the highest ACU to which the DB cluster can scale up. Based on the settings, Aurora Serverless automatically creates scaling rules for thresholds for CPU utilization, connections, and available memory.
  • Database endpoint connects to a proxy fleet that routes the workload to a fleet of resources that are automatically scaled.
  • Aurora Serverless manages the connections automatically.
  • Proxy fleet enables continuous connections as Aurora Serverless scales the resources automatically based on the minimum and maximum capacity specifications.
  • Database client applications don’t need to change to use the proxy fleet.
  • Scaling is rapid because it uses a pool of “warm” resources that are always ready to service requests.
  • Aurora Serverless supports Automatic Pause where the DB cluster can be paused after a given amount of time with no activity. The default inactivity timeout is five minutes. Pausing the DB cluster can be disabled (see the sketch after this list).
  • Automatic Pause reduces the compute charges to zero and only storage is charged. If database connections are requested when an Aurora Serverless DB cluster is paused, the DB cluster automatically resumes and services the connection requests.
  • When the DB cluster resumes activity, it has the same capacity as it had when Aurora paused the cluster. The number of ACUs depends on how much Aurora scaled the cluster up or down before pausing it.
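
A minimal boto3 sketch of the capacity settings described above, creating a hypothetical Aurora Serverless (v1) cluster with minimum and maximum ACUs and automatic pause after five minutes of inactivity; identifiers and credentials are placeholders:

    import boto3

    rds = boto3.client("rds")

    rds.create_db_cluster(
        DBClusterIdentifier="demo-serverless",
        Engine="aurora-mysql",
        EngineMode="serverless",
        MasterUsername="admin",
        MasterUserPassword="change-me-please",   # hypothetical credentials
        ScalingConfiguration={
            "MinCapacity": 2,                    # lowest ACU the cluster can scale down to
            "MaxCapacity": 16,                   # highest ACU the cluster can scale up to
            "AutoPause": True,                   # pause compute when idle; storage is still billed
            "SecondsUntilAutoPause": 300,        # the default five-minute inactivity timeout
        },
    )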

Aurora Serverless and Failover

  • Aurora Serverless compute layer is placed in a Single AZ
  • separates computation capacity and storage, and the storage volume for the cluster is spread across multiple AZs. The data remains available even if outages affect the DB instance or the associated AZ.
  • supports automatic multi-AZ failover where if the DB instance for a DB cluster becomes unavailable or the Availability Zone (AZ) it is in fails, Aurora recreates the DB instance in a different AZ.
  • failover mechanism takes longer than for an Aurora Provisioned cluster.
  • failover time is currently undefined because it depends on demand and capacity available in other AZs within the given AWS Region

Aurora Serverless Auto Scaling

  • Aurora Serverless automatically scales based on the active database workload (CPU or connections); in some cases, capacity might not scale fast enough to meet a sudden workload change, such as a large number of new transactions.
  • Once a scaling operation is initiated, Aurora Serverless attempts to find a scaling point, which is a point in time at which the database can safely complete scaling (see the sketch after this list).
  • might not be able to find a scaling point and will not scale if there are
    • long-running queries or transactions in progress, or
    • temporary tables or table locks in use.
  • supports cooldown periods
    • after a scale-up, there is a 15-minute cooldown period before a subsequent scale-down
    • after a scale-down, there is a 310-second cooldown period before the next scale-down
  • has no cooldown period for scale-up activities and scales up as and when necessary
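
A minimal boto3 sketch of explicitly setting capacity when autoscaling cannot find a scaling point, as described above; the cluster identifier and values are hypothetical:

    import boto3

    rds = boto3.client("rds")

    # Ask Aurora Serverless (v1) to move to 8 ACUs; if a scaling point cannot be
    # found within the timeout, force the change (which may drop connections)
    # instead of rolling it back.
    rds.modify_current_db_cluster_capacity(
        DBClusterIdentifier="demo-serverless",
        Capacity=8,
        SecondsBeforeTimeout=300,
        TimeoutAction="ForceApplyCapacityChange",   # alternative: "RollbackCapacityChange"
    )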

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.

AWS Aurora Global Database vs. DynamoDB Global Tables

Aurora Global Database

  • Aurora Global Database provides a relational database supporting MySQL and PostgreSQL
  • Aurora Global Database consists of one primary AWS Region where the data is mastered, and up to five read-only, secondary AWS Regions.
  • Aurora cluster in the primary AWS Region where the data is mastered performs both read and write operations. The clusters in the secondary Regions enable low-latency reads.
  • Aurora replicates data to the secondary AWS Regions with a typical latency of under a second.
  • Secondary clusters can be scaled independently by adding one or more DB instances (Aurora Replicas) to serve read-only workloads.
  • Aurora Global Database uses dedicated infrastructure to replicate the data, leaving database resources available entirely to serve applications.
  • Applications with a worldwide footprint can use reader instances in the secondary AWS Regions for low-latency reads.
  • Typical cross-region replication takes less than 1 second.
  • In case of a disaster or an outage, one of the clusters in a secondary AWS Region can be promoted to take full read/write workloads in under a minute.
  • However, the process is not automatic. If the primary region becomes unavailable, a secondary region can be manually removed from the Aurora Global Database and promoted to take full reads and writes. The application also needs to be pointed to the newly promoted region (see the sketch below).
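
A minimal boto3 sketch of the setup and manual promotion flow described above; the cluster identifiers, account number, and Regions are hypothetical, the secondary cluster still needs its own DB instances, and the application must be repointed after promotion:

    import boto3

    # Create a global database from an existing primary cluster in us-east-1.
    rds_primary = boto3.client("rds", region_name="us-east-1")
    rds_primary.create_global_cluster(
        GlobalClusterIdentifier="demo-global",
        SourceDBClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:demo-aurora",
    )

    # Attach a read-only secondary cluster in another Region.
    rds_secondary = boto3.client("rds", region_name="eu-west-1")
    rds_secondary.create_db_cluster(
        DBClusterIdentifier="demo-aurora-eu",
        Engine="aurora-mysql",
        GlobalClusterIdentifier="demo-global",
    )

    # Disaster recovery: detach the secondary cluster so it can serve full reads
    # and writes on its own (this detach is the manual "promotion" step).
    rds_secondary.remove_from_global_cluster(
        GlobalClusterIdentifier="demo-global",
        DbClusterIdentifier="arn:aws:rds:eu-west-1:123456789012:cluster:demo-aurora-eu",
    )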

DynamoDB Global Tables

  • DynamoDB Global Tables provide a NoSQL database.
  • DynamoDB Global tables provide a fully managed, multi-Region, and multi-active database that delivers fast, local, read and write performance for massively scaled, global applications.
  • Global tables replicate the DynamoDB tables automatically across the chosen AWS Regions and enable reads and writes on all replicas (see the sketch after this list).
  • DynamoDB global table consists of multiple replica tables (one per AWS Region). Every replica has the same table name and the same primary key schema. When an application writes data to a replica table in one Region, DynamoDB propagates the write to the other replica tables in the other AWS Regions automatically.
  • Global tables enable the read and write of data locally providing single-digit-millisecond latency for the globally distributed application at any scale. It provides asynchronous replication with approximately 1-second replication latency for tables between two or more Regions.
  • DynamoDB Global tables are designed for 99.999% availability.
  • DynamoDB Global Tables enable applications to stay highly available even in the unlikely event of isolation or degradation of an entire Region; the application can redirect to a different Region and perform reads and writes against a different replica table.
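
A minimal boto3 sketch of adding a replica Region to an existing table (the version 2019.11.21 style of global tables); the table name and Regions are hypothetical:

    import boto3

    ddb = boto3.client("dynamodb", region_name="us-east-1")

    # Turn the existing "orders" table into a global table by creating a replica
    # in another Region; DynamoDB then propagates writes automatically. The table
    # typically needs DynamoDB Streams (new and old images) enabled beforehand.
    ddb.update_table(
        TableName="orders",
        ReplicaUpdates=[
            {"Create": {"RegionName": "eu-west-1"}},
        ],
    )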

AWS Certification Exam Practice Questions

  1. A company needs to implement a relational database with a multi-region disaster recovery Recovery Point Objective (RPO) of 1 second and a Recovery Time Objective (RTO) of 1 minute. Which AWS solution can achieve this?
    1. Amazon Aurora Global Database
    2. Amazon DynamoDB global tables
    3. Amazon RDS for MySQL with Multi-AZ enabled
    4. Amazon RDS for MySQL with a cross-Region snapshot copy

AWS RDS Aurora

  • AWS RDS Aurora is a relational database engine that combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases.
  • is a fully managed, MySQL- and PostgreSQL-compatible, relational database engine i.e. applications developed with MySQL can switch to Aurora with little or no changes.
  • delivers up to 5x the performance of MySQL and up to 3x the performance of PostgreSQL without requiring any changes to most MySQL applications
  • is fully managed as RDS manages the databases, handling time-consuming tasks such as provisioning, patching, backup, recovery, failure detection, and repair.
  • can scale storage automatically, based on the database usage, from 10GB to 128TiB in 10GB increments with no impact on database performance

Aurora DB Clusters

  • Aurora DB cluster consists of one or more DB instances and a cluster volume that manages the data for those DB instances.
  • A cluster volume is a virtual database storage volume that spans multiple AZs, with each AZ having a copy of the DB cluster data
  • Two types of DB instances make up an Aurora DB cluster:
    • Primary DB instance
      • Supports read and write operations, and performs all data modifications to the cluster volume.
      • Each DB cluster has one primary DB instance.
    • Aurora Replica
      • Connects to the same storage volume as the primary DB instance and supports only read operations.
      • Each DB cluster can have up to 15 Aurora Replicas in addition to the primary DB instance (see the sketch after this list).
      • Provides high availability by locating Replicas in separate AZs
      • Aurora automatically fails over to a Replica in case the primary DB instance becomes unavailable.
      • Failover priority for Replicas can be specified.
      • Replicas can also offload read workloads from the primary DB instance
  • For Aurora multi-master clusters
    • all DB instances have read/write capability, with no difference between primary and replica.
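
A minimal boto3 sketch of the cluster layout described above: one cluster volume plus a primary (writer) DB instance and one Aurora Replica in a different AZ; all identifiers, the instance class, and the AZs are hypothetical:

    import boto3

    rds = boto3.client("rds")

    # The cluster owns the shared storage volume.
    rds.create_db_cluster(
        DBClusterIdentifier="demo-aurora",
        Engine="aurora-mysql",
        MasterUsername="admin",
        MasterUserPassword="change-me-please",
        StorageEncrypted=True,
    )

    # Primary (writer) instance.
    rds.create_db_instance(
        DBInstanceIdentifier="demo-aurora-writer",
        DBClusterIdentifier="demo-aurora",
        DBInstanceClass="db.r6g.large",
        Engine="aurora-mysql",
        AvailabilityZone="us-east-1a",
    )

    # Aurora Replica (reader) in another AZ; it shares the cluster volume.
    rds.create_db_instance(
        DBInstanceIdentifier="demo-aurora-reader-1",
        DBClusterIdentifier="demo-aurora",
        DBInstanceClass="db.r6g.large",
        Engine="aurora-mysql",
        AvailabilityZone="us-east-1b",
    )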

Aurora Connection Endpoints

  • Aurora involves a cluster of DB instances instead of a single instance
  • An endpoint refers to an intermediate handler with the hostname and port used to connect to the cluster
  • Aurora uses the endpoint mechanism to abstract these connections

Cluster endpoint

  • Cluster endpoint (or writer endpoint) for a DB cluster connects to the current primary DB instance for that DB cluster.
  • Cluster endpoint is the only one that can perform write operations such as DDL statements as well as read operations
  • Each DB cluster has one cluster endpoint and one primary DB instance
  • Cluster endpoint provides failover support for read/write connections to the DB cluster. If a DB cluster’s current primary DB instance fails, Aurora automatically fails over to a new primary DB instance.
  • During a failover, the DB cluster continues to serve connection requests to the cluster endpoint from the new primary DB instance, with minimal interruption of service.

Reader endpoint

  • Reader endpoint for a DB cluster provides load-balancing support for read-only connections to the DB cluster.
  • Use the reader endpoint for read operations, such as queries.
  • Reader endpoint reduces the overhead on the primary instance by processing the statements on the read-only Replicas.
  • Each DB cluster has one reader endpoint.
  • If the cluster contains one or more Replicas, the reader endpoint load balances each connection request among the Replicas (the sketch below shows how to look up the cluster and reader endpoints).
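
A minimal boto3 sketch of looking up the cluster (writer) and reader endpoints for a hypothetical cluster, which an application would then use for read/write and read-only connections respectively:

    import boto3

    rds = boto3.client("rds")

    cluster = rds.describe_db_clusters(DBClusterIdentifier="demo-aurora")["DBClusters"][0]

    print("writer endpoint:", cluster["Endpoint"])         # routes to the current primary
    print("reader endpoint:", cluster["ReaderEndpoint"])   # load balances across Aurora Replicas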

Custom endpoint

  • Custom endpoint for a DB cluster represents a set of DB instances that you choose.
  • Aurora performs load balancing and chooses one of the instances in the group to handle the connection.
  • An Aurora DB cluster has no custom endpoints until one is created and up to five custom endpoints can be created for each provisioned cluster.
  • Aurora Serverless clusters do not support custom endpoints.

Instance endpoint

  • An instance endpoint connects to a specific DB instance within a cluster and provides direct control over connections to the DB cluster.
  • Each DB instance in a DB cluster has its own unique instance endpoint. So there is one instance endpoint for the current primary DB instance of the DB cluster, and there is one instance endpoint for each of the Replicas in the DB cluster.

High Availability and Replication

  • Aurora is designed to offer greater than 99.99% availability
  • provides data durability and reliability
    • by replicating the database volume six ways across three Availability Zones in a single region
    • backing up the data continuously to  S3.
  • transparently recovers from physical storage failures; instance failover typically takes less than 30 seconds.
  • automatically fails over to a new primary DB instance, if the primary DB instance fails, by either promoting an existing Replica to a new primary DB instance or creating a new primary DB instance
  • automatically divides the database volume into 10GB segments spread across many disks. Each 10GB chunk of the database volume is replicated six ways, across three Availability Zones
  • is designed to transparently handle
    • the loss of up to two copies of data without affecting database write availability and
    • up to three copies without affecting read availability.
  • provides self-healing storage. Data blocks and disks are continuously scanned for errors and repaired automatically.
  • Replicas share the same underlying volume as the primary instance. Updates made by the primary are visible to all Replicas.
  • As Replicas share the same data volume as the primary instance, there is virtually no replication lag.
  • Any Replica can be promoted to become primary without any data loss and therefore can be used for enhancing fault tolerance in the event of a primary DB Instance failure.
  • To increase database availability, 1 to 15 replicas can be created in any of 3 AZs, and RDS will automatically include them in failover primary selection in the event of a database outage.

Aurora Failovers

  • Aurora automatically fails over, if the primary instance in a DB cluster fails, in the following order:
    • If Aurora Read Replicas are available, promote an existing Read Replica to the new primary instance.
    • If no Read Replicas are available, then create a new primary instance.
  • If there are multiple Aurora Read Replicas, the criteria for promotion is based on the priority that is defined for the Read Replicas.
    • Priority numbers can vary from 0 to 15 and can be modified at any time (see the sketch after this list).
    • Aurora promotes the Read Replica with the highest priority to the new primary instance.
    • For Read Replicas with the same priority, Aurora promotes the replica that is largest in size, or chooses one arbitrarily.
  • During the failover, AWS modifies the cluster endpoint to point to the newly created/promoted DB instance.
  • Applications experience a minimal interruption of service if they connect using the cluster endpoint and implement connection retry logic.
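
A minimal boto3 sketch of the failover priority mechanics above: set a replica's promotion tier and then trigger a manual failover; identifiers are hypothetical:

    import boto3

    rds = boto3.client("rds")

    # Promotion tier 0 is the highest failover priority (tiers range from 0 to 15).
    rds.modify_db_instance(
        DBInstanceIdentifier="demo-aurora-reader-1",
        PromotionTier=0,
        ApplyImmediately=True,
    )

    # Force a failover of the cluster; Aurora promotes the chosen replica and
    # repoints the cluster (writer) endpoint to it.
    rds.failover_db_cluster(
        DBClusterIdentifier="demo-aurora",
        TargetDBInstanceIdentifier="demo-aurora-reader-1",
    )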

Security

  • Aurora uses SSL (AES-256) to secure the connection between the database instance and the application
  • allows database encryption using keys managed through AWS Key Management Service (KMS).
  • Encryption and decryption are handled seamlessly.
  • With encryption, data stored at rest in the underlying storage is encrypted, as are its automated backups, snapshots, and replicas in the same cluster.
  • Encryption of existing unencrypted Aurora instances is not supported. Create a new encrypted Aurora instance and migrate the data

Backup and Restore

  • Automated backups are always enabled on Aurora DB Instances.
  • Backups do not impact database performance.
  • Aurora also allows the creation of manual snapshots.
  • Aurora automatically maintains 6 copies of the data across 3 AZs and will automatically attempt to recover the database in a healthy AZ with no data loss.
  • If in any case, the data is unavailable within Aurora storage,
    • DB Snapshot can be restored or
    • the point-in-time restore operation can be performed to a new instance. The latest restorable time for a point-in-time restore operation can be up to 5 minutes in the past (see the sketch after this list).
  • Restoring a snapshot creates a new Aurora DB instance
  • Deleting the database deletes all the automated backups (with an option to create a final snapshot), but would not remove the manual snapshots.
  • Snapshots (including encrypted ones) can be shared with other AWS accounts
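
A minimal boto3 sketch of a point-in-time restore into a new cluster, as described above; identifiers and the instance class are hypothetical, and since the restore creates only the cluster, a DB instance is added afterwards:

    import boto3

    rds = boto3.client("rds")

    # Restore to the latest restorable time (typically within the last 5 minutes).
    rds.restore_db_cluster_to_point_in_time(
        DBClusterIdentifier="demo-aurora-restored",
        SourceDBClusterIdentifier="demo-aurora",
        UseLatestRestorableTime=True,
    )

    # The restored cluster has no instances yet; add a writer to make it usable.
    rds.create_db_instance(
        DBInstanceIdentifier="demo-aurora-restored-writer",
        DBClusterIdentifier="demo-aurora-restored",
        DBInstanceClass="db.r6g.large",
        Engine="aurora-mysql",
    )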

Aurora Parallel Query

  • Aurora Parallel Query refers to the ability to push down and distribute the computational load of a single query across thousands of CPUs in Aurora’s storage layer.
  • Without Parallel Query, a query issued against an Aurora database would be executed wholly within one instance of the database cluster; this would be similar to how most databases operate.
  • Parallel Query is a good fit for analytical workloads requiring fresh data and good query performance, even on large tables.
  • Parallel Query provides the following benefits
    • Faster performance: Parallel Query can speed up analytical queries by up to 2 orders of magnitude.
    • Operational simplicity and data freshness: you can issue a query directly over the current transactional data in your Aurora cluster.
    • Transactional and analytical workloads on the same database: Parallel Query allows Aurora to maintain high transaction throughput alongside concurrent analytical queries.
  • Parallel Query can be enabled and disabled dynamically at both the global and session level using the aurora_pq parameter (see the sketch below).
  • Parallel Query is available for the MySQL 5.6-compatible version of Aurora
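
A minimal sketch of the session-level toggle mentioned above, assuming a MySQL-compatible Aurora cluster and the pymysql driver; the endpoint, credentials, schema, and query are hypothetical, and aurora_pq is the parameter named in the text:

    import pymysql

    conn = pymysql.connect(
        host="demo-aurora.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com",  # hypothetical endpoint
        user="admin",
        password="change-me-please",
        database="sales",
    )
    with conn.cursor() as cur:
        # Toggle Parallel Query for this session only; the cluster-wide default
        # stays whatever the DB cluster parameter group specifies.
        cur.execute("SET SESSION aurora_pq = ON")
        # An analytical scan over a large table is the kind of query that can be
        # pushed down to the Aurora storage layer when Parallel Query is active.
        cur.execute("SELECT COUNT(*), AVG(amount) FROM orders WHERE order_date >= '2020-01-01'")
        print(cur.fetchone())
    conn.close()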

Aurora Scaling

  • Aurora storage scaling is built-in and will automatically grow, up to 64 TB (soft limit), in 10GB increments with no impact on database performance.
  • There is no need to provision storage in advance
  • Compute Scaling
    • Instance scaling
      • Vertical scaling of the master instance. Memory and CPU resources are modified by changing the DB Instance class.
      • scaling a read replica and promoting it to master using forced failover, which provides minimal downtime
    • Read scaling
      • provides horizontal scaling with up to 15 read replicas
  • Auto Scaling
    • scaling policies add read replicas, with a min and max replica count, based on CloudWatch CPU or connection metric conditions (as sketched below)
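
A minimal boto3 sketch of an Aurora Auto Scaling policy as described above, using Application Auto Scaling to keep average reader CPU near a target by adding or removing Aurora Replicas; the cluster name, replica counts, and target value are hypothetical:

    import boto3

    autoscaling = boto3.client("application-autoscaling")

    # Register the cluster's replica count as a scalable target.
    autoscaling.register_scalable_target(
        ServiceNamespace="rds",
        ResourceId="cluster:demo-aurora",
        ScalableDimension="rds:cluster:ReadReplicaCount",
        MinCapacity=1,
        MaxCapacity=5,
    )

    # Target-tracking policy on the average CPU of the Aurora Replicas.
    autoscaling.put_scaling_policy(
        PolicyName="demo-aurora-reader-cpu",
        ServiceNamespace="rds",
        ResourceId="cluster:demo-aurora",
        ScalableDimension="rds:cluster:ReadReplicaCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "RDSReaderAverageCPUUtilization",
            },
            "TargetValue": 60.0,
        },
    )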

Aurora Backtrack

  • Backtracking “rewinds” the DB cluster to the specified time
  • Backtracking performs in-place restore and does not create a new instance. There is minimal downtime associated with it.
  • Backtracking is available for Aurora with MySQL compatibility
  • Backtracking is not a replacement for backing up the DB cluster so that you can restore it to a point in time.
  • With backtracking, there is a target backtrack window and an actual backtrack window:
    • Target backtrack window is the amount of time you WANT to be able to backtrack the DB cluster, e.g. 24 hours. The limit for a backtrack window is 72 hours (see the sketch after this list).
    • Actual backtrack window is the actual amount of time you CAN backtrack the DB cluster, which can be smaller than the target backtrack window. The actual backtrack window is based on the workload and the storage available for storing information about database changes, called change records
  • DB cluster with backtracking enabled generates change records.
  • Aurora retains change records for the target backtrack window and charges an hourly rate for storing them.
  • Both the target backtrack window and the workload on the DB cluster determine the number of change records stored.
  • Workload is the number of changes made to the DB cluster in a given amount of time. If the workload is heavy, you store more change records in the backtrack window than you do if your workload is light.
  • Backtracking affects the entire DB cluster and can’t selectively backtrack a single table or a single data update.
  • Backtracking provides the following advantages over traditional backup and restore:
    • Undo mistakes – revert destructive action, such as a DELETE without a WHERE clause
    • Backtrack DB cluster quickly – Restoring a DB cluster to a point in time launches a new DB cluster and restores it from backup data or a DB cluster snapshot, which can take hours. Backtracking a DB cluster doesn’t require a new DB cluster and rewinds the DB cluster in minutes.
    • Explore earlier data changes – repeatedly backtrack a DB cluster back and forth in time to help determine when a particular data change occurred
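
A minimal boto3 sketch of enabling a backtrack window at cluster creation and later rewinding the cluster, as described above; identifiers, credentials, and times are hypothetical:

    import boto3
    from datetime import datetime, timedelta, timezone

    rds = boto3.client("rds")

    # Enable backtracking on an Aurora MySQL cluster at creation time; the target
    # window is specified in seconds (24 hours here, with a 72-hour limit).
    rds.create_db_cluster(
        DBClusterIdentifier="demo-aurora-bt",
        Engine="aurora-mysql",
        MasterUsername="admin",
        MasterUserPassword="change-me-please",
        BacktrackWindow=24 * 60 * 60,
    )

    # Later: rewind the whole cluster 30 minutes, e.g. to undo a DELETE without a WHERE clause.
    rds.backtrack_db_cluster(
        DBClusterIdentifier="demo-aurora-bt",
        BacktrackTo=datetime.now(timezone.utc) - timedelta(minutes=30),
        UseEarliestTimeOnPointInTimeUnavailable=True,
    )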

Aurora Serverless

  • Amazon Aurora Serverless is an on-demand, autoscaling configuration for the MySQL-compatible and PostgreSQL-compatible editions of Aurora.
  • An Aurora Serverless DB cluster automatically starts up, shuts down, and scales capacity up or down based on the application’s needs.
  • enables running database in the cloud without managing any database instances.
  • provides a relatively simple, cost-effective option for infrequent, intermittent, or unpredictable workloads.
  • use cases include
    • Infrequently-Used Applications
    • New Applications – where the needs and instance size are yet to be determined.
    • Variable and Unpredictable Workloads – scale as per the needs
    • Development and Test Databases
    • Multi-tenant Applications
  • DB cluster does not have a public IP address and can be accessed only from within a VPC based on the VPC service.

Aurora Global Database

  • Aurora global database consists of one primary AWS Region where the data is mastered, and up to five read-only, secondary AWS Regions.
  • Aurora cluster in the primary AWS Region where your data is mastered performs both read and write operations. The clusters in the secondary Regions enable low-latency reads.
  • Aurora replicates data to the secondary AWS Regions with a typical latency of under a second.
  • Secondary clusters can be scaled independently by adding one or more DB instances (Aurora Replicas) to serve read-only workloads.
  • Aurora global database uses dedicated infrastructure to replicate the data, leaving database resources available entirely to serve applications.
  • Applications with a worldwide footprint can use reader instances in the secondary AWS Regions for low-latency reads.
  • In case of a disaster or an outage, one of the clusters in a secondary AWS Region can be promoted to take full read/write workloads in under a minute.

Aurora Clone

  • Aurora cloning feature helps create Aurora cluster duplicates quickly and cost-effectively
  • Creating a clone is faster and more space-efficient than physically copying the data using a different technique such as restoring a snapshot.
  • Aurora cloning uses a copy-on-write protocol (see the sketch after this list).
  • Aurora clone requires only minimal additional space when first created. In the beginning, Aurora maintains a single copy of the data, which is used by both the original and new DB clusters.
  • Aurora allocates new storage only when data changes, either on the source cluster or the cloned cluster.
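
A minimal boto3 sketch of creating a clone through the copy-on-write restore type mentioned above; identifiers are hypothetical, and as with any restore, the clone cluster still needs its own DB instance before it can be used:

    import boto3

    rds = boto3.client("rds")

    # Clone the cluster; storage is shared copy-on-write, so the clone is created
    # quickly and initially consumes almost no additional space.
    rds.restore_db_cluster_to_point_in_time(
        DBClusterIdentifier="demo-aurora-clone",
        SourceDBClusterIdentifier="demo-aurora",
        RestoreType="copy-on-write",
        UseLatestRestorableTime=True,
    )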

AWS Certification Exam Practice Questions

  1. A company wants to use a MySQL-compatible relational database with greater performance. Which AWS service can be used?
    1. Aurora
    2. RDS
    3. SimpleDB
    4. DynamoDB
  2. An application requires a highly available relational database with an initial storage capacity of 8 TB. The database will grow by 8 GB every day. To support expected traffic, at least eight read replicas will be required to handle database reads. Which option will meet these requirements?
    1. DynamoDB
    2. Amazon S3
    3. Amazon Aurora
    4. Amazon Redshift
  3. A company is migrating its on-premises 10TB MySQL database to AWS. As a compliance requirement, the company wants the data replicated across three Availability Zones. Which Amazon RDS engine meets this business requirement?
    1. Use Multi-AZ RDS
    2. Use RDS
    3. Use Aurora
    4. Use DynamoDB
