AWS RDS DB Snapshot, Backup & Restore

RDS Backup, Restore, and Snapshots

  • RDS creates a storage volume snapshot of the DB instance, backing up the entire DB instance and not just individual databases.
  • RDS provides two methods, automated and manual, for backing up your DB instances:

Automated backups

  • Backups of the DB instance are automatically created and retained
  • Automated backups are enabled by default for a new DB instance.
  • Automated backups occur during a daily, user-configurable period of time known as the preferred backup window.
    • If a preferred backup window is not specified when a DB instance is created, RDS assigns a default 30-minute backup window, selected at random from an 8-hour block of time per region.
    • Changes to the backup window take effect immediately.
    • Backup window cannot overlap with the weekly maintenance window for the DB instance.
  • Backups created during the backup window are retained for a user-configurable number of days, known as the backup retention period
    • If the backup retention period is not set, RDS defaults the retention period to one day if the instance is created using the RDS API or the AWS CLI, or to seven days if created using the AWS Console
    • Backup retention period can be modified; valid values are 0 (for no backup retention) to a maximum of 35 days.
  • Manual snapshot limits (50 per region) do not apply to automated backups
  • If the backup requires more time than allotted to the backup window, the backup will continue to completion.
  • An immediate outage occurs if the backup retention period is changed
    • from 0 to a non-zero value, as the first backup occurs immediately, or
    • from a non-zero value to 0, as it turns off automated backups and deletes all existing automated backups for the instance.
  • RDS uses the periodic data backups in conjunction with the transaction logs to enable restoration of the DB Instance to any second during the retention period, up to the LatestRestorableTime (typically up to the last few minutes).
  • During the backup window,
    • for Single AZ instance, storage I/O may be briefly suspended while the backup process initializes (typically under a few seconds) and a brief period of elevated latency might be experienced.
    • for Multi-AZ DB deployments, there is No I/O suspension since the backup is taken from the standby instance
  • First backup is a full backup, while the others are incremental.
  • Automated DB backups are deleted when
    • the retention period expires
    • the automated DB backups for a DB instance are disabled
    • the DB instance is deleted
  • When a DB instance is deleted,
    • a final DB snapshot can be created upon deletion, which can be used to restore the deleted DB instance at a later date.
    • RDS retains the final user-created DB snapshot along with all other manually created DB snapshots
    • all automated backups are deleted and cannot be recovered
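
The retention and window settings above can be sketched as a small helper that validates and builds the request parameters; a minimal sketch, assuming boto3's rds_client.modify_db_instance as the actual call, with a hypothetical instance identifier and window value:

```python
# Sketch: build parameters for enabling/adjusting automated backups.
# The real call would be boto3: rds_client.modify_db_instance(**params).
# "mydb" and the window value are hypothetical examples.
def automated_backup_params(instance_id, retention_days, backup_window):
    # Valid retention: 0 (disables automated backups) up to 35 days.
    if not 0 <= retention_days <= 35:
        raise ValueError("BackupRetentionPeriod must be between 0 and 35")
    return {
        "DBInstanceIdentifier": instance_id,
        "BackupRetentionPeriod": retention_days,
        # hh24:mi-hh24:mi in UTC, at least 30 minutes, and must not
        # overlap the weekly maintenance window
        "PreferredBackupWindow": backup_window,
        # window/retention changes take effect immediately anyway
        "ApplyImmediately": True,
    }

params = automated_backup_params("mydb", 7, "03:00-03:30")
```

Note that setting the retention to 0 here would disable automated backups and delete the existing ones, causing the immediate outage described above.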

Point-In-Time Recovery

  • In addition to the daily automated backup, RDS archives database change logs. This enables recovery of the database to any point in time during the backup retention period, up to the last five minutes of database usage.
  • Disabling automated backups also disables point-in-time recovery
  • RDS stores multiple copies of your data, but for Single-AZ DB instances these copies are stored in a single availability zone.
  • If for any reason a Single-AZ DB instance becomes unusable, point-in-time recovery can be used to launch a new DB instance with the latest restorable data
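
A point-in-time restore can be sketched as follows; a minimal parameter-building sketch, assuming boto3's rds_client.restore_db_instance_to_point_in_time as the actual call, with hypothetical identifiers:

```python
# Sketch: a point-in-time restore always creates a NEW DB instance with a
# new endpoint. The real call would be boto3:
# rds_client.restore_db_instance_to_point_in_time(**params).
# "mydb" and "mydb-restored" are hypothetical identifiers.
def pitr_params(source_id, target_id, restore_time=None):
    params = {
        "SourceDBInstanceIdentifier": source_id,
        "TargetDBInstanceIdentifier": target_id,  # must differ from the source
    }
    if restore_time is None:
        # restore to the LatestRestorableTime (typically the last few minutes)
        params["UseLatestRestorableTime"] = True
    else:
        # any second during the backup retention period
        params["RestoreTime"] = restore_time
    return params

p = pitr_params("mydb", "mydb-restored")
```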

DB Snapshots (User Initiated – Manual)

  • DB snapshots are manual, user-initiated backups that enable backing up a DB instance to a known state and restoring to that specific state at any time
  • RDS keeps all manual DB snapshots until explicitly deleted

DB Snapshots Creation

  • DB snapshot is a user-initiated storage volume snapshot of the DB instance, backing up the entire DB instance and not just individual databases.
  • DB snapshots enable backing up of the DB instance in a known state as needed, and can then be restored to that specific state at any time.
  • DB snapshots are kept until explicitly deleted
  • Creating a DB snapshot on a Single-AZ DB instance results in a brief I/O suspension that typically lasts no more than a few minutes.
  • Multi-AZ DB instances are not affected by this I/O suspension since the backup is taken on the standby instance

DB Snapshot Restore

  • DB instance can be restored to any specific time during the backup retention period, creating a new DB instance.
  • DB restore creates a New DB instance with a different endpoint
  • RDS uses the periodic data backups in conjunction with the transaction logs to enable restoration of the DB Instance to any second during the retention period, up to the LatestRestorableTime (typically up to the last few minutes).
  • Option group associated with the DB snapshot is associated with the restored DB instance once it is created. However, option groups are tied to the VPC, so this applies only when the instance is restored in the same VPC as the DB snapshot.
  • Default DB parameter and security groups are associated with the restored instance. After the restoration is complete, any custom DB parameter group or security groups used by the source instance should be associated explicitly.
  • A DB instance can be restored with a different storage type than the source DB snapshot. In this case, the restoration process will be slower because of the additional work required to migrate the data to the new storage type, e.g., from General Purpose (gp2) to Provisioned IOPS.
  • A DB instance can be restored with a different edition of the DB engine only if the DB snapshot has the required storage allocated for the new edition. For example, to change from SQL Server Web Edition to SQL Server Standard Edition, the DB snapshot must have been created from a SQL Server DB instance that had at least 200 GB of allocated storage, which is the minimum allocated storage for SQL Server Standard Edition
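
The restore-with-a-different-storage-type case above can be sketched as a parameter builder; a minimal sketch, assuming boto3's rds_client.restore_db_instance_from_db_snapshot as the actual call, with hypothetical identifiers:

```python
# Sketch: restore a NEW instance (new endpoint) from a manual snapshot,
# optionally with a different storage type; migrating the data to the new
# storage type makes the restore slower. The real call would be boto3:
# rds_client.restore_db_instance_from_db_snapshot(**params).
def restore_from_snapshot_params(snapshot_id, new_instance_id,
                                 storage_type=None):
    params = {
        "DBSnapshotIdentifier": snapshot_id,
        "DBInstanceIdentifier": new_instance_id,  # new DB instance
    }
    if storage_type is not None:
        # e.g. restore a gp2-based snapshot onto Provisioned IOPS ("io1")
        params["StorageType"] = storage_type
    return params

r = restore_from_snapshot_params("mydb-snap-2024", "mydb-restored", "io1")
```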

DB Snapshot Copy

  • RDS supports two types of DB snapshot copying.
    • Copy an automated DB snapshot to create a manual DB snapshot in the same AWS region. Manual DB snapshots are not deleted automatically and can be kept indefinitely.
    • Copy either an automated or manual DB snapshot from one region to another region. By copying the DB snapshot to another region, a manual DB snapshot is created that is retained in that region
  • Automated backups cannot be shared. They need to be copied to a manual snapshot, and the manual snapshot can be shared.
  • Manual DB snapshots can be shared with other AWS accounts, and DB snapshots shared by other AWS accounts can be copied
  • Snapshot Copy Encryption
    • DB snapshot that has been encrypted using an AWS Key Management Service (AWS KMS) encryption key can be copied
    • Copying an encrypted DB snapshot, results in an encrypted copy of the DB snapshot
    • When copying, DB snapshot can either be encrypted with the same KMS encryption key as the original DB snapshot, or a different KMS encryption key to encrypt the copy of the DB snapshot.
    • An unencrypted DB snapshot can be copied to an encrypted snapshot, a quick way to add encryption to a previously unencrypted DB instance.
    • Encrypted snapshot can be restored only to an encrypted DB instance
    • If a KMS encryption key is specified when restoring from an unencrypted DB cluster snapshot, the restored DB cluster is encrypted using the specified KMS encryption key
    • Copying an encrypted snapshot shared from another AWS account, requires access to the KMS encryption key that was used to encrypt the DB snapshot.
    • Because KMS encryption keys are specific to the region that they are created in, encrypted snapshot cannot be copied to another region
    • NOTE – AWS now allows copying encrypted DB snapshots between accounts and across multiple regions as seamlessly as unencrypted snapshots. Refer to the blog post
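
The encrypted-copy case above can be sketched as a parameter builder; a minimal sketch, assuming boto3's rds_client.copy_db_snapshot as the actual call, with hypothetical ARNs and key alias:

```python
# Sketch: copy a snapshot (e.g. automated -> manual), optionally choosing
# the KMS key for the copy. The real call would be boto3:
# rds_client.copy_db_snapshot(**params). ARNs/aliases are hypothetical.
def copy_snapshot_params(source_arn, target_id, kms_key_id=None):
    params = {
        "SourceDBSnapshotIdentifier": source_arn,
        "TargetDBSnapshotIdentifier": target_id,
    }
    if kms_key_id is not None:
        # same key as the source, a different key, or a key used to
        # encrypt a previously unencrypted snapshot
        params["KmsKeyId"] = kms_key_id
    return params

c = copy_snapshot_params(
    "arn:aws:rds:us-east-1:111122223333:snapshot:rds:mydb-2024-01-01",
    "mydb-manual-copy",
    kms_key_id="alias/my-rds-key",
)
```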

DB Snapshot Sharing

  • Manual DB snapshot or DB cluster snapshot can be shared with up to 20 other AWS accounts.
  • AWS accounts that a manual snapshot is shared with can copy the snapshot, or restore a DB instance or DB cluster from it.
  • Manual snapshot can also be shared as public, which makes the snapshot available to all AWS accounts. Care should be taken when sharing a snapshot as public so that no private information is included
  • Shared snapshot can be copied to another region.
  • However, following limitations apply when sharing manual snapshots with other AWS accounts:
    • When a DB instance or DB cluster is restored from a shared snapshot using the AWS CLI or RDS API, the Amazon Resource Name (ARN) of the shared snapshot should be specified as the snapshot identifier
    • DB snapshot that uses an option group with permanent or persistent options cannot be shared
    • A permanent option cannot be removed from an option group. Option groups with persistent options cannot be removed from a DB instance once the option group has been assigned to the DB instance.
  • DB snapshots that have been encrypted “at rest” using the AES-256 encryption algorithm can be shared
  • Users can only copy encrypted DB snapshots if they have access to the AWS Key Management Service (AWS KMS) encryption key that was used to encrypt the DB snapshot.
  • AWS KMS encryption keys can be shared with another AWS account by adding the other account to the KMS key policy.
  • However, the KMS key policy must first be updated to add the accounts the snapshot will be shared with, before sharing an encrypted DB snapshot
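
Sharing a manual snapshot can be sketched as follows; a minimal sketch, assuming boto3's rds_client.modify_db_snapshot_attribute as the actual call, with hypothetical identifiers:

```python
# Sketch: share a manual snapshot with specific AWS accounts, or with "all"
# to make it public. The real call would be boto3:
# rds_client.modify_db_snapshot_attribute(**params).
def share_snapshot_params(snapshot_id, account_ids, make_public=False):
    # "all" makes the snapshot available to every AWS account; use with care
    values = ["all"] if make_public else list(account_ids)
    return {
        "DBSnapshotIdentifier": snapshot_id,
        "AttributeName": "restore",  # grants restore permission
        "ValuesToAdd": values,
    }

s = share_snapshot_params("mydb-manual-snap", ["123456789012"])
```

For an encrypted snapshot, the KMS key policy would also need to list the target account, as noted above.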


AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Amazon RDS automated backups and DB Snapshots are currently supported for only the __________ storage engine
    1. InnoDB
    2. MyISAM
  2. Automated backups are enabled by default for a new DB Instance.
    1. TRUE
    2. FALSE
  3. Amazon RDS DB snapshots and automated backups are stored in
    1. Amazon S3
    2. Amazon EBS Volume
    3. Amazon RDS
    4. Amazon EMR
  4. You receive a frantic call from a new DBA who accidentally dropped a table containing all your customers. Which Amazon RDS feature will allow you to reliably restore your database to within 5 minutes of when the mistake was made?
    1. Multi-AZ RDS
    2. RDS snapshots
    3. RDS read replicas
    4. RDS automated backup
  5. Disabling automated backups ______ disable the point-in-time recovery.
    1. if configured to can
    2. will never
    3. will
  6. Changes to the backup window take effect ______.
    1. from the next billing cycle
    2. after 30 minutes
    3. immediately
    4. after 24 hours
  7. You can modify the backup retention period; valid values are 0 (for no backup retention) to a maximum of ___________ days.
    1. 45
    2. 35
    3. 15
    4. 5
  8. Amazon RDS automated backups and DB Snapshots are currently supported for only the ______ storage engine
    1. MyISAM
    2. InnoDB 
  9. What happens to the I/O operations while you take a database snapshot?
    1. I/O operations to the database are suspended for a few minutes while the backup is in progress.
    2. I/O operations to the database are sent to a Replica (if available) for a few minutes while the backup is in progress.
    3. I/O operations will be functioning normally
    4. I/O operations to the database are suspended for an hour while the backup is in progress
  10. True or False: When you perform a restore operation to a point in time or from a DB Snapshot, a new DB Instance is created with a new endpoint.
    1. FALSE
    2. TRUE 
  11. True or False: Manually created DB Snapshots are deleted after the DB Instance is deleted.
    1. TRUE
    2. FALSE
  12. A user is running a MySQL RDS instance. The user will not use the DB for the next 3 months. How can the user save costs?
    1. Pause the RDS activities from CLI until it is required in the future
    2. Stop the RDS instance
    3. Create a snapshot of RDS to launch in the future and terminate the instance now
    4. Change the instance size to micro

AWS Disaster Recovery – Whitepaper – Certification

AWS Disaster Recovery Whitepaper

AWS Disaster Recovery Whitepaper is one of the important whitepapers for both the Associate & Professional AWS Certification exams

Disaster Recovery Overview

  • AWS Disaster Recovery whitepaper highlights AWS services and features that can be leveraged for disaster recovery (DR) processes to significantly minimize the impact on data, system, and overall business operations.
  • It outlines best practices to improve your DR processes, from minimal investments to full-scale availability and fault tolerance, and describes how AWS services can be used to reduce cost and ensure business continuity during a DR event
  • Disaster recovery (DR) is about preparing for and recovering from a disaster. Any event that has a negative impact on a company’s business continuity or finances could be termed a disaster. One of the AWS best practices is to always design your systems for failure

Disaster Recovery Key AWS services

  1. Region
    • AWS services are available in multiple regions around the globe, and the DR site location can be selected as appropriate, in addition to the primary site location
  2. Storage
    • Amazon S3
      • provides a highly durable (99.999999999%) storage infrastructure designed for mission-critical and primary data storage.
      • stores Objects redundantly on multiple devices across multiple facilities within a region
    • Amazon Glacier
      • provides extremely low-cost storage for data archiving and backup.
      • Objects are optimized for infrequent access, for which retrieval times of several (3-5) hours are adequate.
    • Amazon EBS
      • provides the ability to create point-in-time snapshots of data volumes.
      • Snapshots can then be used to create volumes and attached to running instances
    • Amazon Storage Gateway
      • a service that provides seamless and highly secure integration between on-premises IT environment and the storage infrastructure of AWS.
    • AWS Import/Export
      • accelerates moving large amounts of data into and out of AWS by using portable storage devices for transport bypassing the Internet
      • transfers data directly onto and off of storage devices by means of the high-speed internal network of Amazon
  3. Compute
    • Amazon EC2
      • provides resizable compute capacity in the cloud which can be easily created and scaled.
      • EC2 instance creation using Preconfigured AMIs
      • EC2 instances can be launched in multiple AZs, which are engineered to be insulated from failures in other AZs
  4. Networking
    • Amazon Route 53
      • is a highly available and scalable DNS web service
      • includes a number of global load-balancing capabilities that can be effective when dealing with DR scenarios for e.g. DNS endpoint health checks and the ability to failover between multiple endpoints 
    • Elastic IP
      • addresses are static IP addresses designed for dynamic cloud computing.
      • addresses enable masking of instance or Availability Zone failures by programmatically remapping them
    • Elastic Load Balancing (ELB)
      • performs health checks and automatically distributes incoming application traffic across multiple EC2 instances
    • Amazon Virtual Private Cloud (Amazon VPC)
      • allows provisioning of a private, isolated section of the AWS cloud where resources can be launched in a defined virtual network
    • Amazon Direct Connect
      • makes it easy to set up a dedicated network connection from on-premises environment to AWS
  5. Databases
    • RDS, DynamoDB, and Redshift are provided as fully managed RDBMS, NoSQL, and data warehouse solutions that can scale up easily
    • DynamoDB offers cross region replication
    • RDS provides Multi-AZ and Read Replicas, and also the ability to copy snapshots from one region to another
  6. Deployment Orchestration
    • CloudFormation
      • gives developers and systems administrators an easy way to create a collection of related AWS resources and provision them in an orderly and predictable fashion
    • Elastic Beanstalk
      • is an easy-to-use service for deploying and scaling web applications and services
    • OpsWorks
      • is an application management service that makes it easy to deploy and operate applications of all types and sizes.
      • Environment can be defined as a series of layers, and each layer can be configured as a tier of the application.
      • has automatic host replacement, so in the event of an instance failure it will be automatically replaced.
      • can be used in the preparation phase to template the environment, and combined with AWS CloudFormation in the recovery phase.
      • Stacks can be quickly provisioned from the stored configuration to support the defined RTO.

Key factors for Disaster Planning

Disaster Recovery RTO RPO Definition

Recovery Time Objective (RTO) – The time it takes after a disruption to restore a business process to its service level, as defined by the operational level agreement (OLA). For e.g., if the RTO is 1 hour and a disaster occurs at 12:00 p.m. (noon), then the DR process should restore the systems to an acceptable service level within an hour, i.e., by 1:00 p.m.

Recovery Point Objective (RPO) – The acceptable amount of data loss, measured in time, before the disaster occurs. For e.g., if a disaster occurs at 12:00 p.m. (noon) and the RPO is one hour, the system should recover all data that was in the system before 11:00 a.m.
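
The two definitions above can be made concrete with a small worked example using the 12:00 noon disaster from the text:

```python
# Worked example of the RTO/RPO definitions above.
from datetime import datetime, timedelta

disaster = datetime(2024, 1, 1, 12, 0)  # disaster at 12:00 p.m. (noon)
rto = timedelta(hours=1)  # service must be restored within 1 hour
rpo = timedelta(hours=1)  # at most the last hour of data may be lost

restore_deadline = disaster + rto        # systems back by 1:00 p.m.
oldest_acceptable_data = disaster - rpo  # all data up to 11:00 a.m. recovered
```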

Disaster Recovery Scenarios

  • Disaster Recovery scenarios can be implemented with the primary infrastructure running in your data center in conjunction with AWS
  • Disaster Recovery scenarios still apply if the primary site is running in AWS, using AWS multi-region features.
  • Combinations and variations of the options below are always possible.

Disaster Recovery Scenarios Options

  1. Backup & Restore (Data backed up and restored)
  2. Pilot Light (Only Minimal critical functionalities)
  3. Warm Standby (Fully Functional Scaled down version)
  4. Multi-Site (Active-Active)

For the DR scenario options, RTO and RPO reduce as cost increases, moving from the Backup & Restore option (left) to the Multi-Site option (right)

Backup & Restore

AWS can be used to back up the data in a cost-effective, durable, and secure manner, as well as to recover the data quickly and reliably.

Backup phase

In most traditional environments, data is backed up to tape and sent off-site regularly, taking a longer time to restore the system in the event of a disruption or disaster.

Backup Restore - Backup Phase

  1. Amazon S3 can be used to backup the data and perform a quick restore and is also available from any location
  2. AWS Import/Export can be used to transfer large data sets by shipping storage devices directly to AWS bypassing the Internet
  3. Amazon Glacier can be used for archiving data, where retrieval time of several hours are adequate and acceptable
  4. AWS Storage Gateway enables snapshots (used to create EBS volumes) of the on-premises data volumes to be transparently copied into S3 for backup. It can be used either as a backup solution (Gateway-stored volumes) or as a primary data store (Gateway-cached volumes)
  5. AWS Direct Connect can be used to transfer data directly from on-premises to AWS consistently and at high speed
  6. Snapshots of Amazon EBS volumes, Amazon RDS databases, and Amazon Redshift data warehouses can be stored in Amazon S3

Restore phase

The backed-up data can then be used to quickly restore and create compute and database instances

Backup Restore - Recovery Phase

Key steps for Backup and Restore:
1. Select an appropriate tool or method to back up the data into AWS.
2. Ensure an appropriate retention policy for this data.
3. Ensure appropriate security measures are in place for this data, including encryption and access policies.
4. Regularly test the recovery of this data and the restoration of the system.

Pilot Light

In a Pilot Light Disaster Recovery scenario, a minimal version of the environment is always running in the cloud, hosting the most critical functionality of the application, e.g., databases

In this approach:

  1. Maintain a pilot light by configuring and running the most critical core elements of the system in AWS, e.g., databases where the data needs to be replicated and kept up to date.
  2. During recovery, a full-scale production environment, for e.g. application and web servers, can be rapidly provisioned (using preconfigured AMIs and EBS volume snapshots) around the critical core
  3. For networking, either an ELB to distribute traffic to multiple instances (with DNS pointing to the load balancer) or preallocated Elastic IP addresses associated with instances can be used

Preparation phase steps:
  1. Set up Amazon EC2 instances or RDS instances to replicate or mirror critical data
  2. Ensure that all supporting custom software packages are available in AWS.
  3. Create and maintain AMIs of key servers where fast recovery is required.
  4. Regularly run these servers, test them, and apply any software updates and configuration changes.
  5. Consider automating the provisioning of AWS resources.
Pilot Light Scenario - Preparation Phase

Recovery Phase steps :

  1. Start the application EC2 instances from your custom AMIs.
  2. Resize existing database/data store instances to process the increased traffic, e.g., if using RDS, it can be easily scaled vertically, while EC2 instances can be easily scaled horizontally
  3. Add additional database/data store instances to give the DR site resilience in the data tier, e.g., turn on Multi-AZ for RDS to improve resilience.
  4. Change DNS to point at the Amazon EC2 servers.
  5. Install and configure any non-AMI based systems, ideally in an automated way.

Pilot Light Scenario - Recovery Phase

Warm Standby

  • In a Warm Standby DR scenario, a scaled-down version of a fully functional environment, identical to the business-critical systems, is always running in the cloud
  • This setup can be used for testing, quality assurance, or internal use.
  • In case of a disaster, the system can be easily scaled up or out to handle the production load.

Preparation phase steps :

  1. Set up Amazon EC2 instances to replicate or mirror data.
  2. Create and maintain AMIs for faster provisioning
  3. Run the application using a minimal footprint of EC2 instances or AWS infrastructure.
  4. Patch and update software and configuration files in line with your live environment.

Warm Standby - Preparation Phase

Recovery phase steps:

  1. Increase the size of the Amazon EC2 fleets in service with the load balancer (horizontal scaling).
  2. Start applications on larger Amazon EC2 instance types as needed (vertical scaling).
  3. Either manually change the DNS records, or use Route 53 automated health checks to route all the traffic to the AWS environment.
  4. Consider using Auto Scaling to right-size the fleet or accommodate the increased load.
  5. Add resilience or scale up the database to guard against the DR site going down

Warm Standby - Recovery Phase

Multi-Site

  • Multi-Site is an active-active DR approach, wherein an identical solution runs on AWS in parallel with the on-site infrastructure.
  • Traffic can be distributed to both infrastructures as needed by using a DNS weighted routing approach.
  • In case of a disaster, the DNS can be tuned to send all the traffic to the AWS environment, and the AWS infrastructure scaled accordingly.

Preparation phase steps :

  1. Set up your AWS environment to duplicate the production environment.
  2. Set up DNS weighting, or similar traffic routing technology, to distribute incoming requests to both sites.
  3. Configure automated failover to re-route traffic away from the affected site, e.g., the application checks whether the primary DB is available and, if not, redirects to the AWS DB
Multi-Site - Preparation Phase

Recovery phase steps :

  1. Either manually or by using DNS failover, change the DNS weighting so that all requests are sent to the AWS site.
  2. Have application logic for failover to use the local AWS database servers for all queries.
  3. Consider using Auto Scaling to automatically right-size the AWS fleet.

Multi-Site - Recovery Phase
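
The DNS weighting change in the recovery steps above can be sketched as follows; a minimal sketch that builds the change batch, assuming boto3's route53_client.change_resource_record_sets as the actual call, with hypothetical zone, record name, and endpoint IPs:

```python
# Sketch: shift all traffic to the AWS site by rewriting Route 53 weighted
# records. The real call would be boto3:
# route53_client.change_resource_record_sets(HostedZoneId=..., ChangeBatch=batch).
def failover_change_batch(record_name, onprem_ip, aws_ip):
    def weighted(identifier, ip, weight):
        return {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": "A",
                "SetIdentifier": identifier,
                "Weight": weight,
                "TTL": 60,
                "ResourceRecords": [{"Value": ip}],
            },
        }
    # weight 0 drains the affected primary site; all requests go to AWS
    return {"Changes": [weighted("onprem", onprem_ip, 0),
                        weighted("aws", aws_ip, 100)]}

batch = failover_change_batch("app.example.com.", "203.0.113.10",
                              "198.51.100.20")
```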

AWS Certification Exam Practice Questions

  1. Which of these Disaster Recovery options costs the least?
    1. Pilot Light (most systems are down and brought up only after disaster)
    2. Fully Working Low capacity Warm standby
    3. Multi site Active-Active
  2. Your company currently has a 2-tier web application running in an on-premises data center. You have experienced several infrastructure failures in the past two months resulting in significant financial losses. Your CIO is strongly agreeing to move the application to AWS. While working on achieving buy-in from the other company executives, he asks you to develop a disaster recovery plan to help improve Business continuity in the short term. He specifies a target Recovery Time Objective (RTO) of 4 hours and a Recovery Point Objective (RPO) of 1 hour or less. He also asks you to implement the solution within 2 weeks. Your database is 200GB in size and you have a 20Mbps Internet connection. How would you do this while minimizing costs?
    1. Create an EBS backed private AMI which includes a fresh install of your application. Setup a script in your data center to backup the local database every 1 hour and to encrypt and copy the resulting file to an S3 bucket using multi-part upload (while AMI is a right approach to keep costs down, upload to S3 is very slow)
    2. Install your application on a compute-optimized EC2 instance capable of supporting the application’s average load. Synchronously replicate transactions from your on-premises database to a database instance in AWS across a secure Direct Connect connection. (EC2 running Compute Optimized as well as Direct Connect is expensive to start with; also, Direct Connect cannot be implemented in 2 weeks)
    3. Deploy your application on EC2 instances within an Auto Scaling group across multiple availability zones. Asynchronously replicate transactions from your on-premises database to a database instance in AWS across a secure VPN connection. (While a VPN can be set up quickly and asynchronous replication over VPN would work, running instances in DR is expensive)
    4. Create an EBS backed private AMI that includes a fresh install of your application. Develop a CloudFormation template which includes your AMI and the required EC2, Auto Scaling, and ELB resources to support deploying the application across Multiple Availability Zones. Asynchronously replicate transactions from your on-premises database to a database instance in AWS across a secure VPN connection. (Pilot Light approach with only the DB running and replicating, while you have preconfigured AMIs and Auto Scaling config)
  3. You are designing an architecture that can recover from a disaster very quickly with minimum down time to the end users. Which of the following approaches is best?
    1. Leverage Route 53 health checks to automatically fail over to backup site when the primary site becomes unreachable
    2. Implement the Pilot Light DR architecture so that traffic can be processed seamlessly in case the primary site becomes unreachable
    3. Implement either Fully Working Low Capacity Standby or Multi-site Active-Active architecture so that the end users will not experience any delay even if the primary site becomes unreachable
    4. Implement multi-region architecture to ensure high availability
  4. Your customer wishes to deploy an enterprise application to AWS that will consist of several web servers, several application servers and a small (50GB) Oracle database. Information is stored, both in the database and the file systems of the various servers. The backup system must support database recovery, whole server and whole disk restores, and individual file restores with a recovery time of no more than two hours. They have chosen to use RDS Oracle as the database. Which backup architecture will meet these requirements?
    1. Backup RDS using automated daily DB backups. Backup the EC2 instances using AMIs and supplement with file-level backup to S3 using traditional enterprise backup software to provide file level restore (RDS automated backups with file-level backups can be used)
    2. Backup RDS using a Multi-AZ Deployment. Backup the EC2 instances using AMIs, and supplement by copying file system data to S3 to provide file level restore (Multi-AZ is more of a Disaster Recovery solution)
    3. Backup RDS using automated daily DB backups. Backup the EC2 instances using EBS snapshots and supplement with file-level backups to Amazon Glacier using traditional enterprise backup software to provide file level restore (Glacier not an option with the 2 hours RTO)
    4. Backup RDS database to S3 using Oracle RMAN. Backup the EC2 instances using AMIs, and supplement with EBS snapshots for individual volume restore. (Will use RMAN only if Database hosted on EC2 and not when using RDS)
  5. Which statements are true about the Pilot Light Disaster recovery architecture pattern?
    1. Pilot Light is a hot standby (Cold Standby)
    2. Enables replication of all critical data to AWS
    3. Very cost-effective DR pattern
    4. Can scale the system as needed to handle current production load
  6. An ERP application is deployed across multiple AZs in a single region. In the event of failure, the Recovery Time Objective (RTO) must be less than 3 hours, and the Recovery Point Objective (RPO) must be 15 minutes. The customer realizes that data corruption occurred roughly 1.5 hours ago. What DR strategy could be used to achieve this RTO and RPO in the event of this kind of failure?
    1. Take hourly DB backups to S3, with transaction logs stored in S3 every 5 minutes
    2. Use synchronous database master-slave replication between two availability zones. (Replication won’t help to backtrack and would always be in sync)
    3. Take hourly DB backups to EC2 Instance store volumes with transaction logs stored In S3 every 5 minutes. (Instance store not a preferred storage)
    4. Take 15 minute DB backups stored in Glacier with transaction logs stored in S3 every 5 minutes. (Glacier does not meet the RTO)
  7. Your company’s on-premises content management system has the following architecture:
    – Application Tier – Java code on a JBoss application server
    – Database Tier – Oracle database regularly backed up to Amazon Simple Storage Service (S3) using the Oracle RMAN backup utility
    – Static Content – stored on a 512GB gateway-stored Storage Gateway volume attached to the application server via the iSCSI interface
    Which AWS based disaster recovery strategy will give you the best RTO?

    1. Deploy the Oracle database and the JBoss app server on EC2. Restore the RMAN Oracle backups from Amazon S3. Generate an EBS volume of static content from the Storage Gateway and attach it to the JBoss EC2 server.
    2. Deploy the Oracle database on RDS. Deploy the JBoss app server on EC2. Restore the RMAN Oracle backups from Amazon Glacier. Generate an EBS volume of static content from the Storage Gateway and attach it to the JBoss EC2 server. (Glacier does not help to give the best RTO)
    3. Deploy the Oracle database and the JBoss app server on EC2. Restore the RMAN Oracle backups from Amazon S3. Restore the static content by attaching an AWS Storage Gateway running on Amazon EC2 as an iSCSI volume to the JBoss EC2 server. (No need to attach the Storage Gateway as an iSCSI volume; can just create an EBS volume)
    4. Deploy the Oracle database and the JBoss app server on EC2. Restore the RMAN Oracle backups from Amazon S3. Restore the static content from an AWS Storage Gateway-VTL running on Amazon EC2 (VTL is Virtual Tape library and doesn’t fit the RTO)
