AWS Cloud Migration Services

  • AWS Cloud Migration services help address common use cases such as
    • cloud migration,
    • disaster recovery,
    • data center decommissioning, and
    • content distribution.
  • For migrating data from on-premises to AWS, the major aspects to consider are
    • the amount of data and the network speed (a rough transfer-time estimate is sketched below),
    • data security in transit, and
    • knowledge of the existing application needed to recreate it in AWS.
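
A quick back-of-the-envelope estimate helps decide between an online transfer and an offline option such as Snowball. A minimal sketch in Python (the data size, link speed, and utilization below are illustrative assumptions, not AWS limits):

```python
# Rough transfer-time estimate: data size over effective link throughput.
# All values are illustrative assumptions, not AWS limits.

def transfer_days(data_gb: float, link_mbps: float, utilization: float = 0.8) -> float:
    """Days needed to move data_gb over a link_mbps connection at the given utilization."""
    bits = data_gb * 8 * 10**9                       # decimal GB -> bits
    seconds = bits / (link_mbps * 10**6 * utilization)
    return seconds / 86400

# 900 GB over a 45-Mbps line at 80% utilization -> roughly 2.3 days
print(f"{transfer_days(900, 45):.1f} days")
```

If the result runs into weeks, an offline transfer (AWS Import/Export or the Snow family, covered below) is usually the better fit.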

Application & Database Cloud Migration Services

AWS Migration Hub

  • provides a centralized, single place to discover the existing servers, plan migrations, and track the status of each application migration.
  • provides visibility into the application portfolio and streamlines planning and tracking.
  • helps visualize the connections and the status of the migrating servers and databases, regardless of which migration tool is used.
  • stores all the data in the selected Home Region and provides a single repository of discovery and migration planning information for the entire portfolio and a single view of migrations into multiple AWS Regions.
  • helps track the status of the migrations in all AWS Regions, provided the migration tools are available in those Regions.
  • helps understand the environment by letting you explore information collected by AWS discovery tools and stored in the AWS Application Discovery Service’s repository.
  • supports migration status updates from integrated migration tools, which send their status to the selected home Region (a status-polling sketch follows this list).
  • supports EC2 instance recommendations, which help estimate the cost of running the existing servers in AWS.
  • supports Strategy Recommendations, which help easily build a migration and modernization strategy for the applications running on-premises or in AWS.
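
As a sketch of how this tracking could be automated, the snippet below polls Migration Hub for migration task status with boto3. It assumes the calls are made against the configured home Region (us-west-2 here is a placeholder) and uses the `mgh` client's `list_migration_tasks` operation:

```python
import boto3

# Migration Hub API calls must be made against the configured home Region.
mgh = boto3.client("mgh", region_name="us-west-2")  # assumed home Region

token = {}
while True:
    page = mgh.list_migration_tasks(MaxResults=100, **token)
    for task in page["MigrationTaskSummaryList"]:
        print(task["ProgressUpdateStream"], task["MigrationTaskName"],
              task.get("Status"), task.get("ProgressPercent"))
    if "NextToken" not in page:
        break
    token = {"NextToken": page["NextToken"]}
```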

AWS Application Discovery Service

  • AWS Application Discovery Service helps plan migration to the AWS cloud by collecting usage and configuration data about the on-premises servers.
  • helps enterprises obtain a snapshot of the current state of their data center servers by collecting server specification information, hardware configuration, performance data, details of running processes, and network connections
  • is integrated with AWS Migration Hub,
    • which simplifies migration tracking as it aggregates migration status information into a single console.
    • can help view the discovered servers, group them into applications, and then track the migration status of each application.
  • discovered data for all Regions is stored in the AWS Migration Hub home Region.
  • The data can be exported for analysis in Microsoft Excel or in AWS analysis tools such as Amazon Athena and Amazon QuickSight (an export sketch follows this list).
  • supports both agent-based and agentless on-premises tooling, in addition to file-based import, for performing discovery and collecting data about the on-premises servers.
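
The collected data can also be pulled programmatically. A minimal sketch using the boto3 `discovery` client; the CSV export format and the polling loop reflect my reading of the Application Discovery Service API, and the 30-second interval is an arbitrary choice:

```python
import time
import boto3

# Must be called in the Migration Hub home Region.
discovery = boto3.client("discovery")

# Kick off a CSV export of the collected server data, then poll until ready.
export_id = discovery.start_export_task(exportDataFormat=["CSV"])["exportId"]

while True:
    export = discovery.describe_export_tasks(exportIds=[export_id])["exportsInfo"][0]
    if export["exportStatus"] != "IN_PROGRESS":
        print(export["exportStatus"], export.get("configurationsDownloadUrl"))
        break
    time.sleep(30)
```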

AWS Server Migration Service (SMS)

  • is an agentless service that makes it easier and faster to migrate thousands of on-premises workloads to AWS.
  • helps automate, schedule, and track incremental replications of live server volumes, making it easier to coordinate large-scale server migrations.
  • currently supports migration of virtual machines from VMware vSphere, Windows Hyper-V, and Microsoft Azure to AWS.
  • supports migrating Windows Server 2003, 2008, 2012, and 2016; Windows 7, 8, and 10; and Red Hat Enterprise Linux (RHEL), SUSE/SLES, CentOS, Ubuntu, Oracle Linux, Fedora, and Debian Linux OSs.
  • replicates server volumes incrementally; each replication is saved as a new AMI, which can be launched as an EC2 instance (a replication-job sketch follows this list).
  • is a significant enhancement over the EC2 VM Import/Export service.
  • is used for the Rehost ("lift and shift") migration strategy.
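
A sketch of how a replication job might be scheduled with the boto3 `sms` client, assuming the server catalog has already been populated by the SMS connector; picking the first server and a 24-hour frequency are illustrative choices:

```python
from datetime import datetime, timedelta
import boto3

sms = boto3.client("sms")

# List servers in the SMS catalog (populated by the SMS connector).
servers = sms.get_servers()["serverList"]

# Schedule incremental replication of one server every 24 hours.
job = sms.create_replication_job(
    serverId=servers[0]["serverId"],                        # illustrative: first server
    seedReplicationTime=datetime.utcnow() + timedelta(minutes=30),
    frequency=24,                                           # hours between replication runs
)
print(job["replicationJobId"])
```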

AWS Database Migration Service (DMS)

  • helps migrate databases to AWS quickly and securely.
  • source database remains fully operational during the migration, minimizing downtime to applications that rely on the database.
  • supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle or Microsoft SQL Server to Amazon Aurora.
  • monitors replication tasks and network or host failures, and automatically provisions a replacement host in case of failures that can’t be repaired.
  • supports both one-time data migration into RDS and EC2-based databases and continuous data replication.
  • supports continuous replication of data with high availability and can consolidate databases into a petabyte-scale data warehouse by streaming data to Amazon Redshift and Amazon S3.
  • provides the free AWS Schema Conversion Tool (SCT), which automates the conversion of Oracle PL/SQL and SQL Server T-SQL code to the equivalent code in the Amazon Aurora / MySQL dialect of SQL or the equivalent PL/pgSQL code in PostgreSQL (a task-creation sketch follows this list).
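
A sketch of creating a full-load-plus-CDC replication task with the boto3 `dms` client. It assumes the source and target endpoints and the replication instance already exist; all ARNs and the table-mapping rule are placeholders:

```python
import json
import boto3

dms = boto3.client("dms")

# Migrate existing data, then keep replicating ongoing changes (CDC).
task = dms.create_replication_task(
    ReplicationTaskIdentifier="orders-oracle-to-aurora",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",   # placeholder
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",   # placeholder
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INST",  # placeholder
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection", "rule-id": "1", "rule-name": "all-orders",
            "object-locator": {"schema-name": "ORDERS", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
print(task["ReplicationTask"]["Status"])
```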

AWS EC2 VM Import/Export

  • allows easy import of virtual machine images from an existing environment to EC2 instances, and export of them back to the on-premises environment.
  • allows leveraging of existing investments in virtual machines, built to meet compliance requirements, configuration management, and IT security, by bringing those virtual machines into EC2 as ready-to-use instances.
  • Common usages include
    • Migrate existing applications and workloads to EC2, preserving the software and settings configured in the existing VMs (an import sketch follows this list).
    • Copy a VM image catalog to EC2.
    • Create a disaster recovery repository for VM images.
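
A sketch of importing a VMDK disk image, previously uploaded to S3, as an AMI using the EC2 `import_image` API; the bucket, key, and description are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Import a VM disk image staged in S3; the result is an AMI once the task completes.
task = ec2.import_image(
    Description="Legacy web server",
    DiskContainers=[{
        "Description": "Boot disk",
        "Format": "VMDK",
        "UserBucket": {"S3Bucket": "my-import-bucket", "S3Key": "vms/webserver.vmdk"},  # placeholders
    }],
)

# Poll the import task; the AMI ID appears when Status becomes "completed".
print(ec2.describe_import_image_tasks(ImportTaskIds=[task["ImportTaskId"]]))
```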

Data Transfer Services

VPN

  • connections utilize IPsec to establish encrypted network connectivity between the on-premises network and the VPC over the Internet.
  • connections can be configured in minutes and are a good solution for an immediate need, low-to-modest bandwidth requirements, and tolerance of the inherent variability in Internet-based connectivity.
  • connections still traverse the Internet and are configured using a Virtual Private Gateway (VGW) and a Customer Gateway (CGW), as sketched below.
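
A minimal sketch of wiring up a Site-to-Site VPN with the EC2 API; the BGP ASN, the public IP of the on-premises VPN device, and the VPC ID are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Customer gateway: represents the on-premises VPN device.
cgw = ec2.create_customer_gateway(
    BgpAsn=65000, PublicIp="203.0.113.12", Type="ipsec.1"  # placeholder ASN/IP
)["CustomerGateway"]

# Virtual private gateway: the AWS side, attached to the VPC.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGatewayId"], VpcId="vpc-0123456789abcdef0")

# IPsec VPN connection between the two gateways (static routing here).
vpn = ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
    Type="ipsec.1",
    Options={"StaticRoutesOnly": True},
)["VpnConnection"]
print(vpn["VpnConnectionId"])
```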

AWS Direct Connect

  • provides a dedicated physical connection between the corporate network and AWS Direct Connect location with no data transfer over the Internet.
  • helps bypass Internet service providers (ISPs) in the network path
  • helps reduce network costs, increase bandwidth throughput, and provide a more consistent network experience than with Internet-based connection
  • takes time to set up and involves third parties.
  • connections are not redundant and would need another Direct Connect connection or a VPN connection as a backup.
  • Security
    • provides a dedicated physical connection that does not traverse the Internet.
    • for additional security, it can be combined with a VPN, since Direct Connect itself does not encrypt traffic.

AWS Import/Export (upgraded to Snowball)

  • accelerates moving large amounts of data into and out of AWS using secure Snowball appliances
  • AWS transfers the data directly onto and off of the storage devices using Amazon’s high-speed internal network, bypassing the Internet
  • Data Migration
    • for significant data sizes, AWS Import/Export is faster than Internet transfer and more cost-effective than upgrading the connectivity.
    • if loading the data over the Internet would take a week or more, AWS Import/Export should be considered.
    • data from appliances can be imported to S3, Glacier, and EBS volumes, and exported from S3.
    • is not suitable for applications that cannot tolerate offline transfer time.
  • Security
    • Snowball uses an industry-standard Trusted Platform Module (TPM) with a dedicated processor designed to detect any unauthorized modifications to the hardware, firmware, or software, in order to physically secure the AWS Snowball device.

Snow Family

  • AWS Snowball
    • is a petabyte-scale data transfer service built around a secure suitcase-sized device that moves data into and out of the AWS Cloud quickly and efficiently.
    • transfers the data to an S3 bucket.
    • transfer times are about a week from start to finish.
    • is commonly used to ship terabytes or petabytes of analytics data, healthcare and life sciences data, video libraries, image repositories, backups, and archives as part of data center shutdown, tape replacement, or application migration projects (a job-creation sketch follows this list).
  • AWS Snowball Edge devices
    • contain slightly larger capacity and an embedded computing platform that helps perform simple processing tasks.
    • can be rack-shelved and clustered together, making it simpler to collect and store data in extremely remote locations.
    • are commonly used in environments with intermittent connectivity (such as manufacturing, industrial, and transportation) or in extremely remote locations (such as military or maritime operations) before being shipped back to AWS data centers.
    • deliver serverless computing applications at the network edge using AWS Greengrass and Lambda functions.
    • common use cases include capturing IoT sensor streams, on-the-fly media transcoding, image compression, metrics aggregation, and industrial control signaling and alarming.
  • AWS Snowmobile
    • moves up to 100 PB of data (equivalent to 1,250 AWS Snowball devices) in a 45-foot-long ruggedized shipping container and is ideal for multi-petabyte or exabyte-scale digital media migrations and data center shutdowns.
    • arrives at the customer site and appears as a network-attached data store for more secure, high-speed data transfer. After data is transferred to the Snowmobile, it is driven back to an AWS Region where the data is loaded into S3.
    • is tamper-resistant, waterproof, and temperature-controlled, with multiple layers of logical and physical security, including encryption, fire suppression, dedicated security personnel, GPS tracking, alarm monitoring, 24/7 video surveillance, and an escort security vehicle during transit.
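
A sketch of ordering a Snowball import job with the boto3 `snowball` client; the bucket ARN, address ID, and role ARN are placeholders that would be created beforehand, and the device type and capacity are illustrative choices:

```python
import boto3

snowball = boto3.client("snowball")

# Create an import job: AWS ships a device, data is loaded on-site,
# and the returned device's contents are imported into the S3 bucket.
job = snowball.create_job(
    JobType="IMPORT",
    Resources={"S3Resources": [{"BucketArn": "arn:aws:s3:::my-archive-bucket"}]},  # placeholder
    AddressId="ADID00000000-0000-0000-0000-000000000000",         # placeholder shipping address
    RoleARN="arn:aws:iam::123456789012:role/SnowballImportRole",  # placeholder
    ShippingOption="SECOND_DAY",
    SnowballType="EDGE",
    SnowballCapacityPreference="T100",
)
print(job["JobId"])
```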

AWS Storage Gateway

  • connects an on-premises software appliance with cloud-based storage to provide seamless and secure integration between an organization’s on-premises IT environment and the AWS storage infrastructure
  • provides low-latency performance by maintaining frequently accessed data on-premises while securely storing all of the data encrypted in S3 or Glacier.
  • for disaster recovery scenarios, Storage Gateway, together with EC2, can serve as a cloud-hosted solution that mirrors the entire production environment
  • Data Migration
    • with gateway-cached volumes, S3 can be used to hold primary data while frequently accessed data is cached locally for faster access, reducing the need to scale the on-premises storage infrastructure.
    • with gateway-stored volumes, the entire data set is stored locally while data is asynchronously backed up to S3.
    • with gateway-VTL, offline data archiving can be performed by presenting the existing backup application with an iSCSI-based virtual tape library (VTL) consisting of a virtual media changer and virtual tape drives.
  • Security
    • Encrypts all data in transit to and from AWS by using SSL/TLS.
    • All data in AWS Storage Gateway is encrypted at rest using AES-256.
    • Authentication between the gateway and iSCSI initiators can be secured by using the Challenge-Handshake Authentication Protocol (CHAP), as sketched below.
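
A sketch of enabling mutual CHAP on an iSCSI volume target with the boto3 `storagegateway` client; the target ARN, initiator IQN, and secrets (which must be 12 to 16 characters) are placeholders:

```python
import boto3

sgw = boto3.client("storagegateway")

# Require mutual CHAP authentication between the iSCSI initiator and the target.
sgw.update_chap_credentials(
    TargetARN=("arn:aws:storagegateway:us-east-1:123456789012:"
               "gateway/sgw-12345678/target/iqn.1997-05.com.amazon:myvolume"),  # placeholder
    InitiatorName="iqn.1991-05.com.microsoft:fileserver01",  # placeholder initiator IQN
    SecretToAuthenticateInitiator="InitChapSecret01",        # 12-16 characters
    SecretToAuthenticateTarget="TargChapSecret01",           # for mutual CHAP
)
```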

Simple Storage Service – S3

  • Data Transfer
    • Files up to 5 GB can be transferred using a single PUT operation.
    • Multipart upload can be used to upload files up to 5 TB and to speed up uploads by transferring the parts in parallel (a sketch follows this list).
    • The transfer rate is still limited by the network speed.
  • Security
    • Data in transit can be secured by using SSL/TLS or client-side encryption.
    • Data at rest can be encrypted server-side using Amazon S3-Managed Keys (SSE-S3), AWS Key Management Service (KMS)-Managed Keys (SSE-KMS), or Customer-Provided Keys (SSE-C), or client-side using an AWS KMS-managed customer master key (CMK) or a client-side master key.
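
The boto3 transfer manager handles the multipart splitting and parallel part uploads automatically. A minimal sketch with illustrative thresholds and SSE-KMS enabled at rest:

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Files larger than 64 MiB are split into 64-MiB parts uploaded on 8 threads.
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,  # illustrative threshold
    multipart_chunksize=64 * 1024 * 1024,
    max_concurrency=8,
)

s3.upload_file(
    "backup.tar.gz", "my-bucket", "backups/backup.tar.gz",  # placeholders
    Config=config,
    ExtraArgs={"ServerSideEncryption": "aws:kms"},  # SSE-KMS at rest
)
```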

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. You must architect the migration of a web application to AWS. The application consists of Linux web servers running a custom web server. You are required to save the logs generated from the application to a durable location. What options could you select to migrate the application to AWS? (Choose 2)
    1. Create an AWS Elastic Beanstalk application using the custom web server platform. Specify the web server executable and the application project and source files. Enable log file rotation to Amazon Simple Storage Service (S3). (Elastic Beanstalk does not support a custom web server executable platform)
    2. Create Dockerfile for the application. Create an AWS OpsWorks stack consisting of a custom layer. Create custom recipes to install Docker and to deploy your Docker container using the Dockerfile. Create custom recipes to install and configure the application to publish the logs to Amazon CloudWatch Logs (although this option is close, the last sentence requires configuring the application itself to publish the logs, which would need changes to the application to use the SDK or CLI)
    3. Create Dockerfile for the application. Create an AWS OpsWorks stack consisting of a Docker layer that uses the Dockerfile. Create custom recipes to install and configure Amazon Kinesis to publish the logs into Amazon CloudWatch. (Kinesis not needed)
    4. Create a Dockerfile for the application. Create an AWS Elastic Beanstalk application using the Docker platform and the Dockerfile. Enable logging the Docker configuration to automatically publish the application logs. Enable log file rotation to Amazon S3. (Use Docker configuration with awslogs and EB with Docker)
    5. Use VM import/Export to import a virtual machine image of the server into AWS as an AMI. Create an Amazon Elastic Compute Cloud (EC2) instance from AMI, and install and configure the Amazon CloudWatch Logs agent. Create a new AMI from the instance. Create an AWS Elastic Beanstalk application using the AMI platform and the new AMI. (Use VM Import/Export to create AMI and CloudWatch logs agent to log)
  2. Your company hosts an on-premises legacy engineering application with 900GB of data shared via a central file server. The engineering data consists of thousands of individual files ranging in size from megabytes to multiple gigabytes. Engineers typically modify 5-10 percent of the files a day. Your CTO would like to migrate this application to AWS, but only if the application can be migrated over the weekend to minimize user downtime. You calculate that it will take a minimum of 48 hours to transfer 900GB of data using your company’s existing 45-Mbps Internet connection. After replicating the application’s environment in AWS, which option will allow you to move the application’s data to AWS without losing any data and within the given timeframe?
    1. Copy the data to Amazon S3 using multiple threads and multi-part upload for large files over the weekend, and work in parallel with your developers to reconfigure the replicated application environment to leverage Amazon S3 to serve the engineering files. (Still limited by the 45-Mbps speed, which needs a minimum of 48 hours even at maximum utilization)
    2. Sync the application data to Amazon S3 starting a week before the migration, on Friday morning perform a final sync, and copy the entire data set to your AWS file server after the sync completes. (Works best, as the data changes are fractional, can be propagated over the week, and the downtime would be known)
    3. Copy the application data to a 1-TB USB drive on Friday and immediately send overnight, with Saturday delivery, the USB drive to AWS Import/Export to be imported as an EBS volume, mount the resulting EBS volume to your AWS file server on Sunday. (The completion time of the data upload is not known, although Amazon says it is typically the same day the package is received)
    4. Leverage the AWS Storage Gateway to create a Gateway-Stored volume. On Friday copy the application data to the Storage Gateway volume. After the data has been copied, perform a snapshot of the volume and restore the volume as an EBS volume to be attached to your AWS file server on Sunday. (Still uses the internet)
  3. You are tasked with moving a legacy application from a virtual machine running inside your datacenter to an Amazon VPC. Unfortunately, this app requires access to a number of on-premises services, and no one who configured the app still works for your company. Even worse, there’s no documentation for it. What will allow the application running inside the VPC to reach back and access its internal dependencies without being reconfigured? (Choose 3 answers)
    1. An AWS Direct Connect link between the VPC and the network housing the internal services
    2. An Internet Gateway to allow a VPN connection. (A Virtual Private Gateway and a Customer Gateway are needed, not an Internet Gateway)
    3. An Elastic IP address on the VPC instance
    4. An IP address space that does not conflict with the one on-premises
    5. Entries in Amazon Route 53 that allow the Instance to resolve its dependencies’ IP addresses
    6. A VM Import of the current virtual machine
  4. An enterprise runs 103 line-of-business applications on virtual machines in an on-premises data center. Many of the applications are simple PHP, Java, or Ruby web applications, are no longer actively developed, and serve little traffic. Which approach should be used to migrate these applications to AWS with the LOWEST infrastructure costs?
    1. Deploy the applications to single-instance AWS Elastic Beanstalk environments without a load balancer.
    2. Use AWS SMS to create AMIs for each virtual machine and run them in Amazon EC2.
    3. Convert each application to a Docker image and deploy to a small Amazon ECS cluster behind an Application Load Balancer.
    4. Use VM Import/Export to create AMIs for each virtual machine and run them in single-instance AWS Elastic Beanstalk environments by configuring a custom image.

AWS Storage Gateway

  • AWS Storage Gateway connects an on-premises software appliance with cloud-based storage to provide seamless integration, with data security features, between the on-premises IT environment and the AWS storage infrastructure.
  • AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage.
  • Storage Gateway allows storage of data in the AWS cloud for scalable and cost-effective storage while maintaining data security.
  • Storage Gateway can run either on-premises, as a VM appliance, or in AWS, as an EC2 instance. So if the on-premises data center goes offline and there is no available host, the gateway can be deployed on an EC2 instance.
  • Gateways hosted on EC2 instances can be used for disaster recovery, data mirroring, and providing storage for applications hosted on EC2
  • Storage Gateway, by default, uploads data using SSL and provides data encryption at rest when stored in S3 or Glacier using AES-256
  • Storage Gateway performs encryption of data-in-transit and at-rest.
  • Storage Gateway offers multiple types
    • File Gateway
    • Volume-based Gateway
    • Tape-based Gateway (VTL)

S3 File Gateway

  • supports a file interface into S3 and combines a service with a virtual software appliance.
  • allows storing and retrieving of objects in S3 using industry-standard file protocols such as NFS and SMB.
  • The software appliance, or gateway, is deployed into the on-premises environment as a VM running on a VMware ESXi or Microsoft Hyper-V hypervisor.
  • provides access to objects in S3 as files or file share mount points; it can be considered a file system mount on S3.
  • durably stores POSIX-style metadata, including ownership, permissions, and timestamps in S3 as object user metadata associated with the file.
  • provides a cost-effective alternative to on-premises storage.
  • provides low-latency access to data through transparent local caching.
  • manages data transfer to and from AWS, buffers applications from network congestion, optimizes and streams data in parallel, and manages bandwidth consumption.
  • easily integrates with services like IAM, KMS, CloudWatch, CloudTrail, etc.
  • File Gateway allows you to
    • store and retrieve files directly using the NFS version 3 or 4.1 protocol,
    • store and retrieve files directly using the SMB version 2 or 3 protocol,
    • access the data directly in S3 from any AWS Cloud application or service, and
    • manage S3 data using lifecycle policies, cross-region replication, and versioning (a file-share creation sketch follows this list).
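
A sketch of creating an NFS file share backed by an S3 bucket with the boto3 `storagegateway` client; the gateway ARN, IAM role, bucket, and client CIDR are placeholders:

```python
import uuid
import boto3

sgw = boto3.client("storagegateway")

# Expose an S3 bucket as an NFS mount point through an existing file gateway.
share = sgw.create_nfs_file_share(
    ClientToken=str(uuid.uuid4()),
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678",  # placeholder
    Role="arn:aws:iam::123456789012:role/FileGatewayS3Access",                        # placeholder
    LocationARN="arn:aws:s3:::my-file-share-bucket",                                  # placeholder
    ClientList=["10.0.0.0/24"],        # on-premises clients allowed to mount the share
    DefaultStorageClass="S3_STANDARD",
)
print(share["FileShareARN"])
```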

Volume Gateways

  • Volume gateways provide cloud-backed storage volumes that can be mounted as Internet Small Computer System Interface (iSCSI) devices from the on-premises application servers.
  • While all data is securely stored in AWS, the two volume types differ in how much data is stored on-premises.
  • expose a compatible iSCSI interface on the front end to integrate easily with existing backup applications, appearing as just another disk drive.
  • back up the data incrementally by taking snapshots, which are stored as EBS snapshots in S3; these snapshots can be restored as a gateway storage volume or used to create EBS volumes to attach to an EC2 instance (see the sketch below).
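
A sketch of that snapshot-and-restore flow: snapshot a gateway volume, wait for it to complete, then create an EBS volume from it for an EC2-based recovery (the volume ARN and Availability Zone are placeholders):

```python
import boto3

sgw = boto3.client("storagegateway")
ec2 = boto3.client("ec2")

# Snapshot a gateway volume; it is stored as a regular EBS snapshot in S3.
snap = sgw.create_snapshot(
    VolumeARN=("arn:aws:storagegateway:us-east-1:123456789012:"
               "gateway/sgw-12345678/volume/vol-1122AABB"),  # placeholder
    SnapshotDescription="nightly backup",
)

# Wait for the snapshot, then restore it as an EBS volume (e.g., for DR on EC2).
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])
vol = ec2.create_volume(
    SnapshotId=snap["SnapshotId"],
    AvailabilityZone="us-east-1a",  # placeholder
)
print(vol["VolumeId"])
```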

Gateway Cached Volumes

  • Gateway-cached volumes store data in S3, which acts as the primary data storage, and retain a copy of recently read data locally for low-latency access to frequently accessed data.
  • Gateway-cached volumes offer substantial cost savings on primary storage and minimize the need to scale storage on-premises.
  • All gateway-cached volume data and snapshot data are stored in S3 encrypted at rest using server-side encryption (SSE) and cannot be accessed with the S3 API or any other tools.
  • Each gateway configured for gateway-cached volumes can support up to 32 volumes, each ranging from 1 GiB to 32 TiB, for a total maximum storage volume of 1,024 TiB (1 PiB). A volume-creation sketch follows this list.
  • The gateway VM can be allocated disks for
    • Cache storage
      • Cache storage acts as the on-premises durable storage, storing the data before it is uploaded to S3.
      • Cache storage also stores recently read data for low-latency access.
    • Upload buffer
      • The upload buffer acts as a staging area before the data is uploaded to S3.
      • The gateway uploads data over an encrypted Secure Sockets Layer (SSL) connection to AWS, where it is stored encrypted in S3.
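
A sketch of creating a cached volume on an existing gateway with the boto3 `storagegateway` client; the gateway ARN, volume size, target name, and network interface are placeholders:

```python
import uuid
import boto3

sgw = boto3.client("storagegateway")

# Create a 1-TiB cached iSCSI volume; primary data lives in S3,
# with hot data cached on the gateway's local cache disk.
vol = sgw.create_cached_iscsi_volume(
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678",  # placeholder
    VolumeSizeInBytes=1024**4,         # 1 TiB
    TargetName="cached-vol-01",        # becomes part of the iSCSI target IQN
    NetworkInterfaceId="10.0.1.15",    # gateway VM interface that serves iSCSI
    ClientToken=str(uuid.uuid4()),
)
print(vol["TargetARN"])
```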

Gateway Stored Volumes

  • Gateway-stored volumes maintain the entire data set locally to provide low-latency access.
  • The gateway asynchronously backs up point-in-time snapshots (in the form of EBS snapshots) of the data to S3, which provides durable off-site backups.
  • The gateway-stored volume configuration provides durable and inexpensive off-site backups that can be recovered to the local data center or to EC2, e.g., if replacement capacity is needed for disaster recovery, the backups can be recovered to EC2.
  • Each gateway configured for gateway-stored volumes can support up to 32 volumes, each ranging from 1 GiB to 16 TiB, for a total volume storage of 512 TiB. A volume-creation sketch follows this list.
  • The gateway VM can be allocated disks for
    • Volume storage
      • For storing the actual data.
      • Can be mapped to on-premises direct-attached storage (DAS) or storage area network (SAN) disks.
    • Upload buffer
      • The upload buffer acts as a staging area before the data is uploaded to S3.
      • The gateway uploads data over an encrypted Secure Sockets Layer (SSL) connection to AWS, where it is stored encrypted in Amazon S3.
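
A sketch of creating a stored volume backed by one of the gateway's local disks; PreserveExistingData=True keeps whatever is already on the disk, and picking the first listed disk is purely illustrative:

```python
import boto3

sgw = boto3.client("storagegateway")

gateway_arn = "arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678"  # placeholder

# Pick a local disk on the gateway VM to back the stored volume.
disks = sgw.list_local_disks(GatewayARN=gateway_arn)["Disks"]

vol = sgw.create_stored_iscsi_volume(
    GatewayARN=gateway_arn,
    DiskId=disks[0]["DiskId"],        # illustrative: first available disk
    PreserveExistingData=True,        # keep data already present on the disk
    TargetName="stored-vol-01",
    NetworkInterfaceId="10.0.1.15",   # placeholder
)
print(vol["TargetARN"])
```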

Tape Gateway – Gateway-Virtual Tape Library (VTL)

  • Tape Gateway offers a durable, cost-effective data archival solution.
  • The VTL interface can help leverage the existing tape-based backup application infrastructure to store data on virtual tape cartridges created on the tape gateway.
  • Each Tape Gateway is preconfigured with a media changer and tape drives, which are available to the existing client backup applications as iSCSI devices. Tape cartridges can be added as needed to archive the data.
  • Gateway-VTL provides a virtual tape infrastructure that scales seamlessly with the business needs and eliminates the operational burden of provisioning, scaling, and maintaining a physical tape infrastructure.
  • Gateway VTL has the following components:
    • Virtual tape
      • A virtual tape is similar to a physical tape cartridge, except that the data is stored in the AWS storage solution.
      • Each gateway can contain up to 1,500 tapes, or up to 1 PiB of total tape data, with each tape ranging from 100 GiB to 2.5 TiB.
    • Virtual tape library (VTL)
      • A virtual tape library is similar to a physical tape library, with tape drives (replaced by VTL tape drives) and robotic arms (replaced by a media changer).
      • Tapes in the virtual tape library are backed by S3.
      • Backup software writes data to the gateway, the gateway stores the data locally, and then asynchronously uploads it to virtual tapes in S3.
    • Archive, or virtual tape shelf (VTS)
      • A virtual tape shelf is similar to an off-site tape holding facility.
      • Tapes in the virtual tape shelf are archived in Glacier, providing an extremely low-cost storage service for data archiving and backup.
      • The VTS is located in the same region where the gateway was created, and every region has a single VTS irrespective of the number of gateways.
      • Archiving tapes
        • When the backup software ejects a tape, the gateway moves the tape to the VTS for long-term storage.
      • Retrieving tapes
        • A tape can be retrieved from the VTS only by first retrieving it to the VTL; it becomes available in the VTL in about 24 hours (see the sketch after this section).
  • The gateway VM can be allocated disks for
    • Cache storage
      • Cache storage acts as the on-premises durable storage, storing the data before it is uploaded to S3.
      • Cache storage also stores recently read data for low-latency access.
    • Upload buffer
      • The upload buffer acts as a staging area before the data is uploaded to the virtual tape.
      • The gateway uploads data over an encrypted Secure Sockets Layer (SSL) connection to AWS, where it is stored encrypted in S3.
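
A sketch of provisioning blank virtual tapes and later retrieving an archived tape from the VTS back to the VTL, using the boto3 `storagegateway` client; the ARNs and barcode prefix are placeholders:

```python
import uuid
import boto3

sgw = boto3.client("storagegateway")

gateway_arn = "arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678"  # placeholder

# Provision blank virtual tapes for the backup application to discover.
tapes = sgw.create_tapes(
    GatewayARN=gateway_arn,
    TapeSizeInBytes=100 * 1024**3,    # 100 GiB per tape (the documented minimum)
    ClientToken=str(uuid.uuid4()),
    NumTapesToCreate=5,
    TapeBarcodePrefix="MIG",          # placeholder barcode prefix
)
print(tapes["TapeARNs"])

# Retrieve an archived tape from the VTS back into the gateway's VTL
# (available in roughly 24 hours).
sgw.retrieve_tape_archive(
    TapeARN="arn:aws:storagegateway:us-east-1:123456789012:tape/MIG00001",  # placeholder
    GatewayARN=gateway_arn,
)
```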

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. Which of the following services natively encrypts data at rest within an AWS region? Choose 2 answers
    1. AWS Storage Gateway
    2. Amazon DynamoDB
    3. Amazon CloudFront
    4. Amazon Glacier
    5. Amazon Simple Queue Service
  2. What does the AWS Storage Gateway provide?
    1. It allows to integrate on-premises IT environments with Cloud Storage
    2. A direct encrypted connection to Amazon S3.
    3. It’s a backup solution that provides an on-premises Cloud storage.
    4. It provides an encrypted SSL endpoint for backups in the Cloud.
  3. You’re running an application on-premises due to its dependency on non-x86 hardware and want to use AWS for data backup. Your backup application is only able to write to POSIX-compatible block-based storage. You have 140TB of data and would like to mount it as a single folder on your file server. Users must be able to access portions of this data while the backups are taking place. What backup solution would be most appropriate for this use case?
    1. Use Storage Gateway and configure it to use Gateway Cached volumes.
    2. Configure your backup software to use S3 as the target for your data backups.
    3. Configure your backup software to use Glacier as the target for your data backups
    4. Use Storage Gateway and configure it to use Gateway Stored volumes (the data is hosted on the on-premises server as well; the 140 TB requirement is for the on-premises file server, more to confuse, and not for AWS. Only a backup solution is needed, hence stored rather than cached volumes)
  4. A customer has a single 3-TB volume on-premises that is used to hold a large repository of images and print layout files. This repository is growing at 500 GB a year and must be presented as a single logical volume. The customer is becoming increasingly constrained with their local storage capacity and wants an off-site backup of this data, while maintaining low-latency access to their frequently accessed data. Which AWS Storage Gateway configuration meets the customer requirements?
    1. Gateway-Cached volumes with snapshots scheduled to Amazon S3
    2. Gateway-Stored volumes with snapshots scheduled to Amazon S3
    3. Gateway-Virtual Tape Library with snapshots to Amazon S3
    4. Gateway-Virtual Tape Library with snapshots to Amazon Glacier
  5. You have a proprietary data store on-premises that must be backed up daily by dumping the data store contents to a single compressed 50GB file and sending the file to AWS. Your SLAs state that any dump file backed up within the past 7 days can be retrieved within 2 hours. Your compliance department has stated that all data must be held indefinitely. The time required to restore the data store from a backup is approximately 1 hour. Your on-premises network connection is capable of sustaining 1 Gbps to AWS. Which backup methods to AWS would be most cost-effective while still meeting all of your requirements?
    1. Send the daily backup files to Glacier immediately after being generated (will not meet the RTO)
    2. Transfer the daily backup files to an EBS volume in AWS and take daily snapshots of the volume (Not cost effective)
    3. Transfer the daily backup files to S3 and use appropriate bucket lifecycle policies to send to Glacier (Store in S3 for seven days and then archive to Glacier)
    4. Host the backup files on a Storage Gateway with Gateway-Cached Volumes and take daily snapshots (Not Cost effective as local storage as well as S3 storage)
  6. A customer implemented AWS Storage Gateway with a gateway-cached volume at their main office. An event takes the link between the main and branch office offline. Which methods will enable the branch office to access their data? Choose 3 answers
    1. Use a HTTPS GET to the Amazon S3 bucket where the files are located (gateway volumes are only accessible from the AWS Storage Gateway and cannot be directly accessed using Amazon S3 APIs)
    2. Restore by implementing a lifecycle policy on the Amazon S3 bucket.
    3. Make an Amazon Glacier Restore API call to load the files into another Amazon S3 bucket within four to six hours.
    4. Launch a new AWS Storage Gateway instance AMI in Amazon EC2, and restore from a gateway snapshot
    5. Create an Amazon EBS volume from a gateway snapshot, and mount it to an Amazon EC2 instance.
    6. Launch an AWS Storage Gateway virtual iSCSI device at the branch office, and restore from a gateway snapshot
  7. A company uses on-premises servers to host its applications. The company is running out of storage capacity. The applications use both block storage and NFS storage. The company needs a high-performing solution that supports local caching without rearchitecting its existing applications. Which combination of actions should a solutions architect take to meet these requirements? (Choose two.)
    1. Mount Amazon S3 as a file system to the on-premises servers.
    2. Deploy an AWS Storage Gateway file gateway to replace NFS storage.
    3. Deploy AWS Snowball Edge to provision NFS mounts to on-premises servers.
    4. Deploy an AWS Storage Gateway volume gateway to replace the block storage.
    5. Deploy Amazon Elastic File System (Amazon EFS) volumes and mount them to on-premises servers.
