AWS Migration Hub provides a centralized, single place to discover the existing servers, plan migrations, and track the status of each application migration.
provides visibility into the application portfolio and streamlines planning and tracking.
helps visualize the connections and the status of the migrating servers and databases, regardless of which migration tool is used.
stores all the data in the selected Home Region and provides a single repository of discovery and migration planning information for the entire portfolio and a single view of migrations into multiple AWS Regions.
helps track the status of the migrations in all AWS Regions, provided the migration tools are available in that Region.
helps understand the environment by letting you explore information collected by AWS discovery tools and stored in the AWS Application Discovery Service’s repository.
supports migration status updates from the following tools:
migration tools send migration status to the selected Home Region
supports EC2 instance recommendations that help estimate the cost of running the existing servers in AWS.
supports Strategy Recommendations that help build a migration and modernization strategy for the applications running on-premises or in AWS.
Migration Hub’s Strategy Recommendations
AWS Migration Hub’s Strategy Recommendations help easily build a migration and modernization strategy for the applications running on-premises or in AWS.
Strategy Recommendations provides guidance on the strategy and tools that help you migrate and modernize at scale.
Strategy Recommendations supports analysis for potential rehost (EC2) and replatform (managed environments such as RDS and Elastic Beanstalk, containers, and OS upgrades) options for applications running on Windows Server 2003 or above or a wide variety of Linux distributions, including Ubuntu, RedHat, Oracle Linux, Debian, and Fedora.
Strategy Recommendations offers additional refactor analysis for custom applications written in C# and Java, and licensed databases (such as Microsoft SQL Server and Oracle).
EC2 Instance Recommendations
EC2 instance recommendations analyze the data collected from each on-premises server, including server specifications and CPU and memory utilization, to recommend the most cost-effective EC2 instance required to run the on-premises workload.
EC2 instance recommendations can be fine-tuned by specifying preferences for AWS purchasing options, AWS Region, EC2 instance type exclusions, and CPU/RAM utilization metric (average, peak, or percentile).
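The recommendation logic above can be sketched as a simple filter-and-minimize over an instance catalog. The catalog, prices, and utilization figures below are made-up for illustration, not real AWS pricing.

```python
# Sketch of the EC2 instance-recommendation idea: pick the cheapest
# instance type whose vCPU/RAM covers the observed peak utilization.
# Catalog entries and prices are illustrative, not actual AWS quotes.

CATALOG = [
    # (instance_type, vcpus, ram_gib, hourly_usd)
    ("t3.medium",  2,  4, 0.0416),
    ("m5.large",   2,  8, 0.096),
    ("m5.xlarge",  4, 16, 0.192),
    ("r5.large",   2, 16, 0.126),
]

def recommend(peak_vcpus, peak_ram_gib, exclude=()):
    """Return the least expensive instance that fits the peak utilization."""
    candidates = [
        c for c in CATALOG
        if c[1] >= peak_vcpus and c[2] >= peak_ram_gib and c[0] not in exclude
    ]
    return min(candidates, key=lambda c: c[3], default=None)

# A server peaking at 2 vCPUs / 12 GiB fits r5.large, which is cheaper
# than the larger m5.xlarge that also fits.
print(recommend(2, 12))
```

The `exclude` parameter mirrors the EC2 instance type exclusion preference mentioned above; purchasing options and Region pricing would add more dimensions to the same filter.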
AWS Certification Exam Practice Questions
Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
Open to further feedback, discussion and correction.
A company wants to migrate its on-premises data center to the AWS Cloud. This includes thousands of virtualized Linux and Microsoft Windows servers, SAN storage, Java and PHP applications with MYSQL, and Oracle databases. There are many department services hosted either in the same data center or externally.
The technical documentation is incomplete and outdated. A solutions architect needs to understand the current environment and estimate the cloud resource costs after the migration.
Which tools or services should be used to plan the cloud migration? (Choose TWO.)
AWS Application Discovery Service helps plan migration to the AWS cloud by collecting usage and configuration data about the on-premises servers.
helps enterprises obtain a snapshot of the current state of their data center servers by collecting server specification information, hardware configuration, performance data, details of running processes, and network connections
integrates with AWS Migration Hub, which simplifies migration tracking as it aggregates migration status information into a single console.
can help view the discovered servers, group them into applications, and then track the migration status of each application.
discovered data for all the Regions is stored in the AWS Migration Hub Home Region.
The data can be exported for analysis in Microsoft Excel or AWS analysis tools such as Amazon Athena and Amazon QuickSight.
supports both agent and agentless-based on-premises tooling, in addition to file-based import for performing discovery and collecting data about the on-premises servers.
AWS Server Migration Service (SMS)
is an agentless service that makes it easier and faster to migrate thousands of on-premises workloads to AWS.
helps automate, schedule, and track incremental replications of live server volumes, making it easier to coordinate large-scale server migrations.
currently supports migration of virtual machines from VMware vSphere, Windows Hyper-V and Azure VM to AWS
supports migrating Windows Server 2003, 2008, 2012, and 2016, and Windows 7, 8, and 10; Red Hat Enterprise Linux (RHEL), SUSE/SLES, CentOS, Ubuntu, Oracle Linux, Fedora, and Debian Linux OS
replicates each server volume, which is saved as a new AMI that can be launched as an EC2 instance
is a significant enhancement of EC2 VM Import/Export service
AWS Database Migration Service (DMS)
helps migrate databases to AWS quickly and securely.
source database remains fully operational during the migration, minimizing downtime to applications that rely on the database.
supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle or Microsoft SQL Server to Amazon Aurora.
monitors replication tasks, network or host failures, and automatically provisions a replacement host in case of failures that can’t be repaired
supports both one-time data migration into RDS and EC2-based databases as well as continuous data replication
supports continuous replication of the data with high availability and can consolidate databases into a petabyte-scale data warehouse by streaming data to Amazon Redshift and Amazon S3
provides the free AWS Schema Conversion Tool (SCT), which automates the conversion of Oracle PL/SQL and SQL Server T-SQL code to the equivalent code in the Amazon Aurora / MySQL dialect of SQL or the equivalent PL/pgSQL code in PostgreSQL
AWS EC2 VM Import/Export
allows easy import of virtual machine images from the existing environment to EC2 instances and export them back to the on-premises environment
allows leveraging of existing investments in the virtual machines, built to meet compliance requirements, configuration management and IT security by bringing those virtual machines into EC2 as ready-to-use instances
Common usages include
Migrate Existing Applications and Workloads to EC2, allowing preserving of the software and settings configured in the existing VMs.
Copy Your VM Image Catalog to EC2
Create a Disaster Recovery Repository for your VM images
AWS VPN
connection utilizes IPSec to establish encrypted network connectivity between the on-premises network and VPC over the Internet.
connections can be configured in minutes and are a good solution for an immediate need, low to modest bandwidth requirements, and tolerance of the inherent variability in Internet-based connectivity.
still requires the Internet and must be configured using a VGW and a CGW
AWS Direct Connect
provides a dedicated physical connection between the corporate network and an AWS Direct Connect location with no data transfer over the Internet.
helps bypass Internet service providers (ISPs) in the network path
helps reduce network costs, increase bandwidth throughput, and provide a more consistent network experience than with Internet-based connection
takes time to set up and involves third parties
is not redundant and would need another Direct Connect connection or a VPN connection
Security
provides a dedicated physical connection without internet
can be used with a VPN for additional security
AWS Import/Export (upgraded to Snowball)
accelerates moving large amounts of data into and out of AWS using secure Snowball appliances
AWS transfers the data directly onto and off of the storage devices using Amazon’s high-speed internal network, bypassing the Internet
Data Migration
for significant data sizes, AWS Import/Export is faster than Internet transfer and more cost-effective than upgrading the connectivity
if loading the data over the Internet would take a week or more, AWS Import/Export should be considered
data from appliances can be imported to S3, Glacier and EBS volumes and exported from S3
not suitable for applications that cannot tolerate offline transfer time
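The "week or more" rule of thumb above can be turned into a quick back-of-the-envelope calculator. The 80% link-utilization factor is an assumption for real-world protocol overhead, not an AWS figure.

```python
# Rule of thumb from the notes: if loading data over the Internet would
# take a week or more, consider AWS Import/Export (Snowball) instead.
# The 80% utilization factor is an illustrative assumption.

def transfer_days(size_gib, link_mbps, utilization=0.8):
    """Estimated days to push size_gib over a link of link_mbps."""
    bits = size_gib * 2**30 * 8
    seconds = bits / (link_mbps * 1_000_000 * utilization)
    return seconds / 86400

def prefer_snowball(size_gib, link_mbps):
    return transfer_days(size_gib, link_mbps) >= 7

# 20 TiB over a 100 Mbps line takes roughly 25 days -> ship a device instead.
print(round(transfer_days(20 * 1024, 100), 1), prefer_snowball(20 * 1024, 100))
```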
Security
Snowball uses an industry-standard Trusted Platform Module (TPM) that has a dedicated processor designed to detect any unauthorized modifications to the hardware, firmware, or software to physically secure the AWS Snowball device.
AWS Snowball
is a petabyte-scale data transfer service built around a secure, suitcase-sized device that moves data into and out of the AWS Cloud quickly and efficiently.
transfers the data to S3 bucket
transfer times are about a week from start to finish.
are commonly used to ship terabytes or petabytes of analytics data, healthcare and life sciences data, video libraries, image repositories, backups, and archives as part of data center shutdown, tape replacement, or application migration projects.
AWS Snowball Edge devices
have slightly larger storage capacity and an embedded computing platform that helps perform simple processing tasks.
can be rack shelved and may also be clustered together, making it simpler to collect and store data in extremely remote locations.
commonly used in environments with intermittent connectivity (such as manufacturing, industrial, and transportation); or in extremely remote locations (such as military or maritime operations) before shipping them back to AWS data centers.
delivers serverless computing applications at the network edge using AWS Greengrass and Lambda functions.
common use cases include capturing IoT sensor streams, on-the-fly media transcoding, image compression, metrics aggregation and industrial control signaling and alarming.
AWS Snowmobile
moves up to 100PB of data (equivalent to 1,250 AWS Snowball devices) in a 45-foot long ruggedized shipping container and is ideal for multi-petabyte or Exabyte-scale digital media migrations and datacenter shutdowns.
arrives at the customer site and appears as a network-attached data store for more secure, high-speed data transfer. After data is transferred to Snowmobile, it is driven back to an AWS Region where the data is loaded into S3.
is tamper-resistant, waterproof, and temperature controlled with multiple layers of logical and physical security — including encryption, fire suppression, dedicated security personnel, GPS tracking, alarm monitoring, 24/7 video surveillance, and an escort security vehicle during transit.
AWS Storage Gateway
connects an on-premises software appliance with cloud-based storage to provide seamless and secure integration between an organization’s on-premises IT environment and the AWS storage infrastructure
provides low-latency performance by maintaining frequently accessed data on-premises while securely storing all of the data encrypted in S3 or Glacier.
for disaster recovery scenarios, Storage Gateway, together with EC2, can serve as a cloud-hosted solution that mirrors the entire production environment
Data Migration
with gateway-cached volumes, S3 can be used to hold the primary data while frequently accessed data is cached locally for faster access, reducing the need to scale the on-premises storage infrastructure
with gateway-stored volumes, entire data is stored locally while asynchronously backing up data to S3
with gateway-VTL, offline data archiving can be performed by presenting existing backup application with an iSCSI-based VTL consisting of a virtual media changer and virtual tape drives
Security
Encrypts all data in transit to and from AWS by using SSL/TLS.
All data in AWS Storage Gateway is encrypted at rest using AES-256.
Authentication between the gateway and iSCSI initiators can be secured by using Challenge-Handshake Authentication Protocol (CHAP).
Amazon S3
Files up to 5 GB can be transferred using a single operation
Multipart uploads can be used to upload files up to 5 TB and speed up data uploads by dividing the file into multiple parts
transfer rate still limited by the network speed
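The 5 GB and 5 TB figures above come from S3's actual multipart constraints (up to 10,000 parts per upload, each part 5 MiB–5 GiB, objects up to 5 TiB). A sketch of picking a valid part size for a given object:

```python
import math

# Real S3 multipart limits: up to 10,000 parts per upload, parts of
# 5 MiB - 5 GiB (the last part may be smaller), objects up to 5 TiB;
# a single PUT without multipart caps at 5 GB.
MAX_PARTS  = 10_000
MIN_PART   = 5 * 2**20      # 5 MiB
MAX_OBJECT = 5 * 2**40      # 5 TiB

def plan_parts(object_size, part_size=8 * 2**20):
    """Pick a valid part size (growing it if needed); return (part_size, parts)."""
    if object_size > MAX_OBJECT:
        raise ValueError("S3 objects cannot exceed 5 TiB")
    part = max(part_size, MIN_PART)
    if math.ceil(object_size / part) > MAX_PARTS:
        part = math.ceil(object_size / MAX_PARTS)  # smallest size fitting 10k parts
    return part, math.ceil(object_size / part)

# A 1 TiB object with default 8 MiB parts would need 131,072 parts,
# so the part size is grown to stay within the 10,000-part limit.
print(plan_parts(2**40))
```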
Security
Data in transit can be secured by using SSL/TLS or client-side encryption.
Encrypt data at rest by performing server-side encryption using Amazon S3-Managed Keys (SSE-S3), AWS Key Management Service (KMS)-Managed Keys (SSE-KMS), or Customer Provided Keys (SSE-C), or by performing client-side encryption using an AWS KMS-managed customer master key (CMK) or a client-side master key.
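The server-side encryption modes above map to real S3 REST request headers; a small sketch of that mapping (the key ID value is a placeholder, and SSE-C/client-side encryption are left out since they need key-material handling):

```python
# Map the S3 server-side encryption modes to their actual request headers.
# Header names are the real S3 REST API headers; the key ID is a placeholder.

def sse_headers(mode, kms_key_id=None):
    if mode == "SSE-S3":
        return {"x-amz-server-side-encryption": "AES256"}
    if mode == "SSE-KMS":
        headers = {"x-amz-server-side-encryption": "aws:kms"}
        if kms_key_id:
            headers["x-amz-server-side-encryption-aws-kms-key-id"] = kms_key_id
        return headers
    raise ValueError("SSE-C and client-side encryption need key material handling")

print(sse_headers("SSE-S3"))
print(sse_headers("SSE-KMS", "placeholder-key-id"))
```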
AWS Certification Exam Practice Questions
You must architect the migration of a web application to AWS. The application consists of Linux web servers running a custom web server. You are required to save the logs generated from the application to a durable location. What options could you select to migrate the application to AWS? (Choose 2)
Create an AWS Elastic Beanstalk application using the custom web server platform. Specify the web server executable and the application project and source files. Enable log file rotation to Amazon Simple Storage Service (S3). (EB does not work with Custom server executable)
Create a Dockerfile for the application. Create an AWS OpsWorks stack consisting of a custom layer. Create custom recipes to install Docker and to deploy your Docker container using the Dockerfile. Create custom recipes to install and configure the application to publish the logs to Amazon CloudWatch Logs (although this is one of the options, the last sentence mentions configuring the application to push the logs, which would need changes to the application as it needs to use the SDK or CLI)
Create Dockerfile for the application. Create an AWS OpsWorks stack consisting of a Docker layer that uses the Dockerfile. Create custom recipes to install and configure Amazon Kinesis to publish the logs into Amazon CloudWatch. (Kinesis not needed)
Create a Dockerfile for the application. Create an AWS Elastic Beanstalk application using the Docker platform and the Dockerfile. Enable logging the Docker configuration to automatically publish the application logs. Enable log file rotation to Amazon S3. (Use Docker configuration with awslogs and EB with Docker)
Use VM import/Export to import a virtual machine image of the server into AWS as an AMI. Create an Amazon Elastic Compute Cloud (EC2) instance from AMI, and install and configure the Amazon CloudWatch Logs agent. Create a new AMI from the instance. Create an AWS Elastic Beanstalk application using the AMI platform and the new AMI. (Use VM Import/Export to create AMI and CloudWatch logs agent to log)
Your company hosts an on-premises legacy engineering application with 900GB of data shared via a central file server. The engineering data consists of thousands of individual files ranging in size from megabytes to multiple gigabytes. Engineers typically modify 5-10 percent of the files a day. Your CTO would like to migrate this application to AWS, but only if the application can be migrated over the weekend to minimize user downtime. You calculate that it will take a minimum of 48 hours to transfer 900GB of data using your company’s existing 45-Mbps Internet connection. After replicating the application’s environment in AWS, which option will allow you to move the application’s data to AWS without losing any data and within the given timeframe?
Copy the data to Amazon S3 using multiple threads and multi-part upload for large files over the weekend, and work in parallel with your developers to reconfigure the replicated application environment to leverage Amazon S3 to serve the engineering files. (Still limited by 45 Mbps speed with minimum 48 hours when utilized to max)
Sync the application data to Amazon S3 starting a week before the migration, on Friday morning perform a final sync, and copy the entire data set to your AWS file server after the sync completes. (Works best as the data changes can be propagated over the week and are fractional, and the downtime would be known)
Copy the application data to a 1-TB USB drive on Friday and immediately send overnight, with Saturday delivery, the USB drive to AWS Import/Export to be imported as an EBS volume, mount the resulting EBS volume to your AWS file server on Sunday. (Downtime is not known when the data upload would be done, although Amazon says the same day the package is received)
Leverage the AWS Storage Gateway to create a Gateway-Stored volume. On Friday copy the application data to the Storage Gateway volume. After the data has been copied, perform a snapshot of the volume and restore the volume as an EBS volume to be attached to your AWS file server on Sunday. (Still uses the internet)
You are tasked with moving a legacy application from a virtual machine running inside your datacenter to an Amazon VPC. Unfortunately, this app requires access to a number of on-premises services, and no one who configured the app still works for your company. Even worse, there’s no documentation for it. What will allow the application running inside the VPC to reach back and access its internal dependencies without being reconfigured? (Choose 3 answers)
An AWS Direct Connect link between the VPC and the network housing the internal services
An Internet Gateway to allow a VPN connection. (A virtual private gateway and a customer gateway are needed)
An Elastic IP address on the VPC instance
An IP address space that does not conflict with the one on-premises
Entries in Amazon Route 53 that allow the Instance to resolve its dependencies’ IP addresses
A VM Import of the current virtual machine
An enterprise runs 103 line-of-business applications on virtual machines in an on-premises data center. Many of the applications are simple PHP, Java, or Ruby web applications, are no longer actively developed, and serve little traffic. Which approach should be used to migrate these applications to AWS with the LOWEST infrastructure costs?
Deploy the applications to single-instance AWS Elastic Beanstalk environments without a load balancer.
Use AWS SMS to create AMIs for each virtual machine and run them in Amazon EC2.
Convert each application to a Docker image and deploy to a small Amazon ECS cluster behind an Application Load Balancer.
Use VM Import/Export to create AMIs for each virtual machine and run them in single-instance AWS Elastic Beanstalk environments by configuring a custom image.
AWS VPN connections are used to extend on-premises data centers to AWS.
VPN connections provide secure IPSec connections between the data center or branch office and the AWS resources.
AWS Site-to-Site VPN or AWS Hardware VPN or AWS Managed VPN
Connectivity can be established by creating an IPSec, hardware VPN connection between the VPC and the remote network.
On the AWS side of the VPN connection, a Virtual Private Gateway (VGW) provides two VPN endpoints for automatic failover.
On the customer side, a customer gateway (CGW) needs to be configured, which is the physical device or software application on the remote side of the VPN connection
AWS Client VPN is a managed client-based VPN service that enables secure access to AWS resources and resources in the on-premises network.
AWS VPN CloudHub
For more than one remote network e.g. multiple branch offices, multiple AWS hardware VPN connections can be created via the VPC to enable communication between these networks
AWS Software VPN
A VPN connection can be created to the remote network by using an EC2 instance in the VPC that’s running a third-party software VPN appliance.
AWS does not provide or maintain third-party software VPN appliances; however, there is a range of products provided by partners and open source communities.
AWS Direct Connect provides a dedicated private connection from a remote network to the VPC. Direct Connect can be combined with an AWS hardware VPN connection to create an IPsec-encrypted connection
VPN Components
Virtual Private Gateway – VGW
A virtual private gateway is the VPN concentrator on the AWS side of the VPN connection
Customer Gateway – CGW
A customer gateway is a physical device or software application on the customer side of the VPN connection.
When a VPN connection is created, the VPN tunnel comes up when traffic is generated from the remote side of the VPN connection.
By default, VGW is not the initiator; CGW must bring up the tunnels for the Site-to-Site VPN connection by generating traffic and initiating the Internet Key Exchange (IKE) negotiation process.
If the VPN connection experiences a period of idle time, usually 10 seconds depending on the configuration, the tunnel may go down. To prevent this, use a network monitoring tool to generate keepalive pings, e.g. by using IP SLA.
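The keepalive idea above can be sketched as a tiny scheduler that decides when a ping is due; the 10-second idle limit matches the notes, while the 2-second safety margin is an illustrative assumption.

```python
# The notes say an idle Site-to-Site VPN tunnel (often ~10 s, configuration
# dependent) can drop, so a monitoring tool must generate keepalive traffic.
# Minimal sketch: decide when a keepalive ping is due, with a safety margin.

IDLE_LIMIT = 10.0  # seconds of silence before the tunnel may drop (from the notes)

def keepalive_due(last_traffic_ts, now, margin=2.0):
    """Send a ping once the tunnel has been quiet for IDLE_LIMIT - margin seconds."""
    return (now - last_traffic_ts) >= (IDLE_LIMIT - margin)

print(keepalive_due(100.0, 105.0))   # only 5 s idle: no ping needed yet
print(keepalive_due(100.0, 109.0))   # 9 s idle: ping before the tunnel drops
```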
Transit Gateway
A transit gateway is a transit hub that can be used to interconnect VPCs and on-premises networks.
A Site-to-Site VPN connection on a transit gateway can support either IPv4 traffic or IPv6 traffic inside the VPN tunnels.
A Site-to-Site VPN connection offers two VPN tunnels between a VGW or a transit gateway on the AWS side, and a CGW (which represents a VPN device) on the remote (on-premises) side.
VPN Routing Options
For a VPN connection, the route table for the subnets should be updated with the type of routing (static or dynamic) that you plan to use.
Route tables determine where network traffic is directed. Traffic destined for the VPN connections must be routed to the virtual private gateway.
The type of routing can depend on the make and model of the CGW device
Static Routing
If your device does not support BGP, specify static routing.
Using static routing, the routes (IP prefixes) that should be communicated to the virtual private gateway can be specified.
Devices that don’t support BGP may also perform health checks to assist failover to the second tunnel when needed.
BGP Dynamic Routing
If the VPN device supports Border Gateway Protocol (BGP), specify dynamic routing with the VPN connection.
When using a BGP device, static routes need not be specified to the VPN connection because the device uses BGP for auto-discovery and to advertise its routes to the virtual private gateway.
BGP-capable devices are recommended as the BGP protocol offers robust liveness detection checks that can assist failover to the second VPN tunnel if the first tunnel goes down.
Only IP prefixes known to the virtual private gateway, either through BGP advertisement or static route entry, can receive traffic from the VPC.
Virtual private gateway does not route any other traffic destined outside of the advertised BGP, static route entries, or its attached VPC CIDR.
VPN Route Priority
Longest prefix match applies.
If the prefixes are the same, then the VGW prioritizes routes as follows, from most preferred to least preferred:
BGP propagated routes from an AWS Direct Connect connection
Manually added static routes for a Site-to-Site VPN connection
BGP propagated routes from a Site-to-Site VPN connection
Prefix with the shortest AS PATH is preferred for matching prefixes where each Site-to-Site VPN connection uses BGP
Path with the lowest multi-exit discriminators (MEDs) value is preferred when the AS PATHs are the same length and if the first AS in the AS_SEQUENCE is the same across multiple paths.
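The priority rules above can be expressed as a single sort key: longest prefix first, then route source, then AS_PATH length, then MED. The route tuples below are illustrative.

```python
import ipaddress

# Sketch of the VGW route-selection order described above: longest prefix
# wins; on a tie, Direct Connect BGP routes beat static Site-to-Site routes,
# which beat BGP Site-to-Site routes; then shorter AS_PATH, then lower MED.

SOURCE_PREF = {"dx_bgp": 0, "s2s_static": 1, "s2s_bgp": 2}

def route_key(route):
    prefix, source, as_path_len, med = route
    plen = ipaddress.ip_network(prefix).prefixlen
    return (-plen, SOURCE_PREF[source], as_path_len, med)

def best_route(routes):
    return min(routes, key=route_key)

routes = [
    ("10.0.0.0/16", "s2s_bgp",    2, 100),
    ("10.0.0.0/16", "s2s_static", 0,   0),
    ("10.0.0.0/24", "s2s_bgp",    5, 200),  # longer prefix wins outright
]
print(best_route(routes))
```

With the /24 removed, the static Site-to-Site route would beat the BGP one for the same /16 prefix, matching the preference order in the list above.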
VPN Limitations
supports only IPSec tunnel mode. Transport mode is currently not supported.
supports only one VGW attached to a VPC at a time.
does not support IPv6 traffic on a virtual private gateway.
does not support Path MTU Discovery.
does not support overlapping CIDR blocks for the networks. It is recommended to use non-overlapping CIDR blocks.
does not support transitive routing. So, for traffic from on-premises to AWS via a virtual private gateway, it
does not support Internet connectivity through an Internet Gateway
does not support Internet connectivity through a NAT Gateway
does not support access to VPC-peered resources through VPC Peering
However, Internet connectivity through a NAT instance and access to VPC Interface Endpoint or PrivateLink services are supported.
currently provides a maximum bandwidth of 1.25 Gbps per tunnel.
VPN Monitoring
AWS Site-to-Site VPN automatically sends notifications to the AWS Health Dashboard
AWS Site-to-Site VPN is integrated with CloudWatch with the following metrics available
TunnelState
The state of the tunnels.
For static VPNs, 0 indicates DOWN and 1 indicates UP.
For BGP VPNs, 1 indicates ESTABLISHED and 0 is used for all other states.
For both types of VPNs, values between 0 and 1 indicate at least one tunnel is not UP.
TunnelDataIn
The bytes received on the AWS side of the connection through the VPN tunnel from a customer gateway.
TunnelDataOut
The bytes sent from the AWS side of the connection through the VPN tunnel to the customer gateway.
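The TunnelState semantics above aggregate both tunnels of a connection, so the metric value itself tells you whether the connection is healthy, dead, or degraded. A small interpreter sketch (the status labels are illustrative):

```python
# Interpret the Site-to-Site VPN TunnelState CloudWatch metric:
# 1 -> all tunnels up, 0 -> all tunnels down, a value in between ->
# at least one tunnel is not up (e.g. 0.5 means one of the two is down).

def tunnel_status(tunnel_state_value):
    if tunnel_state_value == 1:
        return "ALL_UP"
    if tunnel_state_value == 0:
        return "ALL_DOWN"
    return "DEGRADED"

print(tunnel_status(1), tunnel_status(0.5), tunnel_status(0))
```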
VPN Connection Redundancy
A VPN connection is used to connect the customer network to a VPC.
Each VPN connection has two tunnels to help ensure connectivity in case one of the VPN connections becomes unavailable, with each tunnel using a unique virtual private gateway public IP address.
Both tunnels should be configured for redundancy.
When one tunnel becomes unavailable, e.g. down for maintenance, network traffic is automatically routed to the available tunnel for that specific VPN connection.
To protect against a loss of connectivity in case the customer gateway becomes unavailable, a second VPN connection can be set up to the VPC and virtual private gateway by using a second customer gateway.
Customer gateway IP address for the second VPN connection must be publicly accessible.
By using redundant VPN connections and CGWs, maintenance on one of the customer gateways can be performed while traffic continues to flow over the second customer gateway’s VPN connection.
Dynamically routed VPN connections using the Border Gateway Protocol (BGP) are recommended, if available, to exchange routing information between the customer gateways and the virtual private gateways.
Statically routed VPN connections require static routes for the network to be entered on the customer gateway side.
BGP-advertised and statically entered route information allow gateways on both sides to determine which tunnels are available and reroute traffic if a failure occurs.
Multiple Site-to-Site VPN Connections
VPC has an attached virtual private gateway, and the remote network includes a customer gateway, which must be configured to enable the VPN connection.
Routing must be set up so that any traffic from the VPC bound for the remote network is routed to the virtual private gateway.
Each VPN connection has two tunnels that can be configured on the customer router, so the connection is not a single point of failure
Multiple VPN connections to a single VPC can be created, and a second CGW can be configured to create a redundant connection to the same external location or to create VPN connections to multiple geographic locations.
VPN CloudHub
VPN CloudHub can be used to provide secure communication between multiple on-premises sites if you have multiple VPN connections
VPN CloudHub operates on a simple hub-and-spoke model using a Virtual Private gateway in a detached mode that can be used without a VPC.
Design is suitable for customers with multiple branch offices and existing Internet connections who’d like to implement a convenient, potentially low-cost hub-and-spoke model for primary or backup connectivity between these remote offices
(Figure: VPN CloudHub architecture; blue dashed lines indicate network traffic between remote sites routed over their VPN connections.)
AWS VPN CloudHub requires a virtual private gateway with multiple customer gateways.
Each customer gateway must use a unique Border Gateway Protocol (BGP) Autonomous System Number (ASN)
Customer gateways advertise the appropriate routes (BGP prefixes) over their VPN connections.
Routing advertisements are received and re-advertised to each BGP peer, enabling each site to send data to and receive data from the other sites.
Each spoke must have a unique ASN, and the sites must not have overlapping IP ranges.
Each site can also send and receive data from the VPC as if they were using a standard VPN connection.
Sites that use AWS Direct Connect connections to the virtual private gateway can also be part of the AWS VPN CloudHub.
To configure the AWS VPN CloudHub,
multiple customer gateways can be created, each with the unique public IP address of the gateway and the ASN.
a VPN connection can be created from each customer gateway to a common virtual private gateway.
each VPN connection must advertise its specific BGP routes. This is done using the network statements in the VPN configuration files for the VPN connection.
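The CloudHub prerequisites above (unique BGP ASN per customer gateway, no overlapping IP ranges between spokes) can be checked mechanically. The site list below is made-up for illustration.

```python
import ipaddress

# Validate the VPN CloudHub prerequisites from the notes: every customer
# gateway needs a unique BGP ASN, and spoke IP ranges must not overlap.

def validate_cloudhub(sites):
    """sites: list of (name, asn, cidr); returns a list of problem strings."""
    problems = []
    seen_asn = {}
    for name, asn, cidr in sites:
        if asn in seen_asn:
            problems.append(f"{name} reuses ASN {asn} (also used by {seen_asn[asn]})")
        seen_asn.setdefault(asn, name)
    nets = [(name, ipaddress.ip_network(cidr)) for name, _, cidr in sites]
    for i, (n1, net1) in enumerate(nets):
        for n2, net2 in nets[i + 1:]:
            if net1.overlaps(net2):
                problems.append(f"{n1} and {n2} have overlapping IP ranges")
    return problems

sites = [
    ("office-a", 65001, "10.1.0.0/16"),
    ("office-b", 65002, "10.2.0.0/16"),
    ("office-c", 65002, "10.2.128.0/17"),  # duplicate ASN and overlapping range
]
print(validate_cloudhub(sites))
```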
You have 5 offices in total, and all employee-related information is stored on instances in an AWS VPC. Now all the offices want to connect to the instances in the VPC using VPN. Which of the below helps you implement this?
1. You can have redundant customer gateways between your data center and your VPC
2. You can have multiple locations connected to the AWS VPN CloudHub
3. You have to define 5 different static IP addresses in the route table
1 and 2
1, 2 and 3
You have 15 offices in total, and all employee-related information is stored on instances in an AWS VPC. Now all the offices want to connect to the instances in the VPC using VPN. What problem do you see in this scenario?
You cannot create more than 1 VPN connection with a single VPC (can be created)
You cannot create more than 10 VPN connections with a single VPC (soft limit, can be extended)
When you create multiple VPN connections, the virtual private gateway cannot send network traffic to the appropriate VPN connection using statically assigned routes. (It can route the traffic to the correct connection)
Statically assigned routes cannot be configured in case of more than 1 VPN with the virtual private gateway. (Can be configured)
None of above
You have been asked to virtually extend two existing data centers into AWS to support a highly available application that depends on existing, on-premises resources located in multiple data centers and static content that is served from an Amazon Simple Storage Service (S3) bucket. Your design currently includes a dual-tunnel VPN connection between your CGW and VGW. Which component of your architecture represents a potential single point of failure that you should consider changing to make the solution more highly available?
Add another VGW in a different Availability Zone and create another dual-tunnel VPN connection.
Add another CGW in a different data center and create another dual-tunnel VPN connection.
Add a second VGW in a different Availability Zone, and a CGW in a different data center, and create another dual-tunnel.
No changes are necessary: the network architecture is currently highly available.
You are designing network connectivity for your fat client application. The application is designed for business travelers who must be able to connect to it from their hotel rooms, cafes, public Wi-Fi hotspots, and elsewhere on the Internet. You do not want to publish the application on the Internet. Which network design meets the above requirements while minimizing deployment and operational costs? [PROFESSIONAL]
Implement AWS Direct Connect, and create a private interface to your VPC. Create a public subnet and place your application servers in it. (High Cost and does not minimize deployment)
Implement Elastic Load Balancing with an SSL listener that terminates the back-end connection to the application. (Needs to be published to internet)
Configure an IPsec VPN connection, and provide the users with the configuration details. Create a public subnet in your VPC, and place your application servers in it. (Instances still in public subnet are internet accessible)
Configure an SSL VPN solution in a public subnet of your VPC, then install and configure SSL VPN client software on all user computers. Create a private subnet in your VPC and place your application servers in it. (Cost effective and can be in private subnet as well)
You are designing a connectivity solution between on-premises infrastructure and Amazon VPC. Your servers on-premises will be communicating with your VPC instances. You will be establishing IPsec tunnels over the internet, using VPN gateways and terminating the IPsec tunnels on AWS-supported customer gateways. Which of the following objectives would you achieve by implementing an IPsec tunnel as outlined above? (Choose 4 answers) [PROFESSIONAL]
End-to-end protection of data in transit
End-to-end Identity authentication
Data encryption across the Internet
Protection of data in transit over the Internet
Peer identity authentication between VPN gateway and customer gateway
Data integrity protection across the Internet
A development team that is currently doing a nightly six-hour build which is lengthening over time on-premises with a large and mostly under utilized server would like to transition to a continuous integration model of development on AWS with multiple builds triggered within the same day. However, they are concerned about cost, security and how to integrate with existing on-premises applications such as their LDAP and email servers, which cannot move off-premises. The development environment needs a source code repository; a project management system with a MySQL database resources for performing the builds and a storage location for QA to pick up builds from. What AWS services combination would you recommend to meet the development team’s requirements? [PROFESSIONAL]
A Bastion host Amazon EC2 instance running a VPN server for access from on-premises, Amazon EC2 for the source code repository with attached Amazon EBS volumes, Amazon EC2 and Amazon RDS MySQL for the project management system, EIP for the source code repository and project management system, Amazon SQS for a build queue, An Amazon Auto Scaling group of Amazon EC2 instances for performing builds and Amazon Simple Email Service for sending the build output. (Bastion is not for VPN connectivity, also SES should not be used)
An AWS Storage Gateway for connecting on-premises software applications with cloud-based storage securely, Amazon EC2 for the source code repository with attached Amazon EBS volumes, Amazon EC2 and Amazon RDS MySQL for the project management system, EIPs for the source code repository and project management system, Amazon Simple Notification Service for a notification initiated build, An Auto Scaling group of Amazon EC2 instances for performing builds and Amazon S3 for the build output. (Storage Gateway does provide secure storage connectivity but still needs VPN. SNS alone cannot handle builds)
An AWS Storage Gateway for connecting on-premises software applications with cloud-based storage securely, Amazon EC2 for the source code repository with attached Amazon EBS volumes, Amazon EC2 and Amazon RDS MySQL for the project management system, EIPs for the source code repository and project management system, Amazon SQS for a build queue, An Amazon Elastic Map Reduce (EMR) cluster of Amazon EC2 instances for performing builds and Amazon CloudFront for the build output. (Storage Gateway provides storage connectivity but still needs VPN for application connectivity. EMR is not ideal for performing builds as it needs normal EC2 instances)
A VPC with a VPN Gateway back to their on-premises servers, Amazon EC2 for the source-code repository with attached Amazon EBS volumes, Amazon EC2 and Amazon RDS MySQL for the project management system, EIPs for the source code repository and project management system, SQS for a build queue, An Auto Scaling group of EC2 instances for performing builds and S3 for the build output. (VPN gateway is required for secure connectivity. SQS for build queue and EC2 for builds)
VPC Interface endpoints enable connectivity to services powered by AWS PrivateLink.
Services include AWS services like CloudTrail, CloudWatch, etc., services hosted by other AWS customers and partners in their own VPCs (referred to as endpoint services), and supported AWS Marketplace partner services.
VPC Interface Endpoints only allow traffic from VPC resources to the endpoints and not vice versa
PrivateLink endpoints can be accessed across both intra- and inter-region VPC peering connections, Direct Connect, and VPN connections.
VPC Interface Endpoints, by default, have an address like vpce-svc-01234567890abcdef.us-east-1.vpce.amazonaws.com which needs application changes to point to the service.
Private DNS name feature allows consumers to use AWS service public default DNS names which would point to the private VPC endpoint service.
Interface Endpoints can be used to create custom applications in VPC and configure them as an AWS PrivateLink-powered service (referred to as an endpoint service) exposed through a Network Load Balancer.
Custom applications can be hosted within AWS or on-premises (via Direct Connect or VPN)
Interface Endpoints Configuration
Create an interface endpoint, and provide the name of the AWS service, endpoint service, or AWS Marketplace service
Choose the subnets in which the endpoint network interfaces for the interface endpoint will be created.
An endpoint network interface is assigned a private IP address from the IP address range of the subnet and keeps this IP address until the interface endpoint is deleted
A private IP address also ensures the traffic remains private without any changes to the route table.
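The configuration steps above map fairly directly onto boto3's `ec2.create_vpc_endpoint` call. A sketch of the request parameters (all IDs and the service name are hypothetical; the actual API call is commented out since it requires AWS credentials):

```python
# Parameters for creating an interface endpoint; one subnet per AZ,
# and a security group that controls inbound traffic to the endpoint ENIs.
params = {
    "VpcEndpointType": "Interface",
    "VpcId": "vpc-0123456789abcdef0",                      # hypothetical VPC
    "ServiceName": "com.amazonaws.us-east-1.monitoring",   # CloudWatch
    "SubnetIds": ["subnet-aaa111", "subnet-bbb222"],       # one per AZ
    "SecurityGroupIds": ["sg-0abc123"],
    "PrivateDnsEnabled": True,  # use the service's default public DNS name
}

# import boto3
# ec2 = boto3.client("ec2", region_name="us-east-1")
# resp = ec2.create_vpc_endpoint(**params)
```

With `PrivateDnsEnabled` set, applications keep using the service's public DNS name while traffic flows through the endpoint.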
VPC Endpoint policy
VPC Endpoint policy is an IAM resource policy attached to an endpoint for controlling access from the endpoint to the specified service.
Endpoint policy, by default, allows full access to any user or service within the VPC, using credentials from any AWS account, to any S3 resource, including S3 resources of an AWS account other than the account with which the VPC is associated
Endpoint policy does not override or replace IAM user policies or service-specific policies (such as S3 bucket policies).
Endpoint policy can be used to restrict which specific resources can be accessed using the VPC Endpoint.
{
"Sid":"AccessToSpecificBucket",
"Effect":"Allow",
"Principal":"*",
"Action":[
"s3:ListBucket",
"s3:GetObject",
],
"Resource":[
"arn:aws:s3:::example-bucket",
"arn:aws:s3:::example-bucket/*"
]
}
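The policy above can also be built and validated programmatically before attaching it to the endpoint (e.g. via boto3's `modify_vpc_endpoint`). A sketch, using the same example bucket name:

```python
import json

def s3_endpoint_policy(bucket):
    """Build a least-privilege VPC endpoint policy for one S3 bucket."""
    return {
        "Statement": [{
            "Sid": "AccessToSpecificBucket",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:ListBucket", "s3:GetObject"],
            "Resource": [
                f"arn:aws:s3:::{bucket}",       # bucket-level (ListBucket)
                f"arn:aws:s3:::{bucket}/*",     # object-level (GetObject)
            ],
        }]
    }

policy_json = json.dumps(s3_endpoint_policy("example-bucket"), indent=2)

# Attaching it (commented out, requires credentials and a real endpoint ID):
# ec2.modify_vpc_endpoint(VpcEndpointId="vpce-12345678",
#                         PolicyDocument=policy_json)
print(policy_json)
```

Note the two Resource forms: `ListBucket` applies to the bucket ARN, while `GetObject` needs the `/*` object ARN.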
Interface Endpoint Limitations
For each interface endpoint, only one subnet per AZ can be selected.
Interface Endpoint supports TCP traffic only.
Endpoints are supported within the same region only.
Endpoints support IPv4 traffic only.
Each interface endpoint can support a bandwidth of up to 10 Gbps per AZ, by default, and automatically scales to 40 Gbps. Additional capacity may be added by reaching out to AWS support.
NACLs for the subnet can restrict traffic and need to be configured properly.
Endpoints cannot be transferred from one VPC to another, or from one service to another.
AWS Certification Exam Practice Questions
An application server needs to be in a private subnet without access to the internet. The solution must retrieve and upload data to an Amazon Kinesis. How should a Solutions Architect design a solution to meet these requirements?
A VPC Gateway Endpoint is a gateway that is a target for a specified route in the route table, used for traffic destined for a supported AWS service.
VPC Gateway Endpoints currently supports S3 and DynamoDB services
VPC Gateway Endpoints do not require an Internet gateway or a NAT device for the VPC.
Gateway endpoints do not enable AWS PrivateLink.
VPC Endpoint policy and Resource-based policies can be used for fine-grained access control.
Gateway Endpoint Configuration
Endpoint requires the VPC and the service to be accessed via the endpoint.
The endpoint needs to be associated with the route table; the endpoint route entry cannot be modified or deleted directly and is removed only by removing the endpoint's association with the route table.
A route is automatically added to the route table with the service's prefix list as the destination and the endpoint ID as the target, e.g. a route with destination pl-68a54001 (com.amazonaws.us-west-2.s3) and target vpce-12345678.
Access to the resources in other services can be controlled by endpoint policies
Security groups need to allow outbound traffic from the VPC to the service specified in the endpoint; use the service's prefix list ID (e.g. the one for com.amazonaws.us-east-1.s3) as the destination in the outbound rule.
Multiple endpoints can be created in a single VPC, for e.g., to multiple services.
Multiple endpoints can be created for the same service but in different route tables.
Multiple endpoints to the same service cannot be specified in a single route table.
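The last rule, one endpoint per service per route table, is easy to violate when planning several endpoints. A toy checker over a hypothetical plan (route-table IDs, service names, and endpoint IDs are made up):

```python
from collections import Counter

# Hypothetical plan: route table -> list of (service, endpoint-id) routes.
plan = {
    "rtb-private-a": [("com.amazonaws.us-west-2.s3", "vpce-11111111")],
    "rtb-private-b": [("com.amazonaws.us-west-2.s3", "vpce-22222222"),
                      ("com.amazonaws.us-west-2.dynamodb", "vpce-33333333")],
}

def conflicting_route_tables(plan):
    """Route tables that route the same service through more than one endpoint."""
    bad = []
    for rtb, routes in plan.items():
        counts = Counter(service for service, _endpoint in routes)
        if any(n > 1 for n in counts.values()):
            bad.append(rtb)
    return bad

print(conflicting_route_tables(plan))  # [] -> no route table has a conflict
```

Different route tables pointing at different endpoints for the same service (as rtb-private-a and rtb-private-b do for S3) is fine; two S3 endpoints in the same route table is not.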
Gateway Endpoint Limitations
are regional and supported within the same Region only.
cannot be created between a VPC and an AWS service in a different region.
support IPv4 traffic only.
cannot be transferred from one VPC to another, or from one service to another service.
connections cannot be extended out of a VPC, i.e., resources across a VPN connection, VPC peering connection, or Direct Connect connection cannot use the endpoint.
VPC Endpoint policy
VPC Endpoint policy is an IAM resource policy attached to an endpoint for controlling access from the endpoint to the specified service.
Endpoint policy, by default, allows full access to any user or service within the VPC, using credentials from any AWS account, to any S3 resource, including S3 resources of an AWS account other than the account with which the VPC is associated
Endpoint policy does not override or replace IAM user policies or service-specific policies (such as S3 bucket policies).
Endpoint policy can be used to restrict which specific resources can be accessed using the VPC Endpoint.
{
"Sid":"AccessToSpecificBucket",
"Effect":"Allow",
"Principal":"*",
"Action":[
"s3:ListBucket",
"s3:GetObject",
],
"Resource":[
"arn:aws:s3:::example-bucket",
"arn:aws:s3:::example-bucket/*"
]
}
S3 Bucket Policies
IAM policy or bucket policy can’t be used to allow access from a VPC IPv4 CIDR range as the VPC CIDR blocks can be overlapping or identical, which might lead to unexpected results.
aws:SourceIp condition can’t be used in the IAM policies for requests to S3 through a VPC endpoint.
S3 Bucket Policies can be used to restrict access through the VPC endpoint only.
{
"Version":"2012-10-17",
"Id":"Access-to-bucket-using-specific-endpoint",
"Statement":[
{
"Sid":"Access-to-specific-VPCE-only",
"Effect":"Deny",
"Principal":"*",
"Action":"s3:*",
"Resource":["arn:aws:s3:::example_bucket",
"arn:aws:s3:::example_bucket/*"],
"Condition":{
"StringNotEquals":{
"aws:sourceVpce":"vpce-1a2b3c4d"
}
}
}
]
}
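The effect of the `StringNotEquals` condition above can be simulated: the Deny applies to any request whose `aws:sourceVpce` context key is missing or different, which is exactly what makes it block public-internet access. A simplified model (not the full IAM policy evaluation logic):

```python
ALLOWED_VPCE = "vpce-1a2b3c4d"  # the endpoint ID from the bucket policy

def is_denied(request_context):
    """Simplified model of the Deny statement above: deny unless the
    request arrived through the expected VPC endpoint. Note that a
    missing aws:sourceVpce key (e.g. a request from the public internet)
    also fails StringNotEquals' negation, so the Deny applies."""
    return request_context.get("aws:sourceVpce") != ALLOWED_VPCE

print(is_denied({"aws:sourceVpce": "vpce-1a2b3c4d"}))  # False: allowed through
print(is_denied({}))  # True: request did not come through the endpoint
```

This is why the policy uses Deny with `StringNotEquals` rather than Allow with `StringEquals`: the explicit Deny cannot be overridden by any other Allow.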
VPC Gateway Endpoint Troubleshooting
Verify the endpoint and the service are within the same Region.
DNS resolution must be enabled in the VPC
Route table should have a route to S3 using the gateway VPC endpoint.
Security groups should allow outbound traffic to the VPC endpoint (use the service's prefix list as the destination).
NACLs should allow inbound and outbound traffic.
Gateway Endpoint Policy should define access to the resource
Resource-based policies like the S3 bucket policy should allow access to the VPC endpoint or the VPC.
AWS Certification Exam Practice Questions
You have an application running on an Amazon EC2 instance that uploads 10 GB video objects to Amazon S3. Video uploads are taking longer than expected, despite using multipart upload, because of limited internet bandwidth, resulting in poor application performance. Which action can help improve the upload performance?
Apply an Amazon S3 bucket policy
Use Amazon EBS provisioned IOPS
Use VPC endpoints for S3
Request a service limit increase
What are the services supported by VPC endpoints, using Gateway endpoint type? Choose 2 answers
Amazon S3
Amazon EFS
Amazon DynamoDB
Amazon Glacier
Amazon SQS
An application running on EC2 instances processes sensitive information stored on Amazon S3. The information is accessed over the Internet. The security team is concerned that the Internet connectivity to Amazon S3 is a security risk. Which solution will resolve the security concern?
Access the data through an Internet Gateway.
Access the data through a VPN connection.
Access the data through a NAT Gateway.
Access the data through a VPC endpoint for Amazon S3.
VPC Endpoints enable the creation of a private connection between VPC to supported AWS services and VPC endpoint services powered by PrivateLink using its private IP address
Traffic between VPC and AWS service does not leave the Amazon network
Endpoints are virtual devices that are horizontally scaled, redundant, and highly available VPC components; they allow communication between instances in the VPC and AWS services without imposing availability risks or bandwidth constraints on your network traffic.
Endpoints currently do not support cross-region requests; ensure that the endpoint is created in the same Region as the S3 bucket.
AWS currently supports two types of endpoints:
Interface endpoints enable connectivity to services powered by AWS PrivateLink: AWS services like CloudTrail, CloudWatch, etc., services hosted by other AWS customers and partners in their own VPCs (referred to as endpoint services), and supported AWS Marketplace partner services. They can also expose custom applications, hosted within AWS or on-premises, as PrivateLink-powered endpoint services through a Network Load Balancer.
Gateway endpoints support S3 and DynamoDB.
S3 VPC Endpoints Strategy
S3 is now accessible with both Gateway Endpoints and Interface Endpoints.
AWS Certification Exam Practice Questions
What are the different types of endpoint types supported by VPC endpoints? Choose 2 Answers
Gateway
Classic
Interface
Virtual
Network
You need to design a VPC for a three-tier architecture, a web application consisting of an Elastic Load Balancer (ELB), a fleet of web/application servers, and a backend consisting of an RDS database. The entire Infrastructure must be distributed over 2 availability zones. Which VPC configuration works while assuring the least components are exposed to Internet?
Two public subnets for ELB, two private subnets for the web-servers, two private subnets for RDS and DynamoDB
Two public subnets for ELB and web-servers, two private subnets for RDS and DynamoDB
Two public subnets for ELB, two private subnets for the web-servers, two private subnets for RDS and VPC Endpoints for DynamoDB
Two public subnets for ELB and web-servers, two private subnets for RDS and VPC Endpoints for DynamoDB
A VPC peering connection is a networking connection between two VPCs that enables routing of traffic between them using private IPv4 addresses or IPv6 addresses.
VPC peering connection
can be established between your own VPCs, or with a VPC in another AWS account in the same or different region.
is a one-to-one relationship between two VPCs.
supports intra and inter-region peering connections.
With VPC peering,
Instances in either VPC can communicate with each other as if they are within the same network
AWS uses the existing infrastructure of a VPC to create a peering connection; it is neither a gateway nor a VPN connection and does not rely on a separate piece of physical hardware.
There is no single point of failure for communication or a bandwidth bottleneck
All inter-region traffic is encrypted with no single point of failure, or bandwidth bottleneck. Traffic always stays on the global AWS backbone, and never traverses the public internet, which reduces threats, such as common exploits, and DDoS attacks.
VPC peering does not have any separate charges. However, there are data transfer charges.
VPC Peering Connectivity
To create a VPC peering connection, the owner of the requester VPC sends a request to the owner of the accepter VPC.
Accepter VPC can be owned by the same account or a different AWS account.
Once the Accepter VPC accepts the peering connection request, the peering connection is activated.
Route tables on both the VPCs should be manually updated to allow traffic
Security groups on the instances should allow traffic to and from the peered VPCs.
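The "update both route tables" step above is the one most often missed: traffic only flows when each VPC routes the peer's CIDR through the peering connection. A toy model with hypothetical IDs and CIDRs:

```python
# Hypothetical peering connection between two VPCs.
peering = {"id": "pcx-abc123", "vpc_a": "10.0.0.0/16", "vpc_b": "10.1.0.0/16"}

# Route tables: destination CIDR -> target.
routes_a = {"10.0.0.0/16": "local", "10.1.0.0/16": "pcx-abc123"}
routes_b = {"10.1.0.0/16": "local"}  # missing the return route!

def peering_routes_ok(peering, routes_a, routes_b):
    """Both VPCs must route the peer's CIDR through the peering connection."""
    return (routes_a.get(peering["vpc_b"]) == peering["id"]
            and routes_b.get(peering["vpc_a"]) == peering["id"])

print(peering_routes_ok(peering, routes_a, routes_b))  # False: one-way only
```

A one-way route like this typically shows up as requests that leave VPC A but never get a response, since VPC B has no route back.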
VPC Peering Limitations & Rules
Does not support Overlapping or matching IPv4 or IPv6 CIDR blocks.
Does not support transitive peering relationships, i.e., the VPC does not have access to any other VPCs that the peer VPC may be peered with, even if the connections are established entirely within your own AWS account.
Does not support Edge to Edge Routing Through a Gateway or Private Connection
In a VPC peering connection, the VPC does not have access to any other connection that the peer VPC may have, and vice versa. Connections that the peer VPC can have include:
An Internet connection through an Internet gateway
An Internet connection in a private subnet through a NAT device
A ClassicLink connection to an EC2-Classic instance
A VPC endpoint to an AWS service; for example, an endpoint to S3.
VPC peering connections are limited on the number of active and pending peering connections that you can have per VPC.
Only one peering connection can be established between the same two VPCs at the same time.
Jumbo frames are supported for peering connections within the same region.
A placement group can span peered VPCs that are in the same region; however, you do not get full-bisection bandwidth between instances in peered VPCs
Inter-region VPC peering connections
The Maximum Transmission Unit (MTU) across an inter-region peering connection is 1500 bytes. Jumbo frames are not supported.
Security group rule that references a peer VPC security group cannot be created.
Any tags created for the peering connection are only applied in the account or region in which they were created
Unicast reverse path forwarding in peering connections is not supported
Circa July 2016, an instance's public DNS can be resolved to its private IP address across intra-region peered VPCs (with DNS resolution enabled for the peering connection); across inter-region peered VPCs, the instance's public DNS hostname does not resolve to its private IP address.
VPC Peering Troubleshooting
Verify that the VPC peering connection is in the Active state.
Be sure to update the route tables for the peering connection. Verify that the correct routes exist for connections to the IP address range of the peered VPCs through the appropriate gateway.
Verify that an ALLOW rule exists in the network access control list (NACL) for the required traffic.
Verify that the security group rules allow network traffic between the peered VPCs.
Verify using VPC flow logs that the required traffic isn’t rejected at the source or destination. This rejection might occur due to the permissions associated with security groups or network ACLs.
Be sure that no firewall rules block network traffic between the peered VPCs. Use network utilities such as traceroute (Linux) or tracert (Windows) to check rules for firewalls such as iptables (Linux) or Windows Firewall (Windows).
VPC Peering Architecture
VPC Peering can be used to create shared services or to perform authentication with an on-premises instance.
This helps create a single point of contact, as well as limit the VPN connections to a single account or VPC.
VPC Peering vs Transit Gateway
AWS Certification Exam Practice Questions
You currently have 2 development environments hosted in 2 different VPCs in an AWS account in the same region. There is now a need for resources from one VPC to access another. How can this be accomplished?
Establish a Direct Connect connection.
Establish a VPN connection.
Establish VPC Peering.
Establish Subnet Peering.
A company has an AWS account that contains three VPCs (Dev, Test, and Prod) in the same region. Test is peered to both Prod and Dev. All VPCs have non-overlapping CIDR blocks. The company wants to push minor code releases from Dev to Prod to speed up the time to market. Which of the following options helps the company accomplish this?
Create a new peering connection Between Prod and Dev along with appropriate routes.
Create a new entry to Prod in the Dev route table using the peering connection as the target.
Attach a second gateway to Dev. Add a new entry in the Prod route table identifying the gateway as the target.
The VPCs have non-overlapping CIDR blocks in the same account. The route tables contain local routes for all VPCs.
A company has 2 AWS accounts that have individual VPCs. The VPCs are in different AWS regions and need to communicate with each other. The VPCs have non-overlapping CIDR blocks. Which of the following would be a cost-effective connectivity option?
AWS Certified Solutions Architect – Professional (SAP-C02) Exam Learning Path
AWS Certified Solutions Architect – Professional (SAP-C02) exam is the upgraded pattern of the previous Solution Architect – Professional SAP-C01 exam and was released in Nov. 2022.
SAP-C02 is quite similar to SAP-C01 but has included some new services.
AWS Certified Solutions Architect – Professional (SAP-C02) Exam Content
AWS Certified Solutions Architect – Professional (SAP-C02) exam validates the ability to complete tasks within the scope of the AWS Well-Architected Framework
AWS Certified Solutions Architect – Professional (SAP-C02) Exam Summary
Professional exams are tough, lengthy, and tiresome. Most of the questions and answers options have a lot of prose and a lot of reading that needs to be done, so be sure you are prepared and manage your time well.
Each solution involves multiple AWS services.
AWS Certified Solutions Architect – Professional (SAP-C02) exam has 65 questions to be solved in 170 minutes.
SAP-C02 exam includes two types of questions, multiple-choice and multiple-response.
SAP-C02 has a scaled score between 100 and 1,000. The scaled score needed to pass the exam is 750.
Each question mainly touches multiple AWS services.
Professional exams currently cost $300 + tax.
You can get an additional 30 minutes if English is your second language by requesting Exam Accommodations. It might not be needed for Associate exams but is helpful for Professional and Specialty ones.
As always, mark the questions for review and move on and come back to them after you are done with all.
As always, having a rough architecture or mental picture of the setup helps focus on the areas that you need to improve. Trust me, you will be able to eliminate 2 answers for sure and then need to focus on only the other two. Read the other 2 answers to check the difference area and that would help you reach the right answer or at least have a 50% chance of getting it right.
AWS exams can be taken either at a testing center or online; I prefer to take them online as it provides a lot of flexibility. Just make sure you have a proper place to take the exam with no disturbance and nothing around you.
Also, if you are taking the AWS Online exam for the first time try to join at least 30 minutes before the actual time as I have had issues with both PSI and Pearson with long wait times.
AWS Certified Solutions Architect – Professional (SAP-C02) Exam Topics
AWS Certified Solutions Architect – Professional (SAP-C02) focuses a lot on concepts and services related to Architecture & Design, Scalability, High Availability, Disaster Recovery, Migration, Security, and Cost Control.
S3 Access Logs enable tracking access requests to an S3 bucket.
supports S3 Select feature to query selective data from a single object.
S3 Event Notification enables notifications to be triggered when certain events happen in the bucket and support SNS, SQS, and Lambda as the destination.
File Gateway provides a file interface into S3 and allows storing and retrieving of objects in S3 using industry-standard file protocols such as NFS and SMB.
enables quick and secure data migration with minimal to zero downtime
supports Full and Change Data Capture – CDC migration to support continuous replication for zero downtime migration.
supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations (using SCT) between different database platforms, such as Oracle or Microsoft SQL Server to Aurora.
Ideal for one-time huge data transfers usually for use cases with limited bandwidth from on-premises to AWS.
Understand use cases for data transfer using VPN (quick, slow, uses the Internet), Direct Connect (time to set up, private, recurring transfers), Snow Family (moderate time, private, one-time huge data transfers)
Agent-based discovery can be used for Hyper-V and physical servers.
Agentless discovery can be used for VMware but does not track processes.
AWS Migration Hub provides a central location to collect server and application inventory data for the assessment, planning, and tracking of migrations to AWS and also helps accelerate application modernization following migration.
VPN can provide a cost-effective, quick failover for Direct Connect.
VPN over Direct Connect provides a secure dedicated connection and requires a public virtual interface.
Direct Connect Gateway is a global network device that helps establish connectivity that spans VPCs spread across multiple AWS Regions with a single Direct Connect connection.
Secrets Manager supports random generation and automatic rotation of secrets, which is not provided by SSM Parameter Store.
Costs more than SSM Parameter Store.
Amazon Macie is a data security and data privacy service that uses ML and pattern matching to discover and protect sensitive data in S3.
AWS Security Hub is a cloud security posture management service that performs security best practice checks, aggregates alerts, and enables automated remediation.
Lambda running in a VPC requires a NAT Gateway to communicate with external public services.
Lambda CPU allocation cannot be set directly; it scales proportionally with the configured memory, so increase memory to get more CPU.
Reserved concurrency limits can be defined to limit a function's scaling and reduce the impact on downstream resources or other functions
Lambda aliases support canary deployments via weighted traffic shifting between two function versions
Lambda supports packaging and deploying functions as container images
Reserved Concurrency both reserves and caps the maximum number of concurrent instances for the function
Provisioned Concurrency provides greater control over the performance of serverless applications and helps keep functions initialized and hyper-ready to respond in double-digit milliseconds.
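The two concurrency controls above map to two Lambda API calls. A minimal sketch, building the boto3 request parameters (function and alias names are hypothetical; the actual calls are shown commented out since they need AWS credentials):

```python
def reserved_concurrency_params(function_name: str, reserved: int) -> dict:
    """Parameters for lambda.put_function_concurrency, which both
    reserves and caps the function's concurrent executions."""
    return {"FunctionName": function_name,
            "ReservedConcurrentExecutions": reserved}

def provisioned_concurrency_params(function_name: str, qualifier: str, count: int) -> dict:
    """Parameters for lambda.put_provisioned_concurrency_config; the
    qualifier must be a published version or an alias, not $LATEST."""
    return {"FunctionName": function_name,
            "Qualifier": qualifier,
            "ProvisionedConcurrentExecutions": count}

# Applying them with boto3 (hypothetical names):
# import boto3
# lam = boto3.client("lambda")
# lam.put_function_concurrency(**reserved_concurrency_params("order-processor", 100))
# lam.put_provisioned_concurrency_config(
#     **provisioned_concurrency_params("order-processor", "live", 10))
```

Note that provisioned concurrency is billed while configured, whereas reserved concurrency has no extra cost.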
Step Functions helps developers use AWS services to build distributed applications, automate processes, orchestrate microservices, and create data and machine learning (ML) pipelines.
For least privilege, the role should be assigned to the Task.
awsvpc network mode gives ECS tasks the same networking properties as EC2 instances.
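Both points above show up in the task definition: the task role carries the least-privilege permissions and awsvpc gives the task its own ENI. A sketch of the request body for `ecs.register_task_definition` (family, ARNs, and container values are made up):

```python
def task_definition(family: str, task_role_arn: str, exec_role_arn: str,
                    container: dict) -> dict:
    """Build an ECS task definition request that grants permissions via the
    task role (least privilege) rather than the container instance role."""
    return {
        "family": family,
        "networkMode": "awsvpc",            # task gets its own elastic network interface
        "taskRoleArn": task_role_arn,       # role assumed by the application in the task
        "executionRoleArn": exec_role_arn,  # role ECS uses to pull images / write logs
        "requiresCompatibilities": ["FARGATE"],
        "cpu": "256",
        "memory": "512",
        "containerDefinitions": [container],
    }

td = task_definition(
    "web",                                            # hypothetical names/ARNs
    "arn:aws:iam::123456789012:role/webTaskRole",
    "arn:aws:iam::123456789012:role/ecsExecRole",
    {"name": "web", "image": "nginx:latest",
     "portMappings": [{"containerPort": 80}]},
)
# boto3.client("ecs").register_task_definition(**td)
```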
Disaster Recovery
The Disaster Recovery whitepaper, although outdated, is still worth reviewing; make sure you understand the differences and implementation of each type, esp. pilot light and warm standby, w.r.t. RTO and RPO.
Compute
Make components available in an alternate region,
Backup and Restore using either snapshots or AMIs that can be restored.
Keep minimal, low-scale capacity running that can be scaled up once failover happens (pilot light/warm standby)
Use fully running compute in an active-active configuration with health checks (multi-site)
Use CloudFormation to create and scale infrastructure as needed
Storage
S3 and EFS support cross-region replication
DynamoDB supports Global tables for multi-master, active-active inter-region storage needs.
RDS supports cross-region read replicas which can be promoted to master in case of a disaster. The failover can be automated using Route 53, CloudWatch, and Lambda functions.
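The promote-and-repoint flow above can be sketched with hypothetical helpers that build the boto3 request parameters (identifiers, zone IDs, and endpoints are made up), invoked e.g. from a Lambda function triggered by a CloudWatch alarm on the primary:

```python
def promote_replica_params(replica_id: str) -> dict:
    """Parameters for rds.promote_read_replica, detaching the cross-region
    read replica so it becomes a standalone writable instance."""
    return {"DBInstanceIdentifier": replica_id}

def failover_dns_change(zone_id: str, record: str, new_endpoint: str) -> dict:
    """Route 53 change batch pointing the application's CNAME at the
    promoted replica's endpoint."""
    return {
        "HostedZoneId": zone_id,
        "ChangeBatch": {"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record, "Type": "CNAME", "TTL": 60,
                "ResourceRecords": [{"Value": new_endpoint}],
            },
        }]},
    }

# Hypothetical usage from the failover Lambda:
# import boto3
# boto3.client("rds").promote_read_replica(**promote_replica_params("app-db-replica"))
# boto3.client("route53").change_resource_record_sets(
#     **failover_dns_change("Z123EXAMPLE", "db.example.com",
#                           "app-db-replica.abc123.eu-west-1.rds.amazonaws.com"))
```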
Network
Route 53 failover routing with health checks to failover across regions.
CloudFront Origin Groups support primary and secondary endpoints with failover.
AWS Systems Manager and its various capabilities like Parameter Store, Session Manager, and Patch Manager
Parameter Store provides secure, scalable, centralized, hierarchical storage for configuration data and secret management. It does not support secrets rotation; use Secrets Manager instead.
Session Manager provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys.
Patch Manager helps automate the process of patching managed instances with both security-related and other types of updates.
Handle Disaster Recovery by automating the infrastructure to replicate the environment across regions.
DeletionPolicy can prevent data loss by retaining or backing up (snapshotting) resources such as RDS instances and EBS volumes when a stack is deleted.
Stack policy can prevent stack resources from being unintentionally updated or deleted during a stack update. Stack Policy only applies for Stack updates and not stack deletion.
StackSets helps to create, update, or delete stacks across multiple accounts and Regions with a single operation.
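The DeletionPolicy behavior can be sketched as a minimal template fragment, here expressed as a Python dict for testability (resource names and properties are illustrative, not a complete production template): `Snapshot` takes a final backup before deletion, `Retain` keeps the resource when the stack is deleted.

```python
import json

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AppDatabase": {
            "Type": "AWS::RDS::DBInstance",
            "DeletionPolicy": "Snapshot",   # final DB snapshot taken before deletion
            "Properties": {"Engine": "postgres",
                           "DBInstanceClass": "db.t3.micro",
                           "AllocatedStorage": "20"},
        },
        "DataVolume": {
            "Type": "AWS::EC2::Volume",
            "DeletionPolicy": "Retain",     # volume survives stack deletion
            "Properties": {"Size": 100,
                           "AvailabilityZone": "us-east-1a"},
        },
    },
}

print(json.dumps(template, indent=2))
```

A stack policy would additionally protect these resources during stack updates, but as noted above it does not apply to stack deletion, which is exactly what DeletionPolicy covers.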
helps with cost optimization and service limit checks, in addition to security, performance, and fault tolerance checks.
Compute Optimizer recommends optimal AWS resources for the workloads to reduce costs and improve performance by using machine learning to analyze historical utilization metrics.
AWS Budgets helps see usage-to-date and current estimated charges from AWS, set limits, and receive alerts or notifications.
Cost Allocation Tags can be used to organize AWS resources and track AWS costs at a detailed level.
Cost Explorer helps visualize, understand, manage, and forecast the AWS costs and usage over time.
Amazon WorkSpaces provides a virtual workspace for varied worker types, especially hybrid and remote workers.
Amazon Timestream is a fast, scalable, and serverless time-series database service that makes it easier to store and analyze trillions of events per day.
Amazon Connect is an omnichannel cloud contact center.
Amazon Pinpoint is a flexible, scalable marketing communications service that helps connect with customers over email, SMS, push notifications, or voice
Amazon Rekognition offers pre-trained and customizable computer vision capabilities to extract information and insights from images and videos
AWS Secrets Manager vs Systems Manager Parameter Store
AWS Secrets Manager helps protect secrets needed to access applications, services, and IT resources and can easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle.
AWS Systems Manager Parameter Store provides secure, scalable, centralized, hierarchical storage for configuration data and secret management and can store data such as passwords, database strings, etc.
Storage (limits keep increasing; verify current values)
AWS Systems Manager Parameter Store allows us to store up to
Standard tier – 10,000 parameters, each of which can be up to 4KB
Advanced tier – 100,000 parameters, each of which can be up to 8KB
AWS Secrets Manager enables storing up to 40,000 secrets, each of which can be up to 64 KB.
Encryption
Encryption is optional for Systems Manager Parameter Store (SecureString parameters)
Encryption is mandatory for Secrets Manager and you cannot opt out.
Automated Secret Rotation
Systems Manager Parameter Store does not support out-of-the-box secrets rotation.
AWS Secrets Manager enables database credential rotation on a schedule.
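Scheduled rotation is configured by attaching a rotation Lambda to the secret. A sketch of the request parameters for `secretsmanager.rotate_secret` (secret name and Lambda ARN are hypothetical; for RDS databases, AWS provides ready-made rotation function templates):

```python
def rotation_params(secret_id: str, rotation_lambda_arn: str, days: int = 30) -> dict:
    """Parameters for secretsmanager.rotate_secret: attach a rotation
    Lambda and rotate the secret automatically every `days` days."""
    return {
        "SecretId": secret_id,
        "RotationLambdaARN": rotation_lambda_arn,
        "RotationRules": {"AutomaticallyAfterDays": days},
    }

# Hypothetical usage:
# import boto3
# boto3.client("secretsmanager").rotate_secret(**rotation_params(
#     "prod/app/db",
#     "arn:aws:lambda:us-east-1:123456789012:function:rotate-db-secret"))
```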
Cross-account Access
Systems Manager Parameter Store does not support cross-account access
AWS Secrets Manager supports resource-based IAM policies that grant cross-account access.
Cost (keeps on changing)
Secrets Manager is comparatively costlier than Systems Manager Parameter Store.
AWS Systems Manager Parameter Store comes with no additional cost for the Standard tier.
AWS Secrets Manager costs $0.40 per secret per month, and data retrieval costs $0.05 per 10,000 API calls.
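Using the prices quoted above (which change; check the current pricing page), the monthly cost works out as a simple sum:

```python
def secrets_manager_monthly_cost(num_secrets: int, api_calls: int,
                                 per_secret: float = 0.40,
                                 per_10k_calls: float = 0.05) -> float:
    """Estimate monthly Secrets Manager cost: a flat per-secret charge
    plus a data-retrieval charge per 10,000 API calls."""
    return num_secrets * per_secret + (api_calls / 10_000) * per_10k_calls

# e.g. 100 secrets retrieved 1,000,000 times in a month:
print(secrets_manager_monthly_cost(100, 1_000_000))  # 45.0 (= $40 storage + $5 retrieval)
```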
AWS Certification Exam Practice Questions
Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
Open to further feedback, discussion and correction.
A company uses Amazon RDS for PostgreSQL databases for its data tier. The company must implement password rotation for the databases. Which solution meets this requirement with the LEAST operational overhead?
Store the password in AWS Secrets Manager. Enable automatic rotation on the secret.
Store the password in AWS Systems Manager Parameter Store. Enable automatic rotation on the parameter.
Store the password in AWS Systems Manager Parameter Store. Write an AWS Lambda function that rotates the password.
Store the password in AWS Key Management Service (AWS KMS). Enable automatic rotation on the customer master key (CMK).
EC2 Image Builder is a fully managed AWS service that makes it easier to automate the creation, management, and deployment of customized, secure, and up-to-date server images that are pre-installed and pre-configured with software and settings to meet specific IT standards.
EC2 Image Builder simplifies the building, testing, and deployment of Virtual Machine and container images for use on AWS or on-premises.
Image Builder significantly reduces the effort of keeping images up-to-date and secure by providing a simple graphical interface, built-in automation, and AWS-provided security settings.
Image Builder removes manual steps for updating an image, without the need to build your own automation pipeline.
Image Builder provides a one-stop-shop to build, secure, and test up-to-date Virtual Machine and container images using common workflows.
Image Builder allows image validation for functionality, compatibility, and security compliance with AWS-provided tests and your own tests before using them in production.
Image Builder is offered at no cost, other than the cost of the underlying AWS resources used to create, store, and share the images.
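The scheduled-rebuild workflow can be sketched as the request parameters for boto3's `imagebuilder.create_image_pipeline` (pipeline name, recipe ARN, and infrastructure configuration ARN are hypothetical placeholders):

```python
def image_pipeline_params(name: str, recipe_arn: str, infra_config_arn: str,
                          cron: str = "cron(0 0 * * ? *)") -> dict:
    """Parameters for imagebuilder.create_image_pipeline: rebuild the
    golden image on a schedule so it stays up-to-date and patched."""
    return {
        "name": name,
        "imageRecipeArn": recipe_arn,
        "infrastructureConfigurationArn": infra_config_arn,
        "schedule": {
            "scheduleExpression": cron,   # nightly rebuild in this sketch
            "pipelineExecutionStartCondition": "EXPRESSION_MATCH_ONLY",
        },
    }

# Hypothetical usage:
# import boto3
# boto3.client("imagebuilder").create_image_pipeline(**image_pipeline_params(
#     "golden-ami-pipeline",
#     "arn:aws:imagebuilder:us-east-1:123456789012:image-recipe/web-base/1.0.0",
#     "arn:aws:imagebuilder:us-east-1:123456789012:infrastructure-configuration/default"))
```

The resulting AMI is what answers questions like the one below: instances launched from a pre-baked image skip the long-running install step entirely.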
AWS Certification Exam Practice Questions
A company is running a website on Amazon EC2 instances that are in an Auto Scaling group. When the website traffic increases, additional instances take several minutes to become available because of a long-running user data script that installs software. An AWS engineer must decrease the time that is required for new instances to become available. Which action should the engineer take to meet this requirement?
Reduce the scaling thresholds so that instances are added before traffic increases.
Purchase Reserved Instances to cover 100% of the maximum capacity of the Auto Scaling group.
Update the Auto Scaling group to launch instances that have a storage optimized instance type.
Use EC2 Image Builder to prepare an Amazon Machine Image (AMI) that has pre-installed software.