AWS Application Discovery Service

AWS Application Discovery Service Agentless vs Agent

AWS Application Discovery Service

  • AWS Application Discovery Service helps plan migration to the AWS cloud by collecting usage and configuration data about the on-premises servers.
  • helps enterprises obtain a snapshot of the current state of their data center servers by collecting server specification information, hardware configuration, performance data, details of running processes, and network connections
  • is integrated with AWS Migration Hub,
    • which simplifies migration tracking as it aggregates migration status information into a single console.
    • can help view the discovered servers, group them into applications, and then track the migration status of each application.
  • discovered data for all the regions is stored in the AWS Migration Hub home Region.
  • The data can be exported for analysis in Microsoft Excel or AWS analysis tools such as Amazon Athena and Amazon QuickSight.
  • supports agent-based and agentless on-premises tooling, in addition to file-based import, for performing discovery and collecting data about the on-premises servers.

Application Discovery Service Modes

Agentless discovery

  • is an on-premises application that collects information through agentless methods.
  • can be performed by deploying the Agentless Collector (OVA file) through the VMware vCenter.
  • After Agentless Collector is configured,
    • it identifies VMs and hosts associated with vCenter.
    • collects the following static configuration data: Server hostnames, IP addresses, MAC addresses, and disk resource allocations.
    • Additionally, it collects the utilization data for each VM and computes average and peak utilization for metrics such as CPU, RAM, and Disk I/O.

Agent-based discovery

  • can be performed by deploying the Application Discovery Agent on each of the VMs and physical servers.
  • supports most Windows and Linux operating systems.
  • can be deployed on physical on-premises servers, EC2 instances, and virtual machines.
  • collects static configuration data, detailed time-series system-performance information, inbound and outbound network connections, and processes that are running.
  • pings the Discovery Service at 15-minute intervals for configuration information.
  • transmits data securely to the Discovery Service using TLS encryption.
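
A minimal sketch (Python / boto3) of driving agent-based discovery from the API — listing the registered Discovery Agents, starting data collection, and querying the discovered server inventory. The Region is a placeholder for the Migration Hub home Region, which all Application Discovery Service calls must target.

import boto3

# Application Discovery Service calls must be made against the Migration Hub home Region.
discovery = boto3.client('discovery', region_name='us-west-2')

# List the registered Discovery Agents and their health.
agents = discovery.describe_agents()['agentsInfo']
for agent in agents:
    print(agent['agentId'], agent.get('hostName'), agent['health'])

# Start data collection for the healthy agents.
agent_ids = [a['agentId'] for a in agents if a['health'] == 'HEALTHY']
if agent_ids:
    discovery.start_data_collection_by_agent_ids(agentIds=agent_ids)

# Query the discovered server inventory (covers agent and agentless collected data).
servers = discovery.list_configurations(configurationType='SERVER')['configurations']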

AWS Application Discovery Service Agentless vs Agent

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A company is migrating its on-premises systems to AWS. The user environment consists of the following systems:
    • Windows and Linux virtual machines running on VMware.
    • Physical servers running Red Hat Enterprise Linux.
    The company wants to be able to perform the following steps before migrating to AWS:
    • Identify dependencies between on-premises systems.
    • Group systems together into applications to build migration plans.
    How can these requirements be met?

    1. Install the AWS Systems Manager Discovery Agent on each of the on-premises systems.
    2. Install the AWS Application Discovery Service Discovery Agent on each of the on-premises systems.
    3. Install the AWS Application Discovery Service Discovery Connector on each of the on-premises systems and in VMware vCenter.
    4. Install the AWS Application Discovery Service Discovery Agent on the physical on-premises servers. Install the AWS Application Discovery Service Discovery Connector in VMware vCenter.

References

AWS Simple Notification Service – SNS

SNS Delivery Protocols

Simple Notification Service – SNS

  • Simple Notification Service – SNS is a web service that coordinates and manages the delivery or sending of messages to subscribing endpoints or clients.
  • SNS provides the ability to create a Topic which is a logical access point and communication channel.
  • Each topic has a unique name that identifies the SNS endpoint for publishers to post messages and subscribers to register for notifications.
  • Producers communicate asynchronously with consumers (subscribers) by producing and sending a message to a topic.
  • Producers push messages to a topic they have created or have access to, and SNS matches the topic to a list of subscribers who have subscribed to that topic and delivers the message to each of those subscribers.
  • Subscribers receive all messages published to the topics to which they subscribe, and all subscribers to a topic receive the same messages.
  • Subscribers (i.e., web servers, email addresses, SQS queues, AWS Lambda functions) consume or receive the message or notification over one of the supported protocols (i.e., SQS, HTTP/S, email, SMS, Lambda) when they are subscribed to the topic.
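
A minimal sketch (Python / boto3) of the topic, subscribe, and publish flow described above; the topic name and email address are placeholders.

import boto3

sns = boto3.client('sns')

# Creating a topic is idempotent - the same name always returns the same TopicArn.
topic_arn = sns.create_topic(Name='order-events')['TopicArn']

# Subscribe an email endpoint; the recipient must confirm the subscription before delivery starts.
sns.subscribe(TopicArn=topic_arn, Protocol='email', Endpoint='ops@example.com')

# Publish once; SNS delivers a copy to every confirmed subscriber of the topic.
sns.publish(TopicArn=topic_arn,
            Subject='New order received',
            Message='Order 1234 has been placed.')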

SNS Delivery Protocols

Accessing SNS

  • AWS Management Console
    • The AWS Management Console is the web-based user interface that can be used to manage SNS
  • AWS Command-line Interface (CLI)
    • Provides commands for a broad set of AWS products, and is supported on Windows, Mac, and Linux.
  • AWS Tools for Windows Powershell
    • Provides commands for a broad set of AWS products for those who script in the PowerShell environment
  • AWS SNS Query API
    • The Query API consists of HTTP or HTTPS requests that use the HTTP verbs GET or POST and a Query parameter named Action
  • AWS SDK libraries
    • AWS provides libraries in various languages which provide basic functions that automate tasks such as cryptographically signing your requests, retrying requests, and handling error responses

SNS Supported Transport Protocols

  • HTTP, HTTPS – Subscribers specify a URL as part of the subscription registration; notifications will be delivered through an HTTP POST to the specified URL.
  • Email, Email-JSON – Messages are sent to registered addresses as email. Email-JSON sends notifications as a JSON object, while Email sends text-based email.
  • SQS – Users can specify an SQS queue as the endpoint; SNS will enqueue a notification message to the specified queue (which subscribers can then process using SQS APIs such as ReceiveMessage, DeleteMessage, etc.)
  • SMS – Messages are sent to registered phone numbers as SMS text messages

SNS Supported Endpoints

  • Email Notifications
    • SNS provides the ability to send Email notifications
  • Mobile Push Notifications
    • SNS provides an ability to send push notification messages directly to apps on mobile devices. Push notification messages sent to a mobile endpoint can appear in the mobile app as message alerts, badge updates, or even sound alerts
    • Supported push notification services
      • Amazon Device Messaging (ADM)
      • Apple Push Notification Service (APNS)
      • Google Cloud Messaging (GCM)
      • Windows Push Notification Service (WNS) for Windows 8+ and Windows Phone 8.1+
      • Microsoft Push Notification Service (MPNS) for Windows Phone 7+
      • Baidu Cloud Push for Android devices in China
  • SQS Queues
    • SNS with SQS provides the ability for messages to be delivered to applications that require immediate notification of an event, and also to persist in an SQS queue for other applications to process at a later time (see the fan-out sketch after this list)
    • SNS allows applications to send time-critical messages to multiple subscribers through a “push” mechanism, eliminating the need to periodically check or “poll” for updates.
    • SQS can be used by distributed applications to exchange messages through a polling model, and can be used to decouple sending and receiving components, without requiring each component to be concurrently available.
  • SMS Notifications
    • SNS provides the ability to send and receive Short Message Service (SMS) notifications to SMS-enabled mobile phones and smart phones
  • HTTP/HTTPS Endpoints
    • SNS provides the ability to send notification messages to one or more HTTP or HTTPS endpoints. When you subscribe an endpoint to a topic, you can publish a notification to the topic and Amazon SNS sends an HTTP POST request delivering the contents of the notification to the subscribed endpoint
  • Lambda
    • SNS and Lambda are integrated so Lambda functions can be invoked with SNS notifications.
    • When a message is published to an SNS topic that has a Lambda function subscribed to it, the Lambda function is invoked with the payload of the published message
  • Kinesis Data Firehose
    • Delivers events to Firehose delivery streams for archiving and analysis purposes.
    • Through delivery streams, events can be delivered to AWS destinations like S3, Redshift, and OpenSearch Service, or to third-party destinations such as Datadog, New Relic, MongoDB, and Splunk.
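
A sketch (Python / boto3) of the SNS-to-SQS fan-out pattern referenced in the SQS Queues item above, using hypothetical topic and queue names: the queue policy allows the topic to send messages, and RawMessageDelivery strips the SNS JSON envelope from delivered messages.

import json
import boto3

sns = boto3.client('sns')
sqs = boto3.client('sqs')

topic_arn = sns.create_topic(Name='order-events')['TopicArn']
queue_url = sqs.create_queue(QueueName='order-processor')['QueueUrl']
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=['QueueArn'])['Attributes']['QueueArn']

# Allow this topic (and only this topic) to send messages to the queue.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "sns.amazonaws.com"},
        "Action": "sqs:SendMessage",
        "Resource": queue_arn,
        "Condition": {"ArnEquals": {"aws:SourceArn": topic_arn}},
    }],
}
sqs.set_queue_attributes(QueueUrl=queue_url,
                         Attributes={'Policy': json.dumps(policy)})

# Subscribe the queue to the topic.
sns.subscribe(TopicArn=topic_arn, Protocol='sqs', Endpoint=queue_arn,
              Attributes={'RawMessageDelivery': 'true'})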

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. Which of the following notification endpoints or clients does Amazon Simple Notification Service support? Choose 2 answers
    1. Email
    2. CloudFront distribution
    3. File Transfer Protocol
    4. Short Message Service
    5. Simple Network Management Protocol
  2. What happens when you create a topic on Amazon SNS?
    1. The topic is created, and it has the name you specified for it.
    2. An ARN (Amazon Resource Name) is created
    3. You can create a topic on Amazon SQS, not on Amazon SNS.
    4. This question doesn’t make sense.
  3. A user has deployed an application on his private cloud. The user is using his own monitoring tool. He wants to configure that whenever there is an error, the monitoring tool should notify him via SMS. Which of the below mentioned AWS services will help in this scenario?
    1. None because the user infrastructure is in the private cloud.
    2. AWS SNS
    3. AWS SES
    4. AWS SMS
  4. A user wants to make it so that whenever the CPU utilization of the AWS EC2 instance is above 90%, the red light in his bedroom turns on. Which of the below mentioned AWS services is helpful for this purpose?
    1. AWS CloudWatch + AWS SES
    2. AWS CloudWatch + AWS SNS
    3. It is not possible to configure the light with the AWS infrastructure services
    4. AWS CloudWatch and a dedicated software turning on the light
  5. A user is trying to understand AWS SNS. To which of the below mentioned end points is SNS unable to send a notification?
    1. Email JSON
    2. HTTP
    3. AWS SQS
    4. AWS SES
  6. A user is running a webserver on EC2. The user wants to receive the SMS when the EC2 instance utilization is above the threshold limit. Which AWS services should the user configure in this case?
    1. AWS CloudWatch + AWS SES
    2. AWS CloudWatch + AWS SNS
    3. AWS CloudWatch + AWS SQS
    4. AWS EC2 + AWS CloudWatch
  7. A user is planning to host a mobile game on EC2 which sends notifications to active users on either high score or the addition of new features. The user should get this notification when he is online on his mobile device. Which of the below mentioned AWS services can help achieve this functionality?
    1. AWS Simple Notification Service
    2. AWS Simple Queue Service
    3. AWS Mobile Communication Service
    4. AWS Simple Email Service
  8. You are providing AWS consulting services for a company developing a new mobile application that will be leveraging Amazon SNS push for push notifications. In order to send direct notification messages to individual devices, each device registration identifier or token needs to be registered with SNS; however, the developers are not sure of the best way to do this. You advise them to:
    1. Bulk upload the device tokens contained in a CSV file via the AWS Management Console
    2. Let the push notification service (e.g. Amazon Device messaging) handle the registration
    3. Implement a token vending service to handle the registration
    4. Call the CreatePlatformEndpoint API function to register multiple device tokens. (Refer documentation)
  9. A company is running a batch analysis every hour on their main transactional DB running on an RDS MySQL instance to populate their central Data Warehouse running on Redshift. During the execution of the batch, their transactional applications are very slow. When the batch completes, they need to update the top management dashboard with the new data. The dashboard is produced by another system running on-premises that is currently started when a manually-sent email notifies that an update is required. The on-premises system cannot be modified because it is managed by another team. How would you optimize this scenario to solve performance issues and automate the process as much as possible?
    1. Replace RDS with Redshift for the batch analysis and SNS to notify the on-premises system to update the dashboard
    2. Replace RDS with Redshift for the batch analysis and SQS to send a message to the on-premises system to update the dashboard
    3. Create an RDS Read Replica for the batch analysis and SNS to notify the on-premises system to update the dashboard
    4. Create an RDS Read Replica for the batch analysis and SQS to send a message to the on-premises system to update the dashboard.
  10. Which of the following are valid SNS delivery transports? Choose 2 answers.
    1. HTTP
    2. UDP
    3. SMS
    4. DynamoDB
    5. Named Pipes
  11. What is the format of structured notification messages sent by Amazon SNS?
    1. An XML object containing MessageId, UnsubscribeURL, Subject, Message and other values
    2. A JSON object containing MessageId, DuplicateFlag, Message and other values
    3. An XML object containing MessageId, DuplicateFlag, Message and other values
    4. A JSON object containing MessageId, UnsubscribeURL, Subject, Message and other values
  12. Which of the following are valid arguments for an SNS Publish request? Choose 3 answers.
    1. TopicArn
    2. Subject
    3. Destination
    4. Format
    5. Message
    6. Language

References

AWS Migration Hub

AWS Migration Hub

AWS Migration Hub

  • AWS Migration Hub provides a centralized, single place to discover the existing servers, plan migrations, and track the status of each application migration.
  • provides visibility into the application portfolio and streamlines planning and tracking.
  • helps visualize the connections and the status of the migrating servers and databases, regardless of which migration tool is used.
  • stores all the data in the selected Home Region and provides a single repository of discovery and migration planning information for the entire portfolio and a single view of migrations into multiple AWS Regions.
  • helps track the status of the migrations in all AWS Regions, provided the migration tools are available in that Region.
  • helps understand the environment by letting you explore information collected by AWS discovery tools and stored in the AWS Application Discovery Service’s repository.
  • supports migration status updates from integrated migration tools such as AWS Application Migration Service (MGN) and AWS Database Migration Service (DMS).
  • migration tools send migration status to the selected Home Region
  • supports EC2 instance recommendations, which provide the ability to estimate the cost of running the existing servers in AWS.
  • supports Strategy Recommendations, which help easily build a migration and modernization strategy for the applications running on-premises or in AWS.
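
A small sketch (Python / boto3) of reading migration status out of Migration Hub — resolving the home Region first, then listing the task and application states reported by the integrated tools; the printed fields are illustrative.

import boto3

# All Migration Hub calls must target the selected home Region.
home_region = boto3.client('migrationhub-config').get_home_region()['HomeRegion']
mgh = boto3.client('mgh', region_name=home_region)

# Migration status reported by the integrated migration tools.
for task in mgh.list_migration_tasks()['MigrationTaskSummaryList']:
    print(task['ProgressUpdateStream'], task['MigrationTaskName'], task.get('Status'))

# Status of the applications the discovered servers were grouped into.
for app in mgh.list_application_states()['ApplicationStateList']:
    print(app['ApplicationId'], app['ApplicationStatus'])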

AWS Migration Hub

Migration Hub’s Strategy Recommendations

  • AWS Migration Hub’s Strategy Recommendations help easily build a migration and modernization strategy for the applications running on-premises or in AWS.
  • Strategy Recommendations provides guidance on the strategy and tools that help you migrate and modernize at scale.
  • Strategy Recommendations supports analysis for potential rehost (EC2) and replatform (managed environments such as RDS and Elastic Beanstalk, containers, and OS upgrades) options for applications running on Windows Server 2003 or above or a wide variety of Linux distributions, including Ubuntu, Red Hat, Oracle Linux, Debian, and Fedora.
  • Strategy Recommendations offers additional refactor analysis for custom applications written in C# and Java, and licensed databases (such as Microsoft SQL Server and Oracle).

EC2 Instance Recommendations

  • EC2 instance recommendations help analyze the data collected from each on-premises server, including server specification, CPU, and memory utilization, to recommend the most cost-effective EC2 instance required to run the on-premises workload.
  • EC2 instance recommendations can be fine-tuned by specifying preferences for AWS purchasing options, AWS Region, EC2 instance type exclusions, and CPU/RAM utilization metric (average, peak, or percentile).

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A company wants to migrate its on-premises data center to the AWS Cloud. This includes thousands of virtualized Linux and Microsoft Windows servers, SAN storage, Java and PHP applications with MYSQL, and Oracle databases. There are many department services hosted either in the same data center or externally.
    The technical documentation is incomplete and outdated. A solutions architect needs to understand the current environment and estimate the cloud resource costs after the migration.
    Which tools or services should be used to plan the cloud migration? (Choose TWO.)

    1. AWS Application Discovery Service
    2. AWS SMS
    3. AWS X-Ray
    4. Amazon Inspector
    5. AWS Migration Hub

References

AWS_Migration_Hub

AWS Cloud Migration Services

AWS Cloud Migration Services

  • AWS Cloud Migration services help to address a lot of common use cases such as
    • cloud migration,
    • disaster recovery,
    • data center decommission, and
    • content distribution.
  • For migrating data from on-premises to AWS, the major aspects for consideration are
    • amount of data and network speed
    • data security in transit
    • existing application knowledge for recreation

Application & Database Cloud Migration Services

AWS Migration Hub

  • provides a centralized, single place to discover the existing servers, plan migrations, and track the status of each application migration.
  • provides visibility into the application portfolio and streamlines planning and tracking.
  • helps visualize the connections and the status of the migrating servers and databases, regardless of which migration tool is used.
  • stores all the data in the selected Home Region and provides a single repository of discovery and migration planning information for the entire portfolio and a single view of migrations into multiple AWS Regions.
  • helps track the status of the migrations in all AWS Regions, provided the migration tools are available in that Region.
  • helps understand the environment by letting you explore information collected by AWS discovery tools and stored in the AWS Application Discovery Service’s repository.
  • supports migration status updates from integrated migration tools such as AWS Application Migration Service (MGN) and AWS Database Migration Service (DMS).
  • migration tools send migration status to the selected Home Region
  • supports EC2 instance recommendations, which provide the ability to estimate the cost of running the existing servers in AWS.
  • supports Strategy Recommendations, which help easily build a migration and modernization strategy for the applications running on-premises or in AWS.

AWS Application Discovery Service

  • AWS Application Discovery Service helps plan migration to the AWS cloud by collecting usage and configuration data about the on-premises servers.
  • helps enterprises obtain a snapshot of the current state of their data center servers by collecting server specification information, hardware configuration, performance data, details of running processes, and network connections
  • is integrated with AWS Migration Hub,
    • which simplifies migration tracking as it aggregates migration status information into a single console.
    • can help view the discovered servers, group them into applications, and then track the migration status of each application.
  • discovered data for all the regions is stored in the AWS Migration Hub home Region.
  • The data can be exported for analysis in Microsoft Excel or AWS analysis tools such as Amazon Athena and Amazon QuickSight.
  • supports agent-based and agentless on-premises tooling, in addition to file-based import, for performing discovery and collecting data about the on-premises servers.

AWS Server Migration Service (SMS)

  • is an agentless service that makes it easier and faster to migrate thousands of on-premises workloads to AWS.
  • helps automate, schedule, and track incremental replications of live server volumes, making it easier to coordinate large-scale server migrations.
  • currently supports migration of virtual machines from VMware vSphere, Windows Hyper-V, and Azure VM to AWS
  • supports migrating Windows Server 2003, 2008, 2012, and 2016, and Windows 7, 8, and 10; Red Hat Enterprise Linux (RHEL), SUSE/SLES, CentOS, Ubuntu, Oracle Linux, Fedora, and Debian Linux OS
  • replicates each server volume and saves it as a new AMI, which can be launched as an EC2 instance
  • is a significant enhancement of EC2 VM Import/Export service
  • is used to Re-host

AWS Database Migration Service (DMS)

  • helps migrate databases to AWS quickly and securely.
  • source database remains fully operational during the migration, minimizing downtime to applications that rely on the database.
  • supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle or Microsoft SQL Server to Amazon Aurora.
  • monitors replication tasks for network or host failures and automatically provisions a replacement host in case of failures that can’t be repaired
  • supports both one-time data migration into RDS and EC2-based databases as well as for continuous data replication
  • supports continuous replication of the data with high availability and consolidate databases into a petabyte-scale data warehouse by streaming data to Amazon Redshift and Amazon S3
  • provides free AWS Schema Conversion Tool (SCT) that automates the conversion of Oracle PL/SQL and SQL Server T-SQL code to equivalent code in the Amazon Aurora / MySQL dialect of SQL or the equivalent PL/pgSQL code in PostgreSQL
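
A sketch (Python / boto3) of creating a DMS replication task for the continuous-replication scenario above. The endpoint and replication instance ARNs are placeholders that are assumed to already exist; full-load-and-cdc performs the initial copy and then keeps replicating changes so the source stays fully operational.

import boto3

dms = boto3.client('dms')

# Table mappings select which schemas/tables to migrate (here: everything in the 'sales' schema).
table_mappings = """{
  "rules": [{
    "rule-type": "selection",
    "rule-id": "1",
    "rule-name": "include-sales-schema",
    "object-locator": {"schema-name": "sales", "table-name": "%"},
    "rule-action": "include"
  }]
}"""

dms.create_replication_task(
    ReplicationTaskIdentifier='oracle-to-aurora-task',
    SourceEndpointArn='arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE',
    TargetEndpointArn='arn:aws:dms:us-east-1:123456789012:endpoint:TARGET',
    ReplicationInstanceArn='arn:aws:dms:us-east-1:123456789012:rep:INSTANCE',
    MigrationType='full-load-and-cdc',   # initial load + ongoing change data capture
    TableMappings=table_mappings,
)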

AWS EC2 VM Import/Export

  • allows easy import of virtual machine images from existing environment to EC2 instances and export them back to on-premises environment
  • allows leveraging of existing investments in the virtual machines, built to meet compliance requirements, configuration management and IT security by bringing those virtual machines into EC2 as ready-to-use instances
  • Common usages include
    • Migrate Existing Applications and Workloads to EC2, allowing preserving of the software and settings configured in the existing VMs.
    • Copy Your VM Image Catalog to EC2
    • Create a Disaster Recovery Repository for your VM images
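
A sketch (Python / boto3) of the VM Import flow, assuming the VM has already been exported as an OVA and uploaded to an S3 bucket (bucket and key names are placeholders) and that the vmimport service role is configured.

import boto3

ec2 = boto3.client('ec2')

# Start the import; the OVA in S3 is converted into an AMI.
task = ec2.import_image(
    Description='legacy-web-server',
    DiskContainers=[{
        'Description': 'legacy-web-server root disk',
        'Format': 'ova',
        'UserBucket': {'S3Bucket': 'my-import-bucket', 'S3Key': 'legacy-web-server.ova'},
    }],
)

# Poll the task; once completed, the resulting AMI can be launched as an EC2 instance.
status = ec2.describe_import_image_tasks(ImportTaskIds=[task['ImportTaskId']])
print(status['ImportImageTasks'][0].get('Status'))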

Data Transfer Services

VPN

  • connection utilizes IPSec to establish encrypted network connectivity between on-premises network and VPC over the Internet.
  • connections can be configured in minutes and a good solution for an immediate need, have low to modest bandwidth requirements, and can tolerate the inherent variability in Internet-based connectivity.
  • still requires the Internet and is configured using a VGW and a CGW

AWS Direct Connect

  • provides a dedicated physical connection between the corporate network and AWS Direct Connect location with no data transfer over the Internet.
  • helps bypass Internet service providers (ISPs) in the network path
  • helps reduce network costs, increase bandwidth throughput, and provide a more consistent network experience than with Internet-based connection
  • takes time to setup and involves third parties
  • is not redundant and would need another Direct Connect connection or a VPN connection for redundancy
  •  Security
    • provides a dedicated physical connection without internet
    • For additional security can be used with VPN

AWS Import/Export (upgraded to Snowball)

  • accelerates moving large amounts of data into and out of AWS using secure Snowball appliances
  • AWS transfers the data directly onto and off of the storage devices using Amazon’s high-speed internal network, bypassing the Internet
  • Data Migration
    • for significant data sizes, AWS Import/Export is faster than Internet transfer and more cost-effective than upgrading the connectivity
    • if loading the data over the Internet would take a week or more, AWS Import/Export should be considered
    • data from appliances can be imported to S3, Glacier and EBS volumes and exported from S3
    • not suitable for applications that cannot tolerate offline transfer time
  •  Security
    • Snowball uses an industry-standard Trusted Platform Module (TPM) that has a dedicated processor designed to detect any unauthorized modifications to the hardware, firmware, or software to physically secure the AWS Snowball device.

Snow Family

  • AWS Snowball
    • is a petabyte-scale data transfer service built around a secure suitcase-sized device that moves data into and out of the AWS Cloud quickly and efficiently.
    • transfers the data to S3 bucket
    • transfer times are about a week from start to finish.
    • are commonly used to ship terabytes or petabytes of analytics data, healthcare and life sciences data, video libraries, image repositories, backups, and archives as part of data center shutdown, tape replacement, or application migration projects.
  • AWS Snowball Edge devices
    • contain slightly larger capacity and an embedded computing platform that helps perform simple processing tasks.
    • can be rack shelved and may also be clustered together, making it simpler to collect and store data in extremely remote locations.
    • commonly used in environments with intermittent connectivity (such as manufacturing, industrial, and transportation); or in extremely remote locations (such as military or maritime operations) before shipping them back to AWS data centers.
    • delivers serverless computing applications at the network edge using AWS Greengrass and Lambda functions.
    • common use cases include capturing IoT sensor streams, on-the-fly media transcoding, image compression, metrics aggregation and industrial control signaling and alarming.
  • AWS Snowmobile
    • moves up to 100PB of data (equivalent to 1,250 AWS Snowball devices) in a 45-foot long ruggedized shipping container and is ideal for multi-petabyte or Exabyte-scale digital media migrations and datacenter shutdowns.
    • arrives at the customer site and appears as a network-attached data store for more secure, high-speed data transfer. After data is transferred to Snowmobile, it is driven back to an AWS Region where the data is loaded into S3.
    • is tamper-resistant, waterproof, and temperature controlled with multiple layers of logical and physical security — including encryption, fire suppression, dedicated security personnel, GPS tracking, alarm monitoring, 24/7 video surveillance, and an escort security vehicle during transit.

AWS Storage Gateway

  • connects an on-premises software appliance with cloud-based storage to provide seamless and secure integration between an organization’s on-premises IT environment and the AWS storage infrastructure
  • provides low-latency performance by maintaining frequently accessed data on-premises while securely storing all of the data encrypted in S3 or Glacier.
  • for disaster recovery scenarios, Storage Gateway, together with EC2, can serve as a cloud-hosted solution that mirrors the entire production environment
  • Data Migration
    • with gateway-cached volumes, S3 can be used to hold primary data while frequently accessed data is cached locally for faster access, reducing the need to scale the on-premises storage infrastructure
    • with gateway-stored volumes, entire data is stored locally while asynchronously backing up data to S3
    • with gateway-VTL, offline data archiving can be performed by presenting existing backup application with an iSCSI-based VTL consisting of a virtual media changer and virtual tape drives
  •  Security
    • Encrypts all data in transit to and from AWS by using SSL/TLS.
    • All data in AWS Storage Gateway is encrypted at rest using AES-256.
    • Authentication between the gateway and iSCSI initiators can be secured by using Challenge-Handshake Authentication Protocol (CHAP).

Simple Storage Service – S3

  • Data Transfer
    • Files up to 5 GB can be transferred using a single operation
    • Multipart uploads can be used to upload files up to 5 TB and speed up data uploads by dividing the file into multiple parts
    • transfer rate still limited by the network speed
  •  Security
    • Data in transit can be secured by using SSL/TLS or client-side encryption.
    • Encrypt data at-rest by performing server-side encryption using Amazon S3-Managed Keys (SSE-S3), AWS Key Management Service (KMS)-Managed Keys (SSE-KMS), or Customer Provided Keys (SSE-C). Or by performing client-side encryption using AWS KMS–Managed Customer Master Key (CMK) or Client-Side Master Key.
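
A sketch (Python / boto3) of a large-object upload that applies the points above — multipart upload via TransferConfig plus server-side encryption with S3-managed keys (SSE-S3); TLS is used in transit by default. File, bucket, and key names are placeholders.

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client('s3')

# Use multipart upload for objects larger than 100 MB and upload parts in parallel.
config = TransferConfig(multipart_threshold=100 * 1024 * 1024,
                        multipart_chunksize=100 * 1024 * 1024,
                        max_concurrency=10)

s3.upload_file('engineering-data.tar', 'my-migration-bucket', 'archives/engineering-data.tar',
               Config=config,
               ExtraArgs={'ServerSideEncryption': 'AES256'})   # SSE-S3 at rest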

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. You must architect the migration of a web application to AWS. The application consists of Linux web servers running a custom web server. You are required to save the logs generated from the application to a durable location. What options could you select to migrate the application to AWS? (Choose 2)
    1. Create an AWS Elastic Beanstalk application using the custom web server platform. Specify the web server executable and the application project and source files. Enable log file rotation to Amazon Simple Storage Service (S3). (EB does not work with Custom server executable)
    2. Create Dockerfile for the application. Create an AWS OpsWorks stack consisting of a custom layer. Create custom recipes to install Docker and to deploy your Docker container using the Dockerfile. Create custom recipes to install and configure the application to publish the logs to Amazon CloudWatch Logs (although this is one of the option, the last sentence mentions configure the application to push the logs to S3, which would need changes to application as it needs to use SDK or CLI)
    3. Create Dockerfile for the application. Create an AWS OpsWorks stack consisting of a Docker layer that uses the Dockerfile. Create custom recipes to install and configure Amazon Kinesis to publish the logs into Amazon CloudWatch. (Kinesis not needed)
    4. Create a Dockerfile for the application. Create an AWS Elastic Beanstalk application using the Docker platform and the Dockerfile. Enable logging the Docker configuration to automatically publish the application logs. Enable log file rotation to Amazon S3. (Use Docker configuration with awslogs and EB with Docker)
    5. Use VM import/Export to import a virtual machine image of the server into AWS as an AMI. Create an Amazon Elastic Compute Cloud (EC2) instance from AMI, and install and configure the Amazon CloudWatch Logs agent. Create a new AMI from the instance. Create an AWS Elastic Beanstalk application using the AMI platform and the new AMI. (Use VM Import/Export to create AMI and CloudWatch logs agent to log)
  2. Your company hosts an on-premises legacy engineering application with 900GB of data shared via a central file server. The engineering data consists of thousands of individual files ranging in size from megabytes to multiple gigabytes. Engineers typically modify 5-10 percent of the files a day. Your CTO would like to migrate this application to AWS, but only if the application can be migrated over the weekend to minimize user downtime. You calculate that it will take a minimum of 48 hours to transfer 900GB of data using your company’s existing 45-Mbps Internet connection. After replicating the application’s environment in AWS, which option will allow you to move the application’s data to AWS without losing any data and within the given timeframe?
    1. Copy the data to Amazon S3 using multiple threads and multi-part upload for large files over the weekend, and work in parallel with your developers to reconfigure the replicated application environment to leverage Amazon S3 to serve the engineering files. (Still limited by 45 Mbps speed with minimum 48 hours when utilized to max)
    2. Sync the application data to Amazon S3 starting a week before the migration, on Friday morning perform a final sync, and copy the entire data set to your AWS file server after the sync completes. (Works best as the data changes can be propagated over the week and are fractional, and downtime would be known)
    3. Copy the application data to a 1-TB USB drive on Friday and immediately send overnight, with Saturday delivery, the USB drive to AWS Import/Export to be imported as an EBS volume, mount the resulting EBS volume to your AWS file server on Sunday. (Downtime is not known when the data upload would be done, although Amazon says the same day the package is received)
    4. Leverage the AWS Storage Gateway to create a Gateway-Stored volume. On Friday copy the application data to the Storage Gateway volume. After the data has been copied, perform a snapshot of the volume and restore the volume as an EBS volume to be attached to your AWS file server on Sunday. (Still uses the internet)
  3. You are tasked with moving a legacy application from a virtual machine running inside your datacenter to an Amazon VPC. Unfortunately this app requires access to a number of on-premises services and no one who configured the app still works for your company. Even worse there’s no documentation for it. What will allow the application running inside the VPC to reach back and access its internal dependencies without being reconfigured? (Choose 3 answers)
    1. An AWS Direct Connect link between the VPC and the network housing the internal services
    2. An Internet Gateway to allow a VPN connection. (Virtual and Customer gateway is needed)
    3. An Elastic IP address on the VPC instance
    4. An IP address space that does not conflict with the one on-premises
    5. Entries in Amazon Route 53 that allow the Instance to resolve its dependencies’ IP addresses
    6. A VM Import of the current virtual machine
  4. An enterprise runs 103 line-of-business applications on virtual machines in an on-premises data center. Many of the applications are simple PHP, Java, or Ruby web applications, are no longer actively developed, and serve little traffic. Which approach should be used to migrate these applications to AWS with the LOWEST infrastructure costs?
    1. Deploy the applications to single-instance AWS Elastic Beanstalk environments without a load balancer.
    2. Use AWS SMS to create AMIs for each virtual machine and run them in Amazon EC2.
    3. Convert each application to a Docker image and deploy to a small Amazon ECS cluster behind an Application Load Balancer.
    4. Use VM Import/Export to create AMIs for each virtual machine and run them in single-instance AWS Elastic Beanstalk environments by configuring a custom image.

References

AWS VPN

AWS VPC VPN

  • AWS VPN connections are used to extend on-premises data centers to AWS.
  • VPN connections provide secure IPSec connections between the data center or branch office and the AWS resources.
  • AWS Site-to-Site VPN or AWS Hardware VPN or AWS Managed VPN
    • Connectivity can be established by creating an IPSec, hardware VPN connection between the VPC and the remote network.
    • On the AWS side of the VPN connection, a Virtual Private Gateway (VGW) provides two VPN endpoints for automatic failover.
    • On the customer side, a customer gateway (CGW) needs to be configured, which is the physical device or software application on the remote side of the VPN connection
  • AWS Client VPN
    • AWS Client VPN is a managed client-based VPN service that enables secure access to AWS resources and resources in the on-premises network.
  • AWS VPN CloudHub
    • For more than one remote network e.g. multiple branch offices, multiple AWS hardware VPN connections can be created via the VPC to enable communication between these networks
  • AWS Software VPN
    • A VPN connection can be created to the remote network by using an EC2 instance in the VPC that’s running a third-party software VPN appliance.
    • AWS does not provide or maintain third-party software VPN appliances; however, there is a range of products provided by partners and open source communities.
  • AWS Direct Connect provides a dedicated private connection from a remote network to the VPC. Direct Connect can be combined with an AWS hardware VPN connection to create an IPsec-encrypted connection

VPN Components

AWS VPN Components

  • Virtual Private Gateway – VGW
    • A virtual private gateway is the VPN concentrator on the AWS side of the VPN connection
  • Customer Gateway – CGW
    • A customer gateway is a physical device or software application on the customer side of the VPN connection.
    • When a VPN connection is created, the VPN tunnel comes up when traffic is generated from the remote side of the VPN connection.
    • By default, VGW is not the initiator; CGW must bring up the tunnels for the Site-to-Site VPN connection by generating traffic and initiating the Internet Key Exchange (IKE) negotiation process.
    • If the VPN connection experiences a period of idle time, usually 10 seconds, depending on the configuration, the tunnel may go down. To prevent this, use a network monitoring tool to generate keepalive pings, e.g., by using IP SLA.
  • Transit Gateway
    • A transit gateway is a transit hub that can be used to interconnect VPCs and on-premises networks.
    • A Site-to-Site VPN connection on a transit gateway can support either IPv4 traffic or IPv6 traffic inside the VPN tunnels.
  • A Site-to-Site VPN connection offers two VPN tunnels between a VGW or a transit gateway on the AWS side, and a CGW (which represents a VPN device) on the remote (on-premises) side.
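
A sketch (Python / boto3) of wiring the components above together — a customer gateway, a virtual private gateway, and a Site-to-Site VPN connection with static routing. The public IP, ASN, and VPC ID are placeholders.

import boto3

ec2 = boto3.client('ec2')

# Customer gateway: the public IP and BGP ASN of the on-premises VPN device.
cgw = ec2.create_customer_gateway(Type='ipsec.1', PublicIp='203.0.113.12', BgpAsn=65000)
cgw_id = cgw['CustomerGateway']['CustomerGatewayId']

# Virtual private gateway, attached to the VPC.
vgw_id = ec2.create_vpn_gateway(Type='ipsec.1')['VpnGateway']['VpnGatewayId']
ec2.attach_vpn_gateway(VpcId='vpc-0123456789abcdef0', VpnGatewayId=vgw_id)

# Site-to-Site VPN connection; AWS provides two tunnels for redundancy.
vpn = ec2.create_vpn_connection(Type='ipsec.1',
                                CustomerGatewayId=cgw_id,
                                VpnGatewayId=vgw_id,
                                Options={'StaticRoutesOnly': True})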

VPN Routing Options

  • For a VPN connection, the route table for the subnets should be updated with the type of routing (static or dynamic) that you plan to use.
  • Route tables determine where network traffic is directed. Traffic destined for the VPN connections must be routed to the virtual private gateway.
  • The type of routing can depend on the make and model of the CGW device
    • Static Routing
      • If your device does not support BGP, specify static routing.
      • Using static routing, the routes (IP prefixes) can be specified that should be communicated to the virtual private gateway.
      • Devices that don’t support BGP may also perform health checks to assist failover to the second tunnel when needed.
    • BGP Dynamic Routing
      • If the VPN device supports Border Gateway Protocol (BGP), specify dynamic routing with the VPN connection.
      • When using a BGP device, static routes need not be specified to the VPN connection because the device uses BGP for auto-discovery and to advertise its routes to the virtual private gateway.
      • BGP-capable devices are recommended as the BGP protocol offers robust liveness detection checks that can assist failover to the second VPN tunnel if the first tunnel goes down.
  • Only IP prefixes known to the virtual private gateway, either through BGP advertisement or static route entry, can receive traffic from the VPC.
  • Virtual private gateway does not route any other traffic destined outside of the advertised BGP, static route entries, or its attached VPC CIDR.
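
A short sketch (Python / boto3) of the routing side: adding a static route for an on-premises prefix to a VPN connection and enabling route propagation from the virtual private gateway into a subnet route table. The IDs and CIDR are placeholders.

import boto3

ec2 = boto3.client('ec2')

# Static routing: tell the VPN connection which on-premises prefixes it should carry.
ec2.create_vpn_connection_route(VpnConnectionId='vpn-0123456789abcdef0',
                                DestinationCidrBlock='10.10.0.0/16')

# Propagate routes known to the virtual private gateway (static or BGP-learned)
# into the subnet route table automatically.
ec2.enable_vgw_route_propagation(RouteTableId='rtb-0123456789abcdef0',
                                 GatewayId='vgw-0123456789abcdef0')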

VPN Route Priority

  • Longest prefix match applies.
  • If the prefixes are the same, then the VGW prioritizes routes as follows, from most preferred to least preferred:
    • BGP propagated routes from an AWS Direct Connect connection
    • Manually added static routes for a Site-to-Site VPN connection
    • BGP propagated routes from a Site-to-Site VPN connection
    • Prefix with the shortest AS PATH is preferred for matching prefixes where each Site-to-Site VPN connection uses BGP
    • Path with the lowest multi-exit discriminators (MEDs) value is preferred when the AS PATHs are the same length and if the first AS in the AS_SEQUENCE is the same across multiple paths.

VPN Limitations

  • supports only IPSec tunnel mode. Transport mode is currently not supported.
  • supports only one VGW attached to a VPC at a time.
  • does not support IPv6 traffic on a virtual private gateway.
  • does not support Path MTU Discovery.
  • does not support overlapping CIDR blocks for the networks. It is recommended to use non-overlapping CIDR blocks.
  • does not support transitive routing. So for traffic from on-premises to AWS via a virtual private gateway, it
    • does not support Internet connectivity through Internet Gateway
    • does not support Internet connectivity through NAT Gateway
    • does not support VPC Peered resources access through VPC Peering
    • does not support S3, DynamoDB access through VPC Gateway Endpoint
    • However, Internet connectivity through a NAT instance and access to VPC Interface Endpoint or PrivateLink services are supported.
  • currently provides a bandwidth of up to 1.25 Gbps per VPN tunnel.

VPN Monitoring

  • AWS Site-to-Site VPN automatically sends notifications to the AWS Health Dashboard
  • AWS Site-to-Site VPN is integrated with CloudWatch with the following metrics available
    • TunnelState
      • The state of the tunnels.
      • For static VPNs, 0 indicates DOWN and 1 indicates UP.
      • For BGP VPNs, 1 indicates ESTABLISHED and 0 is used for all other states.
      • For both types of VPNs, values between 0 and 1 indicate at least one tunnel is not UP.
    • TunnelDataIn
      • The bytes received on the AWS side of the connection through the VPN tunnel from a customer gateway.
    • TunnelDataOut
      • The bytes sent from the AWS side of the connection through the VPN tunnel to the customer gateway.
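
A sketch (Python / boto3) of a CloudWatch alarm on the TunnelState metric described above, firing when a VPN connection reports no tunnel UP for two consecutive 5-minute periods. The VPN ID and SNS topic ARN are placeholders.

import boto3

cloudwatch = boto3.client('cloudwatch')

cloudwatch.put_metric_alarm(
    AlarmName='site-to-site-vpn-down',
    Namespace='AWS/VPN',
    MetricName='TunnelState',
    Dimensions=[{'Name': 'VpnId', 'Value': 'vpn-0123456789abcdef0'}],
    Statistic='Maximum',                     # 0 means no tunnel was UP during the period
    Period=300,
    EvaluationPeriods=2,
    Threshold=0,
    ComparisonOperator='LessThanOrEqualToThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:network-alerts'],
)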

VPN Connection Redundancy

VPN Connection Redundancy

  • A VPN connection is used to connect the customer network to a VPC.
  • Each VPN connection has two tunnels to help ensure connectivity in case one of the VPN connections becomes unavailable, with each tunnel using a unique virtual private gateway public IP address.
  • Both tunnels should be configured for redundancy.
  • When one tunnel becomes unavailable, for e.g. down for maintenance, network traffic is automatically routed to the available tunnel for that specific VPN connection.
  • To protect against a loss of connectivity in case the customer gateway becomes unavailable, a second VPN connection can be set up to the VPC and virtual private gateway by using a second customer gateway.
  • Customer gateway IP address for the second VPN connection must be publicly accessible.
  • By using redundant VPN connections and CGWs, maintenance on one of the customer gateways can be performed while traffic continues to flow over the second customer gateway’s VPN connection.
  • Dynamically routed VPN connections using the Border Gateway Protocol (BGP) are recommended, if available, to exchange routing information between the customer gateways and the virtual private gateways.
  • Statically routed VPN connections require static routes for the network to be entered on the customer gateway side.
  • BGP-advertised and statically entered route information allow gateways on both sides to determine which tunnels are available and reroute traffic if a failure occurs.

Multiple Site-to-Site VPN Connections

VPN Connection

  • VPC has an attached virtual private gateway, and the remote network includes a customer gateway, which must be configured to enable the VPN connection.
  • Routing must be set up so that any traffic from the VPC bound for the remote network is routed to the virtual private gateway.
  • Each VPN connection has two tunnels associated with it that can be configured on the customer router, so that it is not a single point of failure
  • Multiple VPN connections to a single VPC can be created, and a second CGW can be configured to create a redundant connection to the same external location or to create VPN connections to multiple geographic locations.

VPN CloudHub

  • VPN CloudHub can be used to provide secure communication between multiple on-premises sites if you have multiple VPN connections
  • VPN CloudHub operates on a simple hub-and-spoke model using a Virtual Private gateway in a detached mode that can be used without a VPC.
  • Design is suitable for customers with multiple branch offices and existing Internet connections who’d like to implement a convenient, potentially low-cost hub-and-spoke model for primary or backup connectivity between these remote offices

VPN CloudHub Architecture

  • In the VPN CloudHub architecture, the blue dashed lines indicate network traffic between remote sites being routed over their VPN connections.
  • AWS VPN CloudHub requires a virtual private gateway with multiple customer gateways.
  • Each customer gateway must use a unique Border Gateway Protocol (BGP) Autonomous System Number (ASN)
  • Customer gateways advertise the appropriate routes (BGP prefixes) over their VPN connections.
  • Routing advertisements are received and re-advertised to each BGP peer, enabling each site to send data to and receive data from the other sites.
  • Routes for each spoke must have unique ASNs and the sites must not have overlapping IP ranges.
  • Each site can also send and receive data from the VPC as if they were using a standard VPN connection.
  • Sites that use AWS Direct Connect connections to the virtual private gateway can also be part of the AWS VPN CloudHub.
  • To configure the AWS VPN CloudHub,
    • multiple customer gateways can be created, each with the gateway’s unique public IP address and a unique ASN.
    • a VPN connection can be created from each customer gateway to a common virtual private gateway.
    • each VPN connection must advertise its specific BGP routes. This is done using the network statements in the VPN configuration files for the VPN connection.
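
A sketch (Python / boto3) of the CloudHub configuration steps above — one customer gateway per site, each with a unique BGP ASN, and a dynamically routed VPN connection from each to the common virtual private gateway. The IPs, ASNs, and VGW ID are placeholders.

import boto3

ec2 = boto3.client('ec2')
vgw_id = 'vgw-0123456789abcdef0'   # common virtual private gateway

# One customer gateway per branch office, each with a unique public IP and BGP ASN.
branches = [('198.51.100.10', 65001), ('198.51.100.20', 65002), ('203.0.113.30', 65003)]

for public_ip, asn in branches:
    cgw = ec2.create_customer_gateway(Type='ipsec.1', PublicIp=public_ip, BgpAsn=asn)
    # Dynamic (BGP) routing, so each site advertises its own prefixes.
    ec2.create_vpn_connection(Type='ipsec.1',
                              CustomerGatewayId=cgw['CustomerGateway']['CustomerGatewayId'],
                              VpnGatewayId=vgw_id,
                              Options={'StaticRoutesOnly': False})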

VPN vs Direct Connect

AWS Direct Connect vs VPN

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. You have in total 5 offices, and the entire employee-related information is stored under AWS VPC instances. Now all the offices want to connect the instances in VPC using VPN. Which of the below help you to implement this?
    1. you can have redundant customer gateways between your data center and your VPC
    2. you can have multiple locations connected to the AWS VPN CloudHub
    3. You have to define 5 different static IP addresses in route table.
    4. 1 and 2
    5. 1,2 and 3
  2. You have in total of 15 offices, and the entire employee-related information is stored under AWS VPC instances. Now all the offices want to connect the instances in VPC using VPN. What problem do you see in this scenario?
    1. You can not create more than 1 VPN connections with single VPC (Can be created)
    2. You can not create more than 10 VPN connections with single VPC (soft limit can be extended)
    3. When you create multiple VPN connections, the virtual private gateway cannot send network traffic to the appropriate VPN connection using statically assigned routes. (It can route the traffic to the correct connection)
    4. Statically assigned routes cannot be configured in case of more than 1 VPN with the virtual private gateway. (can be configured)
    5. None of above
  3. You have been asked to virtually extend two existing data centers into AWS to support a highly available application that depends on existing, on-premises resources located in multiple data centers and static content that is served from an Amazon Simple Storage Service (S3) bucket. Your design currently includes a dual-tunnel VPN connection between your CGW and VGW. Which component of your architecture represents a potential single point of failure that you should consider changing to make the solution more highly available?
    1. Add another VGW in a different Availability Zone and create another dual-tunnel VPN connection.
    2. Add another CGW in a different data center and create another dual-tunnel VPN connection. (Refer link)
    3. Add a second VGW in a different Availability Zone, and a CGW in a different data center, and create another dual-tunnel.
    4. No changes are necessary: the network architecture is currently highly available.
  4. You are designing network connectivity for your fat client application. The application is designed for business travelers who must be able to connect to it from their hotel rooms, cafes, public Wi-Fi hotspots, and elsewhere on the Internet. You do not want to publish the application on the Internet. Which network design meets the above requirements while minimizing deployment and operational costs? [PROFESSIONAL]
    1. Implement AWS Direct Connect, and create a private interface to your VPC. Create a public subnet and place your application servers in it. (High Cost and does not minimize deployment)
    2. Implement Elastic Load Balancing with an SSL listener that terminates the back-end connection to the application. (Needs to be published to internet)
    3. Configure an IPsec VPN connection, and provide the users with the configuration details. Create a public subnet in your VPC, and place your application servers in it. (Instances still in public subnet are internet accessible)
    4. Configure an SSL VPN solution in a public subnet of your VPC, then install and configure SSL VPN client software on all user computers. Create a private subnet in your VPC and place your application servers in it. (Cost effective and can be in private subnet as well)
  5. You are designing a connectivity solution between on-premises infrastructure and Amazon VPC. Your servers on-premises will be communicating with your VPC instances. You will be establishing IPSec tunnels over the Internet, using VPN gateways and terminating the IPSec tunnels on AWS-supported customer gateways. Which of the following objectives would you achieve by implementing an IPSec tunnel as outlined above? (Choose 4 answers) [PROFESSIONAL]
    1. End-to-end protection of data in transit
    2. End-to-end Identity authentication
    3. Data encryption across the Internet
    4. Protection of data in transit over the Internet
    5. Peer identity authentication between VPN gateway and customer gateway
    6. Data integrity protection across the Internet
  6. A development team that is currently doing a nightly six-hour build which is lengthening over time on-premises with a large and mostly under utilized server would like to transition to a continuous integration model of development on AWS with multiple builds triggered within the same day. However, they are concerned about cost, security and how to integrate with existing on-premises applications such as their LDAP and email servers, which cannot move off-premises. The development environment needs a source code repository; a project management system with a MySQL database resources for performing the builds and a storage location for QA to pick up builds from. What AWS services combination would you recommend to meet the development team’s requirements? [PROFESSIONAL]
    1. A Bastion host Amazon EC2 instance running a VPN server for access from on-premises, Amazon EC2 for the source code repository with attached Amazon EBS volumes, Amazon EC2 and Amazon RDS MySQL for the project management system, EIP for the source code repository and project management system, Amazon SQS for a build queue, An Amazon Auto Scaling group of Amazon EC2 instances for performing builds and Amazon Simple Email Service for sending the build output. (Bastion is not for VPN connectivity; also SES should not be used)
    2. An AWS Storage Gateway for connecting on-premises software applications with cloud-based storage securely, Amazon EC2 for the source code repository with attached Amazon EBS volumes, Amazon EC2 and Amazon RDS MySQL for the project management system, EIPs for the source code repository and project management system, Amazon Simple Notification Service for a notification initiated build, An Auto Scaling group of Amazon EC2 instances for performing builds and Amazon S3 for the build output. (Storage Gateway does provide secure connectivity but still needs VPN. SNS alone cannot handle builds)
    3. An AWS Storage Gateway for connecting on-premises software applications with cloud-based storage securely, Amazon EC2 for the source code repository with attached Amazon EBS volumes, Amazon EC2 and Amazon RDS MySQL for the project management system, EIPs for the source code repository and project management system, Amazon SQS for a build queue, An Amazon Elastic Map Reduce (EMR) cluster of Amazon EC2 instances for performing builds and Amazon CloudFront for the build output. (Storage Gateway does not provide secure connectivity, still needs VPN. EMR is not ideal for performing builds as it needs normal EC2 instances)
    4. A VPC with a VPN Gateway back to their on-premises servers, Amazon EC2 for the source-code repository with attached Amazon EBS volumes, Amazon EC2 and Amazon RDS MySQL for the project management system, EIPs for the source code repository and project management system, SQS for a build queue, An Auto Scaling group of EC2 instances for performing builds and S3 for the build output. (VPN gateway is required for secure connectivity. SQS for build queue and EC2 for builds)

References

AWS VPC Interface Endpoints – PrivateLink

AWS Interface Endpoints - PrivateLinks

VPC Interface Endpoints – PrivateLink

  • VPC Interface endpoints enable connectivity to services powered by AWS PrivateLink.
  • Services include AWS services like CloudTrail, CloudWatch, etc., services hosted by other AWS customers and partners in their own VPCs (referred to as endpoint services), and supported AWS Marketplace partner services.
  • VPC Interface Endpoints only allow traffic from VPC resources to the endpoints and not vice versa
  • PrivateLink endpoints can be accessed across both intra- and inter-region VPC peering connections, Direct Connect, and VPN connections.
  • VPC Interface Endpoints, by default, have an address like vpce-svc-01234567890abcdef.us-east-1.vpce.amazonaws.com which needs application changes to point to the service.
  • Private DNS name feature allows consumers to use AWS service public default DNS names which would point to the private VPC endpoint service.
  • Custom applications hosted in a VPC can be configured as an AWS PrivateLink-powered service (referred to as an endpoint service) and exposed through a Network Load Balancer (a provider-side sketch follows this list).
  • Custom applications can be hosted within AWS or on-premises (via Direct Connect or VPN)
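
The last two bullets describe the provider side of PrivateLink. Below is a minimal boto3 sketch of that setup, assuming an already-created Network Load Balancer; the NLB ARN, region, and account IDs are placeholders, not values from this article.

```python
import boto3

# Hypothetical provider-side sketch: expose an NLB-fronted application as a
# PrivateLink endpoint service. The NLB ARN and account IDs are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

svc = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/my-app-nlb/0123456789abcdef"
    ],
    AcceptanceRequired=True,  # consumers' connection requests must be accepted explicitly
)["ServiceConfiguration"]

print("Service name to share with consumers:", svc["ServiceName"])

# Allow a specific consumer account to create interface endpoints to this service.
ec2.modify_vpc_endpoint_service_permissions(
    ServiceId=svc["ServiceId"],
    AddAllowedPrincipals=["arn:aws:iam::444455556666:root"],
)
```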

AWS Interface Endpoints - PrivateLinks

 

Interface Endpoints Configuration

  • Create an interface endpoint, and provide the name of the AWS service, endpoint service, or AWS Marketplace service
  • Choose the subnets in which to use the interface endpoint; an endpoint network interface is created in each selected subnet.
  • An endpoint network interface is assigned a private IP address from the IP address range of the subnet and keeps this IP address until the interface endpoint is deleted.
  • The private IP address ensures the traffic remains private without any changes to the route table (a minimal boto3 sketch follows this list).
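
On the consumer side, creating an interface endpoint can be scripted as below — a minimal boto3 sketch assuming SQS as the target service; the VPC, subnet, and security group IDs are placeholders.

```python
import boto3

# Hypothetical consumer-side sketch: create an Interface endpoint for SQS.
# All resource IDs are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.sqs",
    SubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],  # only one subnet per AZ is allowed
    SecurityGroupIds=["sg-0123456789abcdef0"],         # must allow inbound HTTPS (443) from the VPC
    PrivateDnsEnabled=True,  # keep using the public service DNS name without application changes
)["VpcEndpoint"]

print(endpoint["VpcEndpointId"], endpoint["State"])
```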

VPC Endpoint policy

  • VPC Endpoint policy is an IAM resource policy attached to an endpoint for controlling access from the endpoint to the specified service.
  • Endpoint policy, by default, allows full access to the service for any user or service within the VPC, using credentials from any AWS account.
  • Endpoint policy does not override or replace IAM user policies or service-specific policies (such as S3 bucket policies).
  • Endpoint policy can be used to restrict which specific actions and resources can be accessed using the VPC Endpoint (a sketch of attaching a policy follows this list).
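
As an illustration of the last bullet, the sketch below attaches a custom endpoint policy to an existing interface endpoint using boto3; the endpoint ID, actions, and queue ARN are hypothetical examples, not values from this article.

```python
import json
import boto3

# Hypothetical sketch: restrict an SQS interface endpoint to two actions on one queue.
# Endpoint ID and queue ARN are placeholders; the default policy allows full access.
ec2 = boto3.client("ec2", region_name="us-east-1")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["sqs:SendMessage", "sqs:ReceiveMessage"],
            "Resource": "arn:aws:sqs:us-east-1:111122223333:orders-queue",
        }
    ],
}

ec2.modify_vpc_endpoint(
    VpcEndpointId="vpce-0123456789abcdef0",
    PolicyDocument=json.dumps(policy),
)
```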

Interface Endpoint Limitations

  • For each interface endpoint, only one subnet per AZ can be selected.
  • Interface Endpoint supports TCP traffic only.
  • Endpoints are supported within the same region only.
  • Endpoints support IPv4 traffic only.
  • Each interface endpoint can support a bandwidth of up to 10 Gbps per AZ, by default, and automatically scales to 40 Gbps. Additional capacity may be added by reaching out to AWS support.
  • NACLs for the subnet can restrict traffic and need to be configured properly.
  • Endpoints cannot be transferred from one VPC to another, or from one service to another.

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. An application server needs to be in a private subnet without access to the internet. The solution must retrieve and upload data to an Amazon Kinesis data stream. How should a Solutions Architect design a solution to meet these requirements?
    1. Use Amazon VPC Gateway endpoints
    2. Use a NAT Gateway
    3. Use Amazon VPC Interface endpoints
    4. Use a private Amazon Kinesis Data Stream

References

AWS_PrivateLink

AWS VPC Gateway Endpoints

AWS VPC Gateway Endpoints

AWS VPC Gateway Endpoints

  • A VPC Gateway Endpoint is a gateway that is a target for a specified route in the route table, used for traffic destined for a supported AWS service.
  • VPC Gateway Endpoints currently support S3 and DynamoDB services
  • VPC Gateway Endpoints do not require an Internet gateway or a NAT device for the VPC.
  • Gateway endpoints do not enable AWS PrivateLink.
  • VPC Endpoint policy and Resource-based policies can be used for fine-grained access control.
AWS VPC Gateway Endpoints

 

Gateway Endpoint Configuration

  • Creating an endpoint requires specifying the VPC and the service to be accessed via the endpoint.
  • The endpoint needs to be associated with a route table; the endpoint route cannot be modified in the route table directly and is removed only by disassociating the endpoint from the route table.
  • A route is automatically added to the route table with a destination of the service’s prefix list and a target of the endpoint ID, e.g. a route with destination pl-68a54001 (com.amazonaws.us-west-2.s3) and target vpce-12345678 will be added to the associated route tables.
  • Access to the resources in other services can be controlled by endpoint policies.
  • Security groups need to allow outbound traffic from the VPC to the service specified by the endpoint; use the service’s prefix list ID (e.g. the prefix list for com.amazonaws.us-east-1.s3) as the destination in the outbound rule.
  • Multiple endpoints can be created in a single VPC, for e.g., to multiple services.
  • Multiple endpoints can be created for the same service but in different route tables.
  • Multiple endpoints to the same service cannot be specified in a single route table (a minimal boto3 creation sketch follows this list).
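
A minimal boto3 sketch of the gateway endpoint creation described above, assuming S3 in us-east-1; the VPC and route table IDs are placeholders. The route to the service prefix list is added to the associated route table automatically.

```python
import boto3

# Hypothetical sketch: create a Gateway endpoint for S3 and associate it with a route table.
# All resource IDs are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # a pl-xxxx route is added here automatically
)["VpcEndpoint"]

print(endpoint["VpcEndpointId"], endpoint["State"])
```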

Gateway Endpoint Limitations

  • are regional and supported within the same Region only.
  • cannot be created between a VPC and an AWS service in a different region.
  • support IPv4 traffic only.
  • cannot be transferred from one VPC to another, or from one service to another service.
  • connections cannot be extended out of a VPC i.e. resources on the other side of a VPN connection, VPC peering connection, or Direct Connect connection cannot use the endpoint.

VPC Endpoint policy

  • VPC Endpoint policy is an IAM resource policy attached to an endpoint for controlling access from the endpoint to the specified service.
  • Endpoint policy, by default, allows full access to any user or service within the VPC, using credentials from any AWS account to any S3 resource; including S3 resources for an AWS account other than the account with which the VPC is associated
  • Endpoint policy does not override or replace IAM user policies or service-specific policies (such as S3 bucket policies).
  • Endpoint policy can be used to restrict which specific resources can be accessed using the VPC Endpoint.

S3 Bucket Policies

  • An IAM policy or bucket policy can’t be used to allow access from a VPC IPv4 CIDR range, as VPC CIDR blocks can be overlapping or identical across VPCs, which might lead to unexpected results.
  • The aws:SourceIp condition can’t be used in IAM policies for requests to S3 through a VPC endpoint.
  • S3 bucket policies can be used to restrict access so that requests are allowed only through the VPC endpoint, using the aws:SourceVpce or aws:SourceVpc condition keys (see the sketch below).
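
A minimal sketch of such a bucket policy, applied with boto3; the bucket name and endpoint ID are placeholders. Note that a broad Deny like this can also lock out console and administrative access, so it should be tested carefully.

```python
import json
import boto3

# Hypothetical sketch: deny all S3 actions on the bucket unless the request arrives
# through the specified VPC endpoint. Bucket name and endpoint ID are placeholders.
s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnlessThroughVpce",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-private-bucket",
                "arn:aws:s3:::example-private-bucket/*",
            ],
            "Condition": {
                "StringNotEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}
            },
        }
    ],
}

s3.put_bucket_policy(Bucket="example-private-bucket", Policy=json.dumps(policy))
```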

VPC Gateway Endpoint Troubleshooting

  • Verify that the VPC endpoint and the service are within the same region.
  • DNS resolution must be enabled in the VPC
  • Route table should have a route to S3 using the gateway VPC endpoint.
  • Security groups should allow outbound traffic to the VPC endpoint (to the service’s prefix list).
  • NACLs should allow inbound and outbound traffic.
  • Gateway Endpoint Policy should define access to the resource
  • Resource-based policies like the S3 bucket policy should allow access to the VPC endpoint or the VPC (a quick boto3 verification sketch follows this list).
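
A quick verification sketch for part of this checklist, assuming boto3 and an S3 gateway endpoint in us-east-1; the VPC ID is a placeholder. It only confirms the endpoint state and its route table associations — security groups, NACLs, and policies still need to be checked separately.

```python
import boto3

# Hypothetical sketch: confirm the S3 gateway endpoint exists, is available, and is
# associated with the expected route tables. The VPC ID is a placeholder.
ec2 = boto3.client("ec2", region_name="us-east-1")

endpoints = ec2.describe_vpc_endpoints(
    Filters=[
        {"Name": "vpc-id", "Values": ["vpc-0123456789abcdef0"]},
        {"Name": "service-name", "Values": ["com.amazonaws.us-east-1.s3"]},
    ]
)["VpcEndpoints"]

for ep in endpoints:
    print(ep["VpcEndpointId"], ep["State"], "route tables:", ep.get("RouteTableIds", []))
```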

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. You have an application running on an Amazon EC2 instance that uploads 10 GB video objects to Amazon S3. Video uploads are taking longer than expected, in spite of using multipart upload, because of limited internet bandwidth, resulting in poor application performance. Which action can help improve the upload performance?
    1. Apply an Amazon S3 bucket policy
    2. Use Amazon EBS provisioned IOPS
    3. Use VPC endpoints for S3
    4. Request a service limit increase
  2. What are the services supported by VPC endpoints, using Gateway endpoint type? Choose 2 answers
    1. Amazon S3
    2. Amazon EFS
    3. Amazon DynamoDB
    4. Amazon Glacier
    5. Amazon SQS
  3. An application running on EC2 instances processes sensitive information stored on Amazon S3. The information is accessed over the Internet. The security team is concerned that the Internet connectivity to Amazon S3 is a security risk. Which solution will resolve the security concern?
    1. Access the data through an Internet Gateway.
    2. Access the data through a VPN connection.
    3. Access the data through a NAT Gateway.
    4. Access the data through a VPC endpoint for Amazon S3.

References

AWS_VPC_Gateway_Endpoints

AWS VPC Endpoints

VPC Endpoints

AWS VPC Endpoints

  • VPC Endpoints enable the creation of a private connection between the VPC and supported AWS services and VPC endpoint services powered by PrivateLink, using private IP addresses.
  • Endpoints do not require a public IP address, access over the Internet, a NAT device, a VPN connection, or AWS Direct Connect.
  • Traffic between the VPC and the AWS service does not leave the Amazon network.
  • Endpoints are virtual devices that are horizontally scaled, redundant, and highly available VPC components, allowing communication between instances in the VPC and AWS services without imposing availability risks or bandwidth constraints on network traffic.
  • Endpoints currently do not support cross-region requests; ensure that the endpoint is created in the same region as the resource, e.g. the S3 bucket.
  • AWS currently supports the following types of Endpoints

VPC Endpoints

VPC Gateway Endpoints

  • A VPC Gateway Endpoint is a gateway that is a target for a specified route in the route table, used for traffic destined for a supported AWS service.
  • Gateway Endpoints currently support S3 and DynamoDB services
  • Gateway Endpoints do not require an Internet gateway or a NAT device for the VPC.
  • Gateway endpoints do not enable AWS PrivateLink.
  • VPC Endpoint policy and Resource-based policies can be used for fine-grained access control.
"AWS

VPC Interface Endpoints – PrivateLink

AWS Private Links

  • VPC Interface endpoints enable connectivity to services powered by AWS PrivateLink.
  • Services include AWS services like CloudTrail, CloudWatch, etc., services hosted by other AWS customers and partners in their own VPCs (referred to as endpoint services), and supported AWS Marketplace partner services.
  • Interface Endpoints only allow traffic from VPC resources to the endpoints and not vice versa
  • PrivateLink endpoints can be accessed across both intra- and inter-region VPC peering connections, Direct Connect, and VPN connections.
  • VPC Interface Endpoints, by default, have an address like vpce-svc-01234567890abcdef.us-east-1.vpce.amazonaws.com which needs application changes to point to the service.
  • Private DNS name feature allows consumers to use AWS service public default DNS names which would point to the private VPC endpoint service.
  • Custom applications hosted in a VPC can be configured as an AWS PrivateLink-powered service (referred to as an endpoint service) and exposed through a Network Load Balancer.
  • Custom applications can be hosted within AWS or on-premises (via Direct Connect or VPN)

S3 VPC Endpoints Strategy

S3 is now accessible with both Gateway Endpoints and Interface Endpoints.

S3 Strategy - VPC Gateway Endpoints vs VPC Interface Endpoints

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. You have an application running on an Amazon EC2 instance that uploads 10 GB video objects to Amazon S3. Video uploads are taking longer than expected, in spite of using multipart upload, because of limited internet bandwidth, resulting in poor application performance. Which action can help improve the upload performance?
    1. Apply an Amazon S3 bucket policy
    2. Use Amazon EBS provisioned IOPS
    3. Use VPC endpoints for S3
    4. Request a service limit increase
  2. What are the services supported by VPC endpoints, using Gateway endpoint type? Choose 2 answers
    1. Amazon S3
    2. Amazon EFS
    3. Amazon DynamoDB
    4. Amazon Glacier
    5. Amazon SQS
  3. What are the different types of endpoint types supported by VPC endpoints? Choose 2 Answers
    1. Gateway
    2. Classic
    3. Interface
    4. Virtual
    5. Network
  4. An application running on EC2 instances processes sensitive information stored on Amazon S3. The information is accessed over the Internet. The security team is concerned that the Internet connectivity to Amazon S3 is a security risk. Which solution will resolve the security concern?
    1. Access the data through an Internet Gateway.
    2. Access the data through a VPN connection.
    3. Access the data through a NAT Gateway.
    4. Access the data through a VPC endpoint for Amazon S3.
  5. You need to design a VPC for a three-tier architecture: a web application consisting of an Elastic Load Balancer (ELB), a fleet of web/application servers, and a backend consisting of an RDS database. The entire infrastructure must be distributed over 2 availability zones. Which VPC configuration works while ensuring the least number of components are exposed to the Internet?
    1. Two public subnets for ELB, two private subnets for the web-servers, two private subnets for RDS and DynamoDB
    2. Two public subnets for ELB and web-servers, two private subnets for RDS and DynamoDB
    3. Two public subnets for ELB, two private subnets for the web-servers, two private subnets for RDS and VPC Endpoints for DynamoDB
    4. Two public subnets for ELB and web-servers, two private subnets for RDS and VPC Endpoints for DynamoDB

References

AWS_VPC_User_Guide_-_Endpoints

AWS VPC Peering

AWS VPC Peering

VPC Peering

  • A VPC peering connection is a networking connection between two VPCs that enables routing of traffic between them using private IPv4 addresses or IPv6 addresses.
  • VPC peering connection
    • can be established between your own VPCs, or with a VPC in another AWS account in the same or different region.
    • is a one-to-one relationship between two VPCs.
    • supports intra and inter-region peering connections.
  • With VPC peering,
    • Instances in either VPC can communicate with each other as if they are within the same network 
    • AWS uses the existing infrastructure of a VPC to create a peering connection; it is neither a gateway nor a VPN connection and does not rely on a separate piece of physical hardware. 
    • There is no single point of failure for communication or a bandwidth bottleneck
    • All inter-region traffic is encrypted, with no single point of failure or bandwidth bottleneck. Traffic always stays on the global AWS backbone and never traverses the public internet, which reduces threats such as common exploits and DDoS attacks.
  • VPC peering does not have any separate charges. However, there are data transfer charges.

AWS VPC Peering

VPC Peering Connectivity

  • To create a VPC peering connection, the owner of the requester VPC sends a request to the owner of the accepter VPC.
  • The accepter VPC can be owned by the same account or a different AWS account.
  • Once the owner of the accepter VPC accepts the peering connection request, the peering connection is activated.
  • Route tables on both the VPCs should be manually updated to allow traffic
  • Security groups on the instances should allow traffic to and from the peered VPCs (a minimal boto3 sketch of the workflow follows this list).
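
A minimal boto3 sketch of this request/accept/route workflow for two VPCs in the same account and region; all IDs and CIDR blocks are placeholders. Cross-account or cross-region peering would additionally need PeerOwnerId / PeerRegion and acceptance from the other side.

```python
import boto3

# Hypothetical sketch: peer two VPCs in the same account and region and add routes.
# All IDs and CIDR blocks are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Requester VPC sends the peering request.
pcx = ec2.create_vpc_peering_connection(
    VpcId="vpc-1111aaaa", PeerVpcId="vpc-2222bbbb"
)["VpcPeeringConnection"]
pcx_id = pcx["VpcPeeringConnectionId"]

# 2. Accepter accepts the request, which activates the connection.
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# 3. Route tables on BOTH sides must be updated manually to route traffic over the peering.
ec2.create_route(RouteTableId="rtb-1111aaaa", DestinationCidrBlock="10.20.0.0/16",
                 VpcPeeringConnectionId=pcx_id)
ec2.create_route(RouteTableId="rtb-2222bbbb", DestinationCidrBlock="10.10.0.0/16",
                 VpcPeeringConnectionId=pcx_id)
```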

VPC Peering Limitations & Rules

  1. Does not support overlapping or matching IPv4 or IPv6 CIDR blocks.
  2. Does not support transitive peering relationships i.e. the VPC does not have access to any other VPCs that the peer VPC may be peered with, even if established entirely within your own AWS account.
  3. Does not support Edge to Edge Routing Through a Gateway or Private Connection
  4. In a VPC peering connection, the VPC does not have access to any other connection that the peer VPC may have and vice versa. Connections that the peer VPC can include
    1. A VPN connection or an AWS Direct Connect connection to a corporate network
    2. An Internet connection through an Internet gateway
    3. An Internet connection in a private subnet through a NAT device
    4. A ClassicLink connection to an EC2-Classic instance
    5. A VPC endpoint to an AWS service; for example, an endpoint to S3.
  5. VPC peering connections are limited on the number of active and pending peering connections that you can have per VPC.
  6. Only one peering connection can be established between the same two VPCs at the same time.
  7. Jumbo frames are supported for peering connections within the same region.
  8. A placement group can span peered VPCs that are in the same region; however, you do not get full-bisection bandwidth between instances in peered VPCs
  9. Inter-region VPC peering connections
    1. The Maximum Transmission Unit (MTU) across an inter-region peering connection is 1500 bytes. Jumbo frames are not supported.
    2. Security group rule that references a peer VPC security group cannot be created.
  10. Any tags created for the peering connection are only applied in the account or region in which they were created
  11. Unicast reverse path forwarding in peering connections is not supported
  12. By default, an instance’s public DNS hostname does not resolve to its private IP address across peered VPCs; since circa July 2016, DNS resolution support can be enabled on the peering connection so that public DNS hostnames resolve to private IP addresses.

VPC Peering Troubleshooting

  • Verify that the VPC peering connection is in the Active state.
  • Be sure to update the route tables for the peering connection. Verify that the correct routes exist for connections to the IP address range of the peered VPCs through the appropriate gateway.
  • Verify that an ALLOW rule exists in the network access control (NACL) table for the required traffic.
  • Verify that the security group rules allow network traffic between the peered VPCs.
  • Verify using VPC flow logs that the required traffic isn’t rejected at the source or destination. This rejection might occur due to the permissions associated with security groups or network ACLs.
  • Be sure that no firewall rules block network traffic between the peered VPCs. Use network utilities such as traceroute (Linux) or tracert (Windows) to check rules for firewalls such as iptables (Linux) or Windows Firewall (Windows).

VPC Peering Architecture

AWS VPC Architecture

  • VPC Peering can be applied to create shared services or perform authentication with an on-premises instance
  • This would help create a single point of contact, as well as limit the VPN connections to a single account or VPC.

VPC Peering vs Transit Gateway

VPC Peering vs Transit VPC vs Transit Gateway

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. You currently have 2 development environments hosted in 2 different VPCs in an AWS account in the same region. There is now a need for resources from one VPC to access another. How can this be accomplished?
    1. Establish a Direct Connect connection.
    2. Establish a VPN connection.
    3. Establish VPC Peering.
    4. Establish Subnet Peering.
  2. A company has an AWS account that contains three VPCs (Dev, Test, and Prod) in the same region. Test is peered to both Prod and Dev. All VPCs have non-overlapping CIDR blocks. The company wants to push minor code releases from Dev to Prod to speed up the time to market. Which of the following options helps the company accomplish this?
    1. Create a new peering connection Between Prod and Dev along with appropriate routes.
    2. Create a new entry to Prod in the Dev route table using the peering connection as the target.
    3. Attach a second gateway to Dev. Add a new entry in the Prod route table identifying the gateway as the target.
    4. The VPCs have non-overlapping CIDR blocks in the same account. The route tables contain local routes for all VPCs.
  3. A company has 2 AWS accounts that have individual VPCs. The VPCs are in different AWS regions and need to communicate with each other. The VPCs have non-overlapping CIDR blocks. Which of the following would be a cost-effective connectivity option?
    1. Use VPN connections
    2. Use VPC peering between the 2 VPC’s
    3. Use AWS Direct Connect
    4. Use a NAT gateway

References

AWS_VPC_Peering

AWS Certified Solutions Architect – Professional (SAP-C02) Exam Learning Path

AWS Certified Solutions Architect - Professional Exam Certificate

AWS Certified Solutions Architect – Professional (SAP-C02) Exam Learning Path

  • AWS Certified Solutions Architect – Professional (SAP-C02) exam is the upgraded pattern of the previous Solution Architect – Professional SAP-C01 exam and was released in Nov. 2022.
  • SAP-C02 is quite similar to SAP-C01 but has included some new services.

AWS Certified Solutions Architect – Professional (SAP-C02) Exam Content

  • AWS Certified Solutions Architect – Professional (SAP-C02) exam validates the ability to complete tasks within the scope of the AWS Well-Architected Framework
    • Design for organizational complexity
    • Design for new solutions
    • Continuously improve existing solutions
    • Accelerate workload migration and modernization

Refer to AWS Certified Solutions Architect – Professional Exam Guide

AWS Certified Solutions Architect - Professional Exam Domains

AWS Certified Solutions Architect – Professional (SAP-C02) Exam Resources

AWS Certified Solutions Architect – Professional (SAP-C02) Exam Summary

  • Professional exams are tough, lengthy, and tiresome. Most of the questions and answers options have a lot of prose and a lot of reading that needs to be done, so be sure you are prepared and manage your time well.
  • Each solution involves multiple AWS services.
  • AWS Certified Solutions Architect – Professional (SAP-C02) exam has 65 questions to be solved in 170 minutes.
  • SAP-C02 exam includes two types of questions, multiple-choice and multiple-response.
  • SAP-C02 has a scaled score between 100 and 1,000. The scaled score needed to pass the exam is 750.
  • Each question mainly touches multiple AWS services.
  • Professional exams currently cost $ 300 + tax.
  • You can get an additional 30 minutes if English is your second language by requesting Exam Accommodations. It might not be needed for Associate exams but is helpful for Professional and Specialty ones.
  • As always, mark the questions for review and move on and come back to them after you are done with all.
  • As always, having a rough architecture or mental picture of the setup helps focus on the areas that you need to improve. Trust me, you will be able to eliminate 2 answers for sure and then need to focus on only the other two. Read the other 2 answers to check the difference area and that would help you reach the right answer or at least have a 50% chance of getting it right.
  • AWS exams can be taken either at a test center or online (remotely); I prefer to take them online as it provides a lot of flexibility. Just make sure you have a proper place to take the exam with no disturbance and nothing around you.
  • Also, if you are taking the AWS Online exam for the first time try to join at least 30 minutes before the actual time as I have had issues with both PSI and Pearson with long wait times.

AWS Certified Solutions Architect – Professional (SAP-C02) Exam Topics

AWS Certified Solutions Architect – Professional (SAP-C02) focuses a lot on concepts and services related to Architecture & Design, Scalability, High Availability, Disaster Recovery, Migration, Security, and Cost Control.

Storage

  • Simple Storage Service – S3
    • S3 Permissions & S3 Data Protection
      • S3 bucket policies to control access to VPC Endpoints and provide cross-account access.
    • S3 Storage Classes & Lifecycle policies
      • covers S3 Standard, Infrequent access, intelligent tier, and Glacier for archival and object transitions & deletions for cost management.
    • S3 Performance
    • S3 Security
      • S3 supports encryption using KMS
      • S3 supports Object Lock and Glacier supports Vault lock to prevent the deletion of objects, especially required for compliance requirements.
      • CORS allows client web applications loaded in one domain access to the restricted resources to be requested from another domain.
    • S3 supports the same and cross-region replication for disaster recovery.
    • S3 Access Logs enable tracking access requests to an S3 bucket.
    • supports S3 Select feature to query selective data from a single object.
    • S3 Event Notification enables notifications to be triggered when certain events happen in the bucket and support SNS, SQS, and Lambda as the destination.
  • Elastic Block Store
    • EBS Backup using snapshots for HA and Disaster recovery
    • Data Lifecycle Manager can be used to automate the creation, retention, and deletion of snapshots taken to back up the EBS volumes.
  • Storage Gateway
    • supports File Gateways and Volume Gateways
    • File Gateways provides a file interface into S3 and allows storing and retrieving of objects in S3 using industry-standard file protocols such as NFS and SMB.
  • Elastic File System – EFS
    • provides fully managed, scalable, serverless, shared, and cost-optimized file storage for use with AWS and on-premises resources.
    • supports cross-region replication for disaster recovery
    • supports storage classes like S3
    • supports only Linux-based AMIs
  • AWS Transfer Family
    • provides a secure transfer service (FTP, SFTP, FTPs) that helps transfer files into and out of AWS storage services.
    • supports transferring data from or to S3 and EFS.
  • FSx for Lustre
    • managed, cost-effective service to launch and run the HPC high-performance Lustre file system.
  • Understand different use cases for S3 vs EBS vs EFS

Database

  • DynamoDB
    • provides a fully managed NoSQL database service with fast and predictable performance with seamless scalability.
    • supports following capacity modes
      • Provisioned – the maximum amount of capacity in terms of reads/writes per second that an application can consume from a table or index
      • On-demand – serves thousands of requests per second without capacity planning.
    • DynamoDB Auto Scaling can be used to handle peaks or bursts.
    • DynamoDB Streams for tracking changes
    • TTL to expire objects automatically and cost-effectively.
    • Global tables for multi-master, active-active inter-region storage needs.
    • Global tables do not support strong global consistency
    • DynamoDB Accelerator – DAX for seamless caching to reduce the load on DynamoDB for read-heavy requirements.
  • RDS
    • supports cross-region read replicas ideal for disaster recovery with low RTO and RPO.
    • provides RDS proxy for effective database connection polling
    • RDS Multi-AZ vs Read Replicas
  • Aurora
    • fully managed, MySQL- and PostgreSQL-compatible, relational database engine
    • Aurora Serverless provides on-demand, autoscaling configuration.
    • Aurora Global Database consists of one primary AWS Region where the data is mastered, and up to five read-only, secondary AWS Regions.
  • Understand DynamoDB Global Tables vs Aurora Global Databases
  • DocumentDB as a replacement for MongoDB
  • Keyspaces as a replacement for Cassandra

Data Migration & Transfer

  • Cloud Migration Services
    • Cloud Migration (hint: make sure you understand the difference between rehost, replatform, and rearchitect)
    • Server Migration Service helps to migrate servers and applications.
    • Database Migration Service
      • enables quick and secure data migration with minimal to zero downtime
      • supports Full and Change Data Capture – CDC migration to support continuous replication for zero downtime migration.
      • homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations (using SCT) between different database platforms, such as Oracle or Microsoft SQL Server to Aurora.
    • Snow Family
      • Ideal for one-time huge data transfers usually for use cases with limited bandwidth from on-premises to AWS.
    • Understand use cases for data transfer using VPN (quick, slow, uses the Internet), Direct Connect (time to set up, private, recurring transfers), Snow Family (moderate time, private, one-time huge data transfers)
  • Application Discovery Service
    • Agent-based discovery can be used for Hyper-V VMs and physical servers.
    • Agentless discovery can be used for VMware VMs but does not track running processes.
  • AWS Migration Hub provides a central location to collect server and application inventory data for the assessment, planning, and tracking of migrations to AWS and also helps accelerate application modernization following migration.

Networking & Content Delivery

  • VPC – Virtual Private Cloud
    • Security Groups, NACLs
      • NACLs are stateless and need to open ephemeral ports for response traffic.
    • VPC Gateway Endpoints to provide access to S3 and DynamoDB
    • VPC Interface Endpoints or PrivateLink provide access to a variety of services like SQS, Kinesis, or Private APIs exposed through NLB.
    • VPC Peering to enable communication between VPCs within the same or different regions.
    • VPC Peering does not support overlapping CIDRs while PrivateLink does as only the endpoint is exposed.
    • VPC Flow Logs to track network traffic
    • NAT Gateway provides managed NAT service that provides better availability, higher bandwidth, and requires less administrative effort.
  • Route 53
    • Routing Policies
      • focus on Weighted, Latency, and failover routing policies
      • failover routing provides active-passive configuration for disaster recovery while the others are active-active configurations.
    • Route 53 Resolver
      • Outbound endpoint for AWS -> On-premises DNS query resolution
      • Inbound endpoint for On-premises DNS query resolution
  • CloudFront
    • fully managed, fast CDN service that speeds up the distribution of static, dynamic web or streaming content to end-users.
    • supports Origin Groups for multiple origins providing failover capability with primary and secondary origins.
    • does not support Auto Scaling as an origin
    • supports Geo-restriction
    • supports Lambda@Edge and CloudFront Functions to execute code closer to the user.
    • Lambda@Edge can be used for quick auth checks, and redirect users based on request data.
    • Security can be enhanced by whitelisting CloudFront IPs or adding a custom header in CloudFront and verifying it in ALB.
  • API Gateway
    • supports throttling, caching and helps define usage plans with API keys to identify clients
    • provides regional and edge-optimized endpoint types
    • supports CORS for cross-domain calls.
    • supports authentication mechanisms, such as AWS IAM policies, Lambda authorizer functions, and Amazon Cognito user pools.
    • provide serverless architecture with Lambda.
  • Load Balancer – ELB, ALB and NLB
  • Global Accelerator
    • optimizes the path to applications to keep packet loss, jitter, and latency consistently low.
    • helps improve the performance of the applications by lowering first-byte latency
    • provides 2 static IP addresses
    • does not preserve the client’s IP address with NLB
  • Transit Gateway or Transit VPC
    • is a network transit hub that can be used to interconnect VPCs and on-premises networks via Direct Connect or VPN.
    • Transit Gateway is regional and Transit Gateway Peering needs to be configured to peer regional Transit gateways.
  • Placement Groups
    • Cluster placement group with Enhanced Networking for HPC
    • Spread placement group for fault tolerance and high availability.
  • Direct Connect & VPN
    • provide on-premises to AWS connectivity
    • Understand Direct Connect vs VPN
    • VPN can provide a cost-effective, quick failover for Direct Connect.
    • VPN over Direct Connect provides a secure dedicated connection and requires a public virtual interface.
    • Direct Connect Gateway is a global network device that helps establish connectivity that spans VPCs spread across multiple AWS Regions with a single Direct Connect connection.

Security, Identity & Compliance

  • AWS Identity and Access Management
  • AWS Shield & Shield Advanced
    • for DDoS protection and integrates with Route 53, CloudFront, ALB, and Global Accelerator.
  • AWS WAF
    • protects from common attack techniques like SQL injection and XSS; conditions can be based on IP addresses, HTTP headers, HTTP body, and URI strings.
    • integrates with CloudFront, ALB, and API Gateway.
    • supports Web ACLs and can block traffic based on IPs, Rate limits, and specific countries as well.
  • ACM – AWS Certificate Manager
    • helps easily provision, manage, and deploy public and private SSL/TLS certificates
    • is regional and you need to request certificates in all regions and associate individually in all regions.
    • does not provide certificates for EC2 instances.
  • AWS KMS – Key Management Service
    • managed encryption service that allows the creation and control of encryption keys to enable data encryption.
    • KMS Multi-region keys
      • are AWS KMS keys in different AWS Regions that can be used interchangeably – as though having the same key in multiple Regions.
      • are not global and each multi-region key needs to be replicated and managed independently.
  • Secrets Manager
    • helps protect secrets needed to access applications, services, and IT resources.
    • Secrets Manager vs SSM Parameter Store.
      • Secrets Manager supports random generation and automatic rotation of secrets, which is not provided by SSM Parameter Store.
      • Costs more than SSM Parameter Store.
  • Amazon Macie is a data security and data privacy service that uses ML and pattern matching to discover and protect sensitive data in S3.
  • AWS Security Hub is a cloud security posture management service that performs security best practice checks, aggregates alerts, and enables automated remediation.

Compute

  • EC2
  • Auto Scaling provides the ability to ensure a correct number of EC2 instances are always running to handle the load of the application
  • Lambda
    • offers Serverless computing 
    • Lambda running in VPC requires NAT Gateway to communicate with external public services
    • Lambda CPU can be increased by increasing memory only.
    • helps define reserved concurrency limits to cap a function’s scaling and reduce the impact on downstream resources
    • Lambda Alias now supports canary deployments
    • Lambda supports docker containers
    • Reserved Concurrency guarantees the maximum number of concurrent instances for the function
    • Provisioned Concurrency provides greater control over the performance of serverless applications and helps keep functions initialized and hyper-ready to respond in double-digit milliseconds.
    • Lambda Best Practices esp. handling the database connection code.
  • Step Functions helps developers use AWS services to build distributed applications, automate processes, orchestrate microservices, and create data and machine learning (ML) pipelines.
  • ECS – Elastic Container Service
    • container management service that supports Docker containers
    • supports two launch types
      • EC2 and
      • Fargate which provides the serverless capability
    • For least privilege, the role should be assigned to the Task.
    • awsvpc network mode gives ECS tasks the same networking properties as EC2 instances.

Disaster Recovery

  • Disaster Recovery whitepaper – although outdated, make sure you understand the differences and implementation for each type, esp. pilot light and warm standby, w.r.t. RTO and RPO.
  • Compute
    • Make components available in an alternate region,
    • Backup and Restore using either snapshots or AMIs that can be restored.
    • Use minimal low-scale capacity running which can be scaled once the failover happens
    • Use fully running compute in active-active configuration with health checks.
    • CloudFormation to create, and scale infra as needed
  • Storage
    • S3 and EFS support cross-region replication
    • DynamoDB supports Global tables for multi-master, active-active inter-region storage needs.
    • Aurora Global Database provides cross-region read replicas and failover capabilities.
    • RDS supports cross-region read replicas which can be promoted to primary in case of a disaster; this can be automated using Route 53, CloudWatch, and Lambda functions (a promotion sketch follows this list).
  • Network
    • Route 53 failover routing with health checks to failover across regions.
    • CloudFront Origin Groups support primary and secondary endpoints with failover.
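
As a sketch of the RDS failover automation mentioned under Storage: a Lambda function triggered by a CloudWatch alarm could promote the cross-region read replica, with Route 53 updated once the promoted instance is available. This is a hypothetical illustration — the instance identifier, region, and retention value are placeholders.

```python
import boto3

# Hypothetical sketch: promote a cross-region RDS read replica during a regional failover.
# Identifiers and region are placeholders; Route 53 record updates would follow separately.
rds = boto3.client("rds", region_name="us-west-2")  # the DR region

def handler(event, context):
    response = rds.promote_read_replica(
        DBInstanceIdentifier="orders-db-replica",
        BackupRetentionPeriod=7,  # enable automated backups on the newly promoted primary
    )
    # Promotion is asynchronous; switch the Route 53 record only after the instance is available.
    return response["DBInstance"]["DBInstanceStatus"]
```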

Management & Governance tools

  • AWS Organizations
  • Systems Manager
    • AWS Systems Manager and its various services like parameter store, patch manager
    • Parameter Store provides secure, scalable, centralized, hierarchical storage for configuration data and secret management. Does not support secrets rotation. Use Secrets Manager instead
    • Session Manager provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys.
    • Patch Manager helps automate the process of patching managed instances with both security-related and other types of updates.
  • CloudWatch
  • CloudTrail
    • for audit and governance
    • With Organizations, the trail can be configured to log CloudTrail from all accounts to a central account.
  • CloudFormation
    • Handle disaster Recovery by automating the infra to replicate the environment across regions.
    • Deletion Policy to prevent, retain, or backup RDS, EBS Volumes
    • Stack policy can prevent stack resources from being unintentionally updated or deleted during a stack update. Stack Policy only applies for Stack updates and not stack deletion.
    • StackSets helps to create, update, or delete stacks across multiple accounts and Regions with a single operation.
  • Control Tower
    • to setup, govern, and secure a multi-account environment
    • strongly recommended guardrails cover EBS encryption
  • Service Catalog
    • allows organizations to create and manage catalogues of IT services that are approved for use on AWS with minimal permissions.
  • Trusted Advisor
    • helps with cost optimization and service limits in addition to security, performance and fault tolerance.
  • Compute Optimizer recommends optimal AWS resources for the workloads to reduce costs and improve performance by using machine learning to analyze historical utilization metrics.
  • AWS Budgets to see usage-to-date and current estimated charges from AWS, set limits and provide alerts or notifications.
  • Cost Allocation Tags can be used to organize AWS resources, and cost allocation tags to track the AWS costs on a detailed level.
  • Cost Explorer helps visualize, understand, manage and forecast the AWS costs and usage over time.
  • Amazon WorkSpaces provides a virtual workspace for varied worker types, especially hybrid and remote workers.

Integration Tools

  • SQS in terms of loose coupling and scaling.
    • Difference between SQS Standard and FIFO esp. with throughput and order
    • SQS supports dead letter queues
  • CloudWatch integration with SNS and Lambda for notifications.

Analytics

  • Kinesis
  • OpenSearch (Elasticsearch) provides a managed search solution.
  • Amazon Timestream is a fast, scalable, and serverless time-series database service that makes it easier to store and analyze trillions of events per day.
  • Amazon Connect is an omnichannel cloud contact center.
  • Amazon Pinpoint is a flexible, scalable marketing communications service that helps connect with customers over email, SMS, push notifications, or voice
  • Amazon Rekognition offers pre-trained and customizable computer vision capabilities to extract information and insights from images and videos
  • Amazon Transcribe for voice-to-text (speech-to-text) conversion

Architecture & Design Flows

On the Exam Day

  • Make sure you are relaxed and get a good night’s sleep. The exam is not tough if you are well-prepared.
  • If you are taking the AWS Online exam
    • Try to join at least 30 minutes before the actual time as I have had issues with both PSI and Pearson with long wait times.
    • The online verification process does take some time and usually, there are glitches.
    • Remember, you would not be allowed to take the exam if you are late by more than 30 minutes.
    • Make sure you have your desk clear, no hand-watches, or external monitors, keep your phones away, and nobody can enter the room.

Finally, All the Best 🙂