AWS IAM Roles vs Resource-Based Policies

AWS allows granting cross-account access to AWS resources, which can be done using IAM Roles or Resource-Based Policies.

IAM Roles

  • Roles can be created to act as a proxy to allow users or services to access resources.
  • Roles support two policies:
    • a trust policy, which defines who is allowed to assume the role, and
    • a permissions policy, which defines what the role can access.
  • Users who assume a role temporarily give up their own permissions and instead take on the permissions of the role. The original user permissions are restored when the user exits or stops using the role.
  • Roles can be used to provide access to almost all AWS resources.
  • Permissions granted to the user through the role can be further restricted per user by passing an optional session policy to the STS request. This policy cannot be used to elevate privileges beyond what the assumed role is allowed to access (see the sketch below).
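
The following is a minimal boto3 sketch (not from the original article) of assuming a role with an optional session policy; the role ARN, session name, bucket, and inline policy are hypothetical placeholders.

```python
import json
import boto3

sts = boto3.client("sts")

# Optional session policy: the effective permissions are the intersection of
# the role's permissions policy and this inline policy, so it can only
# restrict, never elevate, what the assumed role is allowed to do.
session_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-bucket/*",  # hypothetical bucket
    }],
}

resp = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/CrossAccountRole",  # hypothetical role
    RoleSessionName="cross-account-session",
    Policy=json.dumps(session_policy),
)

# The temporary credentials take the place of the caller's own permissions
# for the duration of the session.
creds = resp["Credentials"]
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```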

Resource-based Policies

  • Resource-based policy allows you to attach a policy directly to the resource you want to share, instead of using a role as a proxy.
  • A resource-based policy specifies the Principal (typically a list of AWS account IDs) that can access the resource and what actions they can perform.
  • With cross-account access through a resource-based policy, the user still works in the trusted account and does not have to give up their own permissions in exchange for the role permissions.
  • Users can work on resources from both accounts at the same time, which is useful for scenarios such as copying objects from a bucket in one account to a bucket in another account.
  • Resources that can be shared are limited to those that support resource-based policies.
  • Resource-based policies still require the trusted account to create users with identity-based permissions so they can access the shared resource.
  • Only permissions equivalent to, or less than, the permissions granted to your account by the resource-owning account can be delegated.
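
Below is a hedged illustration of attaching a resource-based policy directly to an S3 bucket with boto3; the bucket name and trusted account ID are hypothetical.

```python
import json
import boto3

# Grant a trusted account read access directly on the bucket -- no role proxy.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "CrossAccountRead",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # hypothetical trusted account
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-bucket",      # bucket-level action (ListBucket)
            "arn:aws:s3:::example-bucket/*",    # object-level action (GetObject)
        ],
    }],
}

boto3.client("s3").put_bucket_policy(
    Bucket="example-bucket",  # hypothetical bucket
    Policy=json.dumps(bucket_policy),
)
```

Users in the trusted account still need identity-based permissions for these S3 actions before they can use this grant.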

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. What are the two permission types used by AWS?
    1. Resource-based and Product-based
    2. Product-based and Service-based
    3. Service-based
    4. User-based and Resource-based
  2. What’s the policy used for cross-account access? (Choose 2)
    1. Trust policy
    2. Permissions Policy
    3. Key policy

Amazon CloudWatch

  • CloudWatch monitors AWS resources and applications in real time.
  • CloudWatch can be used to collect and track metrics, which are the variables to be measured for resources and applications.
  • CloudWatch is basically a metrics repository where the metrics can be inserted and statistics retrieved based on those metrics.
  • In addition to monitoring the built-in metrics that come with AWS, custom metrics can also be monitored
  • CloudWatch provides system-wide visibility into resource utilization, application performance, and operational health.
  • By default, CloudWatch stores the log data indefinitely, and the retention can be changed for each log group at any time.
  • CloudWatch alarms can be configured
    • to send notifications or
    • to automatically make changes to the resources based on defined rules
  • CloudWatch dashboards are customizable home pages in the CloudWatch console used to monitor the resources in a single view, even those resources that are spread across different Regions.
  • CloudWatch Agent helps collect metrics and logs from EC2 instances and on-premises servers and push them to CloudWatch.

CloudWatch Architecture

  • CloudWatch collects metrics from various resources
  • These metrics, as statistics, are available to the user through the Console, CLI, and APIs
  • CloudWatch allows the creation of alarms with defined rules
    • to perform Auto Scaling actions, or to stop, start, or terminate instances
    • to send notifications using SNS actions on your behalf

CloudWatch Concepts

Namespaces

  • CloudWatch namespaces are containers for metrics.
  • Metrics in different namespaces are isolated from each other, so that metrics from different applications are not mistakenly aggregated into the same statistics.
  • AWS namespaces all follow the convention AWS/<service>, e.g. AWS/EC2 and AWS/ELB
  • Namespace names must be fewer than 256 characters in length.
  • There is no default namespace. Each data element put into CloudWatch must specify a namespace.

Metrics

  • Metric is the fundamental concept in CloudWatch.
  • Uniquely defined by a name, a namespace, and one or more dimensions.
  • Represents a time-ordered set of data points published to CloudWatch.
  • Each data point has a time stamp, and (optionally) a unit of measure.
  • Data points can be either custom metrics or metrics from other services in AWS.
  • Statistics can be retrieved about those data points as an ordered set of time-series data that occur within a specified time window.
  • When the statistics are requested, the returned data stream is identified by namespace, metric name, dimension, and (optionally) the unit.
  • Metrics exist only in the region in which they are created.
  • CloudWatch previously stored metric data for only two weeks; retention has since been extended to 15 months.
  • Metrics cannot be deleted, but they automatically expire after 15 months if no new data is published to them.
  • Metric retention is as follows
    • Data points with a period of less than 60 seconds are available for 3 hours. These data points are high-resolution custom metrics.
    • Data points with a 60 secs (1 min) period are available for 15 days
    • Data points with a 300 secs (5 min) period are available for 63 days
    • Data points with a 3600 secs (1 hour) period are available for 455 days (15 months)

Dimensions

  • A dimension is a name/value pair that uniquely identifies a metric.
  • Every metric has specific characteristics that describe it, and you can think of dimensions as categories for those characteristics.
  • Dimensions help design a structure for the statistics plan.
  • Dimensions are part of the unique identifier for a metric; whenever a unique name/value pair is added to a metric, a new metric is created.
  • Dimensions can be used to filter the result sets that CloudWatch queries return.
  • Up to ten dimensions can be assigned to a metric.
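
The sketch below shows how namespace, metric name, and dimensions together identify a metric when publishing with boto3; the MyApp/Orders namespace and dimension values are hypothetical.

```python
import boto3
from datetime import datetime, timezone

cloudwatch = boto3.client("cloudwatch")

# Namespace + metric name + dimensions uniquely identify the metric;
# publishing with a different dimension value creates a separate metric.
cloudwatch.put_metric_data(
    Namespace="MyApp/Orders",  # hypothetical custom namespace
    MetricData=[{
        "MetricName": "OrdersProcessed",
        "Dimensions": [
            {"Name": "Environment", "Value": "production"},
            {"Name": "Service", "Value": "checkout"},
        ],
        "Timestamp": datetime.now(timezone.utc),
        "Value": 42.0,
        "Unit": "Count",
    }],
)
```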

Time Stamps

  • Each metric data point must be marked with a time stamp to identify the data point on a time series.
  • Timestamp can be up to two weeks in the past and up to two hours into the future.
  • If no timestamp is provided, a time stamp based on the time the data element was received is created.
  • All times reflect the UTC time zone when statistics are retrieved

Resolution

  • Each metric is one of the following:
    • Standard resolution, with data having a one-minute granularity
    • High resolution, with data at a granularity of one second

Units

  • Units represent the statistic’s unit of measure e.g. count, bytes, %, etc

Statistics

  • Statistics are metric data aggregations over specified periods of time
  • Aggregations are made using the namespace, metric name, dimensions, and the data point unit of measure, within the specified time period

Periods

  • Period is the length of time associated with a specific statistic.
  • Each statistic represents an aggregation of the metrics data collected for a specified period of time.
  • Although periods are expressed in seconds, the minimum granularity for a period is one minute for standard-resolution metrics; high-resolution metrics support periods down to one second.

Aggregation

  • CloudWatch aggregates statistics according to the period length specified in calls to GetMetricStatistics.
  • Multiple data points can be published with the same or similar time stamps. CloudWatch aggregates them by period length when the statistics about those data points are requested.
  • Aggregated statistics are only available when using detailed monitoring.
  • Instances that use basic monitoring are not included in the aggregates
  • CloudWatch does not aggregate data across regions.
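
A minimal boto3 sketch of retrieving aggregated statistics with GetMetricStatistics, per the aggregation rules above; the instance ID is a hypothetical placeholder.

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-1234567890abcdef0"}],  # hypothetical instance
    StartTime=end - timedelta(hours=3),
    EndTime=end,
    Period=300,                      # aggregate into 5-minute buckets
    Statistics=["Average", "Maximum"],
)

# Datapoints are returned unordered; sort by timestamp for a time series.
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"], point["Maximum"])
```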

Alarms

  • Alarms can automatically initiate actions on behalf of the user, based on specified parameters.
  • Alarm watches a single metric over a specified time period, and performs one or more actions based on the value of the metric relative to a given threshold over a number of time periods.
  • Alarms invoke actions for sustained state changes only i.e. the state must have changed and been maintained for a specified number of periods.
  • An action can be
    • an SNS notification
    • an Auto Scaling policy
    • an EC2 action – stop or terminate the EC2 instance
  • After an alarm invokes an action due to a change in state, its subsequent behavior depends on the type of action associated with the alarm.
    • For Auto Scaling policy notifications, the alarm continues to invoke the action for every period that the alarm remains in the new state.
    • For SNS notifications, no additional actions are invoked.
  • An alarm has three possible states:
    • OK—The metric is within the defined threshold
    • ALARM—The metric is outside of the defined threshold
    • INSUFFICIENT_DATA—Alarm has just started, the metric is not available, or not enough data is available for the metric to determine the alarm state
  • Alarms exist only in the region in which they are created.
  • Alarm actions must reside in the same region as the alarm
  • Alarm history is available for the last 14 days.
  • Alarm can be tested by setting it to any state using the SetAlarmState API (mon-set-alarm-state command). This temporary state change lasts only until the next alarm comparison occurs.
  • Alarms can be disabled and enabled using the DisableAlarmActions and EnableAlarmActions APIs (mon-disable-alarm-actions and mon-enable-alarm-actions commands).
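
A boto3 sketch of creating an alarm and then simulating its state with SetAlarmState, as described above; the alarm name, instance ID, and SNS topic ARN are hypothetical.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# The alarm fires only after the threshold is breached for 2 consecutive
# 5-minute periods -- a sustained state change, not a single spike.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu",  # hypothetical alarm name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-1234567890abcdef0"}],  # hypothetical
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=75.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],  # hypothetical SNS topic
)

# Test the alarm by forcing its state; the change lasts only until the
# next alarm comparison occurs.
cloudwatch.set_alarm_state(
    AlarmName="high-cpu",
    StateValue="ALARM",
    StateReason="Testing alarm actions",
)
```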

Regions

  • CloudWatch does not aggregate data across regions. Therefore, metrics are completely separate between regions.

Custom Metrics

  • CloudWatch allows publishing custom metrics with put-metric-data CLI command (or its Query API equivalent PutMetricData)
  • CloudWatch creates a new metric if put-metric-data is called with a new metric name; otherwise, it associates the data with the specified existing metric
  • a single put-metric-data call with --value publishes one data point; multiple data points can be sent in one call using --metric-data
  • CloudWatch stores data about a metric as a series of data points and each data point has an associated time stamp
  • After creating a new metric with the put-metric-data command, it can take up to two minutes before statistics can be retrieved using the get-metric-statistics command, and up to fifteen minutes before the new metric appears in the list of metrics retrieved using the list-metrics command.
  • CloudWatch allows publishing
    • Single data point
      • Data points can be published with time stamps as granular as one-thousandth of a second, but CloudWatch aggregates the data to a minimum granularity of one minute
      • CloudWatch records the average (sum of all items divided by number of items) of the values received for every 1-minute period, as well as number of samples, maximum value, and minimum value for the same time period
      • CloudWatch uses one-minute boundaries when aggregating data points
    • Aggregated set of data points called a statistics set
      • Data can also be aggregated before being published to CloudWatch
      • Aggregating data minimizes the number of calls reducing it to a single call per minute with the statistic set of data
      • Statistics include Sum, Average, Minimum, Maximum, SampleCount
  • If the application produces data that is sporadic, with periods that have no associated data, either the value zero (0) or no value at all can be published
  • However, it can be helpful to publish zero instead of no value
    • to monitor the health of the application, e.g. an alarm can be configured to send a notification if no metrics are published every 5 minutes
    • to track the total number of data points
    • to have statistics such as minimum and average to include data points with the value 0.
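
Below is a sketch of publishing a pre-aggregated statistic set with boto3, assuming a hypothetical MyApp/Latency namespace; this reduces the publish calls to one per minute.

```python
import boto3
from datetime import datetime, timezone

cloudwatch = boto3.client("cloudwatch")

# Pre-aggregate a minute's worth of samples locally and publish them as a
# single statistic set instead of one call per data point. Average is
# derived by CloudWatch from Sum / SampleCount.
samples = [12.0, 15.5, 9.8, 20.1]
cloudwatch.put_metric_data(
    Namespace="MyApp/Latency",  # hypothetical namespace
    MetricData=[{
        "MetricName": "RequestLatency",
        "Timestamp": datetime.now(timezone.utc),
        "StatisticValues": {
            "SampleCount": len(samples),
            "Sum": sum(samples),
            "Minimum": min(samples),
            "Maximum": max(samples),
        },
        "Unit": "Milliseconds",
    }],
)
```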

CloudWatch Dashboards

  • CloudWatch dashboards are customizable home pages in the CloudWatch console used to monitor the resources in a single view, even those resources that are spread across different Regions.
  • Dashboards can be used to create customized views of the metrics and alarms for the AWS resources.
  • Dashboards can help to create
    • A single view for selected metrics and alarms to help assess the health of the resources and applications across one or more Regions.
    • An operational playbook that provides guidance for team members during operational events about how to respond to specific incidents.
    • A common view of critical resource and application measurements that can be shared by team members for faster communication flow during operational events.
  • CloudWatch cross-account observability helps monitor and troubleshoot applications that span multiple accounts within a Region.
  • Cross-account observability includes monitoring and source accounts
    • A monitoring account is a central AWS account that can view and interact with observability data generated from source accounts.
    • A source account is an individual AWS account that generates observability data for the resources that reside in it.
    • Source accounts share their observability data with the monitoring account which can include the following types of telemetry:
      • Metrics in CloudWatch
      • Log groups in CloudWatch Logs
      • Traces in AWS X-Ray
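
To make the dashboard concepts above concrete, here is a minimal boto3 sketch of creating a dashboard with PutDashboard; the dashboard name, widget layout, and instance ID are hypothetical.

```python
import json
import boto3

cloudwatch = boto3.client("cloudwatch")

# A one-widget dashboard graphing EC2 CPU; widgets can reference metrics
# from other Regions via the "region" property, giving a single view
# across Regions.
dashboard_body = {
    "widgets": [{
        "type": "metric",
        "x": 0, "y": 0, "width": 12, "height": 6,
        "properties": {
            "metrics": [["AWS/EC2", "CPUUtilization", "InstanceId", "i-1234567890abcdef0"]],
            "period": 300,
            "stat": "Average",
            "region": "us-east-1",
            "title": "EC2 CPU Utilization",
        },
    }],
}

cloudwatch.put_dashboard(
    DashboardName="ops-overview",  # hypothetical dashboard name
    DashboardBody=json.dumps(dashboard_body),
)
```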

CloudWatch Agent

  • CloudWatch Agent helps collect metrics and logs from EC2 instances and on-premises servers and push them to CloudWatch.
  • Logs collected by the unified agent are processed and stored in CloudWatch Logs.

CloudWatch Logs

Refer blog post @ CloudWatch Logs

CloudWatch Supported Services

Refer blog post @ CloudWatch Supported Services

Accessing CloudWatch

  • CloudWatch can be accessed using
    • AWS CloudWatch console
    • CloudWatch CLI
    • AWS CLI
    • CloudWatch API
    • AWS SDKs


AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A company needs to monitor the read and write IOPs metrics for their AWS MySQL RDS instance and send real-time alerts to their operations team. Which AWS services can accomplish this? Choose 2 answers
    1. Amazon Simple Email Service (Cannot be integrated with CloudWatch directly)
    2. Amazon CloudWatch
    3. Amazon Simple Queue Service
    4. Amazon Route 53
    5. Amazon Simple Notification Service
  2. A customer needs to capture all client connection information from their load balancer every five minutes. The company wants to use this data for analyzing traffic patterns and troubleshooting their applications. Which of the following options meets the customer requirements?
    1. Enable AWS CloudTrail for the load balancer.
    2. Enable access logs on the load balancer. (Refer link)
    3. Install the Amazon CloudWatch Logs agent on the load balancer.
    4. Enable Amazon CloudWatch metrics on the load balancer (does not provide Client connection information)
  3. A user is running a batch process on EBS backed EC2 instances. The batch process starts a few instances to process Hadoop Map reduce jobs, which can run between 50 – 600 minutes or sometimes for more time. The user wants to configure that the instance gets terminated only when the process is completed. How can the user configure this with CloudWatch?
    1. Setup the CloudWatch action to terminate the instance when the CPU utilization is less than 5%
    2. Setup the CloudWatch with Auto Scaling to terminate all the instances
    3. Setup a job which terminates all instances after 600 minutes
    4. It is not possible to terminate instances automatically
  4. A user has two EC2 instances running in two separate regions. The user is running an internal memory management tool, which captures the data and sends it to CloudWatch in US East, using a CLI with the same namespace and metric. Which of the below mentioned options is true with respect to the above statement?
    1. The setup will not work as CloudWatch cannot receive data across regions
    2. CloudWatch will receive and aggregate the data based on the namespace and metric
    3. CloudWatch will give an error since the data will conflict due to two sources
    4. CloudWatch will take the data of the server, which sends the data first
  5. A user is sending the data to CloudWatch using the CloudWatch API. The user is sending data 90 minutes in the future. What will CloudWatch do in this case?
    1. CloudWatch will accept the data
    2. It is not possible to send data of the future
    3. It is not possible to send the data manually to CloudWatch
    4. The user cannot send data for more than 60 minutes in the future
  6. A user is having data generated randomly based on a certain event. The user wants to upload that data to CloudWatch. It may happen that event may not have data generated for some period due to randomness. Which of the below mentioned options is a recommended option for this case?
    1. For the period when there is no data, the user should not send the data at all
    2. For the period when there is no data the user should send a blank value
    3. For the period when there is no data the user should send the value as 0 (Refer User Guide)
    4. The user must upload the data to CloudWatch as having no data for some period will cause an error at CloudWatch monitoring
  7. A user has a weighing plant. The user measures the weight of some goods every 5 minutes and sends data to AWS CloudWatch for monitoring and tracking. Which of the below mentioned parameters is mandatory for the user to include in the request list?
    1. Value
    2. Namespace (refer put-metric request)
    3. Metric Name
    4. Timezone
  8. A user has a refrigerator plant. The user is measuring the temperature of the plant every 15 minutes. If the user wants to send the data to CloudWatch to view the data visually, which of the below mentioned statements is true with respect to the information given above?
    1. The user needs to use AWS CLI or API to upload the data
    2. The user can use the AWS Import Export facility to import data to CloudWatch
    3. The user will upload data from the AWS console
    4. The user cannot upload data to CloudWatch since it is not an AWS service metric
  9. A user has launched an EC2 instance. The user is planning to setup the CloudWatch alarm. Which of the below mentioned actions is not supported by the CloudWatch alarm?
    1. Notify the Auto Scaling launch config to scale up
    2. Send an SMS using SNS
    3. Notify the Auto Scaling group to scale down
    4. Stop the EC2 instance
  10. A user has a refrigerator plant. The user is measuring the temperature of the plant every 15 minutes. If the user wants to send the data to CloudWatch to view the data visually, which of the below mentioned statements is true with respect to the information given above?
    1. The user needs to use AWS CLI or API to upload the data
    2. The user can use the AWS Import Export facility to import data to CloudWatch
    3. The user will upload data from the AWS console
    4. The user cannot upload data to CloudWatch since it is not an AWS service metric
  11. A user is trying to aggregate all the CloudWatch metric data of the last 1 week. Which of the below mentioned statistics is not available for the user as a part of data aggregation?
    1. Aggregate
    2. Sum
    3. Sample data
    4. Average
  12. A user has setup a CloudWatch alarm on an EC2 action when the CPU utilization is above 75%. The alarm sends a notification to SNS on the alarm state. If the user wants to simulate the alarm action how can he achieve this?
    1. Run activities on the CPU such that its utilization reaches above 75%
    2. From the AWS console change the state to ‘Alarm’
    3. The user can set the alarm state to ‘Alarm’ using CLI
    4. Run the SNS action manually
  13. A user is publishing custom metrics to CloudWatch. Which of the below mentioned statements will help the user understand the functionality better?
    1. The user can use the CloudWatch Import tool
    2. The user should be able to see the data in the console after around 15 minutes
    3. If the user is uploading the custom data, the user must supply the namespace, timezone, and metric name as part of the command
    4. The user can view as well as upload data using the console, CLI and APIs
  14. An application that you are managing has EC2 instances and DynamoDB tables deployed to several AWS Regions. In order to monitor the performance of the application globally, you would like to see two graphs 1) Avg CPU Utilization across all EC2 instances and 2) Number of Throttled Requests for all DynamoDB tables. How can you accomplish this? [PROFESSIONAL]
    1. Tag your resources with the application name, and select the tag name as the dimension in the CloudWatch Management console to view the respective graphs (CloudWatch metrics are regional)
    2. Use the CloudWatch CLI tools to pull the respective metrics from each regional endpoint. Aggregate the data offline & store it for graphing in CloudWatch.
    3. Add SNMP traps to each instance and DynamoDB table. Leverage a central monitoring server to capture data from each instance and table. Put the aggregate data into CloudWatch for graphing (Can’t add SNMP traps to DynamoDB as it is a managed service)
    4. Add a CloudWatch agent to each instance and attach one to each DynamoDB table. When configuring the agent set the appropriate application name & view the graphs in CloudWatch. (Can’t add agents to DynamoDB as it is a managed service)
  15. You have set up Individual AWS accounts for each project. You have been asked to make sure your AWS Infrastructure costs do not exceed the budget set per project for each month. Which of the following approaches can help ensure that you do not exceed the budget each month? [PROFESSIONAL]
    1. Consolidate your accounts so you have a single bill for all accounts and projects (Consolidation will not help limit per account)
    2. Set up auto scaling with CloudWatch alarms using SNS to notify you when you are running too many Instances in a given account (many instances do not directly map to cost and would not give exact cost)
    3. Set up CloudWatch billing alerts for all AWS resources used by each project, with a notification occurring when the amount for each resource tagged to a particular project matches the budget allocated to the project. (as each project already has a account, no need for resource tagging)
    4. Set up CloudWatch billing alerts for all AWS resources used by each account, with email notifications when it hits 50%. 80% and 90% of its budgeted monthly spend
  16. You meet once per month with your operations team to review the past month’s data. During the meeting, you realize that 3 weeks ago, your monitoring system which pings over HTTP from outside AWS recorded a large spike in latency on your 3-tier web service API. You use DynamoDB for the database layer, ELB, EBS, and EC2 for the business logic tier, and SQS, ELB, and EC2 for the presentation layer. Which of the following techniques will NOT help you figure out what happened?
    1. Check your CloudTrail log history around the spike’s time for any API calls that caused slowness.
    2. Review CloudWatch Metrics graphs to determine which component(s) slowed the system down. (Metric data was previously available for only 2 weeks; retention has since been extended)
    3. Review your ELB access logs in S3 to see if any ELBs in your system saw the latency.
    4. Analyze your logs to detect bursts in traffic at that time.
  17. You have a high security requirement for your AWS accounts. What is the most rapid and sophisticated setup you can use to react to AWS API calls to your account?
    1. Subscription to AWS Config via an SNS Topic. Use a Lambda Function to perform in-flight analysis and reactivity to changes as they occur.
    2. Global AWS CloudTrail setup delivering to S3 with an SNS subscription to the delivery notifications, pushing into a Lambda, which inserts records into an ELK stack for analysis.
    3. Use a CloudWatch Rule ScheduleExpression to periodically analyze IAM credential logs. Push the deltas for events into an ELK stack and perform ad-hoc analysis there.
    4. CloudWatch Events Rules, which trigger based on all AWS API calls, submitting all events to an AWS Kinesis Stream for arbitrary downstream analysis. (CloudWatch Events allow subscription to AWS API calls, and direction of these events into Kinesis Streams. This allows a unified, near real-time stream for all API calls, which can be analyzed with any tool(s). Refer link)
  18. To monitor API calls against our AWS account by different users and entities, we can use ____ to create a history of calls in bulk for later review, and use ____ for reacting to AWS API calls in real-time.
    1. AWS Config; AWS Inspector
    2. AWS CloudTrail; AWS Config
    3. AWS CloudTrail; CloudWatch Events (CloudTrail is a batch API call collection service, CloudWatch Events enables real-time monitoring of calls through the Rules object interface. Refer link)
    4. AWS Config; AWS Lambda
  19. You are hired as the new head of operations for a SaaS company. Your CTO has asked you to make debugging any part of your entire operation simpler and as fast as possible. She complains that she has no idea what is going on in the complex, service-oriented architecture, because the developers just log to disk, and it’s very hard to find errors in logs on so many services. How can you best meet this requirement and satisfy your CTO? [PROFESSIONAL]
    1. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Lambda. Use the Lambda to analyze logs as soon as they come in and flag issues. (is not fast in search and introduces delay)
    2. Begin using CloudWatch Logs on every service. Stream all Log Groups into S3 objects. Use AWS EMR cluster jobs to perform adhoc MapReduce analysis and write new queries when needed. (is not fast in search and introduces delay)
    3. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Kinesis. Use Apache Spark on AWS EMR to perform at-scale stream processing queries on the log chunks and flag issues. (is not fast in search and introduces delay)
    4. Begin using CloudWatch Logs on every service. Stream all Log Groups into an AWS Elasticsearch Service Domain running Kibana 4 and perform log analysis on a search cluster. (ELK – Elasticsearch, Kibana stack is designed specifically for real-time, ad-hoc log analysis and aggregation)
  20. Your EC2-based multi-tier application includes a monitoring instance that periodically makes application-level read-only requests of various application components, and if any of those fail more than three times in 30 seconds, calls CloudWatch to fire an alarm, and the alarm notifies your operations team by email and SMS of a possible application health problem. However, you also need to watch the watcher – the monitoring instance itself – and be notified if it becomes unhealthy. Which of the following is a simple way to achieve that goal? [PROFESSIONAL]
    1. Run another monitoring instance that pings the monitoring instance and fires a CloudWatch alarm that notifies your operations team should the primary monitoring instance become unhealthy.
    2. Set a CloudWatch alarm based on EC2 system and instance status checks and have the alarm notify your operations team of any detected problem with the monitoring instance.
    3. Set a CloudWatch alarm based on the CPU utilization of the monitoring instance and have the alarm notify your operations team if the CPU usage exceeds 50% for more than one minute; then have your monitoring application go into a CPU-bound loop should it detect any application problems.
    4. Have the monitoring instance post messages to an SQS queue and dequeue those messages on another instance; should the queue cease to have new messages, the second instance should first terminate the original monitoring instance, then start a backup monitoring instance that assumes the role of the previous monitoring instance and begins adding messages to the SQS queue.

AWS Application Discovery Service

  • AWS Application Discovery Service helps plan migration to the AWS cloud by collecting usage and configuration data about the on-premises servers.
  • helps enterprises obtain a snapshot of the current state of their data center servers by collecting server specification information, hardware configuration, performance data, details of running processes, and network connections
  • is integrated with AWS Migration Hub,
    • which simplifies migration tracking as it aggregates migration status information into a single console.
    • can help view the discovered servers, group them into applications, and then track the migration status of each application.
  • discovered data for all the regions is stored in the AWS Migration Hub home Region.
  • The data can be exported for analysis in Microsoft Excel or AWS analysis tools such as Amazon Athena and Amazon QuickSight.
  • supports agent-based and agentless on-premises tooling, in addition to file-based import, for performing discovery and collecting data about the on-premises servers.

Application Discovery Service Modes

Agentless discovery

  • is an on-premises application that collects information through agentless methods.
  • can be performed by deploying the Agentless Collector (OVA file) through the VMware vCenter.
  • After Agentless Collector is configured,
    • it identifies VMs and hosts associated with vCenter.
    • collects the following static configuration data: Server hostnames, IP addresses, MAC addresses, and disk resource allocations.
    • Additionally, it collects the utilization data for each VM and computes average and peak utilization for metrics such as CPU, RAM, and Disk I/O.

Agent-based discovery

  • can be performed by deploying the Application Discovery Agent on each of the VMs and physical servers.
  • supports most Windows and Linux operating systems.
  • can be deployed on physical on-premises servers, EC2 instances, and virtual machines.
  • collects static configuration data, detailed time-series system-performance information, inbound and outbound network connections, and processes that are running.
  • pings the Discovery Service at 15-minute intervals for configuration information.
  • transmits data securely to the Discovery Service using TLS encryption.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A company is migrating its on-premises systems to AWS. The user environment consists of the following systems:
    • Windows and Linux virtual machines running on VMware.
    • Physical servers running Red Hat Enterprise Linux.
    The company wants to be able to perform the following steps before migrating to AWS:
    • Identify dependencies between on-premises systems.
    • Group systems together into applications to build migration plans.
    How can these requirements be met?

    1. Install the AWS Systems Manager Discovery Agent on each of the on-premises systems.
    2. Install the AWS Application Discovery Service Discovery Agent on each of the on-premises systems.
    3. Install the AWS Application Discovery Service Discovery Connector on each of the on-premises systems and in VMware vCenter.
    4. Install the AWS Application Discovery Service Discovery Agent on the physical on-premises servers. Install the AWS Application Discovery Service Discovery Connector in VMware vCenter.

AWS Simple Notification Service – SNS

  • Simple Notification Service – SNS is a web service that coordinates and manages the delivery or sending of messages to subscribing endpoints or clients.
  • SNS provides the ability to create a Topic which is a logical access point and communication channel.
  • Each topic has a unique name that identifies the SNS endpoint for publishers to post messages and subscribers to register for notifications.
  • Producers and consumers communicate asynchronously by publishing and receiving messages on a topic.
  • Producers push messages to a topic they created or have access to, and SNS matches the topic to a list of subscribers who have subscribed to that topic and delivers the message to each of those subscribers.
  • Subscribers receive all messages published to the topics to which they subscribe, and all subscribers to a topic receive the same messages.
  • Subscribers (i.e., web servers, email addresses, SQS queues, AWS Lambda functions) consume or receive the message or notification over one of the supported protocols (i.e., SQS, HTTP/S, email, SMS, Lambda) when they are subscribed to the topic.
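
A boto3 sketch of the topic/subscribe/publish flow described above; the topic name, email address, and queue ARN are hypothetical (an SQS subscription additionally requires a queue policy that allows SNS to send messages).

```python
import boto3

sns = boto3.client("sns")

# Create a topic (idempotent for the same name); the returned ARN is the
# endpoint that publishers and subscribers use.
topic_arn = sns.create_topic(Name="order-events")["TopicArn"]  # hypothetical name

# Fan out to two different protocols; every subscriber receives every message.
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops@example.com")  # hypothetical
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="sqs",
    Endpoint="arn:aws:sqs:us-east-1:111122223333:order-queue",  # hypothetical queue
)

# Publish once; SNS delivers to all confirmed subscriptions.
sns.publish(TopicArn=topic_arn, Subject="Order placed", Message="order id 12345")
```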

Accessing SNS

  • AWS Management Console
    • The AWS Management Console is the web-based user interface that can be used to manage SNS
  • AWS Command-line Interface (CLI)
    • Provides commands for a broad set of AWS products, and is supported on Windows, Mac, and Linux
  • AWS Tools for Windows PowerShell
    • Provides commands for a broad set of AWS products for those who script in the PowerShell environment
  • AWS SNS Query API
    • The Query API allows issuing HTTP or HTTPS requests that use the GET or POST verb and a Query parameter named Action
  • AWS SDK libraries
    • AWS provides libraries in various languages that offer basic functions to automate tasks such as cryptographically signing requests, retrying requests, and handling error responses

SNS Supported Transport Protocols

  • HTTP, HTTPS – Subscribers specify a URL as part of the subscription registration; notifications will be delivered through an HTTP POST to the specified URL.
  • Email, Email-JSON – Messages are sent to registered addresses as email. Email-JSON sends notifications as a JSON object, while Email sends text-based email.
  • SQS – Users can specify an SQS queue as the endpoint; SNS will enqueue a notification message to the specified queue (which subscribers can then process using SQS APIs such as ReceiveMessage, DeleteMessage, etc.)
  • SMS – Messages are sent to registered phone numbers as SMS text messages

SNS Supported Endpoints

  • Email Notifications
    • SNS provides the ability to send Email notifications
  • Mobile Push Notifications
    • SNS provides the ability to send push notification messages directly to apps on mobile devices. Push notification messages sent to a mobile endpoint can appear in the mobile app as message alerts, badge updates, or even sound alerts
    • Supported push notification services
      • Amazon Device Messaging (ADM)
      • Apple Push Notification Service (APNS)
      • Google Cloud Messaging (GCM)
      • Windows Push Notification Service (WNS) for Windows 8+ and Windows Phone 8.1+
      • Microsoft Push Notification Service (MPNS) for Windows Phone 7+
      • Baidu Cloud Push for Android devices in China
  • SQS Queues
    • SNS with SQS provides the ability for messages to be delivered to applications that require immediate notification of an event, and also persist in an SQS queue for other applications to process at a later time
    • SNS allows applications to send time-critical messages to multiple subscribers through a “push” mechanism, eliminating the need to periodically check or “poll” for updates.
    • SQS can be used by distributed applications to exchange messages through a polling model, and can be used to decouple sending and receiving components, without requiring each component to be concurrently available.
  • SMS Notifications
    • SNS provides the ability to send and receive Short Message Service (SMS) notifications to SMS-enabled mobile phones and smart phones
  • HTTP/HTTPS Endpoints
    • SNS provides the ability to send notification messages to one or more HTTP or HTTPS endpoints. When you subscribe an endpoint to a topic, you can publish a notification to the topic and Amazon SNS sends an HTTP POST request delivering the contents of the notification to the subscribed endpoint
  • Lambda
    • SNS and Lambda are integrated so Lambda functions can be invoked with SNS notifications.
    • When a message is published to an SNS topic that has a Lambda function subscribed to it, the Lambda function is invoked with the payload of the published message
  • Kinesis Data Firehose
    • Deliver events to delivery streams for archiving and analysis purposes.
    • Through delivery streams, events can be delivered to AWS destinations like S3, Redshift, and OpenSearch Service, or to third-party destinations such as Datadog, New Relic, MongoDB, and Splunk.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. Which of the following notification endpoints or clients does Amazon Simple Notification Service support? Choose 2 answers
    1. Email
    2. CloudFront distribution
    3. File Transfer Protocol
    4. Short Message Service
    5. Simple Network Management Protocol
  2. What happens when you create a topic on Amazon SNS?
    1. The topic is created, and it has the name you specified for it.
    2. An ARN (Amazon Resource Name) is created
    3. You can create a topic on Amazon SQS, not on Amazon SNS.
    4. This question doesn’t make sense.
  3. A user has deployed an application on his private cloud. The user is using his own monitoring tool. He wants to configure that whenever there is an error, the monitoring tool should notify him via SMS. Which of the below mentioned AWS services will help in this scenario?
    1. None because the user infrastructure is in the private cloud.
    2. AWS SNS
    3. AWS SES
    4. AWS SMS
  4. A user wants to make it so that whenever the CPU utilization of the AWS EC2 instance is above 90%, the red light of his bedroom turns on. Which of the below mentioned AWS services is helpful for this purpose?
    1. AWS CloudWatch + AWS SES
    2. AWS CloudWatch + AWS SNS
    3. It is not possible to configure the light with the AWS infrastructure services
    4. AWS CloudWatch and a dedicated software turning on the light
  5. A user is trying to understand AWS SNS. To which of the below mentioned end points is SNS unable to send a notification?
    1. Email JSON
    2. HTTP
    3. AWS SQS
    4. AWS SES
  6. A user is running a webserver on EC2. The user wants to receive the SMS when the EC2 instance utilization is above the threshold limit. Which AWS services should the user configure in this case?
    1. AWS CloudWatch + AWS SES
    2. AWS CloudWatch + AWS SNS
    3. AWS CloudWatch + AWS SQS
    4. AWS EC2 + AWS CloudWatch
  7. A user is planning to host a mobile game on EC2 which sends notifications to active users on either high score or the addition of new features. The user should get this notification when he is online on his mobile device. Which of the below mentioned AWS services can help achieve this functionality?
    1. AWS Simple Notification Service
    2. AWS Simple Queue Service
    3. AWS Mobile Communication Service
    4. AWS Simple Email Service
  8. You are providing AWS consulting services for a company developing a new mobile application that will be leveraging Amazon SNS push for push notifications. In order to send direct notification messages to individual devices, each device registration identifier or token needs to be registered with SNS; however, the developers are not sure of the best way to do this. You advise them to:
    1. Bulk upload the device tokens contained in a CSV file via the AWS Management Console
    2. Let the push notification service (e.g. Amazon Device messaging) handle the registration
    3. Implement a token vending service to handle the registration
    4. Call the CreatePlatformEndpoint API function to register multiple device tokens. (Refer documentation)
  9. A company is running a batch analysis every hour on their main transactional DB running on an RDS MySQL instance to populate their central Data Warehouse running on Redshift. During the execution of the batch, their transactional applications are very slow. When the batch completes, they need to update the top management dashboard with the new data. The dashboard is produced by another system running on-premises that is currently started when a manually sent email notifies that an update is required. The on-premises system cannot be modified because it is managed by another team. How would you optimize this scenario to solve performance issues and automate the process as much as possible?
    1. Replace RDS with Redshift for the batch analysis and SNS to notify the on-premises system to update the dashboard
    2. Replace RDS with Redshift for the batch analysis and SQS to send a message to the on-premises system to update the dashboard
    3. Create an RDS Read Replica for the batch analysis and SNS to notify me on-premises system to update the dashboard
    4. Create an RDS Read Replica for the batch analysis and SQS to send a message to the on-premises system to update the dashboard.
  10. Which of the following are valid SNS delivery transports? Choose 2 answers.
    1. HTTP
    2. UDP
    3. SMS
    4. DynamoDB
    5. Named Pipes
  11. What is the format of structured notification messages sent by Amazon SNS?
    1. An XML object containing MessageId, UnsubscribeURL, Subject, Message and other values
    2. A JSON object containing MessageId, DuplicateFlag, Message and other values
    3. An XML object containing MessageId, DuplicateFlag, Message and other values
    4. A JSON object containing MessageId, UnsubscribeURL, Subject, Message and other values
  12. Which of the following are valid arguments for an SNS Publish request? Choose 3 answers.
    1. TopicArn
    2. Subject
    3. Destination
    4. Format
    5. Message
    6. Language

AWS Migration Hub

  • AWS Migration Hub provides a centralized, single place to discover the existing servers, plan migrations, and track the status of each application migration.
  • provides visibility into the application portfolio and streamlines planning and tracking.
  • helps visualize the connections and the status of the migrating servers and databases, regardless of which migration tool is used.
  • stores all the data in the selected Home Region and provides a single repository of discovery and migration planning information for the entire portfolio and a single view of migrations into multiple AWS Regions.
  • helps track the status of the migrations in all AWS Regions, provided the migration tools are available in that Region.
  • helps understand the environment by letting you explore information collected by AWS discovery tools and stored in the AWS Application Discovery Service’s repository.
  • supports migration status updates from AWS and partner migration tools
  • migration tools send migration status to the selected Home Region
  • supports EC2 instance recommendations, which provide the ability to estimate the cost of running the existing servers in AWS.
  • supports Strategy Recommendations, which help build a migration and modernization strategy for the applications running on-premises or in AWS.

Migration Hub’s Strategy Recommendations

  • AWS Migration Hub’s Strategy Recommendations help easily build a migration and modernization strategy for the applications running on-premises or in AWS.
  • Strategy Recommendations provides guidance on the strategy and tools that help you migrate and modernize at scale.
  • Strategy Recommendations supports analysis for potential rehost (EC2) and replatform (managed environments such as RDS and Elastic Beanstalk, containers, and OS upgrades) options for applications running on Windows Server 2003 or above, or a wide variety of Linux distributions, including Ubuntu, Red Hat, Oracle Linux, Debian, and Fedora.
  • Strategy Recommendations offers additional refactor analysis for custom applications written in C# and Java, and licensed databases (such as Microsoft SQL Server and Oracle).

EC2 Instance Recommendations

  • EC2 instance recommendations help analyze the data collected from each on-premises server, including server specification, CPU, and memory utilization, to recommend the most cost-effective EC2 instance capable of running the on-premises workload.
  • EC2 instance recommendations can be fine-tuned by specifying preferences for AWS purchasing options, AWS Region, EC2 instance type exclusions, and CPU/RAM utilization metric (average, peak, or percentile).

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A company wants to migrate its on-premises data center to the AWS Cloud. This includes thousands of virtualized Linux and Microsoft Windows servers, SAN storage, Java and PHP applications with MYSQL, and Oracle databases. There are many department services hosted either in the same data center or externally.
    The technical documentation is incomplete and outdated. A solutions architect needs to understand the current environment and estimate the cloud resource costs after the migration.
    Which tools or services should be used to plan the cloud migration? (Choose TWO)

    1. AWS Application Discovery Service
    2. AWS SMS
    3. AWS X-Ray
    4. Amazon Inspector
    5. AWS Migration Hub


AWS Cloud Migration Services

  • AWS Cloud Migration services help to address a lot of common use cases such as
    • cloud migration,
    • disaster recovery,
    • data center decommission, and
    • content distribution.
  • For migrating data from on-premises to AWS, the major aspects for consideration are
    • amount of data and network speed
    • data security in transit
    • existing application knowledge for recreation

Application & Database Cloud Migration Services

AWS Migration Hub

  • provides a centralized, single place to discover the existing servers, plan migrations, and track the status of each application migration.
  • provides visibility into the application portfolio and streamlines planning and tracking.
  • helps visualize the connections and the status of the migrating servers and databases, regardless of which migration tool is used.
  • stores all the data in the selected Home Region and provides a single repository of discovery and migration planning information for the entire portfolio and a single view of migrations into multiple AWS Regions.
  • helps track the status of the migrations in all AWS Regions, provided the migration tools are available in that Region.
  • helps understand the environment by letting you explore information collected by AWS discovery tools and stored in the AWS Application Discovery Service’s repository.
  • supports migration status updates from AWS and partner migration tools
  • migration tools send migration status to the selected Home Region
  • supports EC2 instance recommendations, which provide the ability to estimate the cost of running the existing servers in AWS.
  • supports Strategy Recommendations, which help build a migration and modernization strategy for the applications running on-premises or in AWS.

AWS Application Discovery Service

  • AWS Application Discovery Service helps plan migration to the AWS cloud by collecting usage and configuration data about the on-premises servers.
  • helps enterprises obtain a snapshot of the current state of their data center servers by collecting server specification information, hardware configuration, performance data, details of running processes, and network connections
  • is integrated with AWS Migration Hub,
    • which simplifies migration tracking as it aggregates migration status information into a single console.
    • can help view the discovered servers, group them into applications, and then track the migration status of each application.
  • discovered data for all the regions is stored in the AWS Migration Hub home Region.
  • The data can be exported for analysis in Microsoft Excel or AWS analysis tools such as Amazon Athena and Amazon QuickSight.
  • supports agent-based and agentless on-premises tooling, in addition to file-based import, for performing discovery and collecting data about the on-premises servers.

AWS Server Migration Service (SMS)

  • is an agentless service that makes it easier and faster to migrate thousands of on-premises workloads to AWS.
  • helps automate, schedule, and track incremental replications of live server volumes, making it easier to coordinate large-scale server migrations.
  • currently supports migration of virtual machines from VMware vSphere, Windows Hyper-V, and Azure VM to AWS
  • supports migrating Windows Server 2003, 2008, 2012, and 2016, and Windows 7, 8, and 10; Red Hat Enterprise Linux (RHEL), SUSE/SLES, CentOS, Ubuntu, Oracle Linux, Fedora, and Debian Linux OS
  • replicates each server volume and saves it as a new AMI, which can be launched as an EC2 instance
  • is a significant enhancement of EC2 VM Import/Export service
  • is used to Re-host

AWS Database Migration Service (DMS)

  • helps migrate databases to AWS quickly and securely.
  • source database remains fully operational during the migration, minimizing downtime to applications that rely on the database.
  • supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle or Microsoft SQL Server to Amazon Aurora.
  • monitors replication tasks for network or host failures, and automatically provisions a host replacement in case of failures that can’t be repaired
  • supports both one-time data migration into RDS and EC2-based databases as well as for continuous data replication
  • supports continuous replication of the data with high availability and consolidate databases into a petabyte-scale data warehouse by streaming data to Amazon Redshift and Amazon S3
  • provides free AWS Schema Conversion Tool (SCT) that automates the conversion of Oracle PL/SQL and SQL Server T-SQL code to equivalent code in the Amazon Aurora / MySQL dialect of SQL or the equivalent PL/pgSQL code in PostgreSQL

AWS EC2 VM Import/Export

  • allows easy import of virtual machine images from an existing environment to EC2 instances and export them back to the on-premises environment
  • allows leveraging of existing investments in the virtual machines, built to meet compliance requirements, configuration management and IT security by bringing those virtual machines into EC2 as ready-to-use instances
  • Common usages include
    • Migrate Existing Applications and Workloads to EC2, allowing preserving of the software and settings configured in the existing VMs.
    • Copy Your VM Image Catalog to EC2
    • Create a Disaster Recovery Repository for your VM images

Data Transfer Services

VPN

  • connection utilizes IPsec to establish encrypted network connectivity between the on-premises network and VPC over the Internet.
  • connections can be configured in minutes and are a good solution for an immediate need, low to modest bandwidth requirements, and tolerance for the inherent variability in Internet-based connectivity.
  • still requires the Internet and is configured using a VGW (Virtual Private Gateway) and CGW (Customer Gateway)

AWS Direct Connect

  • provides a dedicated physical connection between the corporate network and AWS Direct Connect location with no data transfer over the Internet.
  • helps bypass Internet service providers (ISPs) in the network path
  • helps reduce network costs, increase bandwidth throughput, and provide a more consistent network experience than with Internet-based connection
  • takes time to set up and involves third parties
  • connections are not redundant by themselves and would need another Direct Connect connection or a VPN connection for redundancy
  •  Security
    • provides a dedicated physical connection without internet
    • For additional security can be used with VPN

AWS Import/Export (upgraded to Snowball)

  • accelerates moving large amounts of data into and out of AWS using secure Snowball appliances
  • AWS transfers the data directly onto and off of the storage devices using Amazon’s high-speed internal network, bypassing the Internet
  • Data Migration
    • for significant data sizes, AWS Import/Export is faster than Internet transfer and more cost-effective than upgrading the connectivity
    • if loading the data over the Internet would take a week or more, AWS Import/Export should be considered
    • data from appliances can be imported to S3, Glacier and EBS volumes and exported from S3
    • not suitable for applications that cannot tolerate offline transfer time
  •  Security
    • Snowball uses an industry-standard Trusted Platform Module (TPM) that has a dedicated processor designed to detect any unauthorized modifications to the hardware, firmware, or software to physically secure the AWS Snowball device.

Snow Family

  • AWS Snowball
    • is a petabyte-scale data transfer service built around a secure suitcase-sized device that moves data into and out of the AWS Cloud quickly and efficiently.
    • transfers the data to S3 bucket
    • transfer times are about a week from start to finish.
    • are commonly used to ship terabytes or petabytes of analytics data, healthcare and life sciences data, video libraries, image repositories, backups, and archives as part of data center shutdown, tape replacement, or application migration projects.
  • AWS Snowball Edge devices
    • contain slightly larger capacity and an embedded computing platform that helps perform simple processing tasks.
    • can be rack shelved and may also be clustered together, making it simpler to collect and store data in extremely remote locations.
    • commonly used in environments with intermittent connectivity (such as manufacturing, industrial, and transportation); or in extremely remote locations (such as military or maritime operations) before shipping them back to AWS data centers.
    • delivers serverless computing applications at the network edge using AWS Greengrass and Lambda functions.
    • common use cases include capturing IoT sensor streams, on-the-fly media transcoding, image compression, metrics aggregation and industrial control signaling and alarming.
  • AWS Snowmobile
    • moves up to 100 PB of data (equivalent to 1,250 AWS Snowball devices) in a 45-foot-long ruggedized shipping container and is ideal for multi-petabyte or exabyte-scale digital media migrations and data center shutdowns.
    • arrives at the customer site and appears as a network-attached data store for more secure, high-speed data transfer. After data is transferred to Snowmobile, it is driven back to an AWS Region where the data is loaded into S3.
    • is tamper-resistant, waterproof, and temperature controlled with multiple layers of logical and physical security — including encryption, fire suppression, dedicated security personnel, GPS tracking, alarm monitoring, 24/7 video surveillance, and an escort security vehicle during transit.

AWS Storage Gateway

  • connects an on-premises software appliance with cloud-based storage to provide seamless and secure integration between an organization’s on-premises IT environment and the AWS storage infrastructure
  • provides low-latency performance by maintaining frequently accessed data on-premises while securely storing all of the data encrypted in S3 or Glacier.
  • for disaster recovery scenarios, Storage Gateway, together with EC2, can serve as a cloud-hosted solution that mirrors the entire production environment
  • Data Migration
    • with gateway-cached volumes, S3 can be used to hold the primary data while frequently accessed data is cached locally for faster access, reducing the need to scale the on-premises storage infrastructure (see the sketch after this list)
    • with gateway-stored volumes, entire data is stored locally while asynchronously backing up data to S3
    • with gateway-VTL, offline data archiving can be performed by presenting existing backup application with an iSCSI-based VTL consisting of a virtual media changer and virtual tape drives
  • Security
    • Encrypts all data in transit to and from AWS by using SSL/TLS.
    • All data in AWS Storage Gateway is encrypted at rest using AES-256.
    • Authentication between the gateway and iSCSI initiators can be secured by using Challenge-Handshake Authentication Protocol (CHAP).
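For the gateway-cached volume model mentioned above, a minimal boto3 sketch might look like the following; the gateway ARN, network interface IP, and target name are hypothetical placeholders:

```python
# Hypothetical sketch: create a gateway-cached iSCSI volume so that S3 holds
# the primary data while frequently accessed data is cached on-premises.
import uuid
import boto3

sgw = boto3.client("storagegateway")

volume = sgw.create_cached_iscsi_volume(
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678",
    VolumeSizeInBytes=150 * 1024**3,    # 150 GiB volume backed by S3
    TargetName="app-data",              # becomes part of the iSCSI target IQN
    NetworkInterfaceId="10.0.0.10",     # gateway VM interface serving iSCSI
    ClientToken=str(uuid.uuid4()),      # idempotency token
)
print(volume["VolumeARN"], volume["TargetARN"])
```

The returned iSCSI target is then mounted by the on-premises application servers like any other block device.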

Simple Storage Service – S3

  • Data Transfer
    • Files up to 5 GB can be transferred using a single operation
    • Multipart uploads can be used to upload files up to 5 TB and speed up data uploads by dividing the file into multiple parts (see the sketch after this list)
    • transfer rate is still limited by the network speed
  • Security
    • Data in transit can be secured by using SSL/TLS or client-side encryption.
    • Encrypt data at rest by performing server-side encryption using Amazon S3-Managed Keys (SSE-S3), AWS Key Management Service (KMS)-Managed Keys (SSE-KMS), or Customer-Provided Keys (SSE-C), or by performing client-side encryption using an AWS KMS–managed Customer Master Key (CMK) or a client-side master key.
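A minimal boto3 sketch of a multipart upload with SSE-S3 encryption at rest might look like the following; the bucket and file names are hypothetical:

```python
# Minimal sketch: multipart upload with server-side encryption (SSE-S3).
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Force multipart above 100 MB; upload 64 MB parts with 8 parallel threads
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=64 * 1024 * 1024,
    max_concurrency=8,
)

s3.upload_file(
    Filename="backup.tar",
    Bucket="my-transfer-bucket",
    Key="backups/backup.tar",
    Config=config,
    ExtraArgs={"ServerSideEncryption": "AES256"},  # SSE-S3 at rest; TLS in transit
)
```

TransferConfig handles splitting the file into parts and uploading them in parallel, which is what speeds up transfers over high-latency links; the transfer is still bounded by the available network bandwidth.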

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. You must architect the migration of a web application to AWS. The application consists of Linux web servers running a custom web server. You are required to save the logs generated from the application to a durable location. What options could you select to migrate the application to AWS? (Choose 2)
    1. Create an AWS Elastic Beanstalk application using the custom web server platform. Specify the web server executable and the application project and source files. Enable log file rotation to Amazon Simple Storage Service (S3). (EB does not work with Custom server executable)
    2. Create Dockerfile for the application. Create an AWS OpsWorks stack consisting of a custom layer. Create custom recipes to install Docker and to deploy your Docker container using the Dockerfile. Create custom recipes to install and configure the application to publish the logs to Amazon CloudWatch Logs (although this is a possible option, configuring the application to publish the logs would need application changes, as it needs to use the SDK or CLI)
    3. Create Dockerfile for the application. Create an AWS OpsWorks stack consisting of a Docker layer that uses the Dockerfile. Create custom recipes to install and configure Amazon Kinesis to publish the logs into Amazon CloudWatch. (Kinesis not needed)
    4. Create a Dockerfile for the application. Create an AWS Elastic Beanstalk application using the Docker platform and the Dockerfile. Enable logging the Docker configuration to automatically publish the application logs. Enable log file rotation to Amazon S3. (Use Docker configuration with awslogs and EB with Docker)
    5. Use VM import/Export to import a virtual machine image of the server into AWS as an AMI. Create an Amazon Elastic Compute Cloud (EC2) instance from AMI, and install and configure the Amazon CloudWatch Logs agent. Create a new AMI from the instance. Create an AWS Elastic Beanstalk application using the AMI platform and the new AMI. (Use VM Import/Export to create AMI and CloudWatch logs agent to log)
  2. Your company hosts an on-premises legacy engineering application with 900GB of data shared via a central file server. The engineering data consists of thousands of individual files ranging in size from megabytes to multiple gigabytes. Engineers typically modify 5-10 percent of the files a day. Your CTO would like to migrate this application to AWS, but only if the application can be migrated over the weekend to minimize user downtime. You calculate that it will take a minimum of 48 hours to transfer 900GB of data using your company’s existing 45-Mbps Internet connection. After replicating the application’s environment in AWS, which option will allow you to move the application’s data to AWS without losing any data and within the given timeframe?
    1. Copy the data to Amazon S3 using multiple threads and multi-part upload for large files over the weekend, and work in parallel with your developers to reconfigure the replicated application environment to leverage Amazon S3 to serve the engineering files. (Still limited by 45 Mbps speed with minimum 48 hours when utilized to max)
    2. Sync the application data to Amazon S3 starting a week before the migration, on Friday morning perform a final sync, and copy the entire data set to your AWS file server after the sync completes. (Works best as the data changes are fractional and can be propagated over the week, and the downtime would be known)
    3. Copy the application data to a 1-TB USB drive on Friday and immediately send overnight, with Saturday delivery, the USB drive to AWS Import/Export to be imported as an EBS volume, mount the resulting EBS volume to your AWS file server on Sunday. (Downtime is not known when the data upload would be done, although Amazon says the same day the package is received)
    4. Leverage the AWS Storage Gateway to create a Gateway-Stored volume. On Friday copy the application data to the Storage Gateway volume. After the data has been copied, perform a snapshot of the volume and restore the volume as an EBS volume to be attached to your AWS file server on Sunday. (Still uses the internet)
  3. You are tasked with moving a legacy application from a virtual machine running inside your datacenter to an Amazon VPC. Unfortunately this app requires access to a number of on-premises services and no one who configured the app still works for your company. Even worse there’s no documentation for it. What will allow the application running inside the VPC to reach back and access its internal dependencies without being reconfigured? (Choose 3 answers)
    1. An AWS Direct Connect link between the VPC and the network housing the internal services
    2. An Internet Gateway to allow a VPN connection. (Virtual and Customer gateway is needed)
    3. An Elastic IP address on the VPC instance
    4. An IP address space that does not conflict with the one on-premises
    5. Entries in Amazon Route 53 that allow the Instance to resolve its dependencies’ IP addresses
    6. A VM Import of the current virtual machine
  4. An enterprise runs 103 line-of-business applications on virtual machines in an on-premises data center. Many of the applications are simple PHP, Java, or Ruby web applications, are no longer actively developed, and serve little traffic. Which approach should be used to migrate these applications to AWS with the LOWEST infrastructure costs?
    1. Deploy the applications to single-instance AWS Elastic Beanstalk environments without a load balancer.
    2. Use AWS SMS to create AMIs for each virtual machine and run them in Amazon EC2.
    3. Convert each application to a Docker image and deploy to a small Amazon ECS cluster behind an Application Load Balancer.
    4. Use VM Import/Export to create AMIs for each virtual machine and run them in single-instance AWS Elastic Beanstalk environments by configuring a custom image.

References

AWS VPN

AWS VPC VPN

  • AWS VPN connections are used to extend on-premises data centers to AWS.
  • VPN connections provide secure IPSec connections between the data center or branch office and the AWS resources.
  • AWS Site-to-Site VPN or AWS Hardware VPN or AWS Managed VPN
    • Connectivity can be established by creating an IPSec, hardware VPN connection between the VPC and the remote network.
    • On the AWS side of the VPN connection, a Virtual Private Gateway (VGW) provides two VPN endpoints for automatic failover.
    • On the customer side, a customer gateway (CGW) needs to be configured, which is the physical device or software application on the remote side of the VPN connection
  • AWS Client VPN
    • AWS Client VPN is a managed client-based VPN service that enables secure access to AWS resources and resources in the on-premises network.
  • AWS VPN CloudHub
    • For more than one remote network, e.g. multiple branch offices, multiple AWS hardware VPN connections can be created via the VPC to enable communication between these networks
  • AWS Software VPN
    • A VPN connection can be created to the remote network by using an EC2 instance in the VPC that’s running a third-party software VPN appliance.
    • AWS does not provide or maintain third-party software VPN appliances; however, there is a range of products provided by partners and open source communities.
  • AWS Direct Connect provides a dedicated private connection from a remote network to the VPC. Direct Connect can be combined with an AWS hardware VPN connection to create an IPsec-encrypted connection

VPN Components

AWS VPN Components

  • Virtual Private Gateway – VGW
    • A virtual private gateway is the VPN concentrator on the AWS side of the VPN connection
  • Customer Gateway – CGW
    • A customer gateway is a physical device or software application on the customer side of the VPN connection.
    • When a VPN connection is created, the VPN tunnel comes up when traffic is generated from the remote side of the VPN connection.
    • By default, VGW is not the initiator; CGW must bring up the tunnels for the Site-to-Site VPN connection by generating traffic and initiating the Internet Key Exchange (IKE) negotiation process.
    • If the VPN connection experiences a period of idle time, usually 10 seconds depending on the configuration, the tunnel may go down. To prevent this, use a network monitoring tool to generate keepalive pings, e.g. by using IP SLA.
  • Transit Gateway
    • A transit gateway is a transit hub that can be used to interconnect VPCs and on-premises networks.
    • A Site-to-Site VPN connection on a transit gateway can support either IPv4 traffic or IPv6 traffic inside the VPN tunnels.
  • A Site-to-Site VPN connection offers two VPN tunnels between a VGW or a transit gateway on the AWS side, and a CGW (which represents a VPN device) on the remote (on-premises) side.
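As a rough illustration of setting up these components with boto3 (the VPC ID, public IP, and ASN below are hypothetical placeholders):

```python
# Hypothetical sketch: create the AWS-side and customer-side gateway objects.
import boto3

ec2 = boto3.client("ec2")

# AWS side: virtual private gateway, attached to the VPC
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpcId="vpc-0123456789abcdef0",
                       VpnGatewayId=vgw["VpnGatewayId"])

# Customer side: customer gateway representing the on-premises VPN device
cgw = ec2.create_customer_gateway(
    Type="ipsec.1",
    PublicIp="203.0.113.12",   # public IP of the on-premises device
    BgpAsn=65000,              # ASN used if the device speaks BGP
)["CustomerGateway"]
```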

VPN Routing Options

  • For a VPN connection, the route table for the subnets should be updated with the type of routing (static or dynamic) that you plan to use.
  • Route tables determine where network traffic is directed. Traffic destined for the VPN connections must be routed to the virtual private gateway.
  • The type of routing can depend on the make and model of the CGW device
    • Static Routing
      • If your device does not support BGP, specify static routing.
      • Using static routing, the routes (IP prefixes) that should be communicated to the virtual private gateway can be specified (see the sketch after this list).
      • Devices that don’t support BGP may also perform health checks to assist failover to the second tunnel when needed.
    • BGP Dynamic Routing
      • If the VPN device supports Border Gateway Protocol (BGP), specify dynamic routing with the VPN connection.
      • When using a BGP device, static routes need not be specified to the VPN connection because the device uses BGP for auto-discovery and to advertise its routes to the virtual private gateway.
      • BGP-capable devices are recommended as the BGP protocol offers robust liveness detection checks that can assist failover to the second VPN tunnel if the first tunnel goes down.
  • Only IP prefixes known to the virtual private gateway, either through BGP advertisement or static route entry, can receive traffic from the VPC.
  • Virtual private gateway does not route any other traffic destined outside of the advertised BGP, static route entries, or its attached VPC CIDR.
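A minimal sketch of a statically routed Site-to-Site VPN connection, assuming hypothetical gateway IDs and an on-premises CIDR of 192.168.0.0/16:

```python
# Hypothetical sketch: statically routed Site-to-Site VPN connection.
import boto3

ec2 = boto3.client("ec2")

vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId="cgw-0123456789abcdef0",
    VpnGatewayId="vgw-0123456789abcdef0",
    Options={"StaticRoutesOnly": True},   # device does not support BGP
)["VpnConnection"]

# Tell the virtual private gateway which on-premises prefix to reach over the VPN
ec2.create_vpn_connection_route(
    VpnConnectionId=vpn["VpnConnectionId"],
    DestinationCidrBlock="192.168.0.0/16",
)
```

With a BGP-capable device, the Options argument would be omitted (or StaticRoutesOnly set to False) and no static routes would be needed, since the device advertises its prefixes automatically.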

VPN Route Priority

  • Longest prefix match applies.
  • If the prefixes are the same, then the VGW prioritizes routes as follows, from most preferred to least preferred:
    • BGP propagated routes from an AWS Direct Connect connection
    • Manually added static routes for a Site-to-Site VPN connection
    • BGP propagated routes from a Site-to-Site VPN connection
    • Prefix with the shortest AS PATH is preferred for matching prefixes where each Site-to-Site VPN connection uses BGP
    • Path with the lowest multi-exit discriminators (MEDs) value is preferred when the AS PATHs are the same length and if the first AS in the AS_SEQUENCE is the same across multiple paths.

VPN Limitations

  • supports only IPSec tunnel mode. Transport mode is currently not supported.
  • supports only one VGW attached to a VPC at a time.
  • does not support IPv6 traffic on a virtual private gateway.
  • does not support Path MTU Discovery.
  • does not support overlapping CIDR blocks for the networks. It is recommended to use non-overlapping CIDR blocks.
  • does not support transitive routing. So for traffic from on-premises to AWS via a virtual private gateway, it
    • does not support Internet connectivity through Internet Gateway
    • does not support Internet connectivity through NAT Gateway
    • does not support VPC Peered resources access through VPC Peering
    • does not support S3, DynamoDB access through VPC Gateway Endpoint
    • However, Internet connectivity through a NAT instance and access to VPC Interface Endpoint or PrivateLink services are supported.
  • currently provides a maximum bandwidth of 1.25 Gbps per tunnel.

VPN Monitoring

  • AWS Site-to-Site VPN automatically sends notifications to the AWS Health Dashboard
  • AWS Site-to-Site VPN is integrated with CloudWatch with the following metrics available
    • TunnelState
      • The state of the tunnels.
      • For static VPNs, 0 indicates DOWN and 1 indicates UP.
      • For BGP VPNs, 1 indicates ESTABLISHED and 0 is used for all other states.
      • For both types of VPNs, values between 0 and 1 indicate at least one tunnel is not UP.
    • TunnelDataIn
      • The bytes received on the AWS side of the connection through the VPN tunnel from a customer gateway.
    • TunnelDataOut
      • The bytes sent from the AWS side of the connection through the VPN tunnel to the customer gateway.
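Since TunnelState drops below 1 when a tunnel is down, a CloudWatch alarm can watch it; the following is a minimal sketch with a hypothetical VPN connection ID and SNS topic:

```python
# Hypothetical sketch: alarm when a Site-to-Site VPN tunnel goes down,
# using the TunnelState metric in the AWS/VPN namespace.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="vpn-tunnel-down",
    Namespace="AWS/VPN",
    MetricName="TunnelState",
    Dimensions=[{"Name": "VpnId", "Value": "vpn-0123456789abcdef0"}],
    Statistic="Maximum",                 # Maximum < 1 means no tunnel is UP
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```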

VPN Connection Redundancy

  • A VPN connection is used to connect the customer network to a VPC.
  • Each VPN connection has two tunnels to help ensure connectivity in case one of the VPN connections becomes unavailable, with each tunnel using a unique virtual private gateway public IP address.
  • Both tunnels should be configured for redundancy.
  • When one tunnel becomes unavailable, e.g. when down for maintenance, network traffic is automatically routed to the available tunnel for that specific VPN connection.
  • To protect against a loss of connectivity in case the customer gateway becomes unavailable, a second VPN connection can be set up to the VPC and virtual private gateway by using a second customer gateway.
  • Customer gateway IP address for the second VPN connection must be publicly accessible.
  • By using redundant VPN connections and CGWs, maintenance on one of the customer gateways can be performed while traffic continues to flow over the second customer gateway’s VPN connection.
  • Dynamically routed VPN connections using the Border Gateway Protocol (BGP) are recommended, if available, to exchange routing information between the customer gateways and the virtual private gateways.
  • Statically routed VPN connections require static routes for the network to be entered on the customer gateway side.
  • BGP-advertised and statically entered route information allow gateways on both sides to determine which tunnels are available and reroute traffic if a failure occurs.

Multiple Site-to-Site VPN Connections

VPN Connection

  • VPC has an attached virtual private gateway, and the remote network includes a customer gateway, which must be configured to enable the VPN connection.
  • Routing must be set up so that any traffic from the VPC bound for the remote network is routed to the virtual private gateway.
  • Each VPN connection has two tunnels associated with it that can both be configured on the customer router, so the connection is not a single point of failure
  • Multiple VPN connections to a single VPC can be created, and a second CGW can be configured to create a redundant connection to the same external location or to create VPN connections to multiple geographic locations.

VPN CloudHub

  • VPN CloudHub can be used to provide secure communication between multiple on-premises sites if you have multiple VPN connections
  • VPN CloudHub operates on a simple hub-and-spoke model using a virtual private gateway, which can be used in a detached mode without a VPC.
  • Design is suitable for customers with multiple branch offices and existing Internet connections who’d like to implement a convenient, potentially low-cost hub-and-spoke model for primary or backup connectivity between these remote offices

VPN CloudHub Architecture

  • (Figure) VPN CloudHub architecture, with blue dashed lines indicating network traffic between remote sites being routed over their VPN connections.
  • AWS VPN CloudHub requires a virtual private gateway with multiple customer gateways.
  • Each customer gateway must use a unique Border Gateway Protocol (BGP) Autonomous System Number (ASN)
  • Customer gateways advertise the appropriate routes (BGP prefixes) over their VPN connections.
  • Routing advertisements are received and re-advertised to each BGP peer, enabling each site to send data to and receive data from the other sites.
  • Routes for each spoke must have unique ASNs and the sites must not have overlapping IP ranges.
  • Each site can also send and receive data from the VPC as if they were using a standard VPN connection.
  • Sites that use AWS Direct Connect connections to the virtual private gateway can also be part of the AWS VPN CloudHub.
  • To configure the AWS VPN CloudHub,
    • multiple customer gateways can be created, each with the unique public IP address of the gateway and the ASN.
    • a VPN connection can be created from each customer gateway to a common virtual private gateway.
    • each VPN connection must advertise its specific BGP routes. This is done using the network statements in the VPN configuration files for the VPN connection.
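A rough boto3 sketch of these steps, with hypothetical branch IPs, ASNs, and VGW ID:

```python
# Hypothetical sketch: CloudHub setup with one customer gateway per branch
# office, each with a unique ASN, all connected to the same VGW.
import boto3

ec2 = boto3.client("ec2")
VGW_ID = "vgw-0123456789abcdef0"

branches = [("198.51.100.10", 65001), ("203.0.113.20", 65002)]

for public_ip, asn in branches:
    cgw = ec2.create_customer_gateway(
        Type="ipsec.1",
        PublicIp=public_ip,
        BgpAsn=asn,               # unique ASN per site
    )["CustomerGateway"]
    # Dynamically routed (BGP) VPN connection from each site to the common VGW
    ec2.create_vpn_connection(
        Type="ipsec.1",
        CustomerGatewayId=cgw["CustomerGatewayId"],
        VpnGatewayId=VGW_ID,
        Options={"StaticRoutesOnly": False},
    )
```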

VPN vs Direct Connect

AWS Direct Connect vs VPN

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. You have in total 5 offices, and the entire employee-related information is stored under AWS VPC instances. Now all the offices want to connect the instances in VPC using VPN. Which of the below help you to implement this?
    1. you can have redundant customer gateways between your data center and your VPC
    2. you can have multiple locations connected to the AWS VPN CloudHub
    3. You have to define 5 different static IP addresses in route table.
    4. 1 and 2
    5. 1,2 and 3
  2. You have a total of 15 offices, and the entire employee-related information is stored under AWS VPC instances. Now all the offices want to connect to the instances in the VPC using VPN. What problem do you see in this scenario?
    1. You cannot create more than 1 VPN connection with a single VPC (Can be created)
    2. You cannot create more than 10 VPN connections with a single VPC (soft limit can be extended)
    3. When you create multiple VPN connections, the virtual private gateway cannot send network traffic to the appropriate VPN connection using statically assigned routes. (It can route the traffic to the correct connection)
    4. Statically assigned routes cannot be configured in case of more than 1 VPN with the virtual private gateway. (They can be configured)
    5. None of the above
  3. You have been asked to virtually extend two existing data centers into AWS to support a highly available application that depends on existing, on-premises resources located in multiple data centers and static content that is served from an Amazon Simple Storage Service (S3) bucket. Your design currently includes a dual-tunnel VPN connection between your CGW and VGW. Which component of your architecture represents a potential single point of failure that you should consider changing to make the solution more highly available?
    1. Add another VGW in a different Availability Zone and create another dual-tunnel VPN connection.
    2. Add another CGW in a different data center and create another dual-tunnel VPN connection. (Refer link)
    3. Add a second VGW in a different Availability Zone, and a CGW in a different data center, and create another dual-tunnel.
    4. No changes are necessary: the network architecture is currently highly available.
  4. You are designing network connectivity for your fat client application. The application is designed for business travelers who must be able to connect to it from their hotel rooms, cafes, public Wi-Fi hotspots, and elsewhere on the Internet. You do not want to publish the application on the Internet. Which network design meets the above requirements while minimizing deployment and operational costs? [PROFESSIONAL]
    1. Implement AWS Direct Connect, and create a private interface to your VPC. Create a public subnet and place your application servers in it. (High Cost and does not minimize deployment)
    2. Implement Elastic Load Balancing with an SSL listener that terminates the back-end connection to the application. (Needs to be published to internet)
    3. Configure an IPsec VPN connection, and provide the users with the configuration details. Create a public subnet in your VPC, and place your application servers in it. (Instances still in public subnet are internet accessible)
    4. Configure an SSL VPN solution in a public subnet of your VPC, then install and configure SSL VPN client software on all user computers. Create a private subnet in your VPC and place your application servers in it. (Cost effective and can be in private subnet as well)
  5. You are designing a connectivity solution between on-premises infrastructure and Amazon VPC. Your servers on-premises will be communicating with your VPC instances. You will be establishing IPsec tunnels over the Internet, using VPN gateways and terminating the IPsec tunnels on AWS-supported customer gateways. Which of the following objectives would you achieve by implementing an IPsec tunnel as outlined above? (Choose 4 answers) [PROFESSIONAL]
    1. End-to-end protection of data in transit
    2. End-to-end Identity authentication
    3. Data encryption across the Internet
    4. Protection of data in transit over the Internet
    5. Peer identity authentication between VPN gateway and customer gateway
    6. Data integrity protection across the Internet
  6. A development team is currently doing a nightly six-hour build, which is lengthening over time, on-premises on a large and mostly underutilized server. They would like to transition to a continuous integration model of development on AWS with multiple builds triggered within the same day. However, they are concerned about cost, security, and how to integrate with existing on-premises applications such as their LDAP and email servers, which cannot move off-premises. The development environment needs a source code repository; a project management system with a MySQL database; resources for performing the builds; and a storage location for QA to pick up builds from. What AWS services combination would you recommend to meet the development team’s requirements? [PROFESSIONAL]
    1. A Bastion host Amazon EC2 instance running a VPN server for access from on-premises, Amazon EC2 for the source code repository with attached Amazon EBS volumes, Amazon EC2 and Amazon RDS MySQL for the project management system, EIPs for the source code repository and project management system, Amazon SQS for a build queue, an Auto Scaling group of Amazon EC2 instances for performing builds and Amazon Simple Email Service for sending the build output. (A bastion host is not for VPN connectivity; also, SES should not be used for delivering build output)
    2. An AWS Storage Gateway for connecting on-premises software applications with cloud-based storage securely, Amazon EC2 for the source code repository with attached Amazon EBS volumes, Amazon EC2 and Amazon RDS MySQL for the project management system, EIPs for the source code repository and project management system, Amazon Simple Notification Service for a notification-initiated build, an Auto Scaling group of Amazon EC2 instances for performing builds and Amazon S3 for the build output. (Storage Gateway does provide secure connectivity but still needs VPN. SNS alone cannot handle builds)
    3. An AWS Storage Gateway for connecting on-premises software applications with cloud-based storage securely, Amazon EC2 for the source code repository with attached Amazon EBS volumes, Amazon EC2 and Amazon RDS MySQL for the project management system, EIPs for the source code repository and project management system, Amazon SQS for a build queue, an Amazon Elastic Map Reduce (EMR) cluster of Amazon EC2 instances for performing builds and Amazon CloudFront for the build output. (Storage Gateway provides storage connectivity, not network connectivity, and still needs a VPN. EMR is not ideal for performing builds as it needs normal EC2 instances)
    4. A VPC with a VPN Gateway back to their on-premises servers, Amazon EC2 for the source-code repository with attached Amazon EBS volumes, Amazon EC2 and Amazon RDS MySQL for the project management system, EIPs for the source code repository and project management system, SQS for a build queue, An Auto Scaling group of EC2 instances for performing builds and S3 for the build output. (VPN gateway is required for secure connectivity. SQS for build queue and EC2 for builds)

References

AWS VPC Interface Endpoints – PrivateLink

AWS Interface Endpoints - PrivateLinks

VPC Interface Endpoints – PrivateLink

  • VPC Interface endpoints enable connectivity to services powered by AWS PrivateLink.
  • Services include AWS services like CloudTrail, CloudWatch, etc., services hosted by other AWS customers and partners in their own VPCs (referred to as endpoint services), and supported AWS Marketplace partner services.
  • VPC Interface Endpoints only allow traffic initiated from resources within the VPC to the endpoint service, and not vice versa
  • PrivateLink endpoints can be accessed across both intra- and inter-region VPC peering connections, Direct Connect, and VPN connections.
  • VPC Interface Endpoints, by default, have an address like vpce-svc-01234567890abcdef.us-east-1.vpce.amazonaws.com which needs application changes to point to the service.
  • Private DNS name feature allows consumers to use AWS service public default DNS names which would point to the private VPC endpoint service.
  • Interface Endpoints can be used to create custom applications in VPC and configure them as an AWS PrivateLink-powered service (referred to as an endpoint service) exposed through a Network Load Balancer.
  • Custom applications can be hosted within AWS or on-premises (via Direct Connect or VPN)

Interface Endpoints Configuration

  • Create an interface endpoint, and provide the name of the AWS service, endpoint service, or AWS Marketplace service
  • Choose the subnets in which the interface endpoint will be used; an endpoint network interface is created in each selected subnet.
  • An endpoint network interface is assigned a private IP address from the IP address range of the subnet and keeps this IP address until the interface endpoint is deleted
  • A private IP address also ensures the traffic remains private without any changes to the route table.
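A minimal boto3 sketch of creating an interface endpoint for Kinesis Data Streams (the VPC, subnet, and security group IDs are hypothetical placeholders):

```python
# Hypothetical sketch: interface endpoint so instances in private subnets
# can reach Kinesis Data Streams without internet access.
import boto3

ec2 = boto3.client("ec2")

endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.kinesis-streams",
    SubnetIds=["subnet-0123456789abcdef0"],      # one subnet per AZ
    SecurityGroupIds=["sg-0123456789abcdef0"],   # must allow TCP 443 from clients
    PrivateDnsEnabled=True,  # keep using the service's default public DNS name
)["VpcEndpoint"]
```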

VPC Endpoint policy

  • VPC Endpoint policy is an IAM resource policy attached to an endpoint for controlling access from the endpoint to the specified service.
  • Endpoint policy, by default, allows full access to the service for any user or service within the VPC, using credentials from any AWS account, including resources belonging to an AWS account other than the account with which the VPC is associated
  • Endpoint policy does not override or replace IAM user policies or service-specific policies (such as S3 bucket policies).
  • Endpoint policy can be used to restrict which specific resources can be accessed using the VPC Endpoint.
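As an illustration, the following sketch replaces the default full-access policy with one that only allows reads of a single hypothetical Kinesis stream; the endpoint ID, account ID, and stream name are placeholders:

```python
# Hypothetical sketch: restrict an endpoint to read-only access on one stream.
import json
import boto3

ec2 = boto3.client("ec2")

policy = {
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["kinesis:GetRecords", "kinesis:GetShardIterator",
                   "kinesis:DescribeStream"],
        "Resource": "arn:aws:kinesis:us-east-1:123456789012:stream/app-events",
    }]
}

ec2.modify_vpc_endpoint(
    VpcEndpointId="vpce-0123456789abcdef0",
    PolicyDocument=json.dumps(policy),
)
```

Note that the endpoint policy narrows what the endpoint can be used for; IAM policies and resource policies still apply on top of it.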

Interface Endpoint Limitations

  • For each interface endpoint, only one subnet per AZ can be selected.
  • Interface Endpoint supports TCP traffic only.
  • Endpoints are supported within the same region only.
  • Endpoints support IPv4 traffic only.
  • Each interface endpoint can support a bandwidth of up to 10 Gbps per AZ, by default, and automatically scales to 40 Gbps. Additional capacity may be added by reaching out to AWS support.
  • NACLs for the subnet can restrict traffic and need to be configured properly
  • Endpoints cannot be transferred from one VPC to another, or from one service to another.

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. An application server needs to be in a private subnet without access to the internet. The solution must retrieve data from and upload data to an Amazon Kinesis data stream. How should a Solutions Architect design a solution to meet these requirements?
    1. Use Amazon VPC Gateway endpoints
    2. Use a NAT Gateway
    3. Use Amazon VPC Interface endpoints
    4. Use a private Amazon Kinesis Data Stream

References

AWS_PrivateLink