AWS ELB Application Load Balancer – Certification

AWS ELB Application Load Balancer

  • An Application Load Balancer is a load balancing option for the ELB service that operates at layer 7 (the application layer) and allows defining routing rules based on content, across multiple services or containers running on one or more EC2 instances.

Application Load Balancer Benefits

  • Support for Path-based routing, where listener rules can be configured to forward requests based on the URL in the request. This enables structuring an application as smaller services and routing requests to the correct service based on the content of the URL.
  • Support for routing requests to multiple services on a single EC2 instance by registering the instance using multiple ports.
  • Support for containerized applications. EC2 Container Service (ECS) can select an unused port when scheduling a task and register the task with a target group using this port, enabling efficient use of the cluster.
  • Support for monitoring the health of each service independently, as health checks are defined at the target group level and many CloudWatch metrics are reported at the target group level. Attaching a target group to an Auto Scaling group enables you to scale each service dynamically based on demand.

Application Load Balancer Features

  • supports load balancing of applications using HTTP and HTTPS (Secure HTTP) protocols
  • supports HTTP/2, which is enabled natively. Clients that support HTTP/2 can connect over TLS
  • supports WebSockets and Secure WebSockets natively
  • supports Sticky Sessions (Session Affinity) using load balancer generated cookies, to route requests from the same client to the same target
  • supports SSL termination, to decrypt the request on ALB before sending it to the underlying targets.
  • supports layer 7 specific features like the X-Forwarded-For, X-Forwarded-Proto, and X-Forwarded-Port headers to help determine the actual client IP, protocol, and port
  • automatically scales its request handling capacity in response to incoming application traffic.
  • provides High Availability, by allowing more than one AZ to be specified and distributing incoming traffic across multiple AZs.
  • integrates with ACM to provision and bind an SSL/TLS certificate to the load balancer, thereby making the entire SSL offload process very easy (see the sketch after this list)
  • supports IPv6 addressing, for an Internet-facing load balancer
  • supports Request Tracing, wherein a custom identifier, the “X-Amzn-Trace-Id” HTTP header, is injected on all requests to help track the request flow across various services
  • supports Security Groups to control the traffic allowed to and from the load balancer.
  • provides Access Logs, to record all requests sent to the load balancer, and stores the logs in S3 in compressed format for later analysis
  • provides Deletion Protection, to prevent the ALB from accidental deletion
  • supports Connection Idle Timeout – ALB maintains two connections for each request: one with the client (front end) and one with the target instance (back end). If no data has been sent or received by the time the idle timeout period elapses, ALB closes the front-end connection
  • integrates with CloudWatch to provide metrics such as request counts, error counts, error types, and request latency
  • integrates with AWS WAF, a web application firewall that helps protect web applications from attacks by allowing rules configuration based on IP addresses, HTTP headers, and custom URI strings
  • integrates with CloudTrail to receive a history of ALB API calls made on the AWS account
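
Several of the features above (multi-AZ distribution, SSL termination with an ACM certificate, and per-target-group health checks) come together when creating a load balancer. Below is a minimal sketch using boto3; all subnet, security group, VPC, and certificate identifiers are hypothetical placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Internet-facing ALB spanning two subnets in different AZs for high availability
lb = elbv2.create_load_balancer(
    Name="my-alb",                                   # hypothetical name
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],  # placeholder subnet ids
    SecurityGroups=["sg-0123456789abcdef0"],         # placeholder security group
    Scheme="internet-facing",
    Type="application",
)
lb_arn = lb["LoadBalancers"][0]["LoadBalancerArn"]

# Target group with its own health check, monitored independently per service
tg = elbv2.create_target_group(
    Name="orders-tg",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",                   # placeholder VPC id
    HealthCheckPath="/health",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# HTTPS listener terminating SSL at the ALB with an ACM certificate
elbv2.create_listener(
    LoadBalancerArn=lb_arn,
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:..."}],  # placeholder ACM ARN
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```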

Application Load Balancer Listeners

  • A listener is a process that checks for connection requests, using the configured protocol and port
  • Listeners support the HTTP and HTTPS protocols, with ports from 1-65535
  • ALB supports SSL Termination for HTTPS listener, which helps to offload the work of encryption and decryption so that the targets can focus on their main work.
  • HTTPS listener supports exactly one SSL server certificate on the listener.
  • Supports WebSockets with both HTTP and HTTPS listeners (Secure WebSockets)
  • Supports HTTP/2 with HTTPS listeners
    • Up to 128 requests can be sent in parallel using one HTTP/2 connection.
    • ALB converts these to individual HTTP/1.1 requests and distributes them across the healthy targets in the target group using the round robin routing algorithm.
    • HTTP/2 uses front-end connections more efficiently resulting in fewer connections between clients and the load balancer.
    • Server-push feature of HTTP/2 is not supported
  • Each listener has a default rule, and can optionally define additional rules.
  • Each rule consists of a priority, action, optional host condition, and optional path condition.
    • Priority – Rules are evaluated in priority order, from the lowest value to the highest value. The default rule is evaluated last
    • Action – Each rule action has a type and a target group. Currently, the only supported type is forward, which forwards requests to the target group. You can change the target group for a rule at any time.
    • Condition – There are two types of rule conditions: host and path. When the conditions for a rule are met, then its action is taken
  • Host Condition or Host-based routing
    • Host conditions can be used to define rules that forward requests to different target groups based on the host name in the host header
    • This enables support for multiple domains using a single ALB, e.g. orders.example.com, images.example.com, registration.example.com
    • Each host condition has one hostname. If the hostname in the request matches the hostname in a listener rule exactly, the request is routed using that rule
  • Path Condition or path-based routing
    • Path conditions can be used to define rules that forward requests to different target groups based on the URL in the request
    • Each path condition has one path pattern, e.g. example.com/orders, example.com/images, example.com/registration
    • If the URL in a request matches the path pattern in a listener rule exactly, the request is routed using that rule (see the sketch after this list).
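
A minimal sketch of one path-based and one host-based rule using boto3; the listener and target group ARNs are hypothetical placeholders, and the priority values are arbitrary.

```python
import boto3

elbv2 = boto3.client("elbv2")

listener_arn = "arn:aws:elasticloadbalancing:..."   # hypothetical listener ARN
orders_tg_arn = "arn:aws:elasticloadbalancing:..."  # hypothetical target group ARNs
images_tg_arn = "arn:aws:elasticloadbalancing:..."

# Path condition: requests whose URL matches /orders* go to the orders service
elbv2.create_rule(
    ListenerArn=listener_arn,
    Priority=10,  # rules are evaluated from the lowest priority value to the highest
    Conditions=[{"Field": "path-pattern", "Values": ["/orders*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": orders_tg_arn}],
)

# Host condition: requests for images.example.com go to the images service
elbv2.create_rule(
    ListenerArn=listener_arn,
    Priority=20,
    Conditions=[{"Field": "host-header", "Values": ["images.example.com"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": images_tg_arn}],
)
```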

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.

References

ELB Application Load Balancer

AWS Classic Load Balancer vs Application Load Balancer – Certification

AWS Classic Load Balancer vs Application Load Balancer

Elastic Load Balancing supports two types of load balancers: Application Load Balancers and Classic Load Balancers. While there is some overlap in features, AWS does not maintain feature parity between the two types of load balancers. The content below compares the features of both.

Usage Pattern

  • A Classic Load Balancer is ideal for simple load balancing of traffic across multiple EC2 instances.
  • An Application Load Balancer is ideal for microservices or container-based architectures, where there is a need to route traffic to multiple services or load balance across multiple ports on the same EC2 instance.

Supported Protocols

  • Classic Load Balancer operates at layer 4 and supports HTTP, HTTPS, TCP, SSL while Application Load Balancer operates at layer 7 and supports HTTP, HTTPS, HTTP/2, WebSockets
  • If Layer-4 features are needed, Classic Load Balancers should be used

Supported Platforms

  • Classic Load Balancer supports both EC2-Classic and EC2-VPC while Application Load Balancer supports only EC2-VPC

Sticky Sessions (Cookies)

  • Sticky Sessions (Session Affinity) enable the load balancer to bind a user’s session to a specific instance, which ensures that all requests from the user during the session are sent to the same instance
  • Both Classic & Application Load Balancers support sticky sessions to maintain session affinity

Idle Connection Timeout

  • Idle Connection Timeout helps specify a time period, which ELB uses to close the connection if no data has been sent or received by the time that the idle timeout period elapses
  • Both Classic & Application Load Balancers support idle connection timeout

Connection Draining

  • Connection draining enables the load balancer to complete in-flight requests made to instances that are de-registering or unhealthy
  • Both Classic & Application Load Balancers support connection draining

SSL Termination

  • Both Classic Load Balancer and ALB support SSL Termination, to decrypt requests from clients before sending them to targets, thus reducing the load on the targets. An SSL certificate must be installed on the load balancer.

Back-end Server Authentication

  • Back-end Server Authentication enables authentication of the instances. Load balancer communicates with an instance only if the public key that the instance presents to the load balancer matches a public key in the authentication policy for the load balancer.
  • Classic Load Balancer supports Back-end Server Authentication, while Application Load Balancer does not

Cross-zone Load Balancing

  • Cross-zone Load Balancing helps distribute incoming requests evenly across all instances in the enabled AZs. By default, the load balancer distributes requests evenly across its enabled AZs, irrespective of the number of instances each AZ hosts.
  • Both Classic & Application Load Balancers support cross-zone load balancing; however, for Classic it needs to be enabled, while for ALB it is always enabled

Health Checks

  • Both Classic & Application Load Balancers support health checks to determine if an instance is healthy or unhealthy
  • ALB provides health check improvements that allow detailed success codes in the range 200-399 to be configured

CloudWatch Metrics

  • Both Classic & Application Load Balancers integrate with CloudWatch to provide metrics, with ALB providing additional metrics

Access Logs

  • Access logs capture detailed information about requests sent to the load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses
  • Both Classic & Application Load Balancers provide access logs, with ALB providing additional attributes

Host-based Routing & Path-based Routing

  • Host-based routing uses host conditions to define rules that forward requests to different target groups based on the host name in the host header. This enables ALB to support multiple domains using a single load balancer.
  • Path-based routing uses path conditions to define rules that forward requests to different target groups based on the URL in the request. Each path condition has one path pattern. If the URL in a request matches the path pattern in a listener rule exactly, the request is routed using that rule.
  • Only ALB supports Host-based & Path-based routing.

Dynamic Ports

  • Only ALB supports Dynamic Port Mapping with ECS, which allows multiple containers of a service to run on a single server on dynamic ports, which ALB automatically detects and reconfigures itself for.

Deletion Protection

  • Only ALB supports Deletion Protection, wherein a load balancer can’t be deleted if deletion protection is enabled

Request Tracing

  • Only ALB supports Request Tracing to track HTTP requests from clients to targets or other services.

IPv6 in VPC

  • Only ALB supports IPv6 in VPC

AWS WAF

  • Only ALB supports AWS WAF, which can be directly used on ALBs (both internal and external) in a VPC, to protect websites and web services

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.

AWS API Gateway – Certification

AWS API Gateway

  • AWS API Gateway is a fully managed service that makes it easy for developers to publish, maintain, monitor, and secure APIs at any scale
  • API Gateway handles all of the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management.
  • API Gateway has no minimum fees or startup costs and charges only for the API calls received and the amount of data transferred out.
  • API Gateway acts as a proxy to the configured backend operations.
  • API Gateway scales automatically to handle the amount of traffic the API receives
  • API Gateway exposes HTTPS endpoints only for all the APIs created; it does not support unencrypted (HTTP) endpoints
  • APIs built on API Gateway can accept any payloads sent over HTTP, with typical data formats including JSON, XML, query string parameters, and request headers.
  • API Gateway can communicate to multiple backends
    • Lambda functions
    • AWS Step functions state machines
    • HTTP endpoints exposed through Elastic Beanstalk, ELB or EC2 instances
    • non-AWS hosted HTTP based operations accessible via public Internet
  • Amazon API Gateway endpoints are always public to the Internet and do not run within a VPC. Proxy requests to backend operations also need to be publicly accessible on the Internet.

API Gateway

API Gateway helps with several aspects of creating and managing APIs

  • Metering
    • automatically meters traffic to your APIs and lets you extract utilization data for each API key.
    • define plans that meter and restrict third-party developer access, and configure throttling and quota limits on a per-API-key basis
  • Security
    • helps remove authorization concerns from the backend code
    • allows you to leverage AWS administration and security tools, such as IAM and Cognito, to authorize access to APIs
    • can verify signed API calls on your behalf using the same methodology AWS uses for its own APIs
    • supports custom authorizers written as Lambda functions to verify incoming bearer tokens
    • automatically protects the backend systems from distributed denial-of-service (DDoS) attacks, whether attacked with counterfeit requests (Layer 7) or SYN floods (Layer 3).
  • Resiliency
    • helps manage traffic with throttling so that backend operations can withstand traffic spikes
    • helps improve the performance of the APIs and the latency end users experience by caching the output of API calls to avoid calling the backend every time.
  • Operations Monitoring
    • integrates with CloudWatch and provides a metrics dashboard to monitor calls to API services
    • integrates with CloudWatch Logs to receive error, access or debug logs
    • provides with backend performance metrics covering API calls, latency data and error rates.
  • Lifecycle Management
    • allows multiple API versions and multiple stages (development, staging, production etc.) for each version simultaneously so that existing applications can continue to call previous versions after new API versions are published.
    • saves the history of the deployments, which allows rollback of a stage to a previous deployment at any point, using APIs or console
  • Designed for Developers
    • allows you to specify a mapping template to generate static content to be returned, helping you mock APIs before the backend is ready
    • helps reduce cross-team development effort and time-to-market for applications, and allows dependent teams to begin development while backend processes are still being built

API Gateway Throttling and Caching

  • Throttling
    • API Gateway provides throttling at multiple levels, including globally and by service call; limits can be set for standard rates and bursts
    • It tracks the number of requests per second. Any requests over the limit will receive a 429 HTTP response
    • Throttling ensures that API traffic is controlled to help the backend services maintain performance and availability.
  • Caching
    • API Gateway provides API result caching by provisioning an API Gateway cache and specifying its size in gigabytes
    • Caching helps improve performance and reduces the traffic sent to the back end
  • API Gateway handles the request in the following manner
    • If caching is not enabled and throttling limits have not been applied, then all requests pass through to the backend service until the account level throttling limits are reached.
    • If throttling limits are specified, then API Gateway will shed the necessary amount of requests, sending only the defined limit to the backend
    • If a cache is configured, then API Gateway will return a cached response for duplicate requests for a customizable time, but only if under configured throttling limits
  • API Gateway does not arbitrarily limit or throttle invocations to the backend operations; all requests that are not intercepted by the throttling and caching settings are sent to your backend operations (see the sketch below).
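
A minimal sketch of configuring stage-level throttling and a cache using boto3; the API id, stage name, limit values, and cache size are hypothetical.

```python
import boto3

apigw = boto3.client("apigateway")

apigw.update_stage(
    restApiId="a1b2c3d4e5",   # hypothetical API id
    stageName="prod",
    patchOperations=[
        # standard (steady-state) rate and burst limits for all methods in the stage
        {"op": "replace", "path": "/*/*/throttling/rateLimit", "value": "100"},
        {"op": "replace", "path": "/*/*/throttling/burstLimit", "value": "200"},
        # provision an API Gateway cache for the stage (size in GB)
        {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
        {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},
    ],
)
```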

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.

References

AWS API Gateway Developer Guide

AWS Lambda – Certification

AWS Lambda

  • AWS Lambda offers Serverless computing that allows you to build and run applications and services without thinking about servers, which are managed by AWS
  • Lambda lets you run code without provisioning or managing servers, where you pay only for the compute time when the code is running.
  • Lambda is priced on a pay per use basis and there are no charges when the code is not running
  • Lambda allows you to run code for any type of application or backend service with zero administration
  • Lambda performs all the operational and administrative activities on your behalf, including capacity provisioning, monitoring fleet health, applying security patches to the underlying compute resources, deploying code, running a web service front end, and monitoring and logging the code.
  • Lambda provides easy scaling and high availability to your code without additional effort on your part.
  • Lambda does not provide access to the underlying compute infrastructure
  • Lambda is designed to process events within milliseconds. Latency will be higher immediately after a Lambda function is created, updated, or if it has not been used recently.
  • Lambda is designed to use replication and redundancy to provide high availability for both the service itself and for the Lambda functions it operates. There are no maintenance windows or scheduled downtimes for either.
  • Lambda stores code in S3 and encrypts it at rest and performs additional integrity checks while the code is in use.
  • Lambda supports code written in Node.js (JavaScript), Python, Java (Java 8 compatible), and C# (.NET Core)
  • All calls made to AWS Lambda must complete execution within 300 seconds. The default timeout is 3 seconds, but you can set the timeout to any value between 1 and 300 seconds.

Lambda Functions & Event Sources

Core components of Lambda are Lambda functions and event sources.

  • An event source is the AWS service or custom application that publishes events
  • Lambda function is the custom code that processes the events

Lambda Functions

  • Each Lambda function has associated configuration information, such as its name, description, entry point, and resource requirements
  • Lambda functions should be stateless, to allow AWS Lambda to launch as many copies of the function as needed per demand. State can be maintained externally in DynamoDB or S3
  • Each Lambda function receives 500MB of non-persistent disk space in its own /tmp directory.
  • Lambda functions have the following restrictions
    • Inbound network connections are blocked by AWS Lambda
    • For outbound connections, only TCP/IP sockets are supported
    • ptrace (debugging) system calls are blocked
    • TCP port 25 traffic is also blocked as an anti-spam measure.
  • Lambda automatically monitors Lambda functions, reporting real-time metrics through CloudWatch, including total requests, latency, error rates, and throttled requests
  • Lambda automatically integrates with Amazon CloudWatch logs, creating a log group for each Lambda function and providing basic application lifecycle event log entries, including logging the resources consumed for each use of that function
  • Each AWS Lambda function has a single, current version of the code; however, one or more versions can be published from it, and aliases can be used to point to specific versions (see the sketch after this list)
    • Each Lambda function version has a unique ARN and after it is published it is immutable (that is, it can’t be changed).
    • Lambda supports creating aliases for each Lambda function version.
    • Conceptually, an AWS Lambda alias is a pointer to a specific Lambda function version, but it is also a resource similar to a Lambda function, and each alias has a unique ARN.
    • Each alias maintains an ARN for a function version to which it points
    • An alias can only point to a function version, not to another alias
    • Unlike versions, which are immutable, aliases are mutable (that is, they can be changed) and can be updated to point to different versions
  • For failures, Lambda functions being invoked asynchronously are retried twice. Events from Kinesis and DynamoDB streams are retried until the Lambda function succeeds or the data expires. Kinesis and DynamoDB Streams retain data for a minimum of 24 hours.
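
A minimal sketch of publishing an immutable version and managing a mutable alias using boto3; the function name, alias name, and version numbers are hypothetical.

```python
import boto3

lam = boto3.client("lambda")

# Publish an immutable version from the current ($LATEST) code
version = lam.publish_version(FunctionName="my-function")["Version"]

# Create a mutable alias pointing at that version
lam.create_alias(
    FunctionName="my-function",
    Name="PROD",
    FunctionVersion=version,
)

# Later, repoint the alias to a newer version (aliases are mutable)
lam.update_alias(FunctionName="my-function", Name="PROD", FunctionVersion="2")
```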

Lambda Event Sources

Refer Blog Post – Lambda Event Source

Lambda Best Practices

  • Lambda function code should be stateless, and ensure there is no affinity between the code and the underlying compute infrastructure.
  • Instantiate AWS clients outside the scope of the handler to take advantage of connection re-use (see the sketch after this list).
  • Make sure you have set +rx permissions on your files in the uploaded ZIP to ensure Lambda can execute code on your behalf.
  • Lower costs and improve performance by minimizing the use of startup code not directly related to processing the current event.
  • Use the built-in CloudWatch monitoring of your Lambda functions to view and optimize request latencies.
  • Delete old Lambda functions that you are no longer using.
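
A minimal sketch of a handler that follows these practices; the DynamoDB table name and the event fields are hypothetical.

```python
import boto3

# Created once per container, outside the handler, so warm invocations
# re-use the client and its connections
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("my-table")  # hypothetical table name

def handler(event, context):
    # Stateless: all state lives in the event or in external storage (DynamoDB)
    table.put_item(Item={"id": event["id"], "payload": event.get("payload", "")})
    return {"status": "stored"}
```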

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Your serverless architecture using AWS API Gateway, AWS Lambda, and AWS DynamoDB experienced a large increase in traffic to a sustained 400 requests per second, and failure rates dramatically increased. Your requests, during normal operation, last 500 milliseconds on average. Your DynamoDB table did not exceed 50% of provisioned throughput, and Table primary keys are designed correctly. What is the most likely issue?
    1. Your API Gateway deployment is throttling your requests.
    2. Your AWS API Gateway Deployment is bottlenecking on request (de)serialization.
    3. You did not request a limit increase on concurrent Lambda function executions. (Refer link – AWS API Gateway by default throttles at 500 requests per second steady-state, and 1000 requests per second at spike. Lambda, by default, throttles at 100 concurrent requests for safety. At 500 milliseconds (half of a second) per request, you can expect to support 200 requests per second at 100 concurrency. This is less than the 400 requests per second your system now requires. Make a limit increase request via the AWS Support Console.)
    4. You used Consistent Read requests on DynamoDB and are experiencing semaphore lock.

AWS Lambda Event Source – Certification

AWS Lambda Event Source

  • Core components of Lambda are Lambda functions and event sources.
    • An AWS Lambda event source is the AWS service or custom application that publishes events
    • Lambda function is the custom code that processes the events
  • An event source is an AWS service or developer-created application that produces events that trigger an AWS Lambda function to run
  • Supported event sources refer to those AWS services that can be preconfigured to work with AWS Lambda, e.g. S3, SNS, SES, etc.
  • Event sources can be either AWS Services or Custom applications

Lambda Event Source Mapping

  • Lambda Event source mapping refers to the configuration which maps an event source to a Lambda function.
  • Event source mapping enables automatic invocation of the Lambda function when events occur.
  • Each event source mapping identifies the type of events to publish and the Lambda function to invoke when events occur
  • AWS supported event sources can be grouped into
    • Regular AWS services
      • also referred to as Push model
      • includes services like S3, SNS, SES etc.
      • the event source mapping is maintained on the event source side
      • as the event sources invoke the Lambda function, a resource-based policy should be used to grant the event source the necessary permissions
    • Stream-based event sources
      • also referred to as Pull model
      • includes services like DynamoDB & Kinesis streams
      • need to have the event source mapping maintained on the Lambda side (see the sketch after this list)
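
A minimal sketch of both models using boto3; the function name, bucket ARN, and stream ARN are hypothetical. In the push model, a resource-based policy grants the event source permission to invoke the function; in the pull model, the event source mapping is created on the Lambda side.

```python
import boto3

lam = boto3.client("lambda")

# Push model: grant the event source (here S3) permission to invoke the function
lam.add_permission(
    FunctionName="my-function",
    StatementId="s3-invoke",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn="arn:aws:s3:::my-bucket",
)

# Pull model: the mapping lives on the Lambda side for stream-based sources
lam.create_event_source_mapping(
    FunctionName="my-function",
    EventSourceArn="arn:aws:kinesis:us-east-1:123456789012:stream/my-stream",
    StartingPosition="LATEST",
    BatchSize=100,
)
```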

Lambda Supported Event Sources

AWS Lambda can be configured as an event source for multiple AWS services

Amazon S3

  • S3 bucket events, such as the object-created or object-deleted events, can be processed using Lambda functions, e.g. a Lambda function can be invoked when a user uploads a photo to a bucket, to read the image and create a thumbnail
  • The S3 bucket notification configuration feature can be used for the event source mapping, to identify the S3 bucket events and the Lambda function to invoke (see the sketch after this list).
  • Error handling for a given event source depends on how Lambda is invoked.
  • S3 invokes your Lambda function asynchronously.
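
A minimal sketch of such an event source mapping using boto3; the bucket and function ARN are hypothetical, and the function must already permit S3 to invoke it (see the resource-based policy sketch above).

```python
import boto3

s3 = boto3.client("s3")

# Map object-created events on the bucket to the Lambda function
s3.put_bucket_notification_configuration(
    Bucket="my-bucket",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:make-thumbnail",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)
```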

Amazon DynamoDB

  • Lambda functions can be used as triggers for a DynamoDB table, to take custom actions in response to updates made to the table.
  • Trigger can be created by
    • First enabling Amazon DynamoDB Streams for the table.
    • Lambda then polls the stream and the Lambda function processes any updates published to the stream.
  • DynamoDB is a stream-based event source; with stream-based services, the event source mapping is created in Lambda, identifying the stream to poll and the Lambda function to invoke.
  • Error handling for a given event source depends on how Lambda is invoked.

Amazon Kinesis Streams

  • AWS Lambda can be configured to automatically poll the Kinesis stream periodically (once per second) for new records
  • Lambda can then process any new records such as website click streams, financial transactions, social media feeds, IT logs, and location-tracking events.
  • Kinesis Streams is a stream-based event source; with stream-based services, the event source mapping is created in Lambda, identifying the stream to poll and the Lambda function to invoke.
  • Error handling for a given event source depends on how Lambda is invoked.

Amazon Simple Notification Service

  • Simple Notification Service notifications can be processed using Lambda
  • When a message is published to an SNS topic, the service can invoke the Lambda function by passing the message payload as a parameter, which can then process the event
  • Lambda function can be triggered in response to CloudWatch alarms and other AWS services that use Amazon SNS.
  • SNS topic subscription configuration can be used for the event source mapping, to identify the SNS topic and the Lambda function to invoke.
  • Error handling for a given event source depends on how Lambda is invoked.
  • SNS invokes your Lambda function asynchronously.

Amazon Simple Email Service

  • SES can be used to receive messages and can be configured to invoke Lambda function when messages arrive, by passing in the incoming email event as parameter
  • SES using the rule configuration feature can be used for the event source mapping
  • Error handling for a given event source depends on how Lambda is invoked.
  • SES invokes your Lambda function asynchronously.

Amazon Cognito

  • The Cognito Events feature enables a Lambda function to run in response to events in Cognito, e.g. a Lambda function can be invoked for the Sync Trigger event, which is published each time a dataset is synchronized.
  • Cognito event subscription configuration feature can be used for the event source mapping
  • Error handling for a given event source depends on how Lambda is invoked.
  • Cognito is configured to invoke a Lambda function synchronously

AWS CloudFormation

  • Lambda function can be specified as a custom resource to execute any custom commands as a part of deploying CloudFormation stacks and can be invoked whenever the stacks are created, updated or deleted.
  • CloudFormation using stack definition can be used for the event source mapping
  • Error handling for a given event source depends on how Lambda is invoked.
  • CloudFormation invokes the Lambda function asynchronously

Amazon CloudWatch Logs

  • Lambda functions can be used to perform custom analysis on CloudWatch Logs using CloudWatch Logs subscriptions.
  • CloudWatch Logs subscriptions provide access to a real-time feed of log events from CloudWatch Logs and deliver it to the AWS Lambda function for custom processing, analysis, or loading to other systems.
  • CloudWatch Logs using the log subscription configuration can be used for the event source mapping
  • Error handling for a given event source depends on how Lambda is invoked.
  • CloudWatch Logs invokes the Lambda function asynchronously

Amazon CloudWatch Events

  • CloudWatch Events help respond to state changes in the AWS resources. When the resources change state, they automatically send events into an event stream.
  • Rules that match selected events in the stream can be created to route them to the Lambda function to take action, e.g. a Lambda function can be invoked to log the state of an EC2 instance or Auto Scaling group
  • CloudWatch Events by using a rule target definition can be used for the event source mapping
  • Error handling for a given event source depends on how Lambda is invoked.
  • CloudWatch Events invokes the Lambda function asynchronously

AWS CodeCommit

  • A trigger can be created for a CodeCommit repository so that events in the repository invoke a Lambda function, e.g. a Lambda function can be invoked when a branch or tag is created or when a push is made to an existing branch.
  • CodeCommit by using a repository trigger can be used for the event source mapping
  • Error handling for a given event source depends on how Lambda is invoked.
  • CodeCommit invokes the Lambda function asynchronously

Scheduled Events (powered by Amazon CloudWatch Events)

  • AWS Lambda can be invoked regularly on a scheduled basis using the schedule event capability in CloudWatch Events.
  • CloudWatch Events by using a rule target definition can be used for the event source mapping
  • Error handling for a given event source depends on how Lambda is invoked.
  • CloudWatch Events invokes the Lambda function asynchronously

AWS Config

  • Lambda functions can be used to evaluate whether the AWS resource configurations comply with custom Config rules.
  • As resources are created, deleted, or changed, AWS Config records these changes and sends the information to the Lambda functions, which can then evaluate the changes and report results to AWS Config. AWS Config can be used to assess overall resource compliance
  • AWS Config by using a rule target definition can be used for the event source mapping
  • Error handling for a given event source depends on how Lambda is invoked.
  • AWS Config invokes the Lambda function asynchronously

Amazon API Gateway

  • Lambda function can be invoked over HTTPS by defining a custom REST API and endpoint using Amazon API Gateway.
  • Individual API operations, such as GET and PUT, can be mapped to specific Lambda functions. When an HTTPS request to the API endpoint is received, the Amazon API Gateway service invokes the corresponding Lambda function.
  • Error handling for a given event source depends on how Lambda is invoked.
  • Amazon API Gateway is configured to invoke a Lambda function synchronously (see the sketch below).
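
A minimal sketch of a Python handler behind API Gateway, assuming the Lambda proxy integration (where the returned dictionary maps directly to the HTTP response); the greeting logic is hypothetical.

```python
import json

def handler(event, context):
    # The event carries the HTTP request (method, path, query string, body)
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    # For a Lambda proxy integration, this return shape becomes the HTTP response
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }
```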

Other Event Sources: Invoking a Lambda Function On Demand

  • Lambda functions can be invoked on demand, without the need to preconfigure any event source mapping.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.

References

AWS Lambda Developer Guide

AWS Elasticsearch – Certification

AWS Elasticsearch

  • Amazon Elasticsearch Service is a managed service that makes it easy to deploy, operate, and scale Elasticsearch clusters in the AWS Cloud.
  • Elasticsearch is a popular open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and clickstream analytics
  • Elasticsearch provides
    • real-time, distributed search and analytics engine
    • ability to provision all the resources for an Elasticsearch cluster and launch the cluster
    • easy to use cluster scaling options
    • self-healing clusters, which automatically detect and replace failed Elasticsearch nodes, reducing the overhead associated with self-managed infrastructures
    • domain snapshots to back up and restore ES domains and replicate domains across AZs
    • data durability
    • enhanced security with IAM access control
    • node monitoring
    • multiple configurations of CPU, memory, and storage capacity, known as instance types
    • storage volumes for the data using EBS volumes
    • Multiple geographical locations for your resources, known as regions and Availability Zones
    • ability to span cluster nodes across two AZs in the same region, known as zone awareness, for high availability and redundancy
    • dedicated master nodes to improve cluster stability
    • data visualization using the Kibana tool
    • integration with CloudWatch for monitoring ES domain metrics
    • integration with CloudTrail for auditing configuration API calls to ES domains
    • integration with S3, Kinesis, and DynamoDB for loading streaming data
    • ability to handle structured and unstructured data
    • HTTP REST APIs (see the sketch after this list)
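
A minimal sketch of creating an ES domain with zone awareness, dedicated master nodes, and EBS storage using boto3; the domain name, Elasticsearch version, instance types, and counts are hypothetical.

```python
import boto3

es = boto3.client("es")

# Two data nodes spread across two AZs (zone awareness) plus dedicated masters
es.create_elasticsearch_domain(
    DomainName="logs",
    ElasticsearchVersion="5.5",
    ElasticsearchClusterConfig={
        "InstanceType": "m4.large.elasticsearch",
        "InstanceCount": 2,
        "ZoneAwarenessEnabled": True,
        "DedicatedMasterEnabled": True,
        "DedicatedMasterType": "m4.large.elasticsearch",
        "DedicatedMasterCount": 3,
    },
    # EBS volumes back the data nodes
    EBSOptions={"EBSEnabled": True, "VolumeType": "gp2", "VolumeSize": 100},
)
```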

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. You need to perform ad-hoc analysis on log data, including searching quickly for specific error codes and reference numbers. Which should you evaluate first?
    1. AWS Elasticsearch Service (Elasticsearch Service (ES) is a managed service that makes it easy to deploy, operate, and scale Elasticsearch clusters in the AWS cloud. Elasticsearch is a popular open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and click stream analytics. Refer link)
    2. AWS RedShift
    3. AWS EMR
    4. AWS DynamoDB
  2. You are hired as the new head of operations for a SaaS company. Your CTO has asked you to make debugging any part of your entire operation simpler and as fast as possible. She complains that she has no idea what is going on in the complex, service-oriented architecture, because the developers just log to disk, and it’s very hard to find errors in logs on so many services. How can you best meet this requirement and satisfy your CTO?
    1. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Lambda. Use the Lambda to analyze logs as soon as they come in and flag issues.
    2. Begin using CloudWatch Logs on every service. Stream all Log Groups into S3 objects. Use AWS EMR cluster jobs to perform adhoc MapReduce analysis and write new queries when needed.
    3. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Kinesis. Use Apache Spark on AWS EMR to perform at-scale stream processing queries on the log chunks and flag issues.
    4. Begin using CloudWatch Logs on every service. Stream all Log Groups into an AWS Elasticsearch Service Domain running Kibana 4 and perform log analysis on a search cluster. (AWS Elasticsearch with Kibana stack is designed specifically for real-time, ad-hoc log analysis and aggregation)

AWS Cloud Migration Services – Certification

AWS Cloud Migration Services

  • AWS Cloud Migration services help to address a lot of common use cases such as
    • cloud migration,
    • disaster recovery,
    • data center decommission, and
    • content distribution.
  • For migrating data from on-premises to AWS, the major aspects for consideration are
    • amount of data and network speed
    • data security in transit
    • existing application knowledge for recreation

NOTE: Topic mainly for Professional Exam Only

VPN

  • connection utilizes IPSec to establish encrypted network connectivity between on-premises network and VPC over the Internet.
  • connections can be configured in minutes and are a good solution for an immediate need, for low to modest bandwidth requirements, and where the inherent variability in Internet-based connectivity can be tolerated
  • still requires the Internet and needs to be configured using a VGW (Virtual Private Gateway) and CGW (Customer Gateway)

AWS EC2 VM Import/Export

  • allows easy import of virtual machine images from an existing environment to EC2 instances, and export of them back to the on-premises environment
  • allows leveraging of existing investments in the virtual machines, built to meet compliance requirements, configuration management and IT security by bringing those virtual machines into EC2 as ready-to-use instances
  • Common usages include
    • Migrate Existing Applications and Workloads to EC2, which allows preserving the software and settings configured in the existing VMs
    • Copy Your VM Image Catalog to Amazon EC2
    • Create a Disaster Recovery Repository for your VM images

AWS Direct Connect

  • provides a dedicated physical connection between the corporate network and AWS Direct Connect location with no data transfer over the Internet.
  • helps bypass Internet service providers (ISPs) in the network path
  • helps reduce network costs, increase bandwidth throughput, and provide a more consistent network experience than with Internet-based connection
  • takes time to setup and involves third parties
  • are not redundant; a second Direct Connect connection or a VPN connection would be needed for redundancy
  • Security
    • provides a dedicated physical connection without internet
    • For additional security can be used with VPN

AWS Import/Export (upgraded to Snowball)

  • accelerates moving large amounts of data into and out of AWS using secure Snowball appliances
  • AWS transfers the data directly onto and off of the storage devices using Amazon’s high-speed internal network, bypassing the Internet
  • Data Migration
    • for significant data sizes, AWS Import/Export is faster than Internet transfer and more cost-effective than upgrading the connectivity
    • if loading the data over the Internet would take a week or more, AWS Import/Export should be considered
    • data from appliances can be imported to S3, Glacier and EBS volumes and exported from S3
    • not suitable for applications that cannot tolerate offline transfer time
  • Security
    • Snowball uses an industry-standard Trusted Platform Module (TPM) that has a dedicated processor designed to detect any unauthorized modifications to the hardware, firmware, or software to physically secure the AWS Snowball device.

AWS Storage Gateway

  • connects an on-premises software appliance with cloud-based storage to provide seamless and secure integration between an organization’s on-premises IT environment and the AWS storage infrastructure
  • provides low-latency performance by maintaining frequently accessed data on-premises while securely storing all of the data encrypted in S3 or Glacier.
  • for disaster recovery scenarios, Storage Gateway, together with EC2, can serve as a cloud-hosted solution that mirrors the entire production environment
  • Data Migration
    • with gateway-cached volumes, S3 can be used to hold the primary data, while frequently accessed data is cached locally for faster access, reducing the need to scale the on-premises storage infrastructure
    • with gateway-stored volumes, entire data is stored locally while asynchronously backing up data to S3
    • with gateway-VTL, offline data archiving can be performed by presenting existing backup application with an iSCSI-based VTL consisting of a virtual media changer and virtual tape drives
  • Security
    • Encrypts all data in transit to and from AWS by using SSL/TLS.
    • All data in AWS Storage Gateway is encrypted at rest using AES-256.
    • Authentication between the gateway and iSCSI initiators can be secured by using Challenge-Handshake Authentication Protocol (CHAP).

S3

  • Data Transfer
    • Files up to 5 GB can be transferred using a single operation
    • Multipart uploads can be used to upload files up to 5 TB and speed up data uploads by dividing the file into multiple parts (see the sketch after this list)
    • transfer rate still limited by the network speed
  • Security
    • Data in transit can be secured by using SSL/TLS or client-side encryption.
    • Encrypt data at-rest by performing server-side encryption using Amazon S3-Managed Keys (SSE-S3), AWS Key Management Service (KMS)-Managed Keys (SSE-KMS), or Customer Provided Keys (SSE-C). Or by performing client-side encryption using AWS KMS–Managed Customer Master Key (CMK) or Client-Side Master Key.
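
A minimal sketch of a multipart upload with SSE-S3 using boto3's transfer configuration; the file name, bucket, part size, and concurrency are hypothetical.

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Files above the threshold are split into parts uploaded concurrently;
# smaller files fall back to a single PUT (up to the 5 GB single-operation limit)
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,  # 100 MB
    multipart_chunksize=100 * 1024 * 1024,
    max_concurrency=10,
)

s3.upload_file(
    "backup.tar.gz", "my-bucket", "backups/backup.tar.gz",
    Config=config,
    ExtraArgs={"ServerSideEncryption": "AES256"},  # SSE-S3 encryption at rest
)
```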

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. You must architect the migration of a web application to AWS. The application consists of Linux web servers running a custom web server. You are required to save the logs generated from the application to a durable location. What options could you select to migrate the application to AWS? (Choose 2)
    1. Create an AWS Elastic Beanstalk application using the custom web server platform. Specify the web server executable and the application project and source files. Enable log file rotation to Amazon Simple Storage Service (S3). (EB does not work with Custom server executable)
    2. Create Dockerfile for the application. Create an AWS OpsWorks stack consisting of a custom layer. Create custom recipes to install Docker and to deploy your Docker container using the Dockerfile. Create custom recipes to install and configure the application to publish the logs to Amazon CloudWatch Logs (although this is one of the options, the last sentence mentions configuring the application to publish the logs, which would need changes to the application as it needs to use an SDK or the CLI)
    3. Create Dockerfile for the application. Create an AWS OpsWorks stack consisting of a Docker layer that uses the Dockerfile. Create custom recipes to install and configure Amazon Kinesis to publish the logs into Amazon CloudWatch. (Kinesis not needed)
    4. Create a Dockerfile for the application. Create an AWS Elastic Beanstalk application using the Docker platform and the Dockerfile. Enable logging the Docker configuration to automatically publish the application logs. Enable log file rotation to Amazon S3. (Use Docker configuration with awslogs and EB with Docker)
    5. Use VM import/Export to import a virtual machine image of the server into AWS as an AMI. Create an Amazon Elastic Compute Cloud (EC2) instance from AMI, and install and configure the Amazon CloudWatch Logs agent. Create a new AMI from the instance. Create an AWS Elastic Beanstalk application using the AMI platform and the new AMI. (Use VM Import/Export to create AMI and CloudWatch logs agent to log)
  2. Your company hosts an on-premises legacy engineering application with 900GB of data shared via a central file server. The engineering data consists of thousands of individual files ranging in size from megabytes to multiple gigabytes. Engineers typically modify 5-10 percent of the files a day. Your CTO would like to migrate this application to AWS, but only if the application can be migrated over the weekend to minimize user downtime. You calculate that it will take a minimum of 48 hours to transfer 900GB of data using your company’s existing 45-Mbps Internet connection. After replicating the application’s environment in AWS, which option will allow you to move the application’s data to AWS without losing any data and within the given timeframe?
    1. Copy the data to Amazon S3 using multiple threads and multi-part upload for large files over the weekend, and work in parallel with your developers to reconfigure the replicated application environment to leverage Amazon S3 to serve the engineering files. (Still limited by 45 Mbps speed with minimum 48 hours when utilized to max)
    2. Sync the application data to Amazon S3 starting a week before the migration, on Friday morning perform a final sync, and copy the entire data set to your AWS file server after the sync completes. (Works best as the data changes can be propagated over the week and are fractional, and the downtime would be known)
    3. Copy the application data to a 1-TB USB drive on Friday and immediately send overnight, with Saturday delivery, the USB drive to AWS Import/Export to be imported as an EBS volume, mount the resulting EBS volume to your AWS file server on Sunday. (Downtime is not known when the data upload would be done, although Amazon says the same day the package is received)
    4. Leverage the AWS Storage Gateway to create a Gateway-Stored volume. On Friday copy the application data to the Storage Gateway volume. After the data has been copied, perform a snapshot of the volume and restore the volume as an EBS volume to be attached to your AWS file server on Sunday. (Still uses the internet)
  3. You are tasked with moving a legacy application from a virtual machine running inside your datacenter to an Amazon VPC. Unfortunately this app requires access to a number of on-premises services and no one who configured the app still works for your company. Even worse there’s no documentation for it. What will allow the application running inside the VPC to reach back and access its internal dependencies without being reconfigured? (Choose 3 answers)
    1. An AWS Direct Connect link between the VPC and the network housing the internal services
    2. An Internet Gateway to allow a VPN connection. (Virtual and Customer gateway is needed)
    3. An Elastic IP address on the VPC instance
    4. An IP address space that does not conflict with the one on-premises
    5. Entries in Amazon Route 53 that allow the Instance to resolve its dependencies’ IP addresses
    6. A VM Import of the current virtual machine

AWS Automated Backups – Certification

AWS Automated Backups

  • AWS allows automated backups for
    • RDS
    • ElastiCache – Redis only
    • Redshift
  • AWS does not perform automated backups for EC2 EBS volumes; these need to be manually scripted
  • AWS stores the backups and snapshots in S3

RDS Backups

  • RDS supports automated backups as well as manual snapshots
  • Automated Backups
    • enable point-in-time recovery of the DB Instance
    • perform a full daily backup and capture transaction logs (as updates to the DB instance are made)
    • are performed during the defined preferred backup window and are retained for a user-specified period of time called the retention period (default 1 day, with a max of 35 days)
    • When a point-in-time recovery is initiated, transaction logs are applied to the most appropriate daily backup in order to restore the DB instance to the specific requested time.
    • allow a point-in-time restore, with the ability to specify any second during the retention period, up to the Latest Restorable Time (see the sketch after this list)
    • are deleted when the DB instance is deleted
  • Snapshots
    • are user-initiated and enable backing up the DB instance in a known state as frequently as needed; the instance can then be restored to that specific state at any time
    • can be created with the AWS Management Console or by using the CreateDBSnapshot API call.
    • are not deleted when the DB instance is deleted
  • Automated backups and snapshots can result in a performance hit, if Multi-AZ is not enabled
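
A minimal sketch of a point-in-time restore using boto3; it always creates a new DB instance, and the instance identifiers are hypothetical.

```python
import boto3

rds = boto3.client("rds")

# Restore a new instance from the automated backups, up to the latest restorable
# time; a specific second can be targeted instead with the RestoreTime parameter
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier="prod-db",          # hypothetical identifiers
    TargetDBInstanceIdentifier="prod-db-restored",
    UseLatestRestorableTime=True,
)
```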

ElastiCache Automated Backups

  • ElastiCache supports Automated backups for Redis cluster only
  • ElastiCache creates a backup of the cluster on a daily basis
  • Snapshots will degrade performance, so they should be performed during the least busy part of the day
  • Backups are performed during the backup period and retained for the defined backup retention limit, with a maximum of 35 days
  • ElastiCache also allows manual snapshots of the cluster

Redshift Automated Backups

  • Amazon Redshift enables automated backups, by default
  • Redshift replicates all the data within your data warehouse cluster when it is loaded and also continuously backs up the data to S3
  • Redshift retains backups for 1 day by default, which can be extended to a max of 35 days
  • Redshift only backs up data that has changed, so snapshots are incremental and most use up only a small amount of storage
  • Redshift also allows manual snapshots of the data warehouse

EC2 EBS Backups

  • EBS does not provide automated backups
  • EBS snapshots can be created by using the AWS Management Console, the command line interface (CLI), or the APIs
  • Backups degrade performance
  • Stored on S3
  • EBS Snapshots are incremental and block-based, and they consume space only for changed data after the initial snapshot is created
  • Data can be restored from snapshots by creating a volume from the snapshot (see the sketch after this list)
  • EBS snapshots are region specific and can be copied between AWS regions
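
Since EBS backups must be scripted, here is a minimal daily-snapshot-with-rotation sketch using boto3; the volume id and the 30-day retention are hypothetical.

```python
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2")

VOLUME_ID = "vol-0123456789abcdef0"  # hypothetical volume

# Daily snapshot; snapshots are incremental, storing only changed blocks
ec2.create_snapshot(
    VolumeId=VOLUME_ID,
    Description=f"daily backup {datetime.now(timezone.utc):%Y-%m-%d}",
)

# Simple rotation: delete this volume's snapshots older than 30 days
cutoff = datetime.now(timezone.utc) - timedelta(days=30)
snaps = ec2.describe_snapshots(
    OwnerIds=["self"],
    Filters=[{"Name": "volume-id", "Values": [VOLUME_ID]}],
)["Snapshots"]
for snap in snaps:
    if snap["StartTime"] < cutoff:
        ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])
```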

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Which two AWS services provide out-of-the-box user configurable automatic backup-as-a-service and backup rotation options? Choose 2 answers
    1. Amazon S3
    2. Amazon RDS
    3. Amazon EBS
    4. Amazon Redshift
  2. You have been asked to automate many routine systems administrator backup and recovery activities. Your current plan is to leverage AWS-managed solutions as much as possible and automate the rest with the AWS CLI and scripts. Which task would be best accomplished with a script?
    1. Creating daily EBS snapshots with a monthly rotation of snapshots
    2. Creating daily RDS snapshots with a monthly rotation of snapshots
    3. Automatically detect and stop unused or underutilized EC2 instances
    4. Automatically add Auto Scaled EC2 instances to an Amazon Elastic Load Balancer

AWS Billing and Cost Management – Certification

AWS Billing and Cost Management

  • AWS Billing and Cost Management is the service used to pay your AWS bill, monitor your usage, and budget your costs

Analyzing Costs with Graphs

  • AWS provides the Cost Explorer tool, which allows filtering graphs by API operations, Availability Zones, AWS service, custom cost allocation tags, EC2 instance type, purchase options, region, usage type, usage type groups, or, if Consolidated Billing is used, by linked account.

Budgets

  • Budgets can be used to track AWS costs to see usage-to-date and current estimated charges from AWS
  • Budgets use the cost visualization provided by Cost Explorer to show the status of the budgets and to provide forecasts of your estimated costs.
  • Budgets can be used to create CloudWatch alarms that notify when you go over your budgeted amounts, or when the estimated costs exceed budgets
  • Notifications can be sent to an SNS topic and to email addresses associated with your budget notification

Cost Allocation Tags

  • Tags can be used to organize AWS resources, and cost allocation tags to track the AWS costs on a detailed level.
  • Upon cost allocation tags activation, AWS uses the cost allocation tags to organize the resource costs on the cost allocation report making it easier to categorize and track your AWS costs.
  • AWS provides two types of cost allocation tags,
    • an AWS-generated tag, which AWS defines, creates, and applies for you,
    • and user-defined tags that you define, create, and apply yourself
  • Both types of tags must be activated separately before they can appear in Cost Explorer or on a cost allocation report

Alerts on Cost Limits

  • CloudWatch can be used to create billing alerts when the AWS costs exceed specified thresholds
  • When the usage exceeds the threshold amounts, AWS sends an email notification (see the sketch below)
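
A minimal sketch of such a billing alarm using boto3; billing metrics are published to the AWS/Billing namespace in us-east-1 once billing alerts are enabled, and the threshold and SNS topic ARN are hypothetical.

```python
import boto3

# Billing metrics live in us-east-1, regardless of where resources run
cw = boto3.client("cloudwatch", region_name="us-east-1")

cw.put_metric_alarm(
    AlarmName="monthly-charges-over-1000-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                      # evaluate over 6-hour periods
    EvaluationPeriods=1,
    Threshold=1000.0,
    ComparisonOperator="GreaterThanThreshold",
    # Notify an SNS topic (which can fan out to email) when the alarm fires
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
)
```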

Consolidated Billing

Refer to My Blog Post about Consolidated Billing

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. An organization has been using AWS for a few months. The finance team wants to visualize the pattern of AWS spending. Which of the below AWS tools will help with this requirement?
    • AWS Cost Manager
    • AWS Cost Explorer (Check Cost Explorer)
    • AWS CloudWatch
    • AWS Consolidated Billing (Will not help visualize)
  2. Your company wants to understand where cost is coming from in the company’s production AWS account. There are a number of applications and services running at any given time. Without expending too much initial development time, how best can you give the business a good understanding of which applications cost the most per month to operate?
    1. Create an automation script, which periodically creates AWS Support tickets requesting detailed intra-month information about your bill.
    2. Use custom CloudWatch Metrics in your system, and put a metric data point whenever cost is incurred.
    3. Use AWS Cost Allocation Tagging for all resources, which support it. Use the Cost Explorer to analyze costs throughout the month. (Refer link)
    4. Use the AWS Price API and constantly running resource inventory scripts to calculate total price based on multiplication of consumed resources over time.
  3. You need to know when you spend $1000 or more on AWS. What’s the easiest way for you to see that notification?
    1. AWS CloudWatch Events tied to API calls, when certain thresholds are exceeded, publish to SNS.
    2. Scrape the billing page periodically and pump into Kinesis.
    3. AWS CloudWatch Metrics + Billing Alarm + Lambda event subscription. When a threshold is exceeded, email the manager.
    4. Scrape the billing page periodically and publish to SNS.
  4. A user is planning to use AWS services for his web application. If the user is trying to set up his own billing management system for AWS, how can he configure it?
    1. Set up programmatic billing access. Download and parse the bill as per the requirement
    2. It is not possible for the user to create his own billing management service with AWS
    3. Enable the AWS CloudWatch alarm which will provide APIs to download the alarm data
    4. Use AWS billing APIs to download the usage report of each service from the AWS billing console
  5. An organization is setting up programmatic billing access for their AWS account. Which of the below mentioned services is not required or enabled when the organization wants to use programmatic access?
    1. Programmatic access
    2. AWS bucket to hold the billing report
    3. AWS billing alerts
    4. Monthly Billing report
  6. A user has set up a billing alarm using CloudWatch for $200. The usage of AWS exceeded $200 after some days. The user wants to increase the limit from $200 to $400. What should the user do?
    1. Create a new alarm of $400 and link it with the first alarm
    2. It is not possible to modify the alarm once it has crossed the usage limit
    3. Update the alarm to set the limit at $400 instead of $200 (Refer link)
    4. Create a new alarm for the additional $200 amount
  7. A user is trying to configure the CloudWatch billing alarm. Which of the below mentioned steps should be performed by the user for the first-time alarm creation in the AWS Account Management section?
    1. Enable Receiving Billing Reports
    2. Enable Receiving Billing Alerts
    3. Enable AWS billing utility
    4. Enable CloudWatch Billing Threshold

References

AWS Billing & Cost Management – User Guide

AWS RDS Monitoring & Notification – Certification

AWS RDS Monitoring & Notification

  • RDS integrates with CloudWatch and provides metrics for monitoring
  • CloudWatch alarms can be created on a single metric and send an SNS message when the alarm changes state
  • RDS also provides SNS notification whenever any RDS event occurs

CloudWatch RDS Monitoring

  • RDS DB instance can be monitored using CloudWatch, which collects and processes raw data from RDS into readable, near real-time metrics.
  • The statistics are recorded for a period of two weeks, so that you can access historical information and gain a better perspective on how the service is performing.
  • By default, RDS metric data is automatically sent to Amazon CloudWatch in 1-minute periods (a retrieval sketch follows the metric list below)
  • CloudWatch RDS Metrics
    • BinLogDiskUsage – Amount of disk space occupied by binary logs on the master. Applies to MySQL read replicas.
    • CPUUtilization – Percentage of CPU utilization.
    • DatabaseConnections – Number of database connections in use.
    • DiskQueueDepth – The number of outstanding IOs (read/write requests) waiting to access the disk.
    • FreeableMemory – Amount of available random access memory.
    • FreeStorageSpace – Amount of available storage space.
    • ReplicaLag – Amount of time a Read Replica DB instance lags behind the source DB instance. Applies to MySQL, MariaDB, and PostgreSQL Read Replicas.
    • SwapUsage – Amount of swap space used on the DB instance.
    • ReadIOPS – Average number of disk read I/O operations per second.
    • WriteIOPS – Average number of disk write I/O operations per second.
    • ReadLatency – Average amount of time taken per disk read I/O operation.
    • WriteLatency – Average amount of time taken per disk write I/O operation.
    • ReadThroughput – Average number of bytes read from disk per second.
    • WriteThroughput – Average number of bytes written to disk per second.
    • NetworkReceiveThroughput – Incoming (Receive) network traffic on the DB instance, including both customer database traffic and Amazon RDS traffic used for monitoring and replication.
    • NetworkTransmitThroughput – Outgoing (Transmit) network traffic on the DB instance, including both customer database traffic and Amazon RDS traffic used for monitoring and replication.
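
As referenced above, a minimal sketch of reading one of these metrics with boto3; the DB instance identifier is a placeholder:

```python
import boto3
from datetime import datetime, timedelta

cw = boto3.client("cloudwatch")

# Pull the last hour of ReplicaLag for a read replica (placeholder identifier)
stats = cw.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="ReplicaLag",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-read-replica"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=60,                   # metrics are published in 1-minute periods
    Statistics=["Average"],
)

# Print the lag in chronological order
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f'{point["Average"]:.1f} seconds behind source')
```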

RDS Event Notification

  • RDS uses SNS to provide notification when an RDS event occurs
  • RDS groups the events into categories, which can be subscribed to so that a notification is sent when an event in that category occurs.
  • Event categories for a DB instance, DB cluster, DB snapshot, DB cluster snapshot, DB security group, or DB parameter group can be subscribed to
  • Event notifications are sent to the email addresses provided during subscription creation
  • Notifications can easily be turned off without deleting the subscription, by setting the Enabled radio button to No in the RDS console or by setting the Enabled parameter to false using the CLI or RDS API (as sketched below)
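
A sketch of creating and then pausing such a subscription with boto3; the subscription name and SNS topic ARN are placeholders:

```python
import boto3

rds = boto3.client("rds")

# Subscribe an SNS topic to DB security group events (placeholder values)
rds.create_event_subscription(
    SubscriptionName="db-secgroup-changes",
    SnsTopicArn="arn:aws:sns:us-east-1:123456789012:rds-events",
    SourceType="db-security-group",
    Enabled=True,
)

# Turn notifications off without deleting the subscription
rds.modify_event_subscription(
    SubscriptionName="db-secgroup-changes",
    Enabled=False,
)
```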

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might soon be outdated, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. You run a web application with the following components: Elastic Load Balancer (ELB), 3 Web/Application servers, 1 MySQL RDS database with read replicas, and Amazon Simple Storage Service (Amazon S3) for static content. The average response time for users is increasing slowly. Which three CloudWatch RDS metrics will allow you to identify if the database is the bottleneck? Choose 3 answers
    1. The number of outstanding IOs waiting to access the disk
    2. The amount of write latency
    3. The amount of disk space occupied by binary logs on the master.
    4. The amount of time a Read Replica DB Instance lags behind the source DB Instance
    5. The average number of disk I/O operations per second.
  2. Typically, you want your application to check whether a request generated an error before you spend any time processing results. The easiest way to find out if an error occurred is to look for an __________ node in the response from the Amazon RDS API.
    1. Incorrect
    2. Error
    3. FALSE
  3. In Amazon CloudWatch, which metric should you check to ensure that your DB instance has enough free storage space?
    1. FreeStorage
    2. FreeStorageSpace
    3. FreeStorageVolume
    4. FreeDBStorageSpace
  4. A user is receiving a notification from RDS whenever there is a change in the DB security group. The user wants to stop receiving these notifications for a month only, and thus does not want to delete the subscription. How can the user configure this?
    1. Change the Disable button for notification to “Yes” in the RDS console
    2. Set the send mail flag to false in the DB event notification console
    3. The only option is to delete the notification from the console
    4. Change the Enable button for notification to “No” in the RDS console
  5. A sysadmin is planning to subscribe to the RDS event notifications. For which of the below mentioned source categories can the subscription not be configured?
    1. DB security group
    2. DB snapshot
    3. DB options group
    4. DB parameter group
  6. A user is planning to set up notifications on the RDS DB for a snapshot. Which of the below mentioned event categories is not supported by RDS for this snapshot source type?
    1. Backup (Refer link)
    2. Creation
    3. Deletion
    4. Restoration
  7. A system admin is planning to set up event notifications on RDS. Which of the below mentioned services will help the admin set up notifications?
    1. AWS SES
    2. AWS Cloudtrail
    3. AWS CloudWatch
    4. AWS SNS
  8. A user has set up an RDS DB with Oracle. The user wants to get notifications when someone modifies the security group of that DB. How can the user configure that?
    1. It is not possible to get the notifications on a change in the security group
    2. Configure SNS to monitor security group changes
    3. Configure event notification on the DB security group
    4. Configure the CloudWatch alarm on the DB for a change in the security group
  9. It is advised that you watch the Amazon CloudWatch “_____” metric (available via the AWS Management Console or Amazon CloudWatch APIs) carefully and recreate the Read Replica should it fall behind due to replication errors.
    1. Write Lag
    2. Read Replica
    3. Replica Lag
    4. Single Replica