Architecting for the Cloud – AWS Best Practices – Whitepaper – Certification

The Architecting for the Cloud – AWS Best Practices whitepaper provides architectural patterns and advice on how to design systems that are secure, reliable, high performing, and cost efficient.

AWS Design Principles

Scalability

  • While AWS provides virtually unlimited on-demand capacity, the architecture should be designed to take advantage of those resources
  • There are two ways to scale an IT architecture
    • Vertical Scaling
      • takes place through increasing the specifications of an individual resource, e.g. upgrading an EC2 instance type to one with more RAM, CPU, IOPS, or networking capability
      • will eventually hit a limit, and is not always a cost effective or highly available approach
    • Horizontal Scaling
      • takes place through increasing the number of resources, e.g. adding more EC2 instances or EBS volumes
      • can help leverage the elasticity of cloud computing
      • not all architectures can be designed to distribute their workload across multiple resources
      • applications should be designed to be stateless,
        • needing no knowledge of previous interactions and storing no session information
        • so that capacity can be increased and decreased after running tasks have been drained
      • State, if needed, can be implemented using
        • a low-latency external store, e.g. DynamoDB or Redis, to maintain state information
        • Session affinity, e.g. ELB sticky sessions, to bind all transactions of a session to a specific compute resource. However, affinity cannot be guaranteed, and existing sessions cannot take advantage of newly added resources
      • Load can be distributed across multiple resources using
        • Push model, e.g. through ELB, which distributes the load across multiple EC2 instances
        • Pull model, e.g. through SQS or Kinesis, where multiple consumers subscribe and consume
      • Distributed processing, e.g. using EMR or Kinesis, helps process large amounts of data by dividing a task and its data into many small fragments of work
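
The push and pull models above can be sketched in a few lines; the instance and worker names here are made up, a plain queue stands in for SQS, and round-robin stands in for the load balancer:

```python
from collections import defaultdict, deque
from itertools import cycle

def distribute_push(requests, instances):
    """Push model (ELB-style): a load balancer assigns each incoming
    request to an instance in round-robin fashion."""
    assignment = defaultdict(list)
    ring = cycle(instances)
    for req in requests:
        assignment[next(ring)].append(req)
    return dict(assignment)

def distribute_pull(tasks, workers):
    """Pull model (SQS-style): each worker takes the next task from a
    shared queue whenever it is free, so load balances itself."""
    queue = deque(tasks)
    done = {w: [] for w in workers}
    while queue:
        for w in workers:
            if queue:
                done[w].append(queue.popleft())
    return done
```

With six requests and three instances, `distribute_push` spreads two requests per instance; with the pull model, a slow worker simply takes fewer tasks off the queue.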

Disposable Resources Instead of Fixed Servers

  • Resources need to be treated as temporary, disposable resources rather than as fixed, permanent on-premises resources
  • AWS focuses on the concept of Immutable infrastructure
    • a server, once launched, is never updated throughout its lifetime.
    • updates are performed by launching a new server with the latest configuration,
    • this ensures resources are always in a consistent (and tested) state and makes rollbacks easier
  • AWS provides multiple ways to instantiate compute resources in an automated and repeatable way
    • Bootstrapping
      • scripts to configure and set up resources, e.g. using user data scripts and cloud-init to install software or copy resources and code
    • Golden Images
      • a snapshot of a particular state of a resource,
      • provides faster start times and removes dependencies on configuration services or third-party repositories
    • Containers
      • AWS supports Docker images through Elastic Beanstalk and ECS
      • Docker allows packaging a piece of software in a Docker Image, which is a standardized unit for software development, containing everything the software needs to run: code, runtime, system tools, system libraries, etc
  • Infrastructure as Code
    • as AWS assets are programmable, techniques, practices, and tools from software development can be applied to make the whole infrastructure reusable, maintainable, extensible, and testable.
    • AWS provides services like CloudFormation, OpsWorks for deployment
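
To make the infrastructure-as-code idea concrete, here is a minimal sketch: a CloudFormation template is just a declarative document, so it can be generated, versioned, and tested like any other code. The AMI ID and instance type below are placeholders, and a real template would add security groups, tags, etc.:

```python
import json

def ec2_template(ami_id, instance_type):
    """Build a minimal CloudFormation template describing a single
    EC2 instance as a plain Python dict."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "WebServer": {
                "Type": "AWS::EC2::Instance",
                "Properties": {
                    "ImageId": ami_id,
                    "InstanceType": instance_type,
                },
            }
        },
    }

# Serialize to the JSON text that CloudFormation accepts.
template_body = json.dumps(ec2_template("ami-12345678", "t2.micro"), indent=2)
```

Because the template is ordinary code, the same definition can be launched repeatedly to produce identical, disposable stacks.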

Automation

  • AWS provides various automation tools and services which help improve the system’s stability, efficiency, and time to market.
    • Elastic Beanstalk
      • a PaaS that allows quick application deployment while handling resource provisioning, load balancing, auto scaling, monitoring etc
    • EC2 Auto Recovery
      • creates a CloudWatch alarm that monitors an EC2 instance and automatically recovers it if it becomes impaired.
      • A recovered instance is identical to the original instance, including the instance ID, private & Elastic IP addresses, and all instance metadata.
      • The instance is migrated through a reboot, so in-memory contents are lost.
    • Auto Scaling
      • helps maintain application availability and scales capacity up or down automatically as per defined conditions
    • CloudWatch Alarms
      • allows SNS triggers to be configured when a particular metric goes beyond a specified threshold for a specified number of periods
    • CloudWatch Events
      • delivers a near real-time stream of system events that describe changes in AWS resources
    • OpsWorks
      • allows continuous configuration through lifecycle events that automatically update the instances’ configuration to adapt to environment changes.
      • Events can be used to trigger Chef recipes on each instance to perform specific configuration tasks
    • Lambda Scheduled Events
      • allows creating a Lambda function and directing AWS Lambda to execute it on a regular schedule.
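
The CloudWatch alarm behaviour described above (a metric beyond a threshold for a specified number of periods) can be sketched as a simple evaluation function; the threshold and period values are illustrative:

```python
def alarm_state(datapoints, threshold, periods):
    """Return "ALARM" when the last `periods` datapoints all exceed
    the threshold, mimicking how a CloudWatch alarm evaluates a metric;
    otherwise return "OK"."""
    if len(datapoints) < periods:
        return "OK"                      # not enough data to evaluate
    recent = datapoints[-periods:]
    return "ALARM" if all(dp > threshold for dp in recent) else "OK"

# CPU at 80/90/95% for the last 3 periods with a 75% threshold -> ALARM,
# which could then publish to an SNS topic or trigger a scaling action.
state = alarm_state([50, 80, 90, 95], threshold=75, periods=3)
```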

Loose Coupling

  • AWS helps build loosely coupled architectures that reduce interdependencies, so that a change or failure in one component does not cascade to other components
    • Asynchronous Integration
      • does not involve direct point-to-point interaction but usually through an intermediate durable storage layer for e.g. SQS, Kinesis
      • decouples the components and introduces additional resiliency
      • suitable for any interaction that doesn’t need an immediate response and where an ack that a request has been registered will suffice
    • Service Discovery
      • allows new resources to be launched or terminated at any point in time and discovered automatically, e.g. using ELB as a single point of contact that hides the underlying instance details, or Route 53 zones to abstract the load balancer’s endpoint
    • Well-Defined Interfaces
      • allows various components to interact with each other through specific, technology-agnostic interfaces, e.g. RESTful APIs with API Gateway
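
Asynchronous integration through a durable intermediate store can be sketched as follows; a plain in-memory queue stands in for SQS, and the key point is that the producer only receives an acknowledgement that the request was registered, while the consumer drains the queue at its own pace:

```python
from collections import deque

class DurableQueue:
    """Stand-in for an intermediate store such as SQS: the producer is
    acknowledged as soon as the message is stored, not when it is
    eventually processed."""
    def __init__(self):
        self._messages = deque()

    def send(self, message):
        self._messages.append(message)
        return "ack"                     # immediate acknowledgement

    def receive(self):
        # Consumer pulls the oldest message, or None if the queue is empty.
        return self._messages.popleft() if self._messages else None

queue = DurableQueue()
ack = queue.send({"order_id": 42})       # producer returns right away
later = queue.receive()                  # consumer processes independently
```

If the consumer fails or slows down, messages simply accumulate in the store instead of the failure cascading back to the producer.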

Services, Not Servers

Databases

  • AWS provides different categories of database technologies
    • Relational Databases (RDS)
      • normalizes data into well-defined tabular structures known as tables, which consist of rows and columns
      • provide a powerful query language, flexible indexing capabilities, strong integrity controls, and the ability to combine data from multiple tables in a fast and efficient manner
      • allows vertical scalability by increasing resources and horizontal scalability using Read Replicas for read capacity and sharding or data partitioning for write capacity
      • provides High Availability using Multi-AZ deployment, where data is synchronously replicated
    • NoSQL Databases (DynamoDB)
      • provides databases that trade some of the query and transaction capabilities of relational databases for a more flexible data model that seamlessly scales horizontally
      • perform data partitioning and replication to scale both the reads and writes in a horizontal fashion
      • DynamoDB service synchronously replicates data across three facilities in an AWS region to provide fault tolerance in the event of a server failure or Availability Zone disruption
    • Data Warehouse (Redshift)
      • Specialized type of relational database, optimized for analysis and reporting of large amounts of data
      • Redshift achieves efficient storage and optimum query performance through a combination of massively parallel processing (MPP), columnar data storage, and targeted data compression encoding schemes
      • Redshift MPP architecture enables increasing performance by increasing the number of nodes in the data warehouse cluster
  • For more details refer to AWS Storage Options Whitepaper
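
Sharding for write capacity, mentioned above, can be sketched with a simple hash partitioner; real systems such as DynamoDB layer consistent hashing and replication on top of this idea, and the key names below are illustrative:

```python
import hashlib

def shard_for(key, shard_count):
    """Map a partition key to one of `shard_count` shards using a
    stable hash, so writes spread evenly across database nodes."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % shard_count

# With enough keys, writes land on every shard rather than one hot node.
used_shards = {shard_for(f"user-{i}", 4) for i in range(1000)}
```

The same key always maps to the same shard, which is what lets reads find the data that an earlier write placed there.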

Removing Single Points of Failure

  • AWS provides ways to implement redundancy, automate recovery and reduce disruption at every layer of the architecture
  • AWS supports redundancy in the following ways
    • Standby Redundancy
      • When a resource fails, functionality is recovered on a secondary resource using a process called failover.
      • Failover will typically require some time before it completes, and during that period the resource remains unavailable.
      • Secondary resource can either be launched automatically only when needed (to reduce cost), or it can be already running idle (to accelerate failover and minimize disruption).
      • Standby redundancy is often used for stateful components such as relational databases.
    • Active Redundancy
      • requests are distributed to multiple redundant compute resources, if one fails, the rest can simply absorb a larger share of the workload.
      • Compared to standby redundancy, it can achieve better utilization and affect a smaller population when there is a failure.
  • AWS supports replication
    • Synchronous replication
      • acknowledges a transaction after it has been durably stored in both the primary location and its replicas.
      • protects data integrity in the event of a primary node failure
      • used to scale read capacity for queries that require the most up-to-date data (strong consistency).
      • comes at the expense of write performance and availability
    • Asynchronous replication
      • decouples the primary node from its replicas at the expense of introducing replication lag
      • used to horizontally scale the system’s read capacity for queries that can tolerate that replication lag.
    • Quorum-based replication
      • combines synchronous and asynchronous replication to overcome the challenges of large-scale distributed database systems
      • Replication to multiple nodes can be managed by defining a minimum number of nodes that must participate in a successful write operation
  • AWS provides services to reduce or remove single points of failure
    • Regions, Availability Zones with multiple data centers
    • ELB or Route 53 to configure health checks and mask failure by routing traffic to healthy endpoints
    • Auto Scaling to automatically replace unhealthy nodes
    • EC2 auto-recovery to recover unhealthy impaired nodes
    • S3, DynamoDB with data redundantly stored across multiple facilities
    • Multi-AZ RDS and Read Replicas
    • ElastiCache Redis engine supports replication with automatic failover
  • For more details refer to AWS Disaster Recovery Whitepaper
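
The quorum-based replication described above can be sketched as: a write commits only if at least a minimum number of replicas acknowledge it. The node count and quorum size below are illustrative, with a `None` entry modelling a failed node:

```python
def quorum_write(replicas, key, value, write_quorum):
    """Write to every reachable replica; the write succeeds only if at
    least `write_quorum` replicas acknowledged it."""
    acks = 0
    for replica in replicas:
        if replica is not None:          # node is reachable
            replica[key] = value
            acks += 1
    return acks >= write_quorum

nodes = [{}, {}, None]                   # 3 replicas, one has failed
committed = quorum_write(nodes, "k", "v", write_quorum=2)
```

With 3 replicas and a write quorum of 2, the system tolerates one failed node while still guaranteeing that a subsequent quorum read overlaps with at least one up-to-date replica.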

Optimize for Cost

  • AWS can help organizations reduce capital expenses and drive savings as a result of the AWS economies of scale
  • AWS provides different options which should be utilized as per use case –
    • EC2 instance types – On Demand, Reserved and Spot
    • Trusted Advisor or EC2 usage reports to identify the compute resources and their usage
    • S3 storage class – Standard, Reduced Redundancy, and Standard-Infrequent Access
    • EBS volumes – Magnetic, General Purpose SSD, Provisioned IOPS SSD
    • Cost Allocation tags to identify costs based on tags
    • Auto Scaling to horizontally scale the capacity up or down based on demand
    • Lambda-based architectures, to never pay for idle or redundant resources
    • Utilize managed services where scaling is handled by AWS, e.g. ELB, CloudFront, Kinesis, SQS, CloudSearch etc.

Caching

  • Caching improves application performance and increases the cost efficiency of an implementation
    • Application Data Caching
      • provides services that help store and retrieve information from fast, managed, in-memory caches
      • ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud and supports two open-source in-memory caching engines: Memcached and Redis
    • Edge Caching
      • allows content to be served by infrastructure that is closer to viewers, lowering latency and giving high, sustained data transfer rates needed to deliver large popular objects to end users at scale.
      • CloudFront is a Content Delivery Network (CDN) consisting of multiple edge locations, which allows copies of static and dynamic content to be cached
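
Application-data caching is usually implemented with the cache-aside pattern: check the cache first, fall back to the database on a miss, then populate the cache for the next reader. A plain dict stands in for an ElastiCache node here, and the key and record are made up:

```python
def get_user(key, cache, database):
    """Cache-aside read: serve from the cache when possible, otherwise
    load from the database and populate the cache."""
    if key in cache:
        return cache[key], "hit"
    value = database[key]                # slow path: hit the database
    cache[key] = value                   # populate for subsequent reads
    return value, "miss"

cache = {}
db = {"user:1": {"name": "Ada"}}
first = get_user("user:1", cache, db)    # miss: loads from db, fills cache
second = get_user("user:1", cache, db)   # hit: served from the cache
```

A production version would also set a TTL so stale entries eventually expire, which is what ElastiCache engines provide out of the box.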

Security

  • AWS works on a shared security responsibility model
    • AWS is responsible for the security of the underlying cloud infrastructure
    • you are responsible for securing the workloads you deploy in AWS
  • AWS also provides ample security features
    • IAM to define a granular set of policies and assign them to users, groups, and AWS resources
    • IAM roles to assign short term credentials to resources, which are automatically distributed and rotated
    • Amazon Cognito, for mobile applications, which allows client devices to get controlled access to AWS resources via temporary tokens.
    • VPC to isolate parts of infrastructure through the use of subnets, security groups, and routing controls
    • WAF to help protect web applications from SQL injection and other vulnerabilities in the application code
    • CloudWatch Logs to collect logs centrally, as the servers are temporary
    • CloudTrail for auditing AWS API calls, which delivers a log file to S3 bucket. Logs can then be stored in an immutable manner and automatically processed to either notify or even take action on your behalf, protecting your organization from non-compliance
    • AWS Config, Amazon Inspector, and AWS Trusted Advisor to continually monitor for compliance or vulnerabilities giving a clear overview of which IT resources are in compliance, and which are not
  • For more details refer to AWS Security Whitepaper

References

Architecting for the Cloud: AWS Best Practices – Whitepaper – 2016

 

AWS Pricing – Whitepaper – Certification

AWS Pricing Whitepaper Overview

AWS pricing features include

  • Pay as you go
    • No minimum contracts/commitments or long-term contracts required
    • Pay only for services you use that can be stopped when not needed
    • Each service is charged independently, providing flexibility to choose services as needed
  • Pay less when you reserve
    • some services like EC2 provide reserved capacity, which provides significantly discounted rates and increases overall savings
  • Pay even less by using more
    • for some services, like storage and data transfer, the more the usage, the less you pay per gigabyte
    • consolidated billing to consolidate multiple accounts and get tiering benefits
  • Pay even less as AWS grows
    • AWS works continuously to reduce costs by reducing data center hardware costs, improving operational efficiencies, lowering power consumption, and generally lowering the cost of doing business
  • Free services
    • AWS offers many services free of charge, like AWS VPC, Elastic Beanstalk, CloudFormation, IAM, Auto Scaling, OpsWorks, and Consolidated Billing
  • Other features
    • AWS Free Tier for new customers, which offers free usage of services within permissible limits

AWS Pricing Resources

  • AWS Simple Monthly Calculator tool to effectively estimate the costs, which provides per service cost breakdown, as well as an aggregate monthly estimate.
  • AWS Economic Center provides access to information, tools, and resources to compare the costs of AWS services with IT infrastructure alternatives.
  • AWS Account Activity to view current charges and account activity, itemized by service and by usage type. Previous months’ billing statements are also available.
  • AWS Usage Reports provides usage reports specifying usage types, timeframe, service operations, and more; reports can be customized.

AWS Pricing Fundamental Characteristics

  • AWS basically charges for
    • Compute,
    • Storage and
    • Data Transfer Out – aggregated across EC2, S3, RDS, SimpleDB, SQS, SNS, and VPC and then charged at the outbound data transfer rate
  • AWS does not charge
    • Inbound data transfer across all AWS Services in all regions
    • Outbound data transfer between AWS services within the same region

AWS Elastic Compute Cloud – EC2

EC2 provides resizable compute capacity in cloud and the cost depends on –

  • Clock Hours of Server Time
    • Resources are charged for the time they are running
    • AWS updated EC2 billing from an hourly basis to per-second billing (circa Oct 2017). This takes the cost of unused minutes and seconds in an hour off the bill, so the focus is on improving applications instead of maximizing usage to the hour
  • Machine Configuration
    • Depends on the physical capacity and Instance pricing varies with the AWS region, OS, number of cores, and memory
  • Machine Purchase Type
    • On Demand instances – pay for compute capacity with no required minimum commitments
    • Reserved Instances – option to make a low one-time payment – or no payment at all – for each reserved instance and in turn receive a significant discount on the usage
    • Spot Instances – bid for unused EC2 capacity
  • Auto Scaling & Number of Instances
    • Auto Scaling automatically adjusts the number of EC2 instances
  • Load Balancing
    • ELB can be used to distribute traffic among EC2 instances.
    • Number of hours the ELB runs and the amount of data it processes contribute to the monthly cost.
  • CloudWatch Detailed Monitoring
    • Basic monitoring is enabled and available at no additional cost
    • Detailed monitoring, which includes seven preselected metrics recorded once a minute, can be availed for a fixed monthly rate
    • Partial months are charged on an hourly pro rata basis, at a per instance-hour rate
  • Elastic IP Addresses
    • Elastic IP addresses are charged only when they are not associated with a running instance
  • Operating Systems and Software Packages
    • OS prices are included in the instance prices. There are no additional licensing costs to run the following commercial OSes: RHEL, SUSE Enterprise Linux, Windows Server, and Oracle Enterprise Linux
    • For unsupported commercial software packages, license needs to be obtained
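
The per-second billing arithmetic above can be made concrete with a small helper; the hourly rate is a made-up figure, and the sketch assumes the 60-second minimum charge that applies to per-second-billed Linux instances:

```python
def on_demand_cost(hourly_rate, seconds_used, minimum_seconds=60):
    """Cost under per-second billing: usage is rounded up to the
    minimum charge and then billed per second at hourly_rate / 3600."""
    billable = max(seconds_used, minimum_seconds)
    return round(hourly_rate / 3600 * billable, 6)

# A hypothetical $0.10/hour instance running for 10 minutes costs a
# sixth of the hourly rate, instead of a full hour under hourly billing.
ten_minutes = on_demand_cost(0.10, 10 * 60)
```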

AWS Simple Storage Service – S3

S3 provides object storage and the cost depends on

  • Storage Class
    • Each storage class has different rates and provides different capabilities
    • Standard Storage is designed to provide 99.999999999% durability and 99.99% availability.
    • Standard – Infrequent Access (SIA) is a storage option within S3 that can reduce costs by storing less frequently accessed data at a lower rate than Amazon S3’s standard storage.
    • Standard – Infrequent Access stores data at slightly lower levels of redundancy and is designed to provide the same 99.999999999% durability as S3 Standard, with 99.9% availability in a given year.
  • Storage
    • Number and size of objects stored in the S3 buckets as well as type of storage.
  • Requests
    • Number and type of requests. GET requests incur charges at different rates than other requests, such as PUT and COPY requests.
  • Data Transfer Out
    • Amount of data transferred out of the S3 region.

AWS Elastic Block Store – EBS

EBS provides block level storage volumes and the cost depends on

  • Volumes
    • EBS provides three volume types: General Purpose (SSD), Provisioned IOPS (SSD), and Magnetic, charged by the amount provisioned in GB per month, until it is released
  • Input Output Operations per Second (IOPS)
    • With General Purpose (SSD) volumes, I/O is included in the price
    • With EBS Magnetic volumes, I/O is charged by the number of requests made to the volume
    • With Provisioned IOPS (SSD) volumes, I/O is charged by the amount of IOPS provisioned, multiplied by the percentage of days provisioned for the month
  • Data Transfer Out
    • Amount of data transferred out of the application and outbound data transfer charges are tiered.
  • Snapshot
    • Snapshots of data to S3 are created for durable recovery. If opted for EBS snapshots, the added cost is per GB-month of data stored.

AWS Relational Database Service – RDS

RDS makes it easy to set up, operate, and scale a relational database in the cloud, and the cost depends on

  • Clock Hours of Server Time
    • Resources are charged for the time they are running, from the time a DB instance is launched until terminated
  • Database Characteristics
    • Depends on the physical capacity and Instance pricing varies with the database engine, size, and memory class.
  • Database Purchase Type
    • On Demand instances – pay for compute capacity for each hour the DB Instance runs with no required minimum commitments
    • Reserved Instances – option to make a low, one-time, up-front payment for each DB Instance to reserve for a 1-year or 3-year term and in turn receive a significant discount on the usage
  • Number of Database Instances
    • multiple DB instances can be provisioned to handle peak loads
  • Provisioned Storage
    • Backup storage of up to 100% of a provisioned database storage for an active DB Instance is not charged
    • After the DB Instance is terminated, backup storage is billed per gigabyte per month.
  • Additional Storage
    • Amount of backup storage in addition to the provisioned storage amount is billed per gigabyte per month.
  • Requests
    • Number of input and output requests to the database.
  • Deployment Type
    • Storage and I/O charges vary, depending on the number of AZs the RDS is deployed – Single AZ or Multi-AZ
  • Data Transfer Out
    • Outbound data transfer costs are tiered.
    • Inbound data transfer is free

AWS CloudFront

CloudFront is a web service for content delivery and an easy way to distribute content to end users with low latency, high data transfer speeds, and no required minimum commitments.

  • Traffic Distribution
    • Data transfer and request pricing vary across geographic regions, and pricing is based on edge location through which the content is served
  • Requests
    • Number and type of requests (HTTP or HTTPS) made and the geographic region in which the requests are made.
  • Data Transfer Out
    • Amount of data transferred out of the CloudFront edge locations

References

AWS Pricing Whitepaper – 2016

 

 

AWS SQS – Standard vs FIFO Queue – Certification

SQS offers two types of queues – Standard & FIFO queues

SQS Standard vs FIFO Queue

Message Order

  • Standard queues provide best-effort ordering, which ensures that messages are generally delivered in the same order as they are sent. Occasionally (because of the highly distributed architecture that allows high throughput), more than one copy of a message might be delivered, or messages might arrive out of order
  • FIFO queues offer first-in-first-out delivery and exactly-once processing: the order in which messages are sent and received is strictly preserved

Delivery

  • Standard queues guarantee that a message is delivered at least once and duplicates can be introduced into the queue
  • FIFO queues ensure a message is delivered exactly once and remains available until a consumer processes and deletes it; duplicates are not introduced into the queue

Transactions Per Second (TPS)

  • Standard queues allow a nearly unlimited number of transactions per second
  • FIFO queues are limited to 300 transactions per second per API action

Regions

  • Standard queues are available in all the regions
  • FIFO queues are currently available in limited regions including the US West (Oregon), US East (Ohio), US East (N. Virginia), and EU (Ireland)

SQS Buffered Asynchronous Client

  • FIFO queues aren’t currently compatible with the SQS Buffered Asynchronous Client, where messages are buffered on the client side and sent as a single request to the SQS queue to reduce cost.

AWS Services Supported

  • Standard Queues are supported by all AWS services
  • FIFO Queues are currently not supported by some AWS services, like
    • CloudWatch Events
    • S3 Event Notifications
    • SNS Topic Subscriptions
    • Auto Scaling Lifecycle Hooks
    • AWS IoT Rule Actions
    • AWS Lambda Dead Letter Queues

Use Cases

  • Standard queues can be used in any scenario, as long as the application can process messages that arrive more than once and out of order
    • Decouple live user requests from intensive background work: Let users upload media while resizing or encoding it.
    • Allocate tasks to multiple worker nodes: Process a high number of credit card validation requests.
    • Batch messages for future processing: Schedule multiple entries to be added to a database.
  • FIFO queues are designed to enhance messaging between applications when the order of operations and events is critical, or where duplicates can’t be tolerated
    • Ensure that user-entered commands are executed in the right order.
    • Display the correct product price by sending price modifications in the right order.
    • Prevent a student from enrolling in a course before registering for an account.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might become outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.

AWS SQS FIFO Queue – Certification

  • SQS FIFO Queue provides enhanced messaging between applications with the additional features
    • FIFO (First-In-First-Out) delivery
      • order in which messages are sent and received is strictly preserved
      • key when the order of operations & events is critical
    • Exactly-once processing
      • a message is delivered once and remains available until a consumer processes and deletes it
      • key when duplicates can’t be tolerated.
      • limited to 300 transactions per second (TPS)
  • FIFO queues provide all the capabilities of Standard queues, and improve upon and complement the standard queue.
  • FIFO queues support message groups that allow multiple ordered message groups within a single queue.
  • FIFO queues are available in the US East (N. Virginia), US East (Ohio), US West (Oregon), and EU (Ireland) regions (keeps on changing)
  • FIFO Queue name should end with .fifo
  • SQS Buffered Asynchronous Client doesn’t currently support FIFO queues
  • Not all AWS services support FIFO queues, e.g.
    • Auto Scaling Lifecycle Hooks
    • Amazon CloudWatch Events
    • AWS IoT Rule Actions
    • AWS Lambda Dead-Letter Queues
    • Amazon S3 Event Notifications
    • Amazon SNS Topic Subscriptions

Message Deduplication

  • SQS APIs provide deduplication functionality that prevents the message producer from sending duplicates.
  • Message deduplication ID is the token used for deduplication of sent messages.
  • If a message with a particular message deduplication ID is sent successfully, any messages sent with the same message deduplication ID are accepted successfully but aren’t delivered during the 5-minute deduplication interval.
  • So basically, any duplicates introduced by the message producer are removed within a 5-minute deduplication interval
  • Message deduplication applies to an entire queue, not to individual message groups

Message groups

  • Messages are grouped into distinct, ordered “bundles” within a FIFO queue
  • Message group ID is the tag that specifies that a message belongs to a specific message group
  • For each message group ID, all messages are sent & received in strict order
  • However, messages with different message group ID values might be sent and received out of order.
  • Every message must be associated with a message group ID, without which the action fails.
  • If multiple hosts (or different threads on the same host) send messages with the same message group ID, SQS delivers the messages in the order in which they arrive for processing
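
The message-group behaviour above can be simulated in a few lines: within each message group ID the send order is strictly preserved, while different groups are independent of each other. The group names and message bodies are illustrative:

```python
from collections import defaultdict

def deliver(messages):
    """Group (group_id, body) pairs by message group ID, preserving the
    send order inside each group, as a FIFO queue does."""
    groups = defaultdict(list)
    for group_id, body in messages:
        groups[group_id].append(body)
    return dict(groups)

sent = [("orders", "create"), ("users", "signup"), ("orders", "pay")]
received = deliver(sent)    # "create" is guaranteed to precede "pay"
```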

Also Refer Blog Post – SQS Standard Queues vs FIFO Queues


AWS EC2 Container Service ECS – Certification

  • AWS EC2 Container Service (ECS) is a highly scalable, high performance container management service that supports Docker containers and allows running applications on a managed cluster of EC2 instances
  • ECS eliminates the need to install, operate, and scale the cluster management infrastructure.
  • ECS is a regional service that simplifies running application containers in a highly available manner across multiple AZs within a region
  • ECS helps schedule the placement of containers across the cluster based on the resource needs and availability requirements.
  • ECS allows integration of your own custom scheduler or third-party schedulers to meet business or application specific requirements.

ECS Elements

Containers and Images

  • Applications deployed on ECS must be architected to run in docker containers, which is a standardized unit of software development, containing everything that the software application needs to run: code, runtime, system tools, system libraries, etc.
  • Containers are created from a read-only template called an image.
  • Images are typically built from a Dockerfile, and stored in a registry from which they can be downloaded and run on your container instances.
  • ECS can be configured to access a private Docker image registry within a VPC or Docker Hub, and is integrated with EC2 Container Registry (ECR)

Task Definitions

  • A task definition is needed to prepare an application to run on ECS
  • A task definition is a text file in JSON format that describes one or more containers that form the application.
  • Task definitions specify various parameters for the application, such as containers to use, their repositories, ports to be opened, and data volumes
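
A minimal task definition, with illustrative names and values only, might look like the sketch below; building it in code lets it be validated and versioned before it is registered with ECS:

```python
import json

# Hypothetical task definition for a single nginx container.
task_definition = {
    "family": "web-app",                  # illustrative family name
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:latest",       # image pulled from a registry
            "memory": 256,                 # hard memory limit in MiB
            "portMappings": [
                {"containerPort": 80, "hostPort": 80}
            ],
        }
    ],
}

# Serialize to the JSON text form that a task definition file uses.
task_definition_json = json.dumps(task_definition, indent=2)
```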

Tasks and Scheduling

  • A task is the instantiation of a task definition on a container instance within the cluster.
  • After a task definition is created for the application within ECS, you can specify the number of tasks that will run on the cluster.
  • ECS task scheduler is responsible for placing tasks on container instances, with several different scheduling options available

Clusters

  • Cluster is a logical grouping of EC2 instances to run tasks using ECS
  • ECS downloads the container images from the specified registry, and runs those images on the container instances within your cluster.

Container Agent

  • Container agent runs on each instance within an ECS cluster
  • Container Agent sends information about the instance’s current running tasks and resource utilization to ECS, and starts and stops tasks whenever it receives a request from ECS

ECS vs Elastic Beanstalk

  • ECS helps in having a more fine-grained control for custom application architectures.
  • Elastic Beanstalk is ideal when you want to leverage the benefits of containers but just want the simplicity of deploying applications from development to production by uploading a container image.
  • Elastic Beanstalk is more of an application management platform that helps customers easily deploy and scale web applications and services.
  • With Elastic Beanstalk, specify container images to be deployed, with the CPU & memory requirements, port mappings and container links.
  • Elastic Beanstalk abstracts the finer details and automatically handles all the details such as provisioning an ECS cluster, balancing load, auto-scaling, monitoring, and placing the containers across the cluster.

ECS vs Lambda

  • EC2 Container Service is a highly scalable Docker container management service that allows running and managing distributed applications in Docker containers.
  • AWS Lambda is an event-driven task compute service that runs code (Lambda functions) in response to “events” from event sources like SES, SNS, DynamoDB & Kinesis Streams, CloudWatch etc.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.

AWS ELB Application Load Balancer – Certification

AWS ELB Application Load Balancer

  • An Application Load Balancer is a load balancing option for the ELB service that operates at layer 7 (application layer) and allows defining routing rules based on content across multiple services or containers running on one or more EC2 instances.

Application Load Balancer Benefits

  • Support for Path-based routing, where listener rules can be configured to forward requests based on the URL in the request. This enables structuring the application as smaller services, and routing requests to the correct service based on the content of the URL.
  • Support for routing requests to multiple services on a single EC2 instance by registering the instance using multiple ports.
  • Support for containerized applications. EC2 Container Service (ECS) can select an unused port when scheduling a task and register the task with a target group using this port, enabling efficient use of the clusters.
  • Support for monitoring the health of each service independently, as health checks are defined at the target group level and many CloudWatch metrics are reported at the target group level. Attaching a target group to an Auto Scaling group enables you to scale each service dynamically based on demand.

Application Load Balancer Features

  • supports load balancing of applications using HTTP and HTTPS (Secure HTTP) protocols
  • supports HTTP/2, which is enabled natively. Clients that support HTTP/2 can connect over TLS
  • supports WebSockets and Secure WebSockets natively
  • supports Sticky Sessions (Session Affinity) using load balancer generated cookies, to route requests from the same client to the same target
  • supports SSL termination, to decrypt the request on ALB before sending it to the underlying targets.
  • supports layer 7 specific features like X-Forwarded-For headers to help determine the actual client IP, port and protocol
  • automatically scales its request handling capacity in response to incoming application traffic.
  • provides High Availability, by allowing you to specify more than one AZ and distribution of incoming traffic across multiple AZs.
  • integrates with ACM to provision and bind an SSL/TLS certificate to the load balancer, thereby making the entire SSL offload process very easy
  • supports IPv6 addressing, for an Internet facing load balancer
  • supports Request Tracking, wherein a new custom identifier “X-Amzn-Trace-Id” HTTP header is injected on all requests to help track the request flow across various services
  • supports Security Groups to control the traffic allowed to and from the load balancer.
  • provides Access Logs, to record all requests sent to the load balancer, and store the logs in S3 for later analysis in compressed format
  • provides Deletion Protection, to prevent the ALB from accidental deletion
  • supports Connection Idle Timeout – ALB maintains two connections for each request: one with the client (front end) and one with the target instance (back end). If no data has been sent or received by the time that the idle timeout period elapses, ALB closes the front-end connection
  • integrates with CloudWatch to provide metrics such as request counts, error counts, error types, and request latency
  • integrates with AWS WAF, a web application firewall that helps protect web applications from attacks by allowing rules configuration based on IP addresses, HTTP headers, and custom URI strings
  • integrates with CloudTrail to receive a history of ALB API calls made on the AWS account
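
Since the ALB appends the connecting client's address to the X-Forwarded-For header, a backend behind the load balancer can recover the original client IP; a minimal sketch (the header values are hypothetical):

```python
def original_client_ip(xff_header):
    """Extract the original client IP from an X-Forwarded-For header.

    The load balancer appends the address it saw, so the leftmost entry is
    the original client (assuming no spoofed header was sent upstream).
    """
    return xff_header.split(",")[0].strip()
```

For example, `original_client_ip("203.0.113.7, 10.0.1.5")` returns the client address `203.0.113.7` rather than the internal hop.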

Application Load Balancer Listeners

  • A listener is a process that checks for connection requests, using the configured protocol and port
  • Listener supports HTTP & HTTPS protocols on ports 1-65535
  • ALB supports SSL Termination for HTTPS listener, which helps to offload the work of encryption and decryption so that the targets can focus on their main work.
  • HTTPS listener supports exactly one SSL server certificate on the listener.
  • supports WebSockets with both HTTP and HTTPS listeners (Secure WebSockets)
  • supports HTTP/2 with HTTPS listeners
    • Up to 128 requests can be sent in parallel using one HTTP/2 connection.
    • ALB converts these to individual HTTP/1.1 requests and distributes them across the healthy targets in the target group using the round robin routing algorithm.
    • HTTP/2 uses front-end connections more efficiently resulting in fewer connections between clients and the load balancer.
    • Server-push feature of HTTP/2 is not supported
  • Each listener has a default rule, and can optionally define additional rules.
  • Each rule consists of a priority, action, optional host condition, and optional path condition.
    • Priority – Rules are evaluated in priority order, from the lowest value to the highest value. The default rule is evaluated last
    • Action – Each rule action has a type and a target group. Currently, the only supported type is forward, which forwards requests to the target group. You can change the target group for a rule at any time.
    • Condition – There are two types of rule conditions: host and path. When the conditions for a rule are met, then its action is taken
  • Host Condition or Host-based routing
    • Host conditions can be used to define rules that forward requests to different target groups based on the host name in the host header
    • This enables support for multiple domains using a single ALB for e.g. orders.example.com, images.example.com, registration.example.com
    • Each host condition has one hostname. If the hostname in the host header matches the hostname in a listener rule, the request is routed using that rule.
  • Path Condition or path-based routing
    • Path conditions can be used to define rules that forward requests to different target groups based on the URL in the request
    • Each path condition has one path pattern for e.g. example.com/orders, example.com/images, example.com/registration
    • If the URL in a request matches the path pattern in a listener rule exactly, the request is routed using that rule.
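
The rule evaluation described above can be sketched as follows; this is a simplified model (exact matching only, whereas real ALB conditions also support wildcard patterns), and the rule values are hypothetical:

```python
def evaluate_rules(rules, default_target, host, path):
    """Pick a target group the way an ALB listener does: rules are checked
    in ascending priority order, and the first rule whose optional host and
    path conditions both match wins; otherwise the default rule applies.
    Simplified sketch: real ALB conditions also support wildcard patterns."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule.get("host") is not None and rule["host"] != host:
            continue
        if rule.get("path") is not None and rule["path"] != path:
            continue
        return rule["target_group"]
    return default_target

# Hypothetical rules: one host condition, one path condition
example_rules = [
    {"priority": 10, "host": "orders.example.com", "path": None,
     "target_group": "tg-orders"},
    {"priority": 20, "host": None, "path": "/images",
     "target_group": "tg-images"},
]
```

A request to `orders.example.com/` matches the host rule, a request to any host with path `/images` matches the path rule, and anything else falls through to the default target group.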


References

ELB_Application_Load_Balancer

AWS Classic Load Balancer vs Application Load Balancer – Certification

AWS Classic Load Balancer vs Application Load Balancer

Elastic Load Balancing supports two types of load balancers: Application Load Balancers and Classic Load Balancers. While there is some overlap in the features, AWS does not maintain feature parity between the two types of load balancers. The content below lists the feature comparison for both.

Usage Pattern

  • A Classic Load Balancer is ideal for simple load balancing of traffic across multiple EC2 instances.
  • Application Load Balancer is ideal for microservices or container-based architectures where there is a need to route traffic to multiple services or load balance across multiple ports on the same EC2 instance.

Supported Protocols

  • Classic Load Balancer operates at layer 4 and supports HTTP, HTTPS, TCP, SSL while Application Load Balancer operates at layer 7 and supports HTTP, HTTPS, HTTP/2, WebSockets
  • If Layer-4 features are needed, Classic Load Balancers should be used

Supported Platforms

  • Classic Load Balancer supports both EC2-Classic and EC2-VPC while Application Load Balancer supports only EC2-VPC

Sticky Sessions (Cookies)

  • Sticky Sessions (Session Affinity) enable the load balancer to bind a user’s session to a specific instance, ensuring that all requests from the user during the session are sent to the same instance
  • Both Classic & Application Load Balancer support sticky sessions to maintain session affinity

Idle Connection Timeout

  • Idle Connection Timeout helps specify a time period, which ELB uses to close the connection if no data has been sent or received by the time that the idle timeout period elapses
  • Both Classic & Application Load Balancer support idle connection timeout

Connection Draining

  • Connection draining enables the load balancer to complete in-flight requests made to instances that are de-registering or unhealthy
  • Both Classic & Application Load Balancer support connection draining

SSL Termination

  • Both Classic Load Balancer and ALB support SSL Termination to decrypt requests from clients before sending them to targets and hence reducing the load. SSL certificate must be installed on the load balancer.

Back-end Server Authentication

  • Back-end Server Authentication enables authentication of the instances. The load balancer communicates with an instance only if the public key that the instance presents to the load balancer matches a public key in the authentication policy for the load balancer.
  • Classic Load Balancer supports Back-end Server Authentication, while Application Load Balancer does not

Cross-zone Load Balancing

  • Cross-zone Load Balancing helps distribute incoming requests evenly across all instances in the enabled AZs. By default, the load balancer distributes requests evenly across its enabled AZs, irrespective of the number of instances each AZ hosts.
  • Both Classic & Application Load Balancer support cross-zone load balancing; however, for Classic it needs to be enabled, while for ALB it is always enabled

Health Checks

  • Both Classic & Application Load Balancer support health checks to determine if an instance is healthy or unhealthy
  • ALB provides health check improvements that allow detailed success codes in the 200-399 range to be configured as healthy
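
The configurable success-code semantics can be sketched as a small checker; the matcher string format (single codes, ranges, comma-separated mixes) mirrors the ALB health check "success codes" setting, and the values are illustrative:

```python
def is_healthy(status_code, matcher="200-399"):
    """Check an HTTP status code against an ALB-style success-code matcher.

    The matcher may be a single code ("200"), a range ("200-299"), or a
    comma-separated mix of both ("200,301-302"). Values are illustrative.
    """
    for part in matcher.split(","):
        if "-" in part:
            low, high = part.split("-")
            if int(low) <= status_code <= int(high):
                return True
        elif status_code == int(part):
            return True
    return False
```

With the default matcher, a 302 redirect from a target counts as healthy while a 500 does not.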

CloudWatch Metrics

  • Both Classic & Application Load Balancer integrate with CloudWatch to provide metrics, with ALB providing additional metrics

Access Logs

  • Access logs capture detailed information about requests sent to the load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses
  • Both Classic & Application Load Balancer provide access logs, with ALB providing additional attributes

Host-based Routing & Path-based Routing

  • Host-based routing uses host conditions to define rules that forward requests to different target groups based on the host name in the host header. This enables ALB to support multiple domains using a single load balancer.
  • Path-based routing uses path conditions to define rules that forward requests to different target groups based on the URL in the request. Each path condition has one path pattern. If the URL in a request matches the path pattern in a listener rule exactly, the request is routed using that rule.
  • Only ALB supports Host-based & Path-based routing.

Dynamic Ports

  • Only ALB supports Dynamic Port Mapping with ECS, which allows multiple containers of a service to run on a single EC2 instance on dynamically assigned ports that ALB automatically detects and registers

Deletion Protection

  • Only ALB supports Deletion Protection, wherein a load balancer can’t be deleted if deletion protection is enabled

Request Tracing

  • Only ALB supports Request Tracing to track HTTP requests from clients to targets or other services.

IPv6 in VPC

  • Only ALB supports IPv6 in VPC

AWS WAF

  • Only ALB supports AWS WAF, which can be directly used on ALBs (both internal and external) in a VPC, to protect websites and web services


AWS API Gateway – Certification

AWS API Gateway

  • AWS API Gateway is a fully managed service that makes it easy for developers to publish, maintain, monitor, and secure APIs at any scale
  • API Gateway handles all of the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management.
  • API Gateway has no minimum fees or startup costs and charges only for the API calls received and the amount of data transferred out.
  • API Gateway acts as a proxy to the configured backend operations.
  • API Gateway scales automatically to handle the amount of traffic the API receives
  • API Gateway exposes HTTPS endpoints only for all the APIs created; it does not support unencrypted (HTTP) endpoints
  • APIs built on API Gateway can accept any payloads sent over HTTP, with typical data formats including JSON, XML, query string parameters, and request headers.
  • API Gateway can communicate to multiple backends
    • Lambda functions
    • AWS Step functions state machines
    • HTTP endpoints exposed through Elastic Beanstalk, ELB or EC2 instances
    • non-AWS hosted HTTP based operations accessible via public Internet
  • Amazon API Gateway endpoints are always public to the Internet and do not run within a VPC. Proxy requests to backend operations also need to be publicly accessible on the Internet.


API Gateway helps with several aspects of creating and managing APIs

  • Metering
    • automatically meters traffic to your APIs and lets you extract utilization data for each API key.
    • lets you define plans that meter and restrict third-party developer access, and configure throttling and quota limits on a per API key basis
  • Security
    • helps remove authorization concerns from the backend code
    • allows you to leverage AWS administration and security tools, such as IAM and Cognito, to authorize access to APIs
    • can verify signed API calls on your behalf using the same methodology AWS uses for its own APIs
    • supports custom authorizers written as Lambda functions to verify incoming bearer tokens
    • automatically protects the backend systems from distributed denial-of-service (DDoS) attacks, whether attacked with counterfeit requests (Layer 7) or SYN floods (Layer 3).
  • Resiliency
    • helps manage traffic with throttling so that backend operations can withstand traffic spikes
    • helps improve the performance of the APIs and the latency end users experience by caching the output of API calls to avoid calling the backend every time.
  • Operations Monitoring
    • integrates with CloudWatch and provides a metrics dashboard to monitor calls to API services
    • integrates with CloudWatch Logs to receive error, access or debug logs
    • provides backend performance metrics covering API calls, latency data and error rates.
  • Lifecycle Management
    • allows multiple API versions and multiple stages (development, staging, production etc.) for each version simultaneously so that existing applications can continue to call previous versions after new API versions are published.
    • saves the history of the deployments, which allows rollback of a stage to a previous deployment at any point, using APIs or console
  • Designed for Developers
    • allows you to specify a mapping template to generate static content to be returned, helping you mock APIs before the backend is ready
    • helps reduce cross-team development effort and time-to-market for applications, and allows dependent teams to begin development while backend processes are still being built

API Gateway Throttling and Caching

  • Throttling
    • API Gateway provides throttling at multiple levels including global and by service call and limits can be set for standard rates and bursts
    • It tracks the number of requests per second. Any requests over the limit will receive a 429 HTTP response
    • Throttling ensures that API traffic is controlled to help the backend services maintain performance and availability.
  • Caching
    • API Gateway provides API result caching by provisioning an API Gateway cache and specifying its size in gigabytes
    • Caching helps improve performance and reduces the traffic sent to the back end
  • API Gateway handles the request in the following manner
    • If caching is not enabled and throttling limits have not been applied, then all requests pass through to the backend service until the account level throttling limits are reached.
    • If throttling limits are specified, then API Gateway will shed the necessary amount of requests, sending only the defined limit to the back-end
    • If a cache is configured, then API Gateway will return a cached response for duplicate requests for a customizable time, but only if under configured throttling limits
  • API Gateway does not arbitrarily limit or throttle invocations to the backend operations; all requests that are not intercepted by the throttling and caching settings are sent to your backend operations.
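
The request-handling order described above (check the throttle first, then the cache, then call the backend) can be modeled with a toy sketch; the class, limits, and backend below are illustrative, not real API Gateway internals:

```python
import time

class ApiGatewaySketch:
    """Toy model of the request flow: check the throttle first, then the
    cache, and only then call the backend. Names and numbers are
    illustrative, not real API Gateway internals."""

    def __init__(self, backend, rate_limit, cache_ttl):
        self.backend = backend
        self.rate_limit = rate_limit      # max requests per one-second window
        self.cache_ttl = cache_ttl        # seconds a cached response lives
        self.cache = {}                   # request -> (response, stored_at)
        self.window = None                # current one-second window
        self.count = 0

    def handle(self, request, now=None):
        now = time.time() if now is None else now
        window = int(now)
        if window != self.window:         # new one-second window: reset counter
            self.window, self.count = window, 0
        self.count += 1
        if self.count > self.rate_limit:  # over the limit: shed the request
            return 429
        cached = self.cache.get(request)
        if cached and now - cached[1] < self.cache_ttl:
            return cached[0]              # duplicate request served from cache
        response = self.backend(request)  # otherwise pass through to backend
        self.cache[request] = (response, now)
        return response

# Demo backend that records each call that actually reaches it
backend_calls = []

def demo_backend(request):
    backend_calls.append(request)
    return 200

gateway = ApiGatewaySketch(demo_backend, rate_limit=2, cache_ttl=60)
```

With a limit of 2 requests per second, a repeated request is served from the cache (the backend is hit once), and a third request in the same second gets a 429 response.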


References

AWS_API_Gateway_Developer_Guide

AWS Lambda – Certification

AWS Lambda

  • AWS Lambda offers Serverless computing that allows you to build and run applications and services without thinking about servers, which are managed by AWS
  • Lambda lets you run code without provisioning or managing servers, where you pay only for the compute time when the code is running.
  • Lambda is priced on a pay per use basis and there are no charges when the code is not running
  • Lambda allows you to run code for any type of application or backend service with zero administration
  • Lambda performs all the operational and administrative activities on your behalf, including capacity provisioning, monitoring fleet health, applying security patches to the underlying compute resources, deploying code, running a web service front end, and monitoring and logging the code.
  • Lambda provides easy scaling and high availability to your code without additional effort on your part.
  • Lambda does not provide access to the underlying compute infrastructure
  • Lambda is designed to process events within milliseconds. Latency will be higher immediately after a Lambda function is created, updated, or if it has not been used recently.
  • Lambda is designed to use replication and redundancy to provide high availability for both the service itself and for the Lambda functions it operates. There are no maintenance windows or scheduled downtimes for either.
  • Lambda stores code in S3 and encrypts it at rest and performs additional integrity checks while the code is in use.
  • Lambda supports code written in Node.js (JavaScript), Python, Java (Java 8 compatible), and C# (.NET Core)
  • All calls made to AWS Lambda must complete execution within 300 seconds. The default timeout is 3 seconds, but you can set the timeout to any value between 1 and 300 seconds.

Lambda Functions & Event Sources

Core components of Lambda are Lambda functions and event sources.

  • An event source is the AWS service or custom application that publishes events
  • Lambda function is the custom code that processes the events
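
A minimal Lambda function in Python is just a handler that the runtime calls with the event published by the event source and a context object carrying runtime information; the event shape here is hypothetical:

```python
def handler(event, context):
    """A minimal Lambda function: the runtime calls this entry point with
    the event from the event source and a context object. The event shape
    here ({"name": ...}) is hypothetical."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

When deployed, the function is referenced by its handler name (module and function), and the same code runs unchanged for every event the source publishes.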

Lambda Functions

  • Each Lambda function has associated configuration information, such as its name, description, entry point, and resource requirements
  • Lambda functions should be stateless, to allow AWS Lambda to launch as many copies of the function as needed as per the demand. State, if needed, can be maintained externally in DynamoDB or S3
  • Each Lambda function receives 512 MB of non-persistent disk space in its own /tmp directory.
  • Lambda functions have the following restrictions
    • Inbound network connections are blocked by AWS Lambda
    • For outbound connections, only TCP/IP sockets are supported
    • ptrace (debugging) system calls are blocked
    • TCP port 25 traffic is also blocked as an anti-spam measure.
  • Lambda automatically monitors Lambda functions, reporting real-time metrics through CloudWatch, including total requests, latency, error rates, and throttled requests
  • Lambda automatically integrates with Amazon CloudWatch logs, creating a log group for each Lambda function and providing basic application lifecycle event log entries, including logging the resources consumed for each use of that function
  • Each AWS Lambda function has a single, current version of the code; immutable versions can be published from it, and aliases can be used to manage and point to those versions.
    • Each Lambda function version has a unique ARN and after it is published it is immutable (that is, it can’t be changed).
    • Lambda supports creating aliases for each Lambda function versions.
    • Conceptually, an AWS Lambda alias is a pointer to a specific Lambda function version, but it is also a resource similar to a Lambda function, and each alias has a unique ARN.
    • Each alias maintains an ARN for a function version to which it points
    • An alias can only point to a function version, not to another alias
    • Unlike versions, which are immutable, aliases are mutable (that is, they can be changed) and can be updated to point to different versions
  • For failures, Lambda functions being invoked asynchronously are retried twice. Events from Kinesis and DynamoDB streams are retried until the Lambda function succeeds or the data expires. Kinesis and DynamoDB Streams retain data for a minimum of 24 hours.

Lambda Event Sources

Refer Blog Post – Lambda Event Source

Lambda Best Practices

  • Lambda function code should be stateless, and ensure there is no affinity between the code and the underlying compute infrastructure.
  • Instantiate AWS clients outside the scope of the handler to take advantage of connection re-use.
  • Make sure you have set +rx permissions on your files in the uploaded ZIP to ensure Lambda can execute code on your behalf.
  • Lower costs and improve performance by minimizing the use of startup code not directly related to processing the current event.
  • Use the built-in CloudWatch monitoring of your Lambda functions to view and optimize request latencies.
  • Delete old Lambda functions that you are no longer using.
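
The "instantiate clients outside the handler" advice relies on the execution environment being reused across warm invocations; a sketch using a placeholder for the expensive construction (a real function would create, e.g., a boto3 client here, which needs AWS credentials):

```python
# Module scope: executed once per container, then re-used across invocations.
# A counter stands in for the expensive client construction (e.g. a boto3
# client in a real function) so the re-use is visible.
CONSTRUCTIONS = {"count": 0}

def make_client():
    CONSTRUCTIONS["count"] += 1          # stands in for expensive setup
    return object()

CLIENT = make_client()                   # created once, outside the handler

def handler(event, context):
    # Re-uses CLIENT (and its connections) on every warm invocation.
    return {"client_id": id(CLIENT), "constructions": CONSTRUCTIONS["count"]}
```

Two invocations in the same container see the same client object and a single construction, which is exactly the connection re-use the best practice is after.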

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Your serverless architecture using AWS API Gateway, AWS Lambda, and AWS DynamoDB experienced a large increase in traffic to a sustained 400 requests per second, and dramatically increased in failure rates. Your requests, during normal operation, last 500 milliseconds on average. Your DynamoDB table did not exceed 50% of provisioned throughput, and Table primary keys are designed correctly. What is the most likely issue?
    1. Your API Gateway deployment is throttling your requests.
    2. Your AWS API Gateway Deployment is bottlenecking on request (de)serialization.
    3. You did not request a limit increase on concurrent Lambda function executions. (Refer link – AWS API Gateway by default throttles at 500 requests per second steady-state, and 1000 requests per second at spike. Lambda, by default, throttles at 100 concurrent requests for safety. At 500 milliseconds (half of a second) per request, you can expect to support 200 requests per second at 100 concurrency. This is less than the 400 requests per second your system now requires. Make a limit increase request via the AWS Support Console.)
    4. You used Consistent Read requests on DynamoDB and are experiencing semaphore lock.

AWS Lambda Event Source – Certification

AWS Lambda Event Source

  • Core components of Lambda are Lambda functions and event sources.
    • An AWS Lambda event source is the AWS service or custom application that publishes events
    • Lambda function is the custom code that processes the events
  • An event source is an AWS service or developer-created application that produces events that trigger an AWS Lambda function to run
  • Supported event sources refer to those AWS services that can be preconfigured to work with AWS Lambda for e.g., S3, SNS, SES etc
  • Event sources can be either AWS Services or Custom applications

Lambda Event Source Mapping

  • Lambda Event source mapping refers to the configuration which maps an event source to a Lambda function.
  • Event source mapping enables automatic invocation of the Lambda function when events occur.
  • Each event source mapping identifies the type of events to publish and the Lambda function to invoke when events occur
  • AWS supported event sources can be grouped into
    • Regular AWS services
      • also referred to as the Push model
      • includes services like S3, SNS, SES etc.
      • the event source mapping is maintained on the event source side
      • as the event sources invoke the Lambda function, a resource-based policy should be used to grant the event source the necessary permissions
    • Stream-based event sources
      • also referred to as the Pull model
      • includes services like DynamoDB & Kinesis streams
      • the event source mapping needs to be maintained on the Lambda side
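
For the push model, the permission granted to the event source takes the form of a resource-based policy statement on the function (what the Lambda AddPermission API creates); the ARNs and names below are hypothetical:

```python
# Resource-based policy statement allowing S3 (push model) to invoke a
# Lambda function. All ARNs and names are hypothetical examples of what
# the Lambda AddPermission API would attach to the function policy.
s3_invoke_statement = {
    "Effect": "Allow",
    "Principal": {"Service": "s3.amazonaws.com"},   # the event source
    "Action": "lambda:InvokeFunction",
    "Resource": "arn:aws:lambda:us-east-1:123456789012:function:thumbnailer",
    "Condition": {
        "ArnLike": {"AWS:SourceArn": "arn:aws:s3:::photo-bucket"}
    },
}
```

The SourceArn condition scopes the grant so only events from that specific bucket can invoke the function; stream-based (pull model) sources need no such statement, since Lambda polls them with its own execution role.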

Lambda Supported Event Sources

AWS Lambda can be configured as an event source for multiple AWS services

Amazon S3

  • S3 bucket events, such as the object-created or object-deleted events, can be processed using Lambda functions, e.g., a Lambda function can be invoked when a user uploads a photo to a bucket to read the image and create a thumbnail
  • S3 bucket notification configuration feature can be configured for the event source mapping, to identify the S3 bucket events and the Lambda function to invoke.
  • Error handling for a given event source depends on how Lambda is invoked.
  • S3 invokes your Lambda function asynchronously.
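
A sketch of the thumbnail-style handler above: pull the bucket and object key out of each record of the S3 notification event (the actual image processing is elided, and the sample values are hypothetical):

```python
def handler(event, context):
    """Sketch of a Lambda function handling an S3 object-created event:
    extract the bucket and key from each record. The event follows the
    S3 notification format; actual processing is elided."""
    processed = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        processed.append((bucket, key))   # e.g. read image, write thumbnail
    return processed

# Hypothetical sample event in the S3 notification shape
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "photos"},
                "object": {"key": "uploads/cat.jpg"}}}
    ]
}
```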


Amazon DynamoDB

  • Lambda functions can be used as triggers for DynamoDB table to take custom actions in response to updates made to the DynamoDB table.
  • Trigger can be created by
    • First enabling Amazon DynamoDB Streams for the table.
    • Lambda then polls the stream and the Lambda function processes any updates published to the stream.
  • DynamoDB is a stream-based event source and with stream based service, the event source mapping is created in Lambda, identifying the stream to poll and which Lambda function to invoke.
  • Error handling for a given event source depends on how Lambda is invoked.

Amazon Kinesis Streams

  • AWS Lambda can be configured to automatically poll the Kinesis stream periodically (once per second) for new records
  • Lambda can then process any new records such as website click streams, financial transactions, social media feeds, IT logs, and location-tracking events.
  • Kinesis Streams is a stream-based event source and with stream based service, the event source mapping is created in Lambda, identifying the stream to poll and which Lambda function to invoke.
  • Error handling for a given event source depends on how Lambda is invoked.
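
A sketch of a Kinesis-triggered function: record data arrives base64-encoded inside each event record, so it must be decoded before processing (this sketch assumes the producers wrote JSON payloads; the sample record is hypothetical):

```python
import base64
import json

def handler(event, context):
    """Sketch of a Lambda function consuming a Kinesis Streams event:
    each record's data is base64-encoded, so decode the payload before
    processing. Assumes producers wrote JSON payloads."""
    out = []
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        out.append(json.loads(payload))
    return out

# Hypothetical sample event with one click-stream record
sample_event = {
    "Records": [
        {"kinesis": {"data": base64.b64encode(b'{"page": "/home"}').decode()}}
    ]
}
```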


Amazon Simple Notification Service

  • Simple Notification Service notifications can be processed using Lambda
  • When a message is published to an SNS topic, the service can invoke the Lambda function, passing the message payload as a parameter, which can then process the event
  • Lambda function can be triggered in response to CloudWatch alarms and other AWS services that use Amazon SNS.
  • SNS via topic subscription configuration feature can be used for the event source mapping, to identify the SNS topic and the Lambda function to invoke.
  • Error handling for a given event source depends on how Lambda is invoked.
  • SNS invokes your Lambda function asynchronously.

Amazon Simple Email Service

  • SES can be used to receive messages and can be configured to invoke Lambda function when messages arrive, by passing in the incoming email event as parameter
  • SES using the rule configuration feature can be used for the event source mapping
  • Error handling for a given event source depends on how Lambda is invoked.
  • SES invokes your Lambda function asynchronously.

Amazon Cognito

  • Cognito Events feature enables a Lambda function to run in response to events in Cognito, e.g., a Lambda function can be invoked for the Sync Trigger event, which is published each time a dataset is synchronized.
  • Cognito event subscription configuration feature can be used for the event source mapping
  • Error handling for a given event source depends on how Lambda is invoked.
  • Cognito is configured to invoke a Lambda function synchronously

AWS CloudFormation

  • Lambda function can be specified as a custom resource to execute any custom commands as a part of deploying CloudFormation stacks and can be invoked whenever the stacks are created, updated or deleted.
  • CloudFormation using stack definition can be used for the event source mapping
  • Error handling for a given event source depends on how Lambda is invoked.
  • CloudFormation invokes the Lambda function asynchronously

Amazon CloudWatch Logs

  • Lambda functions can be used to perform custom analysis on CloudWatch Logs using CloudWatch Logs subscriptions.
  • CloudWatch Logs subscriptions provide access to a real-time feed of log events from CloudWatch Logs and deliver it to the AWS Lambda function for custom processing, analysis, or loading to other systems.
  • CloudWatch Logs using the log subscription configuration can be used for the event source mapping
  • Error handling for a given event source depends on how Lambda is invoked.
  • CloudWatch Logs invokes the Lambda function asynchronously

Amazon CloudWatch Events

  • CloudWatch Events help respond to state changes in the AWS resources. When the resources change state, they automatically send events into an event stream.
  • Rules that match selected events in the stream can be created to route them to the Lambda function to take action for e.g., Lambda function can be invoked to log the state of an EC2 instance or AutoScaling Group
  • CloudWatch Events by using a rule target definition can be used for the event source mapping
  • Error handling for a given event source depends on how Lambda is invoked.
  • CloudWatch Events invokes the Lambda function asynchronously
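For the EC2 state-change example above, a handler can read the instance id and new state from the event's `detail` field. A minimal sketch with a sample event (the detail-type shown is the one CloudWatch Events uses for EC2 state changes):

```python
def lambda_handler(event, context):
    # EC2 state-change events carry instance-id and state under "detail"
    detail = event.get("detail", {})
    return f"{detail.get('instance-id')} is now {detail.get('state')}"

# Sample event in the CloudWatch Events envelope
sample = {
    "source": "aws.ec2",
    "detail-type": "EC2 Instance State-change Notification",
    "detail": {"instance-id": "i-0123456789abcdef0", "state": "stopped"},
}
print(lambda_handler(sample, None))  # i-0123456789abcdef0 is now stopped
```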

AWS CodeCommit

  • Trigger can be created for a CodeCommit repository so that events in the repository will invoke a Lambda function for e.g., a Lambda function can be invoked when a branch or tag is created or when a push is made to an existing branch.
  • CodeCommit by using a repository trigger can be used for the event source mapping
  • Error handling for a given event source depends on how Lambda is invoked.
  • CodeCommit invokes the Lambda function asynchronously
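A trigger handler can extract which Git references changed from the event's records. A minimal sketch, assuming an illustrative record shape:

```python
def lambda_handler(event, context):
    # CodeCommit trigger records list the Git references that changed
    return [ref["ref"]
            for record in event.get("Records", [])
            for ref in record["codecommit"]["references"]]

# Illustrative trigger event with hypothetical values
sample = {"Records": [{
    "eventSourceARN": "arn:aws:codecommit:us-east-1:123456789012:my-repo",
    "codecommit": {"references": [
        {"commit": "abc123", "ref": "refs/heads/main"}]},
}]}
print(lambda_handler(sample, None))  # ['refs/heads/main']
```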

Scheduled Events (powered by Amazon CloudWatch Events)

  • AWS Lambda can be invoked regularly on a scheduled basis using the schedule event capability in CloudWatch Events.
  • CloudWatch Events by using a rule target definition can be used for the event source mapping
  • Error handling for a given event source depends on how Lambda is invoked.
  • CloudWatch Events invokes the Lambda function asynchronously
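Scheduled events arrive in the CloudWatch Events envelope with source `aws.events` and an ISO-8601 timestamp in `time`. A minimal handler sketch with a sample event:

```python
from datetime import datetime

def lambda_handler(event, context):
    # Scheduled events carry source "aws.events" and an ISO-8601 "time"
    fired_at = datetime.strptime(event["time"], "%Y-%m-%dT%H:%M:%SZ")
    return f"scheduled run at {fired_at.isoformat()}"

# Sample scheduled event
sample = {
    "source": "aws.events",
    "detail-type": "Scheduled Event",
    "time": "2023-01-01T12:00:00Z",
}
print(lambda_handler(sample, None))  # scheduled run at 2023-01-01T12:00:00
```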

AWS Config

  • Lambda functions can be used to evaluate whether the AWS resource configurations comply with custom Config rules.
  • As resources are created, deleted, or changed, AWS Config records these changes and sends the information to the Lambda functions, which can then evaluate the changes and report results to AWS Config. AWS Config can be used to assess overall resource compliance
  • AWS Config by using a rule target definition can be used for the event source mapping
  • Error handling for a given event source depends on how Lambda is invoked.
  • AWS Config invokes the Lambda function asynchronously
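A custom Config rule handler receives the configuration item inside a JSON-encoded `invokingEvent` and decides a compliance verdict. The sketch below uses a hypothetical rule (EC2 instances must carry an "Environment" tag) and omits the `put_evaluations` call a real rule would make to report back to AWS Config:

```python
import json

def evaluate_compliance(configuration_item):
    # Hypothetical rule: EC2 instances must carry an "Environment" tag
    if configuration_item["resourceType"] != "AWS::EC2::Instance":
        return "NOT_APPLICABLE"
    return ("COMPLIANT" if "Environment" in configuration_item.get("tags", {})
            else "NON_COMPLIANT")

def lambda_handler(event, context):
    # Config passes the configuration item inside a JSON-encoded invokingEvent;
    # a real rule would report the verdict via config.put_evaluations()
    item = json.loads(event["invokingEvent"])["configurationItem"]
    return evaluate_compliance(item)

# Sample event: an instance missing the required tag
sample = {"invokingEvent": json.dumps({"configurationItem": {
    "resourceType": "AWS::EC2::Instance", "tags": {"Name": "web-1"}}})}
print(lambda_handler(sample, None))  # NON_COMPLIANT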

Amazon API Gateway

  • Lambda function can be invoked over HTTPS by defining a custom REST API and endpoint using Amazon API Gateway.
  • Individual API operations, such as GET and PUT, can be mapped to specific Lambda functions. When an HTTPS request to the API endpoint is received, the Amazon API Gateway service invokes the corresponding Lambda function.
  • Error handling for a given event source depends on how Lambda is invoked.
  • Amazon API Gateway is configured to invoke a Lambda function synchronously.
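With the Lambda proxy integration, the function returns a response object containing `statusCode`, `headers`, and a string `body`. A minimal sketch (the `name` query parameter is a made-up example):

```python
import json

def lambda_handler(event, context):
    # Lambda proxy integration expects statusCode/headers/body in the response
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }

# Sample GET request as API Gateway would deliver it
sample = {"httpMethod": "GET", "queryStringParameters": {"name": "lambda"}}
response = lambda_handler(sample, None)
print(json.loads(response["body"])["message"])  # hello lambda
```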

Other Event Sources: Invoking a Lambda Function On Demand

  • Lambda functions can also be invoked on demand, without the need to preconfigure any event source mapping.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.

References

AWS_Lambda_Developer_Guide