AWS Relational Database Service – RDS

Relational Database Service – RDS

  • Relational Database Service – RDS is a web service that makes it easier to set up, operate, and scale a relational database in the cloud.
  • provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks such as hardware provisioning, database setup, patching, and backups.
  • features & benefits
    • CPU, memory, storage, and IOPs can be scaled independently.
    • manages backups, software patching, automatic failure detection, and recovery.
    • automated backups can be performed as needed, or manual backups can be triggered as well. Backups can be used to restore a database, and the restore process works reliably and efficiently.
    • provides Multi-AZ high availability with a primary instance and a synchronous standby secondary instance that can failover seamlessly when a problem occurs.
    • provides elasticity & scalability by enabling Read Replicas to increase read scaling.
    • supports MySQL, MariaDB, PostgreSQL, Oracle, and Microsoft SQL Server, and the MySQL-compatible Aurora DB engine
    • supports IAM users and permissions to control who has access to the RDS database service
    • databases can be further protected by putting them in a VPC, using SSL for data in transit and encryption for data at rest
    • However, as it is a managed service, shell (root ssh) access to DB instances is not provided, and this restricts access to certain system procedures and tables that require advanced privileges.

RDS Components

  • DB Instance
    • is a basic building block of RDS
    • is an isolated database environment in the cloud
    • each DB instance runs a DB engine. AWS currently supports MySQL, MariaDB, PostgreSQL, Oracle, and Microsoft SQL Server & Aurora DB engines
    • can be accessed from AWS command-line tools, RDS APIs, or the AWS Management Console.
    • computation and memory capacity of a DB instance is determined by its DB instance class, which can be selected as per the needs
    • supports three storage types: Magnetic, General Purpose (SSD), and Provisioned IOPS (SSD), which differ in performance and price
    • each DB instance has a DB instance identifier, which is a customer-supplied name and must be unique for that customer in an AWS region. It uniquely identifies the DB instance when interacting with the RDS API and AWS CLI commands.
    • each DB instance can host multiple user-created databases or a single Oracle database with multiple schemas.
    • can be hosted in an AWS VPC environment for better control
  • Regions and Availability Zones
    • AWS resources are housed in highly available data center facilities in different areas of the world; these areas are called Regions, and each Region contains multiple distinct locations called Availability Zones
    • Each AZ is engineered to be isolated from failures in other AZs and to provide inexpensive, low-latency network connectivity to other AZs in the same region
    • DB instances can be hosted in different AZs, an option called a Multi-AZ deployment.
      • RDS automatically provisions and maintains a synchronous standby replica of the DB instance in a different AZ.
      • Primary DB instance is synchronously replicated across AZs to the standby replica
      • Provides data redundancy, failover support, eliminates I/O freezes, and minimizes latency spikes during system backups.
  • Security Groups
    • security group controls the access to a DB instance, by allowing access to the specified IP address ranges or EC2 instances
  • DB Parameter Groups
    • A DB parameter group contains engine configuration values that can be applied to one or more DB instances of the same engine and version (parameter group family)
    • helps define configuration values specific to the selected DB engine, e.g. max_connections, force_ssl, autocommit (see the sketch after this list)
    • supports a default parameter group, which cannot be edited.
    • supports custom parameter groups, to override values
    • supports static and dynamic parameter groups
      • changes to dynamic parameters are applied immediately (irrespective of apply immediately setting)
      • changes to static parameters are NOT applied immediately and require a manual reboot.
  • DB Option Groups
    • Some DB engines offer tools or optional features that simplify managing the databases and making the best use of data.
    • RDS makes such tools available through option groups, e.g. Oracle Application Express (APEX), SQL Server Transparent Data Encryption, and MySQL Memcached support.
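
A minimal boto3 sketch of the custom parameter group flow described above (the group name, family, and values are illustrative assumptions, not fixed names):

    import boto3

    rds = boto3.client("rds")

    # Create a custom parameter group (the default group cannot be edited);
    # the family must match the engine and version, e.g. mysql8.0.
    rds.create_db_parameter_group(
        DBParameterGroupName="my-custom-params",   # hypothetical name
        DBParameterGroupFamily="mysql8.0",
        Description="Custom MySQL parameters",
    )

    # Override a value; dynamic parameters such as max_connections apply
    # immediately, static ones need ApplyMethod="pending-reboot" and a reboot.
    rds.modify_db_parameter_group(
        DBParameterGroupName="my-custom-params",
        Parameters=[
            {"ParameterName": "max_connections",
             "ParameterValue": "500",
             "ApplyMethod": "immediate"},
        ],
    )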

RDS Interfaces

  • RDS can be accessed through multiple interfaces
    • AWS RDS Management console
    • Command Line Interface
    • Programmatic Interfaces which include SDKs, libraries in different languages, and RDS API
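
As an illustration of the programmatic interface, a minimal boto3 sketch that launches and lists DB instances (identifiers, instance class, and credentials are placeholders):

    import boto3

    rds = boto3.client("rds")

    # Launch a small Multi-AZ MySQL instance.
    rds.create_db_instance(
        DBInstanceIdentifier="mydb",
        DBInstanceClass="db.t3.micro",
        Engine="mysql",
        AllocatedStorage=20,          # in GB
        MasterUsername="admin",
        MasterUserPassword="change-me",
        MultiAZ=True,
    )

    # List instances and their endpoints.
    for db in rds.describe_db_instances()["DBInstances"]:
        print(db["DBInstanceIdentifier"], db.get("Endpoint", {}).get("Address"))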

RDS Multi-AZ & Read Replicas

  • Multi-AZ deployment
    • provides high availability, durability, and automatic failover support
    • helps improve the durability and availability of a critical system, enhancing availability during planned system maintenance, DB instance failure, and Availability Zone disruption.
    • automatically provisions and manages a synchronous standby instance in a different AZ.
    • automatically fails over in case of any issues with the primary instance
    • A Multi-AZ DB instance deployment has one standby DB instance that provides failover support but doesn’t serve read traffic.
    • A Multi-AZ DB cluster deployment has two standby DB instances that provide failover support and can also serve read traffic.
  • Read replicas
    • enable increased scalability and database availability in the case of an AZ failure.
    • allow elastic scaling beyond the capacity constraints of a single DB instance for read-heavy database workloads
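
A read replica can be created with a single call; a minimal boto3 sketch (the identifiers are placeholders):

    import boto3

    rds = boto3.client("rds")

    # Create a read replica of an existing source instance to offload
    # read-heavy traffic; replicas can also live in another AZ or Region.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="mydb-replica",
        SourceDBInstanceIdentifier="mydb",
    )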

RDS Security

  • DB instance can be hosted in a VPC for the greatest possible network access control.
  • IAM policies can be used to assign permissions that determine who is allowed to manage RDS resources.
  • Security groups allow control of what IP addresses or EC2 instances can connect to the databases on a DB instance.
  • RDS supports encryption in transit using SSL connections
  • RDS supports encryption at rest to secure instances and snapshots at rest.
  • Network encryption and transparent data encryption (TDE) with Oracle DB instances
  • Authentication can be implemented using Password, Kerberos, and IAM database authentication.
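
With IAM database authentication, a short-lived token replaces the password; a minimal boto3 sketch (hostname, port, and user are assumptions):

    import boto3

    rds = boto3.client("rds")

    # Generate a 15-minute authentication token signed with the caller's IAM
    # credentials; pass it as the password over an SSL/TLS connection.
    token = rds.generate_db_auth_token(
        DBHostname="mydb.abc123.us-east-1.rds.amazonaws.com",  # placeholder
        Port=3306,
        DBUsername="iam_db_user",                              # placeholder
    )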

RDS Backups, Snapshot

  • Automated backups
    • are enabled by default for a new DB instance.
    • enable recovery of the database, using the backups and database change logs, to any point in time during the backup retention period, up to the last five minutes of database usage.
  • DB snapshots are manual, user-initiated backups that enable backup of the DB instance to a known state, and restore to that specific state at any time.
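
A boto3 sketch of both backup paths: a manual snapshot and a point-in-time restore (identifiers are placeholders):

    import boto3

    rds = boto3.client("rds")

    # Manual, user-initiated snapshot of a known state.
    rds.create_db_snapshot(
        DBSnapshotIdentifier="mydb-before-upgrade",
        DBInstanceIdentifier="mydb",
    )

    # Point-in-time restore from automated backups plus change logs; this
    # always creates a new instance rather than overwriting the source.
    rds.restore_db_instance_to_point_in_time(
        SourceDBInstanceIdentifier="mydb",
        TargetDBInstanceIdentifier="mydb-restored",
        UseLatestRestorableTime=True,   # or RestoreTime=<datetime>
    )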

RDS Monitoring & Notification

  • RDS integrates with CloudWatch and provides metrics for monitoring
  • CloudWatch alarms can be created over a single metric that sends an SNS message when the alarm changes state
  • RDS also provides SNS notification whenever any RDS event occurs
  • RDS Performance Insights is a database performance tuning and monitoring feature that helps illustrate the database’s performance and helps analyze any issues that affect it
  • RDS Recommendations provides automated recommendations for database resources.
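
A sketch of a CloudWatch alarm on a single RDS metric that publishes to an SNS topic when it changes state (the topic ARN and threshold are assumptions):

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Alarm when average CPU over two consecutive 5-minute periods exceeds 80%.
    cloudwatch.put_metric_alarm(
        AlarmName="mydb-high-cpu",
        Namespace="AWS/RDS",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "mydb"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:db-alerts"],  # placeholder
    )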

RDS Pricing

  • Instance class
    • Pricing is based on the class (e.g., micro) of the DB instance consumed.
  • Running time
    • Usage is billed in one-second increments, with a minimum of 10 minutes.
  • Storage
    • Storage capacity provisioned for the DB instance is billed per GB per month
    • If the provisioned storage capacity is scaled within the month, the bill will be pro-rated.
  • I/O requests per month
    • Total number of storage I/O requests made in a billing cycle.
  • Provisioned IOPS (per IOPS per month)
    • Provisioned IOPS rate, regardless of IOPS consumed, for RDS Provisioned IOPS (SSD) storage only.
    • Provisioned storage for EBS volumes is billed in one-second increments, with a minimum of 10 minutes.
  • Backup storage
    • Automated backups & any active database snapshots consume storage
    • Increasing backup retention period or taking additional database snapshots increases the backup storage consumed by the database.
    • RDS provides backup storage up to 100% of the provisioned database storage at no additional charge; e.g., with 10 GB-months of provisioned database storage, RDS provides up to 10 GB-months of backup storage at no additional charge.
    • Most databases require less raw storage for a backup than for the primary dataset, so if multiple backups are not maintained, you will never pay for backup storage.
    • Backup storage is free only for active DB instances.
  • Data transfer
    • Internet data transfer out of the DB instance.
  • Reserved Instances
    • In addition to regular RDS pricing, reserved DB instances can be purchased

Further Reading

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might become outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. What does Amazon RDS stand for?
    1. Regional Data Server.
    2. Relational Database Service
    3. Regional Database Service.
  2. How many relational database engines does RDS currently support?
    1. MySQL, Postgres, MariaDB, Oracle, and Microsoft SQL Server
    2. Just two: MySQL and Oracle.
    3. Five: MySQL, PostgreSQL, MongoDB, Cassandra and SQLite.
    4. Just one: MySQL.
  3. If I modify a DB Instance or the DB parameter group associated with the instance, should I reboot the instance for the changes to take effect?
    1. No
    2. Yes
  4. What is the name of the licensing model in which you can use your existing Oracle Database licenses to run Oracle deployments on Amazon RDS?
    1. Bring Your Own License
    2. Role Bases License
    3. Enterprise License
    4. License Included
  5. Will I be charged if the DB instance is idle?
    1. No
    2. Yes
    3. Only if running in GovCloud
    4. Only if running in VPC
  6. What is the minimum charge for the data transferred between Amazon RDS and Amazon EC2 Instances in the same Availability Zone?
    1. USD 0.10 per GB
    2. No charge. It is free.
    3. USD 0.02 per GB
    4. USD 0.01 per GB
  7. Does Amazon RDS allow direct host access via Telnet, Secure Shell (SSH), or Windows Remote Desktop Connection?
    1. Yes
    2. No
    3. Depends on if it is in VPC or not
  8. What are the two types of licensing options available for using Amazon RDS for Oracle?
    1. BYOL and Enterprise License
    2. BYOL and License Included
    3. Enterprise License and License Included
    4. Role based License and License Included
  9. A user plans to use RDS as a managed DB platform. Which of the below mentioned features is not supported by RDS?
    1. Automated backup
    2. Automated scaling to manage a higher load
    3. Automated failure detection and recovery
    4. Automated software patching
  10. A user is launching an AWS RDS with MySQL. Which of the below mentioned options allows the user to configure the InnoDB engine parameters?
    1. Options group
    2. Engine parameters
    3. Parameter groups
    4. DB parameters
  11. A user is planning to use the AWS RDS with MySQL. Which of the below mentioned services will the user not have to pay for?
    1. Data transfer
    2. RDS CloudWatch metrics
    3. Data storage
    4. I/O requests per month

References

AWS_Relational_Database_Service_RDS

AWS RDS Best Practices

AWS RDS Best Practices

AWS recommends RDS best practices in terms of Monitoring, Performance, and security

RDS Basic Operational Guidelines

  • Monitoring
    • Memory, CPU, and storage usage should be monitored.
    • CloudWatch can be set up for notifications when usage patterns change or when the capacity of the deployment is approached, so that system performance and availability can be maintained
  • Scaling
    • Scale up the DB instance when approaching storage capacity limits.
    • There should be some buffer in storage and memory to accommodate unforeseen increases in demand from the applications.
  • Backups
    • Enable Automatic Backups and set the backup window to occur during the daily low in WriteIOPS.
    • Use Multi-AZ to reduce the impact of backups on the primary DB instance.
  • On a MySQL DB instance,
    • Do not create more than 10,000 tables using Provisioned IOPS or 1000 tables using standard storage. Large numbers of tables will significantly increase database recovery time after a failover or database crash. If you need to create more tables than recommended, set the innodb_file_per_table parameter to 0.
    • Avoid tables in the database growing too large. Provisioned storage limits restrict the maximum size of a MySQL table file to 6 TB. Instead, partition the large tables so that file sizes are well under the 6 TB limit. This can also improve performance and recovery time.
  • Performance
    • If the database workload requires more I/O than provisioned, recovery after a failover or database failure will be slow.
    • To increase the I/O capacity of a DB instance,
      • Migrate to a DB instance class with High I/O capacity.
      • Convert from standard storage to Provisioned IOPS storage, and use a DB instance class that is optimized for Provisioned IOPS.
      • if using Provisioned IOPS storage, provision additional throughput capacity.
  • Multi-AZ & Failover
    • Deploy applications in all Availability Zones, so if an AZ goes down, applications in other AZs will still be available.
    • Use RDS DB events to monitor failovers.
    • Set a TTL of less than 30 seconds if the client application is caching the DNS data of the DB instances. As the underlying IP address of a DB instance can change after a failover, caching the DNS data for an extended time can lead to connection failures if the application tries to connect to an IP address that is no longer in service.
    • Multi-AZ requires the transaction logging feature to be enabled. Do not use features like Simple Recovery mode, Offline mode, or Read-only mode, which turn off transaction logging.
    • To shorten failover time
      • Ensure that sufficient Provisioned IOPS are allocated for your workload. Inadequate I/O can lengthen failover times, as database recovery requires I/O.
      • Use smaller transactions. Database recovery relies on transactions, so break up large transactions into multiple smaller transactions to shorten failover time
    • Test failover for your DB instance to understand how long the process takes for your use case and to ensure that the application that accesses your DB instance can automatically connect to the new DB instance after failover.
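
One way to test failover is a reboot with forced failover; a minimal boto3 sketch (the identifier is a placeholder):

    import boto3

    rds = boto3.client("rds")

    # Simulate an AZ failover on a Multi-AZ instance and measure how long
    # the application takes to reconnect to the new primary.
    rds.reboot_db_instance(
        DBInstanceIdentifier="mydb",
        ForceFailover=True,
    )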

DB Instance RAM Recommendations

  • An RDS performance best practice is to allocate enough RAM so that the working set resides almost completely in memory.
  • Value of ReadIOPS should be small and stable.
  • The ReadIOPS metric can be checked using CloudWatch while the DB instance is under load, to tell if the working set is almost all in memory (see the sketch after this list).
  • If scaling up to a DB instance class with more RAM results in a dramatic drop in ReadIOPS, the working set was not almost completely in memory.
  • Continue to scale up until ReadIOPS no longer drops dramatically after a scaling operation, or ReadIOPS is reduced to a very small amount.
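
A boto3 sketch of pulling ReadIOPS from CloudWatch to judge whether the working set fits in memory (instance identifier and time window are assumptions):

    from datetime import datetime, timedelta

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Average ReadIOPS in 5-minute buckets over the last hour, taken while
    # the instance is under load; small, stable values suggest the working
    # set is almost entirely in memory.
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName="ReadIOPS",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "mydb"}],
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Average"],
    )
    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], point["Average"])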

RDS Security Best Practices

  • Do not use AWS root credentials to manage RDS resources; IAM users should be created for everyone.
  • Grant each user the minimum set of permissions required to perform his or her duties.
  • Use IAM groups to effectively manage permissions for multiple users.
  • Rotate your IAM credentials regularly.

Using Enhanced Monitoring to Identify Operating System Issues

  • RDS provides metrics in real time for the operating system (OS) that your DB instance runs on.
  • Enhanced monitoring is available for all DB instance classes except for db.t1.micro and db.m1.small.

Using Metrics to Identify Performance Issues

  • To identify performance issues caused by insufficient resources and other common bottlenecks, you can monitor the metrics available for your Amazon RDS DB instance
  • Performance metrics should be monitored on a regular basis to benchmark the average, maximum, and minimum values for a variety of time ranges, to help identify performance degradation.
  • CloudWatch alarms can be set for particular metric thresholds to be alerted when they are reached or breached
  • A DB instance has a number of different categories of metrics, including CPU, memory, disk space, IOPS, DB connections, and network traffic; how to determine acceptable values depends on the metric.
  • One of the best ways to improve DB instance performance is to tune the most commonly used and most resource-intensive queries to make them less expensive to run.

Recovery

  • MySQL
    • InnoDB is the recommended and supported storage engine for MySQL DB instances on Amazon RDS.
    • However, MyISAM performs better than InnoDB if you require intense, full-text search capability.
    • Point-In-Time Restore and snapshot restore features of Amazon RDS for MySQL require a crash-recoverable storage engine and are supported for the InnoDB storage engine only.
    • Although MySQL supports multiple storage engines with varying capabilities, not all of them are optimized for crash recovery and data durability.
    • MyISAM storage engine does not support reliable crash recovery and might prevent a Point-In-Time Restore or snapshot restore from working as intended which might result in lost or corrupt data when MySQL is restarted after a crash.
  • MariaDB
    • XtraDB is the recommended and supported storage engine for MariaDB DB instances on Amazon RDS.
    • Point-In-Time Restore and snapshot restore features of Amazon RDS for MariaDB require a crash-recoverable storage engine and are supported for the XtraDB storage engine only.
    • Although MariaDB supports multiple storage engines with varying capabilities, not all of them are optimized for crash recovery and data durability.
    • E.g., although Aria is a crash-safe replacement for MyISAM, it might still prevent a Point-In-Time Restore or snapshot restore from working as intended. This might result in lost or corrupt data when MariaDB is restarted after a crash.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might become outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. You are running a database on an EC2 instance, with the data stored on Elastic Block Store (EBS) for persistence. At times throughout the day, you are seeing large variance in the response times of the database queries. Looking into the instance with the iostat command, you see a lot of wait time on the disk volume that the database’s data is stored on. What two ways can you improve the performance of the database’s storage while maintaining the current persistence of the data? Choose 2 answers
    1. Move to an SSD backed instance
    2. Move the database to an EBS-Optimized Instance
    3. Use Provisioned IOPs EBS
    4. Use the ephemeral storage on an m2.4xLarge Instance Instead
  2. Amazon RDS automated backups and DB Snapshots are currently supported for only the __________ storage engine
    1. InnoDB
    2. MyISAM

References

AWS_RDS_Best_Practices

AWS Lambda

AWS Lambda

  • AWS Lambda offers Serverless computing that allows applications and services to be built and run without thinking about servers.
  • With serverless computing, the application still runs on servers, but all the server management is done by AWS.
  • helps run code without provisioning or managing servers, where you pay only for the compute time when the code is running.
  • is priced on a pay-per-use basis and there are no charges when the code is not running.
  • allows the running of code for any type of application or backend service with zero administration.
  • performs all the operational and administrative activities on your behalf, including capacity provisioning, monitoring fleet health, applying security patches to the underlying compute resources, deploying code, running a web service front end, and monitoring and logging the code.
  • does not provide access to the underlying compute infrastructure.
  • handles scalability and availability as it
    • provides easy scaling and high availability to the code without additional effort on your part.
    • is designed to process events within milliseconds.
    • is designed to run many instances of the functions in parallel.
    • is designed to use replication and redundancy to provide high availability for both the service and the functions it operates.
    • has no maintenance windows or scheduled downtimes for either.
    • has a default safety throttle for the number of concurrent executions per account per region.
    • has a higher latency immediately after a function is created, or updated, or if it has not been used recently.
    • for any function updates, there is a brief window of time, less than a minute, when requests would be served by both versions
  • Security
    • stores code in S3 and encrypts it at rest and performs additional integrity checks while the code is in use.
    • each function runs in its own isolated environment, with its own resources and file system view
    • supports Code Signing using AWS Signer, which offers trust and integrity controls that enable you to verify that only unaltered code from approved developers is deployed in the functions. 
  • Functions must complete execution within 900 seconds. The default timeout is 3 seconds and can be set to any value between 1 and 900 seconds.
  • AWS Step Functions can help coordinate a series of Lambda functions in a specific order. Multiple functions can be invoked sequentially, passing the output of one to the other, and/or in parallel, while the state is being maintained by Step Functions.
  • AWS X-Ray helps to trace functions, which provides insights such as service overhead, function init time, and function execution time.
  • Lambda Provisioned Concurrency provides greater control over the performance of serverless applications.
  • Lambda@Edge allows you to run code across AWS locations globally without provisioning or managing servers, responding to end-users at the lowest network latency.
  • Lambda Extensions allow integration of Lambda with other third-party tools for monitoring, observability, security, and governance.
  • Compute Savings Plan can help save money for Lambda executions.
  • CodePipeline and CodeDeploy can be used to automate the serverless application release process.
  • RDS Proxy provides a highly available database proxy that manages thousands of concurrent connections to relational databases.
  • Supports Elastic File System (EFS), to provide a shared, external, persistent, scalable volume using a fully managed elastic NFS file system without the need for provisioning or capacity management.
  • supports Function URLs, a built-in HTTPS endpoint that can be invoked using the browser, curl, and any HTTP client. 
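
For orientation, a minimal stateless handler sketch in Python (the function name, event shape, and response format are illustrative assumptions):

    import json

    def lambda_handler(event, context):
        # Stateless: everything needed arrives in the event; any durable
        # state would be kept externally, e.g. in DynamoDB or S3.
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"hello, {name}"}),
        }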

Functions & Event Sources

  • Core components of Lambda are functions and event sources.
    • Event source – an AWS service or custom application that publishes events.
    • Function – a custom code that processes the events.

Lambda Functions

  • Each function has associated configuration information, such as its name, description, runtime, entry point, and resource requirements
  • Lambda functions should be designed as stateless
    • to allow launching of as many copies of the function as needed to match demand.
    • Local file system access, child processes, and similar artifacts may not extend beyond the lifetime of the request
    • The state can be maintained externally in DynamoDB or S3
  • Lambda Execution role can be assigned to the function to grant permission to access other resources.
  • Functions have the following restrictions
    • Inbound network connections are blocked
    • For outbound connections, only TCP/IP sockets are supported
    • ptrace (debugging) system calls are blocked
    • TCP port 25 traffic is also blocked as an anti-spam measure.
  • Lambda may choose to retain an instance of the function and reuse it to serve a subsequent request, rather than creating a new copy.
  • Lambda Layers provide a convenient way to package libraries and other dependencies that you can use with your Lambda functions.
  • Function versions can be used to manage the deployment of the functions.
  • Function Alias supports creating aliases, which are mutable, for each function version.
  • Functions are automatically monitored, and real-time metrics are reported through CloudWatch, including total requests, latency, error rates, etc.
  • Lambda automatically integrates with CloudWatch logs, creating a log group for each function and providing basic application lifecycle event log entries, including logging the resources consumed for each use of that function.
  • Functions support code written in
    • Node.js (JavaScript)
    • Python
    • Ruby
    • Java (Java 8 compatible)
    • C# (.NET Core)
    • Go
    • Custom runtime
  • Container images are also supported.
  • Failure Handling
    • For S3 bucket notifications and custom events, Lambda will attempt execution of the function three times in the event of an error condition in the code or if a service or resource limit is exceeded.
    • For ordered event sources that Lambda polls, e.g. DynamoDB Streams and Kinesis streams, it will continue attempting execution in the event of a developer code error until the data expires.
    • Kinesis and DynamoDB Streams retain data for a minimum of 24 hours
    • Dead Letter Queues (SNS or SQS) can be configured for events to be placed, once the retry policy for asynchronous invocations is exceeded
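
A boto3 sketch of the failure-handling configuration for asynchronous invocations (function name and queue ARN are placeholders):

    import boto3

    lambda_client = boto3.client("lambda")

    # Dead Letter Queue: events that exhaust the async retry policy are
    # sent to the configured SQS queue (or SNS topic).
    lambda_client.update_function_configuration(
        FunctionName="my-function",
        DeadLetterConfig={"TargetArn": "arn:aws:sqs:us-east-1:123456789012:my-dlq"},
    )

    # Newer alternative: tune retries and route failures to a destination.
    lambda_client.put_function_event_invoke_config(
        FunctionName="my-function",
        MaximumRetryAttempts=2,
        DestinationConfig={
            "OnFailure": {"Destination": "arn:aws:sqs:us-east-1:123456789012:my-dlq"}
        },
    )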

Read in-depth  @ Lambda Functions

Lambda Event Sources

  • Event Source is an AWS service or developer-created application that produces events that trigger an AWS Lambda function to run
  • Event source mapping refers to the configuration which maps an event source to a Lambda function.
  • Event sources can be both push and pull sources
    • Services like S3 and SNS publish events to Lambda by invoking the function directly.
    • Lambda can also poll resources in services like Kafka and Kinesis streams that do not publish events to Lambda.

Read in-depth @ Event Sources

Lambda Execution Environment

  • Lambda invokes the function in an execution environment, which provides a secure and isolated runtime environment.
  • Execution Context is a temporary runtime environment that initializes any external dependencies of the Lambda function code, e.g. database connections or HTTP endpoints.
  • When a function is invoked, the Execution environment is launched based on the provided configuration settings i.e. memory and execution time.
  • After a Lambda function is executed, Lambda maintains the execution environment for some time in anticipation of another function invocation which allows it to reuse the /tmp directory and objects declared outside of the function’s handler method e.g. database connection.
  • When a Lambda function is invoked for the first time, or after it has been updated, there is latency for bootstrapping; Lambda tries to reuse the Execution Context for subsequent invocations of the Lambda function
  • Subsequent invocations perform better, as there is no need to “cold-start” or initialize those external dependencies
  • Execution environment
    • takes care of provisioning and managing the resources needed to run the function.
    • provides lifecycle support for the function’s runtime and any external extensions associated with the function.
  • Function’s runtime communicates with Lambda using the Runtime API.
  • Extensions communicate with Lambda using the Extensions API.
  • Extensions can also receive log messages from the function by subscribing to logs using the Logs API.
  • Lambda manages Execution Environment creations and deletion, there is no AWS Lambda API to manage Execution Environment.
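
A sketch illustrating Execution Context reuse: objects declared outside the handler survive across warm invocations (the table name is a placeholder):

    import boto3

    # Initialized once per execution environment (the "cold start") and
    # reused by subsequent warm invocations, avoiding reconnection cost.
    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("my-table")   # placeholder table name

    def lambda_handler(event, context):
        # The /tmp directory likewise persists between warm invocations
        # and can cache downloaded artifacts.
        return table.get_item(Key={"id": event["id"]}).get("Item")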


Lambda in VPC

  • Lambda function always runs inside a VPC owned by the Lambda service which isn’t connected to your account’s default VPC
  • Lambda applies network access and security rules to this VPC and maintains and monitors the VPC automatically.
  • A function can be configured to be launched in private subnets in a VPC in your AWS account.
  • Functions connected to a VPC can access private resources such as databases, cache instances, or internal services during execution.
  • To enable the function to access resources inside the private VPC, additional VPC-specific configuration information that includes private subnet IDs and security group IDs must be provided.
  • Lambda uses this information to set up ENIs that enable the function to connect securely to other resources within the private VPC.
  • Functions connected to VPC can’t access the Internet and need a NAT Gateway to access any external resources outside of AWS.
  • Functions cannot connect directly to a VPC with dedicated instance tenancy, instead, peer it to a second VPC with default tenancy.
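
A boto3 sketch of attaching a function to private subnets (subnet and security group IDs are placeholders):

    import boto3

    lambda_client = boto3.client("lambda")

    # Lambda creates ENIs in these subnets; add a NAT Gateway route if the
    # function also needs to reach the Internet.
    lambda_client.update_function_configuration(
        FunctionName="my-function",
        VpcConfig={
            "SubnetIds": ["subnet-0abc1234", "subnet-0def5678"],  # placeholders
            "SecurityGroupIds": ["sg-0123abcd"],                  # placeholder
        },
    )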

Lambda Security

  • All data stored in ephemeral storage is encrypted at rest with a key managed by AWS.
  • Lambda functions provide access only to a single VPC. If multiple subnets are specified, they must all be in the same VPC. Other VPCs can be connected using VPC Peering.
  • Supports Code Signing using AWS Signer, which offers trust and integrity controls that enable you to verify that only unaltered code from approved developers is deployed in the functions. 
  • AWS Lambda can perform the following signature checks at deployment:
    • Corrupt signature – This occurs if the code artifact has been altered since signing.
    • Mismatched signature – This occurs if the code artifact is signed by a signing profile that is not approved.
    • Expired signature – This occurs if the signature is past the configured expiry date.
    • Revoked signature – This occurs if the signing profile owner revokes the signing jobs.
  • For sensitive information, e.g. passwords, AWS recommends using client-side encryption with AWS Key Management Service – KMS, and storing the resulting values as ciphertext in an environment variable.
  • Function code should include the logic to decrypt these values.
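
A sketch of that decryption logic, assuming an environment variable DB_PASSWORD that holds base64-encoded KMS ciphertext (the variable name is an assumption):

    import base64
    import os

    import boto3

    kms = boto3.client("kms")

    # Decrypt once, outside the handler, so warm invocations reuse the result.
    plaintext_password = kms.decrypt(
        CiphertextBlob=base64.b64decode(os.environ["DB_PASSWORD"])
    )["Plaintext"].decode("utf-8")

    def lambda_handler(event, context):
        # ... use plaintext_password to connect to the database ...
        return "ok"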

Lambda Permissions

  • IAM – Use IAM to manage access to the Lambda API and resources like functions and layers.
  • Execution Role – A Lambda function can be provided with an Execution Role, that grants it permission to access AWS services and resources e.g. send logs to CloudWatch and upload trace data to AWS X-Ray.
  • Function Policy – Resource-based Policies
    • Use resource-based policies to give other accounts and AWS services permission to use the Lambda resources.
    • Resource-based permissions policies are supported for functions and layers.
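
A boto3 sketch of adding a resource-based policy statement that lets S3 invoke a function (ARNs and account ID are placeholders):

    import boto3

    lambda_client = boto3.client("lambda")

    # Allow the S3 service principal, scoped to one bucket and account,
    # to invoke the function (push-model event sources need this grant).
    lambda_client.add_permission(
        FunctionName="my-function",
        StatementId="s3-invoke",
        Action="lambda:InvokeFunction",
        Principal="s3.amazonaws.com",
        SourceArn="arn:aws:s3:::my-bucket",   # placeholder
        SourceAccount="123456789012",         # placeholder
    )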

Invoking Lambda Functions

  • Lambda functions can be invoked
    • directly using the Lambda console or API, a function URL HTTP(S) endpoint, an AWS SDK, the AWS CLI, and AWS toolkits.
    • by other AWS services like S3 and SNS that invoke the function.
    • by Lambda itself, which reads from a stream or queue and invokes the function.
  • Functions can be invoked
    • Synchronously
      • You wait for the function to process the event and return a response.
      • Error handling and retries need to be handled by the Client.
      • Invocation sources include API and SDK calls, and calls from API Gateway.
    • Asynchronously
      • queues the event for processing and returns a response immediately.
      • handles retries and can send invocation records to a destination for successful and failed events.
      • Invocation includes S3, SNS, and CloudWatch Events
      • can define a DLQ for handling failed events; AWS recommends using destinations instead of DLQs.
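
A boto3 sketch of both invocation types (function name and payload are placeholders):

    import json

    import boto3

    lambda_client = boto3.client("lambda")

    # Synchronous: wait for the result; the caller handles errors and retries.
    response = lambda_client.invoke(
        FunctionName="my-function",
        InvocationType="RequestResponse",
        Payload=json.dumps({"name": "world"}),
    )
    print(json.load(response["Payload"]))

    # Asynchronous: Lambda queues the event and returns immediately;
    # Lambda handles retries and destinations/DLQ.
    lambda_client.invoke(
        FunctionName="my-function",
        InvocationType="Event",
        Payload=json.dumps({"name": "world"}),
    )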

Lambda Provisioned Concurrency

  • Lambda Provisioned Concurrency provides greater control over the performance of serverless applications.
  • When enabled, Provisioned Concurrency keeps functions initialized and hyper-ready to respond in double-digit milliseconds.
  • Provisioned Concurrency is ideal for building latency-sensitive applications, such as web or mobile backends, synchronously invoked APIs, and interactive microservices.
  • The amount of concurrency can be increased during times of high demand and lowered, or turned off completely, when demand decreases.
  • If the concurrency of a function exceeds the configured level, subsequent invocations of the function have the latency and scale characteristics of regular functions.
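
Provisioned Concurrency is configured per published version or alias; a minimal boto3 sketch (names and count are assumptions):

    import boto3

    lambda_client = boto3.client("lambda")

    # Keep 50 execution environments initialized for the "live" alias;
    # invocations beyond 50 behave like regular on-demand functions.
    lambda_client.put_provisioned_concurrency_config(
        FunctionName="my-function",
        Qualifier="live",                  # version number or alias
        ProvisionedConcurrentExecutions=50,
    )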

Lambda@Edge

Read in-depth @ Lambda@Edge

Lambda Extensions

  • Lambda Extensions allow integration of Lambda with other third-party tools for monitoring, observability, security, and governance.

Lambda Best Practices

  • Lambda function code should be stateless and ensure there is no affinity between the code and the underlying compute infrastructure.
  • Instantiate AWS clients outside the scope of the handler to take advantage of connection re-use.
  • Make sure you have set +rx permissions on your files in the uploaded ZIP to ensure Lambda can execute code on your behalf.
  • Lower costs and improve performance by minimizing the use of startup code not directly related to processing the current event.
  • Use the built-in CloudWatch monitoring of the Lambda functions to view and optimize request latencies.
  • Delete old Lambda functions that you are no longer using.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might become outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. Your serverless architecture using AWS API Gateway, AWS Lambda, and AWS DynamoDB experienced a large increase in traffic to a sustained 400 requests per second, and dramatically increased in failure rates. Your requests, during normal operation, last 500 milliseconds on average. Your DynamoDB table did not exceed 50% of provisioned throughput, and Table primary keys are designed correctly. What is the most likely issue?
    1. Your API Gateway deployment is throttling your requests.
    2. Your AWS API Gateway Deployment is bottlenecking on request (de)serialization.
    3. You did not request a limit increase on concurrent Lambda function executions. (Refer link – AWS API Gateway by default throttles at 500 requests per second steady-state, and 1000 requests per second at spike. Lambda, by default, throttles at 100 concurrent requests for safety. At 500 milliseconds (half of a second) per request, you can expect to support 200 requests per second at 100 concurrency. This is less than the 400 requests per second your system now requires. Make a limit increase request via the AWS Support Console.)
    4. You used Consistent Read requests on DynamoDB and are experiencing semaphore lock.

AWS Lambda Event Source

AWS Lambda Event Source

  • Lambda Event Source is an AWS service or developer-created application that produces events that trigger an AWS Lambda function to run.
  • Event sources can be either AWS Services or Custom applications.
  • Event sources can be both push and pull sources
    • Services like S3 and SNS publish events to Lambda by invoking the function directly.
    • Lambda can also poll resources in services like Kafka and Kinesis streams that do not publish events to Lambda.
  • Events are passed to a Lambda function as an event input parameter. For batch event sources, such as Kinesis Streams, the event parameter may contain multiple events in a single call, based on the requested batch size

Lambda Event Source Mapping

  • Lambda Event source mapping refers to the configuration which maps an event source to a Lambda function.
  • Event source mapping
    • enables automatic invocation of the Lambda function when events occur.
    • identifies the type of events to publish and the Lambda function to invoke when events occur.
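
For pull-model sources the mapping lives on the Lambda side; a boto3 sketch mapping a Kinesis stream to a function (the stream ARN and batch size are placeholders):

    import boto3

    lambda_client = boto3.client("lambda")

    # Lambda polls the stream and invokes the function with batches of
    # up to 100 records, starting from the newest records.
    lambda_client.create_event_source_mapping(
        EventSourceArn="arn:aws:kinesis:us-east-1:123456789012:stream/my-stream",
        FunctionName="my-function",
        BatchSize=100,
        StartingPosition="LATEST",
    )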

Lambda Event Source Types


Push-based

  • also referred to as the Push model
  • includes services like S3, SNS, SES, etc.
  • Event source mapping is maintained on the event source side
  • as the event sources invoke the Lambda function, a resource-based policy should be used to grant the event source the necessary permissions.

Pull-based

  • also referred to as the Pull model
  • covers mostly the Stream-based event sources like DynamoDB, Kinesis streams, MQ, SQS, Kafka
  • Event source mapping is maintained on the Lambda side

Lambda Event Sources Invocation Model

Synchronously

  • You wait for the function to process the event and return a response.
  • Error handling and retries need to be handled by the Client.
  • Invocation sources include API and SDK calls, and calls from API Gateway.

Asynchronously

  • queues the event for processing and returns a response immediately.
  • handles retries and can send invocation records to a destination for successful and failed events.
  • Invocation includes S3, SNS, and CloudWatch Events

Lambda Supported Event Sources

Multiple AWS services can be configured as event sources for AWS Lambda

Service – Method of invocation

  • Amazon Alexa – Event-driven; synchronous invocation
  • Amazon MSK (Managed Streaming for Apache Kafka) – Lambda polling
  • Self-managed Apache Kafka – Lambda polling
  • Amazon API Gateway – Event-driven; synchronous invocation
  • AWS CloudFormation – Event-driven; asynchronous invocation
  • Amazon CloudFront (Lambda@Edge) – Event-driven; synchronous invocation
  • Amazon EventBridge (CloudWatch Events) – Event-driven; asynchronous invocation
  • Amazon CloudWatch Logs – Event-driven; asynchronous invocation
  • AWS CodeCommit – Event-driven; asynchronous invocation
  • AWS CodePipeline – Event-driven; asynchronous invocation
  • Amazon Cognito – Event-driven; synchronous invocation
  • AWS Config – Event-driven; asynchronous invocation
  • Amazon Connect – Event-driven; synchronous invocation
  • Amazon DynamoDB – Lambda polling
  • Amazon Elastic File System – Special integration
  • Elastic Load Balancing (Application Load Balancer) – Event-driven; synchronous invocation
  • AWS IoT – Event-driven; asynchronous invocation
  • AWS IoT Events – Event-driven; asynchronous invocation
  • Amazon Kinesis – Lambda polling
  • Amazon Kinesis Data Firehose – Event-driven; synchronous invocation
  • Amazon Lex – Event-driven; synchronous invocation
  • Amazon MQ – Lambda polling
  • Amazon Simple Email Service – Event-driven; asynchronous invocation
  • Amazon Simple Notification Service – Event-driven; asynchronous invocation
  • Amazon Simple Queue Service – Lambda polling
  • Amazon S3 – Event-driven; asynchronous invocation
  • Amazon S3 Batch Operations – Event-driven; synchronous invocation
  • AWS Secrets Manager – Event-driven; synchronous invocation
  • AWS X-Ray – Special integration

Amazon S3

  • S3 bucket events, such as object-created or object-deleted events, can be processed using Lambda functions; e.g., a Lambda function can be invoked when a user uploads a photo to a bucket, to read the image and create a thumbnail.
  • The S3 bucket notification configuration feature can be used for the event source mapping, to identify the S3 bucket events and the Lambda function to invoke.
  • Error handling for an event source depends on how Lambda is invoked
  • S3 invokes your Lambda function asynchronously.
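
A handler sketch that walks the documented S3 notification event structure (the thumbnail step is left as a comment):

    import urllib.parse

    def lambda_handler(event, context):
        # An S3 notification can batch multiple records per invocation.
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
            print(f"object event: s3://{bucket}/{key}")
            # e.g. download the image and write a thumbnail back to S3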

DynamoDB

  • Lambda functions can be used as triggers for the DynamoDB table to take custom actions in response to updates made to the DynamoDB table.
  • Trigger can be created by
    • Enabling DynamoDB Streams for the table.
    • Lambda polls the stream and processes any updates published to the stream
  • DynamoDB is a stream-based event source and with stream-based service, the event source mapping is created in Lambda, identifying the stream to poll and which Lambda function to invoke.
  • Error handling for an event source depends on how Lambda is invoked

Kinesis Streams

  • AWS Lambda can be configured to automatically poll the Kinesis stream periodically (once per second) for new records.
  • Lambda can process any new records such as social media feeds, IT logs, website click streams, financial transactions, and location-tracking events
  • Kinesis Streams is a stream-based event source and with stream-based service, the event source mapping is created in Lambda, identifying the stream to poll and which Lambda function to invoke.
  • Error handling for an event source depends on how Lambda is invoked

Simple Notification Service – SNS

  • SNS notifications can be processed using Lambda
  • When a message is published to an SNS topic, the service can invoke the Lambda function by passing the message payload as a parameter; the function can then process the event
  • Lambda function can be triggered in response to CloudWatch alarms and other AWS services that use SNS.
  • SNS via topic subscription configuration feature can be used for the event source mapping, to identify the SNS topic and the Lambda function to invoke
  • Error handling for an event source depends on how Lambda is invoked
  • SNS invokes your Lambda function asynchronously.

Simple Email Service – SES

  • SES can be used to receive messages and can be configured to invoke a Lambda function when messages arrive, by passing the incoming email event as a parameter
  • SES using the rule configuration feature can be used for the event source mapping
  • Error handling for an event source depends on how Lambda is invoked
  • SES invokes your Lambda function asynchronously.

Amazon Cognito

  • The Cognito Events feature enables a Lambda function to run in response to events in Cognito; e.g., a Lambda function can be invoked for the Sync Trigger event, which is published each time a dataset is synchronized.
  • Cognito event subscription configuration feature can be used for the event source mapping
  • Error handling for an event source depends on how Lambda is invoked
  • Cognito is configured to invoke a Lambda function synchronously

CloudFormation

  • Lambda function can be specified as a custom resource to execute any custom commands as a part of deploying CloudFormation stacks and can be invoked whenever the stacks are created, updated, or deleted.
  • CloudFormation using stack definition can be used for the event source mapping
  • Error handling for an event source depends on how Lambda is invoked
  • CloudFormation invokes the Lambda function asynchronously

CloudWatch Logs

  • Lambda functions can be used to perform custom analysis on CloudWatch Logs using CloudWatch Logs subscriptions.
  • CloudWatch Logs subscriptions provide access to a real-time feed of log events from CloudWatch Logs and deliver it to the AWS Lambda function for custom processing, analysis, or loading to other systems.
  • CloudWatch Logs using the log subscription configuration can be used for the event source mapping
  • Error handling for an event source depends on how Lambda is invoked
  • CloudWatch Logs invokes the Lambda function asynchronously

CloudWatch Events

  • CloudWatch Events help respond to state changes in the AWS resources. When the resources change state, they automatically send events into an event stream.
  • Rules that match selected events in the stream can be created to route them to the Lambda function to take action; e.g., the Lambda function can be invoked to log the state of an EC2 instance or Auto Scaling group.
  • CloudWatch Events by using a rule target definition can be used for the event source mapping
  • Error handling for an event source depends on how Lambda is invoked
  • CloudWatch Events invokes the Lambda function asynchronously

CodeCommit

  • A trigger can be created for a CodeCommit repository so that events in the repository invoke a Lambda function; e.g., a Lambda function can be invoked when a branch or tag is created or when a push is made to an existing branch.
  • CodeCommit by using a repository trigger can be used for the event source mapping
  • Error handling for an event source depends on how Lambda is invoked
  • CodeCommit invokes the Lambda function asynchronously

Scheduled Events (powered by CloudWatch Events)

  • AWS Lambda can be invoked regularly on a scheduled basis using the schedule event capability in CloudWatch Events.
  • CloudWatch Events by using a rule target definition can be used for the event source mapping
  • Error handling for an event source depends on how Lambda is invoked
  • CloudWatch Events invokes the Lambda function asynchronously

AWS Config

  • Lambda functions can be used to evaluate whether the AWS resource configurations comply with custom Config rules.
  • As resources are created, deleted, or changed, AWS Config records these changes and sends the information to the Lambda functions, which can then evaluate the changes and report results to AWS Config. AWS Config can be used to assess overall resource compliance
  • AWS Config by using a rule target definition can be used for the event source mapping
  • Error handling for an event source depends on how Lambda is invoked
  • AWS Config invokes the Lambda function asynchronously

Amazon API Gateway

  • Lambda function can be invoked over HTTPS by defining a custom REST API and endpoint using Amazon API Gateway.
  • Individual API operations, such as GET and PUT, can be mapped to specific Lambda functions.
  • When an HTTPS request to the API endpoint is received, the API Gateway service invokes the corresponding Lambda function.
  • Error handling for an event source depends on how Lambda is invoked.
  • API Gateway is configured to invoke a Lambda function synchronously.

Other Event Sources: Invoking a Lambda Function On Demand

  • Lambda functions can be invoked on demand, without the need to preconfigure any event source mapping.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might become outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.

References

AWS_Lambda_Developer_Guide

AWS Elastic Beanstalk Deployment Strategies

Elastic Beanstalk Deployment Methods

AWS Elastic Beanstalk Deployment Strategies

  • Elastic Beanstalk supports environments such as
    • Single Instance environments, with a single instance and Auto Scaling maintaining a minimum and maximum of 1 instance
    • Load Balanced environments, with load balancing and Auto Scaling
  • Elastic Beanstalk allows multiple deployment options or strategies that can be selected depending upon the requirements for deployment time, downtime, DNS change, and rollback process.

Elastic Beanstalk Deployment Methods


All at Once Deployments

  • Elastic Beanstalk environment uses all-at-once deployments if it is created with a different client (API, SDK, or AWS CLI).
  • All at Once deployments perform an in-place deployment on all instances at the same time.
  • All at Once deployments are simple and fast, however, it would lead to downtime and the rollback would take time in case of any issues.

Rolling Deployments

  • Elastic Beanstalk environment uses rolling deployments if it is created with console or EB CLI.
  • Elastic Beanstalk splits the environment’s EC2 instances into batches and deploys the new version of the application on the existing instances one batch at a time, leaving the rest of the instances in the environment running the old version.
  • During a rolling deployment, part of the instances serves requests with the old version of the application, while instances in completed batches serve other requests with the new version.
  • Elastic Beanstalk performs the rolling deployments as
    • When processing a batch, detaches all instances in the batch from the load balancer, deploys the new application version, and then reattaches the instances.
    • To avoid any connection issues when the instances are detached, connection draining can be enabled on the load balancer
    • After reattaching the instances in a batch to the load balancer, ELB waits until they pass a minimum number of health checks (the Healthy check count threshold value), and then starts routing traffic to them.
    • Elastic Beanstalk waits until all instances in a batch are healthy before moving on to the next batch.
    • When all instances in the batch pass enough health checks to be considered healthy by ELB, the batch is complete.
    • If a batch of instances does not become healthy within the command timeout, the deployment fails.
    • If a deployment fails after one or more batches are completed successfully, the completed batches run the new version of the application while any pending batches continue to run the old version.
    • If the instances are terminated from the failed deployment, Elastic Beanstalk replaces them with instances running the application version from the most recent successful deployment.

Rolling with Additional Batch Deployments

  • Rolling with Additional Batch deployments is helpful when you need to maintain full capacity during deployments.
  • This deployment is similar to Rolling deployments, except they do not do an in-place deployment but a disposable one, launching a new batch of instances prior to taking any instances out of service
  • When the deployment completes, Elastic Beanstalk terminates the additional batch of instances.
  • Rolling with additional batch deployment does not impact the capacity and ensures full capacity during the deployment process.

Immutable Deployments

  • The All at Once and Rolling deployment methods update existing instances.
  • If you need to ensure the application source is always deployed to new instances, instead of updating existing instances, the environment can be configured to use immutable updates for deployments.
  • Immutable updates are performed by launching a second Auto Scaling group in the environment, where the new version serves traffic alongside the old version until the new instances pass health checks.
  • Immutable deployments can prevent issues caused by partially completed rolling deployments. If the new instances don’t pass health checks, Elastic Beanstalk terminates them, leaving the original instances untouched.
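
The deployment method is an environment option setting; a boto3 sketch switching an environment to immutable deployments (the environment name is a placeholder):

    import boto3

    eb = boto3.client("elasticbeanstalk")

    # DeploymentPolicy accepts AllAtOnce, Rolling, RollingWithAdditionalBatch,
    # or Immutable in the aws:elasticbeanstalk:command namespace.
    eb.update_environment(
        EnvironmentName="my-env",
        OptionSettings=[
            {
                "Namespace": "aws:elasticbeanstalk:command",
                "OptionName": "DeploymentPolicy",
                "Value": "Immutable",
            },
        ],
    )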

Blue Green Deployments

  • Elastic Beanstalk performs an in-place update when application versions are updated, which may result in the application becoming unavailable to users for a short period of time.
  • Blue Green approach is suitable for deployments that depend on incompatible resource configuration changes or a new version that can’t run alongside the old version.
  • Elastic Beanstalk enables the Blue Green deployment through the Swap Environment URLs feature.
  • Blue Green deployment provides an almost zero downtime solution, where a new version is deployed to a separate environment, and then CNAMEs of the two environments are swapped to redirect traffic to the new version.
  • Blue/green deployments require that the environment runs independently of the production database, i.e., the database should not be maintained by Elastic Beanstalk, if the application uses one. If the environment has an RDS DB instance attached to it, the data will not transfer over to the second environment and will be lost if the original environment is terminated.
  • Blue Green deployment entails a DNS change; hence, do not terminate the old environment until the DNS changes have been propagated and the old DNS records expire.
  • DNS servers do not necessarily clear old records from their cache based on the time to live (TTL) you set on the DNS records.
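
The CNAME swap at the heart of blue/green is a single API call; a boto3 sketch with placeholder environment names:

    import boto3

    eb = boto3.client("elasticbeanstalk")

    # Swap the CNAMEs of the old ("blue") and new ("green") environments so
    # traffic shifts to the new version; swap back to roll back.
    eb.swap_environment_cnames(
        SourceEnvironmentName="my-app-blue",
        DestinationEnvironmentName="my-app-green",
    )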

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might become outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. When thinking of AWS Elastic Beanstalk, the ‘Swap Environment URLs’ feature most directly aids in what? [CDOP]
    1. Immutable Rolling Deployments
    2. Mutable Rolling Deployments
    3. Canary Deployments
    4. Blue-Green Deployments (Simply upload the new version of your application and let your deployment service (AWS Elastic Beanstalk, AWS CloudFormation, or AWS OpsWorks) deploy a new version (green). To cut over to the new version, you simply replace the ELB URLs in your DNS records. Elastic Beanstalk has a Swap Environment URLs feature to facilitate a simpler cutover process.)
  2. You need to deploy a new version of your application. You’d prefer to use all new instances if possible, but you cannot have any downtime. You also don’t want to swap any environment URLs. You’re running t2.large instances and you normally need 15 instances to meet capacity. Which deployment method should you use? Choose the correct answer:
    1. Rolling Updates
    2. Blue/Green
    3. Immutable
    4. All at Once
  3. Your team is responsible for an AWS Elastic Beanstalk application. The business requires that you move to a continuous deployment model, releasing updates to the application multiple times per day with zero downtime. What should you do to enable this and still be able to roll back almost immediately in an emergency to the previous version? [CDOP]
    1. Enable rolling updates in the Elastic Beanstalk environment, setting an appropriate pause time for application startup.
    2. Create a second Elastic Beanstalk environment running the new application version, and swap the environment CNAMEs.
    3. Develop the application to poll for a new application version in your code repository; download and install to each running Elastic Beanstalk instance.
    4. Create a second Elastic Beanstalk environment with the new application version, and configure the old environment to redirect clients, using the HTTP 301 response code, to the new environment.

References

AWS Elastic Beanstalk Deployment Options

AWS Elastic Beanstalk

Elastic Beanstalk Environment Tiers

AWS Elastic Beanstalk

  • AWS Elastic Beanstalk helps to quickly deploy and manage applications in the AWS Cloud without having to worry about the infrastructure that runs those applications.
  • reduces management complexity without restricting choice or control.
  • enables automated infrastructure management and code deployment for applications simply by uploading the code, and includes
    • Application platform management
    • Capacity provisioning
    • Load Balancing
    • Auto Scaling
    • Code deployment
    • Health Monitoring
  • Elastic Beanstalk automatically launches an environment once an application is uploaded, and creates and configures the AWS resources needed to run the code. After the environment is launched, it can be managed and used to deploy new application versions.
  • AWS resources launched by Elastic Beanstalk are fully accessible, i.e., EC2 instances can be SSHed into.
  • provides developers and systems administrators with an easy, fast way to deploy and manage the applications without having to worry about AWS infrastructure.
  • CloudFormation, using templates, is a better option than Elastic Beanstalk if the internal AWS resources to be used are known and fine-grained control is needed.

Elastic Beanstalk Components

  • Application
    • An Application is a logical collection of components, including environments, versions, and environment configurations.
  • Application Version
    • An application version refers to a specific, labeled iteration of deployable code for a web application.
    • Applications can have many versions and each application version is unique and points to an S3 object.
    • Multiple versions of an application can be deployed to test differences between versions, and the application can be rolled back to any version in case of issues.
  • Environment
    • An environment is a version that is deployed onto AWS resources.
    • An environment runs a single application version at a time, but the same application version can be deployed across multiple environments.
    • When an environment is created, EB provisions the resources needed to run the specified application version (a short deployment sketch follows this list).
  • Environment Configuration
    • An environment configuration identifies a collection of parameters and settings that define how an environment and its associated resources behave
    • When an environment’s configuration settings are updated, EB automatically applies the changes to existing resources or deletes and deploys new resources, depending upon the change
  • Configuration Template
    • A configuration template is a starting point for creating unique environment configurations
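
As an illustration of how these components fit together, the sketch below uses boto3 (the AWS SDK for Python) to register a new application version from a source bundle in S3 and roll it out to an existing environment. The application, environment, bucket, and key names are hypothetical placeholders.

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Register a new application version pointing to a source bundle in S3
# (application, bucket, and key names are placeholders).
eb.create_application_version(
    ApplicationName="my-app",
    VersionLabel="v42",
    SourceBundle={"S3Bucket": "my-app-bundles", "S3Key": "my-app-v42.zip"},
)

# Deploy the version to an existing environment; Elastic Beanstalk applies
# the environment's configured deployment policy (all at once, rolling, etc.).
eb.update_environment(
    EnvironmentName="my-app-prod",
    VersionLabel="v42",
)
```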

Elastic Beanstalk Architecture

Elastic Beanstalk Environment Tiers

  • Elastic Beanstalk environment requires an environment tier, platform, and environment type.
  • Environment tier determines whether EB provisions resources to support
    • Web tier – a web application that handles HTTP(S) requests
    • Worker tier – an application that handles background-processing tasks.
  • One environment cannot support two different environment tiers because each requires its own set of resources; a worker environment tier and a web server environment tier each require an Auto Scaling group, but Elastic Beanstalk supports only one Auto Scaling group per environment.

Web Environment Tier

  • An environment tier whose web application processes web requests is known as a web server tier.
  • AWS resources created for a web environment tier include an Elastic Load Balancer, an Auto Scaling group, and one or more EC2 instances.
  • Every environment has a CNAME (URL) that points to the ELB; the CNAME is aliased in Route 53 to the ELB URL.
  • Each EC2 server instance that runs the application uses a container type, which defines the infrastructure topology and software stack.
  • A software component called the host manager (HM) runs on each EC2 server instance and is responsible for
    • Deploying the application
    • Aggregating events and metrics for retrieval via the console, the API, or the command line
    • Generating instance-level events
    • Monitoring the application log files for critical errors
    • Monitoring the application server
    • Patching instance components
    • Rotating your application’s log files and publishing them to S3

Worker Environment Tier

  • An environment tier whose application runs background jobs is known as a worker tier.
  • AWS resources created for a worker environment tier include an Auto Scaling group, one or more EC2 instances, and an IAM role.
  • For the worker environment tier, Elastic Beanstalk also creates and provisions an SQS queue, if one doesn’t exist.
  • When a worker environment tier is launched, EB installs the necessary support files for the programming language of choice and a daemon on each EC2 instance in the Auto Scaling group; each daemon reads from the same SQS queue.
  • The daemon is responsible for pulling messages from the SQS queue and then sending the data, as an HTTP POST request, to the web application running in the worker environment that will process those messages (a minimal handler sketch follows this list).
  • Worker environments support SQS dead letter queues, which can be used to store messages that could not be successfully processed. A dead letter queue provides the ability to sideline, isolate, and analyze the unsuccessfully processed messages.
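
A minimal sketch of the worker side, assuming the default setup in which the SQS daemon POSTs each queue message to the application over local HTTP (the X-Aws-Sqsd-Msgid header and the success semantics follow the documented daemon behavior; the port and the handler logic are hypothetical). Returning a 200 tells the daemon the message was processed and can be deleted from the queue; any other status causes a retry and, eventually, dead-lettering.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class WorkerHandler(BaseHTTPRequestHandler):
    """Receives SQS messages POSTed by the Elastic Beanstalk SQS daemon."""

    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        message_id = self.headers.get("X-Aws-Sqsd-Msgid", "unknown")

        # Process the job here; a failure should map to a non-200 response
        # so the daemon retries (or eventually dead-letters) the message.
        print(f"processing message {message_id}: {body!r}")

        self.send_response(200)  # 200 => success, daemon deletes the message
        self.end_headers()

if __name__ == "__main__":
    # The daemon POSTs to a local HTTP path ("/" by default); the port here
    # is a placeholder for whatever the platform proxies to.
    HTTPServer(("0.0.0.0", 8080), WorkerHandler).serve_forever()
```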

Elastic Beanstalk with Other AWS Services

  • Elastic Beanstalk supports VPC and launches AWS resources, such as instances, into the VPC
  • Elastic Beanstalk supports IAM and helps you securely control access to your AWS resources.
  • CloudFront can be used to distribute the content in S3 after an Elastic Beanstalk environment is created and deployed.
  • CloudTrail
    • Elastic Beanstalk is integrated with CloudTrail, a service that captures all of the Elastic Beanstalk API calls and delivers the log files to a specified S3 bucket.
    • CloudTrail captures API calls from the Elastic Beanstalk console or from your code to the Elastic Beanstalk APIs and helps to determine the request made to Elastic Beanstalk, the source IP address from which the request was made, who made the request, when it was made, etc.
  • RDS
    • EB provides support for running RDS instances in the environment which is ideal for development and testing but not for production.
    • For a production environment, it is not recommended because it ties the lifecycle of the database instance to the lifecycle of the application’s environment. So if the environment is deleted, the RDS instance is deleted as well
    • It is recommended to launch a database instance outside of the environment and configure the application to connect to it outside of the functionality provided by Elastic Beanstalk (see the sketch after this list).
    • Using a database instance external to the environment requires additional security group and connection string configuration, but it also lets the application connect to the database from multiple environments, use database types not supported with integrated databases, perform blue/green deployments, and tear down the environment without affecting the database instance.
  • S3
    • EB creates an S3 bucket named elasticbeanstalk-region-account-id for each region in which environments are created.
    • EB uses the bucket to store application versions, logs, and other supporting files.
    • It applies a bucket policy to buckets it creates to allow environments to write to the bucket and prevent accidental deletion
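
A minimal sketch of the recommended external-database pattern noted above, assuming the RDS endpoint is passed to the application through Elastic Beanstalk environment properties (the property names such as DB_HOST are hypothetical; environment properties surface to the application as OS environment variables).

```python
import os

# Environment properties configured on the Elastic Beanstalk environment
# (names are placeholders); they appear as environment variables at runtime.
db_host = os.environ["DB_HOST"]       # e.g. the RDS instance's DNS endpoint
db_name = os.environ.get("DB_NAME", "appdb")
db_user = os.environ["DB_USER"]
db_pass = os.environ["DB_PASSWORD"]   # better: fetch from a secrets store

# Build a driver-agnostic connection string; because the database lives
# outside the environment, it survives environment rebuilds and supports
# blue/green deployments.
dsn = f"postgresql://{db_user}:{db_pass}@{db_host}:5432/{db_name}"
print("connecting with", dsn)
```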

Elastic Beanstalk Deployment Strategies

Elastic Beanstalk Deployment Methods

  • All at Once
    • performs an in-place deployment on all instances at the same time.
    • is performed on existing instances and leads to downtime; rolling back requires redeploying the previous version, which takes additional time.
  • Rolling
    • splits the environment's instances into batches and deploys the application's new version on the existing instances one batch at a time, leaving the rest of the environment instances running the old version.
    • waits until all instances in a batch are healthy before moving on to the next batch.
    • reduces downtime as all instances are not updated at once, and the deployment can be rolled back if health checks fail; capacity is reduced by one batch while it is being updated.
  • Rolling with an Additional batch
    • similar to Rolling; however, it starts the deployment of the application's new version on an additional new batch of instances.
    • does not impact the capacity and ensures full capacity during the deployment process.
  • Immutable
    • ensures the application source is always deployed to new instances.
    • prevents issues caused by partially completed rolling deployments.
    • provides minimal downtime and quick rollback.
  • Blue Green
    • suitable for deployments that depend on incompatible resource configuration changes or a new version that can’t run alongside the old version.
    • implemented using the Swap Environment URLs feature, which entails a DNS switchover (see the sketch after this list).
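
The deployment policy is an ordinary environment configuration option, and the blue/green cutover has a dedicated API. A boto3 sketch with placeholder environment names (the aws:elasticbeanstalk:command namespace and SwapEnvironmentCNAMEs are the documented mechanisms):

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Switch an environment to Immutable deployments (other valid values include
# AllAtOnce, Rolling, and RollingWithAdditionalBatch).
eb.update_environment(
    EnvironmentName="my-app-prod",
    OptionSettings=[{
        "Namespace": "aws:elasticbeanstalk:command",
        "OptionName": "DeploymentPolicy",
        "Value": "Immutable",
    }],
)

# Blue/green cutover: swap the CNAMEs of the old (blue) and new (green)
# environments once the green environment is verified healthy.
eb.swap_environment_cnames(
    SourceEnvironmentName="my-app-blue",
    DestinationEnvironmentName="my-app-green",
)
```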

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which may differ from yours).
  • AWS services are updated every day, and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion, and correction.
  1. An organization is planning to use AWS for their production roll out. The organization wants to implement automation for deployment such that it will automatically create a LAMP stack, download the latest PHP installable from S3 and setup the ELB. Which of the below mentioned AWS services meets the requirement for making an orderly deployment of the software?
    1. AWS Elastic Beanstalk
    2. AWS CloudFront
    3. AWS CloudFormation
    4. AWS DevOps
  2. What does Amazon Elastic Beanstalk provide?
    1. A scalable storage appliance on top of Amazon Web Services.
    2. An application container on top of Amazon Web Services
    3. A service by this name doesn’t exist.
    4. A scalable cluster of EC2 instances
  3. You want to have multiple versions of your application running at the same time, with all versions launched via AWS Elastic Beanstalk. Is this possible?
    1. However if you have 2 AWS accounts this can be done
    2. AWS Elastic Beanstalk is not designed to support multiple running environments
    3. AWS Elastic Beanstalk is designed to support a number of multiple running environments
    4. However AWS Elastic Beanstalk is designed to support only 2 multiple running environments
  4. A .NET application that you manage is running in Elastic Beanstalk. Your developers tell you they will need access to application log files to debug issues that arise. The infrastructure will scale up and down. How can you ensure the developers will be able to access only the log files?
    1. Access the log files directly from Elastic Beanstalk
    2. Enable log file rotation to S3 within the Elastic Beanstalk configuration
    3. Ask your developers to enable log file rotation in the applications web.config file
    4. Connect to each Instance launched by Elastic Beanstalk and create a Windows Scheduled task to rotate the log files to S3
  5. Your team has a Tomcat-based Java application you need to deploy into development, test and production environments. After some research, you opt to use Elastic Beanstalk due to its tight integration with your developer tools and RDS due to its ease of management. Your QA team lead points out that you need to roll a sanitized set of production data into your environment on a nightly basis. Similarly, other software teams in your org want access to that same restored data via their EC2 instances in your VPC. The optimal setup for persistence and security that meets the above requirements would be the following. [PROFESSIONAL]
    1. Create your RDS instance as part of your Elastic Beanstalk definition and alter its security group to allow access to it from hosts in your application subnets. (Not optimal for persistence as the RDS is associated with the Elastic Beanstalk lifecycle and would not live independently)
    2. Create your RDS instance separately and add its IP address to your application’s DB connection strings in your code. Alter its security group to allow access to it from hosts within your VPC’s IP address block. (RDS is connected using DNS endpoint only)
    3. Create your RDS instance separately and pass its DNS name to your app’s DB connection string as an environment variable. Create a security group for client machines and add it as a valid source for DB traffic to the security group of the RDS instance itself. (Security group allows instances to access the RDS with new instances launched without any changes)
    4. Create your RDS instance separately and pass its DNS name to your DB connection string as an environment variable. Alter its security group to allow access to it from hosts in your application subnets. (Not optimal for security adding individual hosts)
  6. You must architect the migration of a web application to AWS. The application consists of Linux web servers running a custom web server. You are required to save the logs generated from the application to a durable location. What options could you select to migrate the application to AWS? (Choose 2) [PROFESSIONAL]
    1. Create an AWS Elastic Beanstalk application using the custom web server platform. Specify the web server executable and the application project and source files. Enable log file rotation to Amazon Simple Storage Service (S3). (EB does not work with Custom server executable)
    2. Create Dockerfile for the application. Create an AWS OpsWorks stack consisting of a custom layer. Create custom recipes to install Docker and to deploy your Docker container using the Dockerfile. Create custom recipes to install and configure the application to publish the logs to Amazon CloudWatch Logs (although this is one of the options, the last sentence mentions configuring the application to publish the logs, which would need changes to the application as it would need to use the SDK or CLI)
    3. Create Dockerfile for the application. Create an AWS OpsWorks stack consisting of a Docker layer that uses the Dockerfile. Create custom recipes to install and configure Amazon Kinesis to publish the logs into Amazon CloudWatch. (Kinesis not needed)
    4. Create a Dockerfile for the application. Create an AWS Elastic Beanstalk application using the Docker platform and the Dockerfile. Enable logging the Docker configuration to automatically publish the application logs. Enable log file rotation to Amazon S3. (Use Docker configuration with awslogs and EB with Docker)
    5. Use VM import/Export to import a virtual machine image of the server into AWS as an AMI. Create an Amazon Elastic Compute Cloud (EC2) instance from AMI, and install and configure the Amazon CloudWatch Logs agent. Create a new AMI from the instance. Create an AWS Elastic Beanstalk application using the AMI platform and the new AMI. (Use VM Import/Export to create AMI and CloudWatch logs agent to log)
  7. Which of the following groups is AWS Elastic Beanstalk best suited for?
    1. Those who want to deploy and manage their applications within minutes in the AWS cloud.
    2. Those who want to privately store and manage Git repositories in the AWS cloud.
    3. Those who want to automate the deployment of applications to instances and to update the applications as required.
    4. Those who want to model, visualize, and automate the steps required to release software.
  8. When thinking of AWS Elastic Beanstalk’s model, which is true?
    1. Applications have many deployments, deployments have many environments.
    2. Environments have many applications, applications have many deployments.
    3. Applications have many environments, environments have many deployments. (Applications group logical services. Environments belong to Applications, and typically represent different deployment levels (dev, stage, prod, and so forth). Deployments belong to environments, and are pushes of bundles of code for the environments to run.)
    4. Deployments have many environments, environments have many applications.
  9. If you’re trying to configure an AWS Elastic Beanstalk worker tier for easy debugging if there are problems finishing queue jobs, what should you configure?
    1. Configure Rolling Deployments.
    2. Configure Enhanced Health Reporting
    3. Configure Blue-Green Deployments.
    4. Configure a Dead Letter Queue (Elastic Beanstalk worker environments support SQS dead letter queues, where the worker can send messages that for some reason could not be successfully processed. A dead letter queue provides the ability to sideline, isolate, and analyze the unsuccessfully processed messages. Refer link)
  10. When thinking of AWS Elastic Beanstalk, which statement is true?
    1. Worker tiers pull jobs from SNS.
    2. Worker tiers pull jobs from HTTP.
    3. Worker tiers pull jobs from JSON.
    4. Worker tiers pull jobs from SQS. (Elastic Beanstalk installs a daemon on each EC2 instance in the Auto Scaling group to process SQS messages in the worker environment. Refer link)
  11. You are building a Ruby on Rails application for internal, non-production use, which uses MySQL as a database. You want developers without very much AWS experience to be able to deploy new code with a single command line push. You also want to set this up as simply as possible. Which tool is ideal for this setup?
    1. AWS CloudFormation
    2. AWS OpsWorks
    3. AWS ELB + EC2 with CLI Push
    4. AWS Elastic Beanstalk
  12. What AWS products and features can be deployed by Elastic Beanstalk? Choose 3 answers.
    1. Auto scaling groups
    2. Route 53 hosted zones
    3. Elastic Load Balancers
    4. RDS Instances
    5. Elastic IP addresses
    6. SQS Queues
  13. AWS Elastic Beanstalk stores your application files and optionally server log files in ____.
    1. Amazon Storage Gateway
    2. Amazon Glacier
    3. Amazon EC2
    4. Amazon S3
  14. When you use the AWS Elastic Beanstalk console to deploy a new application ____.
    1. Need to upload each file separately
    2. Need to create each file and path
    3. Need to upload a source bundle
    4. Need to create each file

References

AWS_Elastic_Beanstalk_Developer_Guide

AWS EFS vs EBS Multi-Attach

AWS EFS vs EBS Multi-Attach

EFS vs EBS Multi-Attach features

  • Elastic File Store – EFS is a file storage service for use with Amazon compute (EC2, containers, serverless) and on-premises servers. EFS provides a file system interface, file system access semantics (such as strong consistency and file locking), and concurrently accessible storage for up to thousands of EC2 instances.
  • Elastic Block Store – EBS is a block-level storage service for use with EC2. EBS can deliver performance for workloads that require the lowest-latency access to data from a single EC2 instance.
  • Service type
    • Elastic File Store is fully managed by AWS
    • EBS needs to be managed by the user.
  • Accessibility
    • EFS can be accessed concurrently from all AZs in the Region.
    • EBS Multi-Attach can be accessed concurrently from instances within the same AZ.
  • Data Scalability
    • EFS provides unlimited data storage
    • EBS Multi-Attach has limits on the storage it can provide.
  • Instance Scalability
    • EFS can be attached to tens, hundreds, or even thousands of compute instances.
    • EBS Multi-Attach enabled volumes can be attached to up to 16 Linux instances built on the Nitro System (see the sketch after this list).
  • Supported Instances
    • EFS is compatible with all Linux-based AMIs for EC2; it is a POSIX file system (~Linux) with a standard file API.
    • Multi-Attach enabled volumes can be attached to up to 16 Linux instances built on the Nitro System that are in the same AZ. Multi-Attach enabled volume can be attached to Windows instances, but the OS does not recognize the data on the volume that is shared between the instances, which can result in data inconsistency.
  • Pricing
    • EFS is priced as per the pay-as-you-use model
    • EBS is priced as per the provisioned capacity
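
For reference, a Multi-Attach volume is created by setting a single flag on a Provisioned IOPS volume and then attaching it to multiple Nitro-based instances in the same AZ. A boto3 sketch with placeholder IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# Multi-Attach is only supported on Provisioned IOPS (io1/io2) volumes.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,                 # GiB
    VolumeType="io2",
    Iops=3000,
    MultiAttachEnabled=True,
)

# Attach the same volume to multiple Nitro-based instances in the same AZ
# (instance IDs are placeholders). A cluster-aware file system is still
# required to coordinate concurrent writes safely.
for instance_id in ["i-0123456789abcdef0", "i-0fedcba9876543210"]:
    ec2.attach_volume(
        VolumeId=volume["VolumeId"],
        InstanceId=instance_id,
        Device="/dev/sdf",
    )
```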

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which may differ from yours).
  • AWS services are updated every day, and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion, and correction.
  1. A company wants to organize the contents of multiple websites in managed file storage. The company must be able to scale the storage based on demand without needing to provision storage. Multiple servers across multiple Availability Zones within a region should be able to access this storage concurrently. Which services should the Solutions Architect recommend?
    1. Amazon S3
    2. Amazon EBS Multi-Attach
    3. Amazon EFS
    4. AWS Storage Gateway – Volume gateway

References

Amazon_EBS & Amazon_EFS

AWS S3 Security

AWS S3 Security

  • AWS S3 Security is a shared responsibility between AWS and the Customer
  • S3 is a fully managed service that is protected by the AWS global network security procedures
  • AWS handles basic security tasks like guest operating system (OS) and database patching, firewall configuration, and disaster recovery.
  • Security and compliance of S3 are assessed by third-party auditors as part of multiple AWS compliance programs including SOC, PCI DSS, HIPAA, etc.
  • S3 provides several other features to handle security, which are the customers’ responsibility.
  • S3 Encryption supports both data at rest and data in transit encryption.
    • Data in transit encryption can be provided by enabling communication via SSL or using client-side encryption
    • Data at rest encryption can be provided using Server Side or Client Side encryption
  • S3 permissions can be handled using IAM policies, bucket policies, and Access Control Lists (ACLs).
  • S3 Object Lock helps to store objects using a WORM model and can help prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely.
  • S3 Access Points simplify data access for any AWS service or customer application that stores data in S3.
  • S3 Versioning with MFA Delete can be enabled on a bucket to ensure that data in the bucket cannot be accidentally overwritten or deleted.
  • S3 Block Public Access provides controls across an entire AWS Account or at the individual S3 bucket level to ensure that objects never have public access, now and in the future.
  • S3 Access Analyzer monitors the access policies, ensuring that the policies provide only the intended access to your S3 resources.

S3 Encryption

  • S3 allows the protection of data in transit by enabling communication via SSL or using client-side encryption
  • S3 provides data-at-rest encryption using the following options (see the sketch after this list)
    • Server-Side Encryption: S3 handles the encryption
      • SSE-S3
        • S3 handles the encryption and decryption using S3 managed keys
      • SSE-KMS
        • S3 handles the encryption and decryption using keys managed through AWS KMS.
      • SSE-C
        • S3 handles the encryption and decryption using keys managed and provided by the Customer.
    • Client Side Encryption: Customer handles the encryption
      • CSE-CMK
        • Customer handles the encryption and decryption using keys managed through AWS KMS.
      • Client-side Master Key
        • Customer handles the encryption and decryption using keys managed by them.
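
A short boto3 sketch of the server-side options, with placeholder bucket, key, and KMS alias names: per-object SSE is requested via the ServerSideEncryption parameter, and a bucket default can be set so every new object is encrypted even when the request doesn't ask for it.

```python
import boto3

s3 = boto3.client("s3")

# SSE-S3: S3 encrypts the object with S3-managed keys.
s3.put_object(Bucket="my-bucket", Key="doc.txt", Body=b"data",
              ServerSideEncryption="AES256")

# SSE-KMS: S3 encrypts with a KMS key (the key alias is a placeholder).
s3.put_object(Bucket="my-bucket", Key="doc-kms.txt", Body=b"data",
              ServerSideEncryption="aws:kms",
              SSEKMSKeyId="alias/my-app-key")

# Bucket default encryption: applied to new objects that don't specify SSE.
s3.put_bucket_encryption(
    Bucket="my-bucket",
    ServerSideEncryptionConfiguration={"Rules": [{
        "ApplyServerSideEncryptionByDefault": {
            "SSEAlgorithm": "aws:kms",
            "KMSMasterKeyID": "alias/my-app-key",
        }
    }]},
)
```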

S3 Permissions

Refer blog post @ S3 Permissions

S3 Object Lock

  • S3 Object Lock helps to store objects using a write-once-read-many (WORM) model.
  • can help prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely.
  • can help meet regulatory requirements that require WORM storage or add an extra layer of protection against object changes and deletion.
  • can be enabled only for new buckets and works only in versioned buckets.
  • provides two retention modes that apply different levels of protection to the objects
    • Governance mode
      • Users can’t overwrite or delete an object version or alter its lock settings unless they have special permissions.
      • Objects can be protected from being deleted by most users, but some users can be granted permission to alter the retention settings or delete the object if necessary.
      • Can be used to test retention-period settings before creating a compliance-mode retention period.
    • Compliance mode
      • A protected object version can’t be overwritten or deleted by any user, including the root user in the AWS account.
      • Object retention mode can’t be changed, and its retention period can’t be shortened.
      • Object versions can’t be overwritten or deleted for the duration of the retention period.

S3 Access Points

  • S3 access points simplify data access for any AWS service or customer application that stores data in S3.
  • Access points are named network endpoints that are attached to buckets and can be used to perform S3 object operations, such as GetObject and PutObject.
  • Each access point has distinct permissions and network controls that S3 applies for any request that is made through that access point.
  • Each access point enforces a customized access point policy that works in conjunction with the bucket policy, attached to the underlying bucket.
  • An access point can be configured to accept requests only from a VPC to restrict S3 data access to a private network (see the sketch after this list).
  • Custom block public access settings can be configured for each access point.
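
Access points are created through the S3 Control API. The sketch below creates a VPC-restricted access point; the account ID, access point name, bucket, and VPC ID are placeholders.

```python
import boto3

s3control = boto3.client("s3control")

# Create an access point attached to an existing bucket and restrict it so
# that requests are accepted only from the specified VPC.
s3control.create_access_point(
    AccountId="123456789012",
    Name="analytics-ap",
    Bucket="my-bucket",
    VpcConfiguration={"VpcId": "vpc-0123456789abcdef0"},
)
```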

S3 VPC Gateway Endpoint

  • A VPC endpoint enables connections between a VPC and supported services, without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.
  • Traffic between the VPC and S3 stays on the AWS network and is not exposed to the public internet.
  • A Gateway Endpoint is a gateway that is specified as a target for a route in the route table, used for traffic destined to either S3 or DynamoDB (see the sketch below).
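
A gateway endpoint for S3 is created per VPC and associated with route tables; a boto3 sketch with placeholder IDs (the service name follows the com.amazonaws.<region>.s3 convention):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a Gateway endpoint for S3 and add routes for it to the given
# route tables; traffic to S3 then stays on the AWS network.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```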

S3 Block Public Access

  • S3 Block Public Access provides controls across an entire AWS Account or at the individual S3 bucket level to ensure that objects never have public access, now and in the future.
  • S3 Block Public Access provides settings for access points, buckets, and accounts to help manage public access to S3 resources.
  • By default, new buckets, access points, and objects don’t allow public access. However, users can modify bucket policies, access point policies, or object permissions to allow public access.
  • S3 Block Public Access settings override these policies and permissions so that public access to these resources can be limited.
  • S3 Block Public Access allows account administrators and bucket owners to easily set up centralized controls to limit public access to their S3 resources that are enforced regardless of how the resources are created.
  • S3 doesn’t support block public access settings on a per-object basis.
  • S3 Block Public Access settings, when applied to an account, apply to all AWS Regions globally (see the sketch below).
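
Both the bucket-level and account-level controls take the same four-flag configuration; a boto3 sketch with a placeholder bucket name and account ID:

```python
import boto3

config = {
    "BlockPublicAcls": True,        # reject new public ACLs
    "IgnorePublicAcls": True,       # ignore existing public ACLs
    "BlockPublicPolicy": True,      # reject public bucket policies
    "RestrictPublicBuckets": True,  # restrict access to public buckets
}

# Bucket-level setting.
boto3.client("s3").put_public_access_block(
    Bucket="my-bucket", PublicAccessBlockConfiguration=config)

# Account-level setting (applies across all Regions).
boto3.client("s3control").put_public_access_block(
    AccountId="123456789012", PublicAccessBlockConfiguration=config)
```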

S3 Access Analyzer

  • S3 Access Analyzer monitors the access policies, ensuring that the policies provide only the intended access to your S3 resources.
  • S3 Access Analyzer evaluates the bucket access policies and enables you to discover and swiftly remediate buckets with potentially unintended access.

S3 Security Best Practices

S3 Preventative Security Best Practices

  • Ensure S3 buckets use the correct policies and are not publicly accessible
    • Use S3 block public access
    • Identify Bucket policies and ACLs that allow public access
    • Use AWS Trusted Advisor to inspect the S3 implementation.
  • Implement least privilege access
  • Use IAM roles for applications and AWS services that require S3 access
  • Enable Multi-factor authentication (MFA) Delete to help prevent accidental bucket deletions
  • Consider Data at Rest Encryption
  • Enforce Data in Transit Encryption
  • Consider S3 Object Lock to store objects using a “Write Once Read Many” (WORM) model.
  • Enable versioning to easily recover from both unintended user actions and application failures.
  • Consider S3 Cross-Region replication
  • Consider VPC endpoints for S3 access to provide private S3 connectivity and help prevent traffic from potentially traversing the open internet.

S3 Monitoring and Auditing Best Practices

  • Identify and Audit all S3 buckets to have visibility of all the S3 resources to assess their security posture and take action on potential areas of weakness.
  • Implement monitoring using AWS monitoring tools
  • Enable S3 server access logging, which provides detailed records of the requests that are made to a bucket, useful for security and access audits.
  • Use AWS CloudTrail, which provides a record of actions taken by a user, a role, or an AWS service in S3.
  • Enable AWS Config, which enables you to assess, audit, and evaluate the configurations of the AWS resources
  • Consider using Amazon Macie with S3 to automatically discover, classify, and protect sensitive data in AWS.
  • Monitor AWS security advisories to regularly check security advisories posted in Trusted Advisor for the AWS account.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which may differ from yours).
  • AWS services are updated every day, and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion, and correction.

References

AWS_S3_Security

AWS Backup

AWS Backup

  • AWS Backup is a fully-managed service that helps centralize and automate data protection across AWS services, in the cloud, and on premises.
  • helps configure backup policies and monitor activity for the AWS resources in one place.
  • helps automate and consolidate backup tasks previously performed service-by-service and removes the need to create custom scripts and manual processes.
  • helps create backup policies called backup plans that help define the backup requirements like frequency, window, retention period, etc.
  • automatically backs up the AWS resources according to the defined backup plan.
  • can apply backup plans to the AWS resources by simply tagging them.
  • stores the periodic backups incrementally, which provides the data protection of frequent backups while minimizing storage costs (a plan-creation sketch follows this list).
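
A sketch of a tag-driven daily plan with 30-day retention using boto3; the vault name, IAM role ARN, and tag key/value are placeholders. Because resources are assigned by tag, newly created resources carrying the tag are picked up automatically.

```python
import boto3

backup = boto3.client("backup")

# Daily backup rule with a 30-day retention lifecycle.
plan = backup.create_backup_plan(BackupPlan={
    "BackupPlanName": "daily-30d",
    "Rules": [{
        "RuleName": "daily",
        "TargetBackupVaultName": "Default",
        "ScheduleExpression": "cron(0 5 ? * * *)",  # every day at 05:00 UTC
        "Lifecycle": {"DeleteAfterDays": 30},
    }],
})

# Assign resources by tag; anything tagged backup=daily (now or in the
# future) is backed up by this plan.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "tagged-resources",
        "IamRoleArn": "arn:aws:iam::123456789012:role/BackupServiceRole",
        "ListOfTags": [{
            "ConditionType": "STRINGEQUALS",
            "ConditionKey": "backup",
            "ConditionValue": "daily",
        }],
    },
)
```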

AWS Backup Supported Services

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which may differ from yours).
  • AWS services are updated every day, and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion, and correction.
  1. For the production account, a SysOps administrator must ensure that all data is backed up daily for all current and future Amazon EC2 instances and Amazon Elastic File System (Amazon EFS) file systems. Backups must be retained for 30 days. Which solution will meet these requirements with the LEAST amount of effort?
    1. Create a backup plan in AWS Backup. Assign resources by resource ID, selecting all existing EC2 and EFS resources that are running in the account. Edit the backup plan daily to include any new resources. Schedule the backup plan to run every day with a lifecycle policy to expire backups after 30 days.
    2. Create a backup plan in AWS Backup. Assign resources by tags. Ensure that all existing EC2 and EFS resources are tagged correctly. Schedule the backup plan to run every day with a lifecycle policy to expire backups after 30 days.
    3. Create a lifecycle policy in Amazon Data Lifecycle Manager (Amazon DLM). Assign all resources by resource ID, selecting all existing EC2 and EFS resources that are running in the account. Edit the lifecycle policy daily to include any new resources. Schedule the lifecycle policy to create snapshots every day with a retention period of 30 days.
    4. Create a lifecycle policy in Amazon Data Lifecycle Manager (Amazon DLM). Assign all resources by tags. Ensure that all existing EC2 and EFS resources are tagged correctly. Schedule the lifecycle policy to create snapshots every day with a retention period of 30 days.

References

AWS_Backup

AWS S3 Object Lock

AWS S3 Object Lock

  • S3 Object Lock helps to store objects using a write-once-read-many (WORM) model.
  • can help prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely.
  • can help meet regulatory requirements that require WORM storage or add an extra layer of protection against object changes and deletion.
  • can be enabled only for new buckets. For an existing bucket, you need to contact AWS Support.
  • works only in versioned buckets.
  • Once Object Lock is enabled
    • Object Lock can’t be disabled
    • automatically enables versioning for the bucket
    • versioning can’t be suspended for the bucket.
  • provides two ways to manage object retention.
    • Retention period
      • protects an object version for a fixed amount of time, during which an object remains locked.
      • During this period, the object is WORM-protected and can’t be overwritten or deleted.
      • can be applied on an object version either explicitly or through a bucket default setting.
      • S3 stores a timestamp in the object version’s metadata to indicate when the retention period expires. After the retention period expires, the object version can be overwritten or deleted unless you also placed a legal hold on the object version.
    • Legal hold
      • protects an object version, like a retention period does, but with no expiration date.
      • remains in place until you explicitly remove it.
      • can be freely placed and removed by any user who has the s3:PutObjectLegalHold permission.
      • are independent of retention periods.
    • Retention periods and legal holds apply to individual object versions.
    • Placing a retention period or legal hold on an object protects only the version specified in the request. It doesn’t prevent new versions of the object from being created.
    • An object version can have both a retention period and a legal hold, one but not the other, or neither.
  • provides two retention modes that apply different levels of protection to the objects
    • Governance mode
    • Compliance mode
  • S3 buckets with S3 Object Lock can’t be used as destination buckets for server access logs.
  • has been assessed by Cohasset Associates for use in environments that are subject to SEC 17a-4, CFTC, and FINRA regulations.

S3 Object Lock – Retention Modes

Governance mode

  • Users can’t overwrite or delete an object version or alter its lock settings unless they have special permissions.
  • Objects can be protected from being deleted by most users, but some users can be granted permission to alter the retention settings or delete the object if necessary.
  • Can be used to test retention-period settings before creating a compliance-mode retention period.
  • To override or remove governance-mode retention settings, a user must have the s3:BypassGovernanceRetention permission and must explicitly include x-amz-bypass-governance-retention:true as a request header (see the sketch after these sections).

Compliance mode

  • A protected object version can’t be overwritten or deleted by any user, including the root user in the AWS account.
  • Object retention mode can’t be changed, and its retention period can’t be shortened.
  • Object versions can’t be overwritten or deleted for the duration of the retention period.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which may differ from yours).
  • AWS services are updated every day, and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion, and correction.
  1. A company needs to store its accounting records in Amazon S3. No one at the company, including administrative users and root users, should be able to delete the records for an entire 10-year period. The records must be stored with maximum resiliency. Which solution will meet these requirements?
    1. Use an access control policy to deny deletion of the records for a period of 10 years.
    2. Use an IAM policy to deny deletion of the records. After 10 years, change the IAM policy to allow deletion.
    3. Use S3 Object Lock in compliance mode for a period of 10 years.
    4. Use S3 Object Lock in governance mode for a period of 10 years.

References

Amazon_S3_Object_Lock