AWS Developer Tools

AWS DevOps Tools

AWS Developer Tools

  • AWS Developer Tools provide a set of services designed to enable developers and IT operations professionals practicing DevOps to rapidly and safely deliver software.
  • AWS Developer Tools help securely store and version control the application’s source code and automatically build, test, and deploy the application to AWS or the on-premises environment.

AWS DevOps Tools

AWS CodeCommit

  • CodeCommit is a secure, scalable, fully-managed source control service that helps to host secure and highly scalable private Git repositories.
  • eliminates the need to operate your own source control system or worry about scaling its infrastructure.
  • can be used to securely store anything from source code to binaries, and it works seamlessly with your existing Git tools.
  • provide high availability as it is built on highly scalable, redundant, and durable AWS services such as S3 and DynamoDB.
  • is designed for collaborative software development and it manages batches of changes across multiple files, offers parallel branching, and includes version differencing.
  • automatically encrypts the files in transit and at rest.
  • is integrated with AWS Identity and Access Management (IAM), allowing you to assign user-specific permissions to your repositories.
  • supports resource-level permissions at the repository level. Permissions can specify which users can perform which actions, and can require MFA for specific actions.
  • supports HTTPS or SSH or both communication protocols.
  • supports repository triggers, to send notifications and create HTTP webhooks with SNS or invoke Lambda functions.
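As a quick illustration of the points above, the following is a minimal boto3 sketch that creates a repository and attaches an SNS notification trigger; the repository name and topic ARN are hypothetical placeholders, not values from the text.

```python
import boto3

codecommit = boto3.client('codecommit')

# Create a private Git repository (name is a hypothetical example).
repo = codecommit.create_repository(
    repositoryName='demo-app',
    repositoryDescription='Sample repository for the demo application',
)['repositoryMetadata']
print(repo['cloneUrlHttp'], repo['cloneUrlSsh'])  # HTTPS and SSH clone URLs

# Attach a repository trigger that notifies an SNS topic on any repository event.
codecommit.put_repository_triggers(
    repositoryName='demo-app',
    triggers=[{
        'name': 'notify-on-push',
        'destinationArn': 'arn:aws:sns:us-east-1:123456789012:codecommit-events',  # hypothetical topic
        'branches': [],        # an empty list means all branches
        'events': ['all'],
    }],
)
```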

AWS CodeBuild

  • AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy.
  • eliminates the need to provision, manage, and scale your own build servers.
  • scales continuously and processes multiple builds concurrently, so the builds are not left waiting in a queue.
  • provides prepackaged build environments or the creation of custom build environments that use your own build tools.
  • supports AWS CodeCommit, S3, GitHub, GitHub Enterprise, and Bitbucket as source providers to pull source code for builds.
  • provides security and separation at the infrastructure and execution levels.
  • runs the build in fresh environments isolated from other users and discards each build environment upon completion.
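A build is usually described by a buildspec file at the root of the source. Below is a minimal, hypothetical buildspec.yml sketch showing the compile/test/package flow described above; the runtime version, commands, and artifact name are illustrative assumptions.

```yaml
version: 0.2

phases:
  install:
    runtime-versions:
      python: 3.11              # assumed runtime; pick whatever the project needs
  pre_build:
    commands:
      - pip install -r requirements.txt
  build:
    commands:
      - pytest tests/           # run the test suite
      - zip -r app.zip src/     # package the application
  post_build:
    commands:
      - echo "Build completed on $(date)"

artifacts:
  files:
    - app.zip                   # package CodeBuild hands to the next stage
```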

AWS CodeDeploy

  • AWS CodeDeploy helps automate code deployments to any instance, including EC2 instances and instances running on-premises.
  • helps to rapidly release new features, avoid downtime during application deployment, and handles the complexity of updating the applications.
  • helps automate software deployments, eliminating the need for error-prone manual operations.
  • scales with the infrastructure and can be used to easily deploy from one instance or thousands.
  • performs a deployment with the following parameters
    • Revision – what to deploy
    • Deployment group – where to deploy
    • Deployment configuration – how to deploy
  • A Deployment group is an entity for grouping EC2 instances or Lambda functions during a deployment; EC2 instances can be selected by specifying tags, an Auto Scaling group, or both.
  • The AppSpec file is a configuration file that provides the deployment instructions, specifying the files to be copied and the scripts to be executed at each lifecycle hook (see the sample after this list).
  • supports both in-place deployments, where rolling updates are performed, and blue/green deployments.
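A minimal, hypothetical appspec.yml for an EC2/on-premises in-place deployment might look like the sketch below; the destination path and script names are illustrative assumptions.

```yaml
version: 0.0
os: linux

files:
  - source: /                        # everything in the revision...
    destination: /var/www/html       # ...is copied to this path on the instance

hooks:
  BeforeInstall:
    - location: scripts/install_dependencies.sh   # hypothetical script bundled in the revision
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 300
      runas: root
  ValidateService:
    - location: scripts/health_check.sh
      timeout: 120
```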

AWS CodePipeline

  • AWS CodePipeline is a fully managed continuous delivery service that helps automate the release pipelines for fast and reliable application and infrastructure updates.
  • automatically builds, tests, and deploys the code every time there is a code change, based on the defined release process model.
  • enables rapid and reliable delivery of features and updates.
  • can be integrated with third-party services such as GitHub or with your own custom plugin.
  • pay per use with no upfront fees or long-term commitments.
  • supports resource-level permissions. Permissions can specify which users can perform what action on a pipeline.

CodePipeline Concepts

  • A Pipeline describes how software changes go through a release process
  • A revision is a change made to the source location defined for the pipeline.
  • Pipeline is a sequence of stages and actions.
  • A stage is a group of one or more actions. A pipeline can have two or more stages.
  • An action is a task performed on a revision.
  • Pipeline actions occur in a specified order, in serial or in parallel, as determined in the stage configuration.
  • Stages are connected by transitions
  • Transitions can be disabled or enabled between stages.


  • A pipeline can have multiple revisions flowing through it at the same time.
  • The files that an action acts upon are called artifacts. These artifacts can be worked upon by later actions in the pipeline.
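To make the stage/action/artifact terminology above concrete, here is a hedged boto3 sketch of a two-stage pipeline definition; the pipeline name, service role ARN, artifact bucket, and project names are hypothetical.

```python
import boto3

codepipeline = boto3.client('codepipeline')

codepipeline.create_pipeline(pipeline={
    'name': 'demo-pipeline',
    'roleArn': 'arn:aws:iam::123456789012:role/CodePipelineServiceRole',   # hypothetical role
    'artifactStore': {'type': 'S3', 'location': 'demo-pipeline-artifacts'},  # hypothetical bucket
    'stages': [
        {   # Source stage: one action that emits an output artifact
            'name': 'Source',
            'actions': [{
                'name': 'FetchSource',
                'actionTypeId': {'category': 'Source', 'owner': 'AWS',
                                 'provider': 'CodeCommit', 'version': '1'},
                'configuration': {'RepositoryName': 'demo-app', 'BranchName': 'main'},
                'outputArtifacts': [{'name': 'SourceOutput'}],
            }],
        },
        {   # Build stage: consumes the artifact produced by the Source action
            'name': 'Build',
            'actions': [{
                'name': 'BuildAndTest',
                'actionTypeId': {'category': 'Build', 'owner': 'AWS',
                                 'provider': 'CodeBuild', 'version': '1'},
                'configuration': {'ProjectName': 'demo-build'},
                'inputArtifacts': [{'name': 'SourceOutput'}],
                'outputArtifacts': [{'name': 'BuildOutput'}],
            }],
        },
    ],
})
```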

AWS CodeArtifact

  • AWS CodeArtifact is a fully managed artifact repository service that makes it easy for organizations of any size to securely store, publish, and share software packages used in their software development process.
  • CodeArtifact can be configured to automatically fetch software packages and dependencies from public artifact repositories so developers have access to the latest versions.
  • CodeArtifact works with commonly used package managers and build tools like Maven, Gradle, npm, yarn, twine, pip, and NuGet making it easy to integrate into existing development workflows.
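Package managers authenticate to CodeArtifact with a short-lived token and a repository endpoint; the following is a hedged boto3 sketch where the domain, account ID, and repository name are hypothetical.

```python
import boto3

codeartifact = boto3.client('codeartifact')

# Short-lived token (valid up to 12 hours) used by pip/npm/etc. to authenticate.
token = codeartifact.get_authorization_token(
    domain='my-domain', domainOwner='123456789012',
)['authorizationToken']

# Repository endpoint for a given package format (here: PyPI-style, for pip/twine).
endpoint = codeartifact.get_repository_endpoint(
    domain='my-domain', domainOwner='123456789012',
    repository='my-repo', format='pypi',
)['repositoryEndpoint']

# pip could then be pointed at f"{endpoint}simple/" using the token as the password.
print(endpoint)
```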

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. Which AWS service’s PRIMARY purpose is to provide a fully managed continuous delivery service?
    1. Amazon CodeStar
    2. Amazon CodePipeline
    3. Amazon Cognito
    4. AWS CodeCommit
  2. Which AWS service’s PRIMARY purpose is quickly develop, build, and deploy applications on AWS?
    1. Amazon CodeStar
    2. AWS Command Line Interface (AWS CLI)
    3. Amazon Cognito
    4. AWS CodeCommit
  3. Which AWS service’s PRIMARY purpose is software version control?
    1. Amazon CodeStar
    2. AWS Command Line Interface (AWS CLI)
    3. Amazon Cognito
    4. AWS CodeCommit
  4. Which of the following services could be used to deploy an application to servers running on-premises?
    1. AWS Elastic Beanstalk
    2. AWS CodeDeploy
    3. AWS Batch
    4. AWS X-Ray

References

AWS Simple Queue Service – SQS

AWS Simple Queue Service – SQS

  • Simple Queue Service – SQS is a highly available distributed queue system
  • A queue is a temporary repository for messages awaiting processing and acts as a buffer between the producer and consumer components
  • is a message queue service used by distributed applications to exchange messages through a polling model, and can be used to decouple sending and receiving components.
  • is fully managed and requires no administrative overhead and little configuration
  • offers a reliable, highly-scalable, hosted queue for storing messages in transit between applications.
  • provides fault tolerance and loose coupling, giving distributed application components the flexibility to send and receive messages without requiring each component to be concurrently available
  • helps build distributed applications with decoupled components
  • supports encryption at rest and encryption in transit using the HTTP over SSL (HTTPS) and Transport Layer Security (TLS) protocols for security.
  • provides two types of Queues

SQS Standard Queue

  • Standard queues are the default queue type.
  • Standard queues support at-least-once message delivery. However, occasionally (because of the highly distributed architecture that allows nearly unlimited throughput), more than one copy of a message might be delivered out of order.
  • Standard queues support a nearly unlimited number of API calls per second, per API action (SendMessage, ReceiveMessage, or DeleteMessage).
  • Standard queues provide best-effort ordering which ensures that messages are generally delivered in the same order as they’re sent.

Refer SQS Standard Queue for detailed information

SQS FIFO Queue

  • FIFO (First-In-First-Out) queues provide messages in order and exactly once delivery.
  • FIFO queues have all the capabilities of the standard queues but are designed to enhance messaging between applications when the order of operations and events is critical, or where duplicates can’t be tolerated.

Refer SQS FIFO Queue for detailed information

SQS Standard Queues vs SQS FIFO Queues

SQS Standard vs FIFO Queues

SQS Use Cases

  • Work Queues
    • Decouple components of a distributed application that may not all process the same amount of work simultaneously.
  • Buffer and Batch Operations
    • Add scalability and reliability to the architecture and smooth out temporary volume spikes without losing messages or increasing latency
  • Request Offloading
    • Move slow operations off of interactive request paths by enqueueing the request.
  • Fan-out
    • Combine SQS with SNS to send identical copies of a message to multiple queues in parallel for simultaneous processing.
  • Auto Scaling
    • SQS queues can be used to determine the load on an application, and combined with Auto Scaling, the EC2 instances can be scaled in or out, depending on the volume of traffic

How SQS Queues Works

  • SQS allows queues to be created, deleted and messages can be sent and received from it
  • SQS queue retains messages for four days, by default.
  • Queues can be configured to retain messages for 1 minute to 14 days after the message has been sent.
  • SQS can delete a queue without notification if any action hasn’t been performed on it for 30 consecutive days.
  • SQS allows the deletion of the queue with messages in it
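A minimal boto3 sketch of the queue lifecycle described above; the queue name and message body are hypothetical, and 1209600 seconds is the 14-day retention maximum.

```python
import boto3

sqs = boto3.client('sqs')

# Create a standard queue that retains messages for the 14-day maximum.
queue_url = sqs.create_queue(
    QueueName='orders',
    Attributes={'MessageRetentionPeriod': '1209600'},  # seconds (default is 4 days = 345600)
)['QueueUrl']

# Producer side: enqueue a message.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 42}')

# Consumer side: receive, process, then delete the message.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for msg in resp.get('Messages', []):
    print('processing', msg['Body'])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg['ReceiptHandle'])
```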

SQS Features & Capabilities

  • Visibility timeout defines the period during which SQS blocks the visibility of a received message and prevents other consuming components from receiving and processing it.
  • SQS Dead-letter queues – DLQ helps source queues (Standard and FIFO) target messages that can’t be processed (consumed) successfully.
  • DLQ Redrive policy specifies the source queue, the dead-letter queue, and the conditions under which SQS moves messages from the former to the latter if the consumer of the source queue fails to process a message a specified number of times.
  • SQS Short and Long polling control how the queues are polled, and Long polling helps reduce empty responses.
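The visibility timeout, DLQ redrive policy, and long polling from the list above are all plain queue attributes; a hedged boto3 sketch follows, with the queue URL, DLQ ARN, and values chosen purely for illustration.

```python
import json
import boto3

sqs = boto3.client('sqs')

source_queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/orders'   # hypothetical
dlq_arn = 'arn:aws:sqs:us-east-1:123456789012:orders-dlq'                      # hypothetical

sqs.set_queue_attributes(
    QueueUrl=source_queue_url,
    Attributes={
        'VisibilityTimeout': '60',              # hide received messages for 60s while processing
        'ReceiveMessageWaitTimeSeconds': '20',  # enable long polling to reduce empty responses
        'RedrivePolicy': json.dumps({           # move a message to the DLQ after 5 failed receives
            'deadLetterTargetArn': dlq_arn,
            'maxReceiveCount': '5',
        }),
    },
)
```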

SQS Buffered Asynchronous Client

  • Amazon SQS Buffered Async Client for Java provides an implementation of the AmazonSQSAsyncClient interface and adds several important features:
    • Automatic batching of multiple SendMessage, DeleteMessage, or ChangeMessageVisibility requests without any required changes to the application
    • Prefetching of messages into a local buffer that allows the application to immediately process messages from SQS without waiting for the messages to be retrieved
  • Working together, automatic batching and prefetching increase the throughput and reduce the latency of the application while reducing the costs by making fewer SQS requests.

SQS Security and reliability

  • SQS stores all message queues and messages within a single, highly-available AWS region with multiple redundant Availability Zones (AZs)
  • SQS supports HTTP over SSL (HTTPS) and Transport Layer Security (TLS) protocols.
  • SQS supports Encryption at Rest. SSE encrypts messages as soon as SQS receives them and decrypts messages only when they are sent to an authorized consumer.
  • SQS also supports resource-based permissions 

SQS Design Patterns

Priority Queue Pattern

SQS Priority Queue Pattern

  1. Use SQS to prepare multiple queues for the individual priority levels.
  2. Place those processes to be executed immediately (job requests) in the high priority queue.
  3. Prepare numbers of batch servers, for processing the job requests of the queues, depending on the priority levels.
  4. Queues have a message “Delayed Send” function, which can be used to delay the time for starting a process.
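A hedged polling sketch of this pattern: the consumer always drains the high-priority queue before falling back to the default queue. The queue URLs are hypothetical placeholders.

```python
import boto3

sqs = boto3.client('sqs')
HIGH_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/jobs-high'        # hypothetical
DEFAULT_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/jobs-default'  # hypothetical

def fetch_next_job():
    """Return (queue_url, message) from the highest-priority non-empty queue, or (None, None)."""
    for url in (HIGH_URL, DEFAULT_URL):
        resp = sqs.receive_message(QueueUrl=url, MaxNumberOfMessages=1, WaitTimeSeconds=1)
        messages = resp.get('Messages', [])
        if messages:
            return url, messages[0]
    return None, None

url, job = fetch_next_job()
if job:
    # ...process the job, then remove it from its queue...
    sqs.delete_message(QueueUrl=url, ReceiptHandle=job['ReceiptHandle'])
```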

SQS Job Observer Pattern

Job Observer Pattern - SQS + CloudWatch + Auto Scaling

  1. Enqueue job requests as SQS messages.
  2. Have the batch server dequeue and process messages from SQS.
  3. Set up Auto Scaling to automatically increase or decrease the number of batch servers, using the number of SQS messages, with CloudWatch, as the trigger to do so.

SQS vs Kinesis Data Streams

Kinesis Data Streams vs SQS

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. Which AWS service can help design architecture to persist in-flight transactions?
    1. Elastic IP Address
    2. SQS
    3. Amazon CloudWatch
    4. Amazon ElastiCache
  2. A company has a workflow that sends video files from their on-premise system to AWS for transcoding. They use EC2 worker instances that pull transcoding jobs from SQS. Why is SQS an appropriate service for this scenario?
    1. SQS guarantees the order of the messages.
    2. SQS synchronously provides transcoding output.
    3. SQS checks the health of the worker instances.
    4. SQS helps to facilitate horizontal scaling of encoding tasks
  3. Which statement best describes an Amazon SQS use case?
    1. Automate the process of sending an email notification to administrators when the CPU utilization reaches 70% on production servers (Amazon EC2 instances) (CloudWatch + SNS + SES)
    2. Create a video transcoding website where multiple components need to communicate with each other, but can’t all process the same amount of work simultaneously (SQS provides loose coupling)
    3. Coordinate work across distributed web services to process employee’s expense reports (SWF – Steps in order and might need manual steps)
    4. Distribute static web content to end users with low latency across multiple countries (CloudFront + S3)
  4. Your application provides data transformation services. Files containing data to be transformed are first uploaded to Amazon S3 and then transformed by a fleet of spot EC2 instances. Files submitted by your premium customers must be transformed with the highest priority. How should you implement such a system?
    1. Use a DynamoDB table with an attribute defining the priority level. Transformation instances will scan the table for tasks, sorting the results by priority level.
    2. Use Route 53 latency based-routing to send high priority tasks to the closest transformation instances.
    3. Use two SQS queues, one for high priority messages, and the other for default priority. Transformation instances first poll the high priority queue; if there is no message, they poll the default priority queue
    4. Use a single SQS queue. Each message contains the priority level. Transformation instances poll high-priority messages first.
  5. Your company plans to host a large donation website on Amazon Web Services (AWS). You anticipate a large and undetermined amount of traffic that will create many database writes. To be certain that you do not drop any writes to a database hosted on AWS. Which service should you use?
    1. Amazon RDS with provisioned IOPS up to the anticipated peak write throughput.
    2. Amazon Simple Queue Service (SQS) for capturing the writes and draining the queue to write to the database
    3. Amazon ElastiCache to store the writes until the writes are committed to the database.
    4. Amazon DynamoDB with provisioned write throughput up to the anticipated peak write throughput.
  6. A customer has a 10 GB AWS Direct Connect connection to an AWS region where they have a web application hosted on Amazon Elastic Computer Cloud (EC2). The application has dependencies on an on-premises mainframe database that uses a BASE (Basic Available, Soft state, Eventual consistency) rather than an ACID (Atomicity, Consistency, Isolation, Durability) consistency model. The application is exhibiting undesirable behavior because the database is not able to handle the volume of writes. How can you reduce the load on your on-premises database resources in the most cost-effective way?
    1. Use an Amazon Elastic Map Reduce (EMR) S3DistCp as a synchronization mechanism between the onpremises database and a Hadoop cluster on AWS.
    2. Modify the application to write to an Amazon SQS queue and develop a worker process to flush the queue to the on-premises database
    3. Modify the application to use DynamoDB to feed an EMR cluster which uses a map function to write to the on-premises database.
    4. Provision an RDS read-replica database on AWS to handle the writes and synchronize the two databases using Data Pipeline.
  7. An organization has created a Queue named “modularqueue” with SQS. The organization is not performing any operations such as SendMessage, ReceiveMessage, DeleteMessage, GetQueueAttributes, SetQueueAttributes, AddPermission, and RemovePermission on the queue. What can happen in this scenario?
    1. AWS SQS sends notification after 15 days for inactivity on queue
    2. AWS SQS can delete queue after 30 days without notification
    3. AWS SQS marks queue inactive after 30 days
    4. AWS SQS notifies the user after 2 weeks and deletes the queue after 3 weeks.
  8. A user is using the AWS SQS to decouple the services. Which of the below mentioned operations is not supported by SQS?
    1. SendMessageBatch
    2. DeleteMessageBatch
    3. CreateQueue
    4. DeleteMessageQueue
  9. A user has created a queue named “awsmodule” with SQS. One of the consumers of queue is down for 3 days and then becomes available. Will that component receive message from queue?
    1. Yes, since SQS by default stores message for 4 days
    2. No, since SQS by default stores message for 1 day only
    3. No, since SQS sends message to consumers who are available that time
    4. Yes, since SQS will not delete message until it is delivered to all consumers
  10. A user has created a queue named “queue2” in US-East region with AWS SQS. The user’s AWS account ID is 123456789012. If the user wants to perform some action on this queue, which of the below Queue URL should he use?
    1. http://sqs.us-east-1.amazonaws.com/123456789012/queue2
    2. http://sqs.amazonaws.com/123456789012/queue2
    3. http://sqs.123456789012.us-east-1.amazonaws.com/queue2
    4. http://123456789012.sqs.us-east-1.amazonaws.com/queue2
  11. A user has created a queue named “myqueue” with SQS. There are four messages published to queue, which are not received by the consumer yet. If the user tries to delete the queue, what will happen?
    1. A user can never delete a queue manually. AWS deletes it after 30 days of inactivity on queue
    2. It will delete the queue
    3. It will initiate the delete but wait for four days before deleting until all messages are deleted automatically.
    4. It will ask the user to delete the messages first
  12. A user has developed an application, which is required to send the data to a NoSQL database. The user wants to decouple the data sending such that the application keeps processing and sending data but does not wait for an acknowledgement of DB. Which of the below mentioned applications helps in this scenario?
    1. AWS Simple Notification Service
    2. AWS Simple Workflow
    3. AWS Simple Queue Service
    4. AWS Simple Query Service
  13. You are building an online store on AWS that uses SQS to process your customer orders. Your backend system needs those messages in the same sequence the customer orders have been put in. How can you achieve that?
    1. It is not possible to do this with SQS
    2. You can use sequencing information on each message
    3. You can do this with SQS but you also need to use SWF
    4. Messages will arrive in the same order by default
  14. A user has created a photo editing software and hosted it on EC2. The software accepts requests from the user about the photo format and resolution and sends a message to S3 to enhance the picture accordingly. Which of the below mentioned AWS services will help make a scalable software with the AWS infrastructure in this scenario?
    1. AWS Glacier
    2. AWS Elastic Transcoder
    3. AWS Simple Notification Service
    4. AWS Simple Queue Service
  15. Refer to the architecture diagram of a batch processing solution using Simple Queue Service (SQS) to set up a message queue between EC2 instances, which are used as batch processors. Cloud Watch monitors the number of Job requests (queued messages) and an Auto Scaling group adds or deletes batch servers automatically based on parameters set in Cloud Watch alarms. You can use this architecture to implement which of the following features in a cost effective and efficient manner? 
    1. Reduce the overall time for executing jobs through parallel processing by allowing a busy EC2 instance that receives a message to pass it to the next instance in a daisy-chain setup.
    2. Implement fault tolerance against EC2 instance failure since messages would remain in SQS and work can continue with recovery of EC2 instances; implement fault tolerance against SQS failure by backing up messages to S3.
    3. Implement message passing between EC2 instances within a batch by exchanging messages through SQS.
    4. Coordinate number of EC2 instances with number of job requests automatically thus Improving cost effectiveness
    5. Handle high priority jobs before lower priority jobs by assigning a priority metadata field to SQS messages.
  16. How does Amazon SQS allow multiple readers to access the same message queue without losing messages or processing them many times?
    1. By identifying a user by his unique id
    2. By using unique cryptography
    3. Amazon SQS queue has a configurable visibility timeout
    4. Multiple readers can’t access the same message queue
  17. A user has created photo editing software and hosted it on EC2. The software accepts requests from the user about the photo format and resolution and sends a message to S3 to enhance the picture accordingly. Which of the below mentioned AWS services will help make a scalable software with the AWS infrastructure in this scenario?
    1. AWS Elastic Transcoder
    2. AWS Simple Notification Service
    3. AWS Simple Queue Service
    4. AWS Glacier
  18. How do you configure SQS to support longer message retention?
    1. Set the MessageRetentionPeriod attribute using the SetQueueAttributes method
    2. Using a Lambda function
    3. You can’t. It is set to 14 days and cannot be changed
    4. You need to request it from AWS
  19. A user has developed an application, which is required to send the data to a NoSQL database. The user wants to decouple the data sending such that the application keeps processing and sending data but does not wait for an acknowledgement of DB. Which of the below mentioned applications helps in this scenario?
    1. AWS Simple Notification Service
    2. AWS Simple Workflow
    3. AWS Simple Query Service
    4. AWS Simple Queue Service
  20. If a message is retrieved from a queue in Amazon SQS, how long is the message inaccessible to other users by default?
    1. 0 seconds
    2. 1 hour
    3. 1 day
    4. forever
    5. 30 seconds
  21. Which of the following statements about SQS is true?
    1. Messages will be delivered exactly once and messages will be delivered in First in, First out order
    2. Messages will be delivered exactly once and message delivery order is indeterminate
    3. Messages will be delivered one or more times and messages will be delivered in First in, First out order
    4. Messages will be delivered one or more times and message delivery order is indeterminate (Before the introduction of FIFO queues)
  22. How long can you keep your Amazon SQS messages in Amazon SQS queues?
    1. From 120 secs up to 4 weeks
    2. From 10 secs up to 7 days
    3. From 60 secs up to 2 weeks
    4. From 30 secs up to 1 week
  23. When a Simple Queue Service message triggers a task that takes 5 minutes to complete, which process below will result in successful processing of the message and remove it from the queue while minimizing the chances of duplicate processing?
    1. Retrieve the message with an increased visibility timeout, process the message, delete the message from the queue
    2. Retrieve the message with an increased visibility timeout, delete the message from the queue, process the message
    3. Retrieve the message with increased DelaySeconds, process the message, delete the message from the queue
    4. Retrieve the message with increased DelaySeconds, delete the message from the queue, process the message
  24. You need to process long-running jobs once and only once. How might you do this?
    1. Use an SNS queue and set the visibility timeout to long enough for jobs to process.
    2. Use an SQS queue and set the reprocessing timeout to long enough for jobs to process.
    3. Use an SQS queue and set the visibility timeout to long enough for jobs to process.
    4. Use an SNS queue and set the reprocessing timeout to long enough for jobs to process.
  25. You are getting a lot of empty receive requests when using Amazon SQS. This is making a lot of unnecessary network load on your instances. What can you do to reduce this load?
    1. Subscribe your queue to an SNS topic instead.
    2. Use as long of a poll as possible, instead of short polls. (Refer link)
    3. Alter your visibility timeout to be shorter.
    4. Use sqsd on your EC2 instances.
  26. You have an asynchronous processing application using an Auto Scaling Group and an SQS Queue. The Auto Scaling Group scales according to the depth of the job queue. The completion velocity of the jobs has gone down, the Auto Scaling Group size has maxed out, but the inbound job velocity did not increase. What is a possible issue?
    1. Some of the new jobs coming in are malformed and unprocessable. (As other options would cause the job to stop processing completely, the only reasonable option seems that some of the recent messages must be malformed and unprocessable)
    2. The routing tables changed and none of the workers can process events anymore. (If changed, none of the jobs would be processed)
    3. Someone changed the IAM Role Policy on the instances in the worker group and broke permissions to access the queue. (If IAM role changed no jobs would be processed)
    4. The scaling metric is not functioning correctly. (scaling metric did work fine as the autoscaling caused the instances to increase)
  27. Company B provides an online image recognition service and utilizes SQS to decouple system components for scalability. The SQS consumers poll the imaging queue as often as possible to keep end-to-end throughput as high as possible. However, Company B is realizing that polling in tight loops is burning CPU cycles and increasing costs with empty responses. How can Company B reduce the number of empty responses?
    1. Set the imaging queue visibility Timeout attribute to 20 seconds
    2. Set the Imaging queue ReceiveMessageWaitTimeSeconds attribute to 20 seconds (Long polling. Refer link)
    3. Set the imaging queue MessageRetentionPeriod attribute to 20 seconds
    4. Set the DelaySeconds parameter of a message to 20 seconds

References

AWS Lambda Functions

AWS Lambda Functions

  • Each function has associated configuration information, such as its name, description, runtime, entry point, and resource requirements
  • Lambda functions should be designed as stateless
    • to allow launching of as many copies of the function as needed as per the demand.
    • Local file system access, child processes, and similar artifacts may not extend beyond the lifetime of the request
    • The state can be maintained externally in DynamoDB or S3
  • Lambda Execution role can be assigned to the function to grant permission to access other resources.
  • Functions have the following restrictions
    • Inbound network connections are blocked
    • For outbound connections, only TCP/IP sockets are supported
    • ptrace (debugging) system calls are blocked
    • TCP port 25 traffic is also blocked as an anti-spam measure.
  • Lambda may choose to retain an instance of the function and reuse it to serve a subsequent request, rather than creating a new copy.
  • Lambda Layers provide a convenient way to package libraries and other dependencies that you can use with your Lambda functions.
  • Function versions can be used to manage the deployment of the functions.
  • Function Alias supports creating aliases, which are mutable, for each function version.
  • Functions have the following limits
    • RAM – 128 MB to 10,240 MB (10 GB)
    • CPU is linked to RAM and cannot be set manually.
      • 1 vCPU ≈ 1,769 MB RAM
      • 6 vCPUs = 10240 MB RAM
    • Timeout – 900 Secs or 15 mins
    • /tmp storage between 512 MB and 10,240 MB
    • Deployment Package – 50 MB (zipped), 250 MB (unzipped) including layers
    • Concurrent Executions – 1000 (soft limit)
    • Container Image Size – 10 GB
    • Invocation Payload (request/response) – 6 MB (sync), 256 KB (async)
  • Functions are automatically monitored, and real-time metrics are reported through CloudWatch, including total requests, latency, error rates, and throttled requests.
  • Lambda automatically integrates with CloudWatch logs, creating a log group for each function and providing basic application lifecycle event log entries, including logging the resources consumed for each use of that function.
  • Functions support code written in
    • Node.js (JavaScript)
    • Python
    • Ruby
    • Java (Java 8 compatible)
    • C# (.NET Core)
    • Go
    • Custom runtime
  • Container images are also supported.
  • Failure Handling
    • For S3 bucket notifications and custom events, Lambda will attempt execution of the function three times in the event of an error condition in the code or if a service or resource limit is exceeded.
    • For ordered event sources that Lambda polls, e.g. DynamoDB Streams and Kinesis streams, it will continue attempting execution in the event of a developer code error until the data expires.
    • Kinesis and DynamoDB Streams retain data for a minimum of 24 hours
    • A Dead Letter Queue (an SNS topic or SQS queue) can be configured for events to be sent to, once the retry policy for asynchronous invocations is exceeded
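A minimal, hypothetical Python handler reflecting the guidance above: clients are created outside the handler so a reused execution environment reuses them, and durable state goes to DynamoDB rather than the local file system. The TABLE_NAME environment variable and item shape are assumptions for illustration.

```python
import os
import boto3

# Created once per execution environment and reused across warm invocations.
table = boto3.resource('dynamodb').Table(os.environ['TABLE_NAME'])  # hypothetical env var

def lambda_handler(event, context):
    # Keep the function stateless: persist anything durable externally (DynamoDB/S3),
    # since /tmp and in-memory state do not reliably survive between invocations.
    table.put_item(Item={'id': event['id'], 'payload': event.get('payload', '')})
    return {'status': 'stored', 'id': event['id']}
```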

Lambda Layers

  • Lambda Layers provide a convenient way to package libraries and other dependencies that you can use with your Lambda functions.
  • Layers help reduce the size of uploaded deployment archives and make it faster to deploy your code.
  • A layer is a .zip file archive that can contain additional code or data.
  • A layer can contain libraries, a custom runtime, data, or configuration files.
  • Layers promote reusability, code sharing, and separation of responsibilities so that you can iterate faster on writing business logic.
  • Layers can be used only with Lambda functions deployed as a .zip file archive.
  • For functions defined as a container image, the preferred runtime and all code dependencies can be packaged when the container image is created.
  • A Layer can be created by bundling the content into a .zip file archive and uploading the .zip file archive to the layer from S3 or the local machine.
  • Lambda extracts the layer contents into the /opt directory when setting up the execution environment for the function.
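A hedged boto3 sketch of publishing a layer from a .zip archive in S3 and attaching it to a function; the layer name, bucket/key, runtime, and function name are hypothetical.

```python
import boto3

aws_lambda = boto3.client('lambda')

layer = aws_lambda.publish_layer_version(
    LayerName='shared-deps',                                    # hypothetical
    Description='Common libraries shared by several functions',
    Content={'S3Bucket': 'my-layer-bucket', 'S3Key': 'layers/shared-deps.zip'},  # hypothetical
    CompatibleRuntimes=['python3.12'],
)

# Attach the layer; its contents are extracted under /opt in the execution environment.
aws_lambda.update_function_configuration(
    FunctionName='my-function',                                 # hypothetical
    Layers=[layer['LayerVersionArn']],
)
```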

Environment Variables

  • Environment variables can be used to adjust the function’s behavior without updating the code.
  • An environment variable is a pair of strings that are stored in a function’s version-specific configuration.
  • The Lambda runtime makes environment variables available to the code and sets additional environment variables that contain information about the function and invocation request.
  • Environment variables are not evaluated prior to the function invocation.
  • Lambda stores environment variables securely by encrypting them at rest.
  • AWS recommends using Secrets Manager instead of storing secrets in the environment variables.
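A small sketch of reading configuration from environment variables inside a handler; STAGE and TABLE_NAME are hypothetical variables set in the function's version-specific configuration.

```python
import os

STAGE = os.environ.get('STAGE', 'dev')   # hypothetical variable with a default
TABLE_NAME = os.environ['TABLE_NAME']    # hypothetical required variable

def lambda_handler(event, context):
    # Behaviour can be adjusted per environment (dev/test/prod) without changing code.
    return {'stage': STAGE, 'table': TABLE_NAME}
```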

Lambda Function Limits

  • RAM – 128 MB to 10,240 MB (10 GB)
  • CPU is linked to RAM and cannot be set manually.
    • 1 vCPU ≈ 1,769 MB RAM
    • 6 vCPUs = 10240 MB RAM
  • Timeout – 900 Secs or 15 mins
  • /tmp storage between 512 MB and 10,240 MB
  • Deployment Package – 50 MB (zipped), 250 MB (unzipped) including layers
  • Concurrent Executions – 1000 (soft limit)
  • Container Image Size – 10 GB
  • Invocation Payload (request/response) – 6 MB (sync), 256 KB (async)

Lambda Functions Versioning

  • Function versions can be used to manage the deployment of the functions.
  • Each function has a single, current version of the code.
  • Lambda creates a new version of the function each time it’s published.
  • A function version includes the following information:
    • The function code and all associated dependencies.
    • The Lambda runtime that invokes the function.
    • All the function settings, including the environment variables.
    • A unique Amazon Resource Name (ARN) to identify the specific version of the function.
  • Function versions are immutable, however, support Aliases which are mutable.

Lambda Functions Alias

  • Lambda supports creating aliases, which are mutable, for each function version.
  • Alias is a pointer to a specific function version, with a unique ARN.
  • Each alias maintains an ARN for a function version to which it points.
  • An alias can only point to a function version, not to another alias
  • Alias helps in rolling out new changes or rolling back to old versions
  • Alias supports routing configuration to point to a maximum of two Lambda function versions. It can be used for canary testing to send a portion of traffic to a second function version.
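A hedged boto3 sketch of an alias with a weighted routing configuration for canary testing; the function name, versions, and the 5% weight are hypothetical.

```python
import boto3

aws_lambda = boto3.client('lambda')

aws_lambda.create_alias(
    FunctionName='my-function',   # hypothetical
    Name='live',
    FunctionVersion='1',          # primary version receiving most traffic
    RoutingConfig={
        'AdditionalVersionWeights': {'2': 0.05},  # send ~5% of invocations to version 2
    },
)
```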

References

AWS_Lambda_Functions

Amazon EventBridge

EventBridge Components

Amazon EventBridge

  • Amazon EventBridge is a serverless event bus service that makes it easy to connect applications with data from a variety of sources.
  • enables building loosely coupled and distributed event-driven architectures.
  • provides a simple and consistent way to ingest, filter, transform, and deliver events so you can build new applications quickly.
  • delivers a stream of real-time data from applications, SaaS applications, and AWS services, and routes that data to targets such as AWS Lambda.
  • supports routing rules to determine where to send the data to build application architectures that react in real-time to all of the data sources.
  • supports event buses for many-to-many routing of events between event-driven services.
  • provides Pipes for point-to-point integrations between these sources and targets, with support for advanced transformations and enrichment.
  • provides schemas, which define the structure of events, for all events that are generated by AWS services.

EventBridge Components

EventBridge Components

  • EventBridge receives an event on an event bus and applies a rule to route the event to a target.
  • Event sources
    • An event source is used to ingest events from AWS Services, applications, or SaaS partners.
  • Events
    • An event is a real-time indicator of a change in the environment such as an AWS environment, a SaaS partner service or application, or one of your applications or services.
    • All events are associated with an event bus.
    • Events are represented as JSON objects and they all have a similar structure and the same top-level fields.
    • Contents of the detail top-level field are different depending on which service generated the event and what the event is.
    • An event pattern defines the event structure and the fields that a rule matches.
  • Event buses
    • Event bus is a pipeline that receives events.
    • Each account has a default event bus that receives events from AWS services. Custom event buses can be created to send or receive events from a different account or Region.
  • Rules
    • Rules associated with the event bus evaluate events as they arrive.
    • Rules match incoming events to targets based either on the structure of the event, called an event pattern, or on a schedule.
    • Each rule checks whether an event matches the rule’s criteria.
    • A single rule can send an event to multiple targets, which then run in parallel.
    • Rules that are based on a schedule perform an action at regular intervals.
  • Targets
    • A target is a resource or endpoint that EventBridge sends an event to when the event matches the event pattern defined for a rule.
    • The rule processes the event data and sends the relevant information to the target.
    • EventBridge needs permission to access the target resource to be able to deliver event data to the target.
    • Up to five targets can be defined for each rule.
  • EventBridge allows events to be archived and replayed later.
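A hedged boto3 sketch tying the components above together: a rule on the default event bus with an event pattern, routed to a Lambda target. The rule name, the EC2 state-change pattern, and the target ARN are illustrative assumptions.

```python
import json
import boto3

events = boto3.client('events')

# Rule: match EC2 instance state-change events where the instance has stopped.
events.put_rule(
    Name='ec2-stopped',
    EventPattern=json.dumps({
        'source': ['aws.ec2'],
        'detail-type': ['EC2 Instance State-change Notification'],
        'detail': {'state': ['stopped']},
    }),
    State='ENABLED',
)

# Target: deliver matching events to a Lambda function (ARN is hypothetical).
events.put_targets(
    Rule='ec2-stopped',
    Targets=[{'Id': 'notify-fn', 'Arn': 'arn:aws:lambda:us-east-1:123456789012:function:notify'}],
)
```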

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A company wants to be alerted through email when IAM CreateUser API calls are made within its AWS account. Which combination of actions should a SysOps administrator take to meet this requirement? (Choose two.)
    1. Create an Amazon EventBridge (Amazon CloudWatch Events) rule with AWS CloudTrail as the event source and IAM CreateUser as the specific API call for the event pattern.
    2. Create an Amazon EventBridge (Amazon CloudWatch Events) rule with Amazon CloudSearch as the event source and IAM CreateUser as the specific API call for the event pattern.
    3. Create an Amazon EventBridge (Amazon CloudWatch Events) rule with AWS IAM Access Analyzer as the event source and IAM CreateUser as the specific API call for the event pattern.
    4. Use an Amazon Simple Notification Service (Amazon SNS) topic as an event target with an email subscription.
    5. Use an Amazon Simple Email Service (Amazon SES) notification as an event target with an email subscription.

References

Amazon_EventBridge

Amazon Cognito

Amazon Cognito

Amazon Cognito

  • Amazon Cognito provides authentication, authorization, and user management for the web and mobile apps.
  • Users can sign in directly with a username and password, or through a third party such as Facebook, Amazon, Google, or Apple.
  • Cognito has two main components.
    • User pools are user directories that provide sign-up and sign-in options for the app users.
    • Identity pools enable you to grant the users access to other AWS services.
  • Cognito Sync helps synchronize data across a user’s devices so that their app experience remains consistent when they switch between devices or upgrade to a new device.

Amazon Cognito

Cognito User Pools

  • User pools are for authentication (identity verification).
  • User pools are user directories that provide sign-up and sign-in options for web and mobile app users.
  • User pool helps users sign in to the web or mobile app, or federate through a third-party identity provider (IdP).
  • All user pool members have a directory profile, whether the users sign in directly or through a third party, that can be accessed through an SDK.
  • After successfully authenticating a user, Cognito issues JSON web tokens (JWT) that can be used to secure and authorize access to your own APIs, or exchange for AWS credentials.
  • User pools provide:
    • Sign-up and sign-in services.
    • A built-in, customizable web UI to sign in users.
    • Social sign-in with Facebook, Google, Apple, or Amazon, and through SAML and OIDC identity providers from the user pool.
    • User directory management and user profiles.
    • Security features such as MFA, checks for compromised credentials, account takeover protection, and phone and email verification.
    • Customized workflows and user migration through Lambda triggers.
  • Use cases
    • Design sign-up and sign-in webpages for your app.
    • Access and manage user data.
    • Track user device, location, and IP address, and adapt to sign-in requests of different risk levels.
    • Use a custom authentication flow for your app.
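A hedged boto3 sketch of user-pool sign-up and sign-in; the app client ID, username, and password are hypothetical, and the USER_PASSWORD_AUTH flow must be enabled on the app client for this sign-in call to work.

```python
import boto3

idp = boto3.client('cognito-idp')
CLIENT_ID = '1h57kf5cpq17m0eml12EXAMPLE'   # hypothetical user pool app client ID

# Sign up a new user in the user pool.
idp.sign_up(
    ClientId=CLIENT_ID,
    Username='jane',
    Password='S3cure-Passw0rd!',
    UserAttributes=[{'Name': 'email', 'Value': 'jane@example.com'}],
)

# After the user confirms sign-up (e.g. via the emailed code), sign in;
# the response contains JWTs (IdToken, AccessToken, RefreshToken).
resp = idp.initiate_auth(
    ClientId=CLIENT_ID,
    AuthFlow='USER_PASSWORD_AUTH',
    AuthParameters={'USERNAME': 'jane', 'PASSWORD': 'S3cure-Passw0rd!'},
)
tokens = resp['AuthenticationResult']
```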

Cognito Identity Pools

  • Identity pools are for authorization (access control).
  • Identity pool helps users obtain temporary AWS credentials to access AWS services.
  • Identity pools support both authenticated and unauthenticated identities.
  • Unauthenticated identities typically belong to guest users.
  • Authenticated identities belong to users who are authenticated by any supported identity provider:
    • Cognito user pools
    • Social sign-in with Facebook, Google, Login with Amazon, and Sign in with Apple
    • OpenID Connect (OIDC) providers
    • SAML identity providers
    • Developer authenticated identities
  • Each identity type has a role with policies assigned that determines the AWS services that the role can access.
  • Identity Pools do not store any user profiles.
  • Use cases
    • Give your users access to AWS resources, such as S3 and DynamoDB.
    • Generate temporary AWS credentials for unauthenticated users.
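A hedged boto3 sketch of exchanging a user-pool token for temporary AWS credentials through an identity pool; the identity pool ID, user pool provider name, and id_token are hypothetical placeholders.

```python
import boto3

ci = boto3.client('cognito-identity')

IDENTITY_POOL_ID = 'us-east-1:11111111-2222-3333-4444-555555555555'   # hypothetical
PROVIDER = 'cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE'    # hypothetical user pool
id_token = '<IdToken obtained from the user pool sign-in>'            # placeholder

identity_id = ci.get_id(
    IdentityPoolId=IDENTITY_POOL_ID,
    Logins={PROVIDER: id_token},
)['IdentityId']

creds = ci.get_credentials_for_identity(
    IdentityId=identity_id,
    Logins={PROVIDER: id_token},
)['Credentials']   # temporary AccessKeyId / SecretKey / SessionToken scoped by the authenticated role
```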

Cognito Sync

  • Cognito Sync is an AWS service and client library that makes it possible to sync application-related user data across devices.
  • Cognito Sync can synchronize user profile data across mobile devices and the web without using your own backend.
  • The client libraries cache data locally so that the app can read and write data regardless of device connectivity status.
  • When the device is online, the data can be synchronized.
  • If you set up push sync, other devices can be notified immediately that an update is available.
  • Sync store is a key/value pair store linked to an identity.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A company is building a social media mobile and web app for consumers. They want the application to be available on all desktop and mobile platforms, while being able to maintain user preferences across platforms. How can they implement the authentication to support the requirement?
    1. Use AWS Cognito
    2. Use AWS Glue
    3. Use Web Identity Federation
    4. Use AWS IAM
  2. A Developer needs to create an application that supports Security Assertion Markup Language (SAML) and Facebook authentication. It must also allow access to AWS services, such as Amazon DynamoDB. Which AWS service or feature will meet these requirements with the LEAST amount of additional coding?
    1. AWS AppSync
    2. Amazon Cognito identity pools
    3. Amazon Cognito user pools
    4. Amazon Lambda@Edge
  3. A development team is designing a mobile app that requires multi-factor authentication. Which steps should be taken to achieve this? (Choose two.)
    1. Use Amazon Cognito to create a user pool and create users in the user pool.
    2. Send multi-factor authentication text codes to users with the Amazon SNS Publish API call in the app code.
    3. Enable multi-factor authentication for the Amazon Cognito user pool.
    4. Use AWS IAM to create IAM users.
    5. Enable multi-factor authentication for the users created in AWS IAM.
  4. A Developer is building a mobile application and needs any update to user profile data to be pushed to all devices accessing the specific identity. The Developer does not want to manage a back end to maintain the user profile data. What is the MOST efficient way for the Developer to achieve these requirements using Amazon Cognito?
    1. Use Cognito federated identities.
    2. Use a Cognito user pool.
    3. Use Cognito Sync.
    4. Use Cognito events.

References

Amazon_Cognito

Amazon Elastic Container Registry – ECR

Elastic Container Registry – ECR

  • Amazon Elastic Container Registry – ECR is a fully managed, secure, scalable, reliable container image registry service.
  • makes it easy for developers to share and deploy container images and artifacts.
  • is integrated with ECS,  EKS, Fargate, and Lambda, simplifying the development to production workflow.
  • eliminates the need to operate your own container repositories or worry about scaling the underlying infrastructure.
  • hosts the images, using S3, in a highly available and scalable architecture, allowing you to deploy containers for the applications reliably.
  • is a Regional service with the ability to push/pull images to the same AWS Region. Images can be pulled between Regions or out to the internet with additional latency and data transfer costs.
  • supports cross-region and cross-account image replication.
  • integrates with AWS IAM and supports resource-based permissions
  • supports public and private repositories.
  • automatically encrypts images at rest using S3 server-side encryption or AWS KMS encryption and transfers the container images over HTTPS.
  • supports tools and docker CLI to push, pull and manage Docker images, Open Container Initiative (OCI) images, and OCI-compatible artifacts.
  • automatically scans the container images for a broad range of operating system vulnerabilities.
  • supports ECR Lifecycle policies that help with managing the lifecycle of the images in the repositories.

Elastic Container Registry - ECR

ECR Components

  • Registry
    •  ECR private registry hosts the container images in a highly available and scalable architecture.
    • A default ECR private registry is provided to each AWS account.
    • One or more repositories can be created in the registry and images stored in them.
    • Repositories can be configured for either cross-Region or cross-account replication.
    • Private Registry is enabled for basic scanning, by default.
    • Enhanced scanning can be enabled which provides an automated, continuous scanning mode that scans for both operating system and programming language package vulnerabilities.
  • Repository
    • An ECR repository contains Docker images, Open Container Initiative (OCI) images, and OCI compatible artifacts.
    • Repositories can be controlled with both user access policies and individual repository policies.
  • Image
    • Images can be pushed and pulled to the repositories.
    • Images can be used locally on the development system, or in ECS task definitions and EKS pod specifications
  • Repository policy
    • Repository policies are resource-based policies that can help control access to the repositories and the images within them.
    • Repository policies are a subset of IAM policies that are scoped for, and specifically used for, controlling access to individual ECR repositories.
    • A user or role needs to be allowed permission for an action through either a repository policy or an IAM policy, but not both, for the action to be allowed.
    • Resource-based policies also help grant the usage permission to other accounts on a per-resource basis.
  • Authorization token
    • A client must authenticate to the registries as an AWS user before they can push and pull images.
    • An authentication token is used to access any ECR registry that the IAM principal has access to and is valid for 12 hours.
    • Authorization token’s permission scope matches that of the IAM principal used to retrieve the authentication token.
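A hedged boto3 sketch of retrieving the 12-hour authorization token and decoding it into credentials a Docker client can use.

```python
import base64
import boto3

ecr = boto3.client('ecr')

auth = ecr.get_authorization_token()['authorizationData'][0]
username, password = base64.b64decode(auth['authorizationToken']).decode().split(':', 1)
registry = auth['proxyEndpoint']   # e.g. https://<account-id>.dkr.ecr.<region>.amazonaws.com

# These values can then be passed to `docker login` (or a Docker SDK) to push/pull images.
print(username, registry)
```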

ECR with VPC Endpoints

  • ECR can be configured to use an Interface VPC endpoint, that enables you to privately access Amazon ECR APIs through private IP addresses.
  • AWS PrivateLink restricts all network traffic between the VPC and ECR to the Amazon network. You don’t need an internet gateway, a NAT device, or a virtual private gateway.
  • VPC endpoints currently don’t support cross-Region requests.
  • VPC endpoints currently don’t support ECR Public repositories.
  • VPC endpoints only support AWS provided DNS through Route 53.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A company is using Amazon Elastic Container Service (Amazon ECS) to run its container-based application on AWS. The company needs to ensure that the container images contain no severe vulnerabilities. Which solution will meet these requirements with the LEAST management overhead?
    1. Pull images from the public container registry. Publish the images to Amazon ECR repositories with scan on push configured.
    2. Pull images from the public container registry. Publish the images to a private container registry hosted on Amazon EC2 instances. Deploy host-based container scanning tools to EC2 instances that run ECS.
    3. Pull images from the public container registry. Publish the images to Amazon ECR repositories with scan on push configured.
    4. Pull images from the public container registry. Publish the images to AWS CodeArtifact repositories in a centralized AWS account.

References

Amazon_Elastic_Container_Registry_ECR

AWS Global vs Regional vs AZ resources

AWS Global, Regional, AZ resource Availability

  • AWS provides a lot of services and these services are either Global, Regional, or Availability Zone specific and cannot be accessed outside that scope.
  • Most of the AWS-managed services are regional-based services, with a few exceptions being Global (e.g. IAM, Route 53, CloudFront, etc.) or AZ bound.

Global vs Regional vs AZ Resource locations

AWS Global vs Regional vs AZ

AWS Networking Services

  • Virtual Private Cloud
    • VPC – Regional
      • VPCs are created within a region
    • Subnet – Availability Zone
      • A subnet can span only a single Availability Zone
    • Security groups – Regional
      • A security group is tied to a region and can be assigned only to instances in the same region.
    • VPC Endpoints – Regional
      • VPC Gateway & Interface Endpoints cannot be created between a VPC and an AWS service in a different region.
    • VPC Peering – Regional
      • VPC Peering can be performed across VPCs in the same account or in different AWS accounts.
      • VPC Peering was originally limited to VPCs within the same region; inter-region VPC Peering is now supported.
    • Elastic IP Address – Regional
      • Elastic IP addresses created within the region can be assigned to instances within the region only.
    • Elastic Network Interface – Availability Zone
  • Route 53 – Global
    • Route 53 services are offered at AWS edge locations and are global
  • CloudFront – Global
    • CloudFront is a global content delivery network (CDN) whose services are offered at AWS edge locations
  • ELB, ALB, NLB, GWLB – Regional
    • Elastic Load Balancer distributes traffic across instances in multiple Availability Zones in the same region
    • Use Route 53 to route traffic to load balancers across regions.
  • Direct Connect Gateway – Global
    • is a globally available resource that can be created in any Region and accessed from all other Regions.
  • Transit Gateway – Regional
    • is a Regional resource and can connect VPCs within the same AWS Region.
    • Transit Gateway Peering can be used to attach TGWs across regions.
  • AWS Global Accelerator – Global
    • is a global service that supports endpoints in multiple AWS Regions.

AWS Compute Services

  • EC2
    • Resource Identifiers – Regional
      • Each resource identifier, such as an AMI ID, instance ID, EBS volume ID, or EBS snapshot ID, is tied to its region and can be used only in the region where you created the resource.
    • Instances – Availability Zone
      • An instance is tied to the Availability Zones in which you launched it. However, note that its instance ID is tied to the region.
    • EBS Volumes – Availability Zone
      • Amazon EBS volume is tied to its Availability Zone and can be attached only to instances in the same Availability Zone.
    • EBS Snapshot – Regional
      • An EBS snapshot is tied to its region and can only be used to create volumes in the same region and has to be copied from one region to another if needed.
    • AMIs – Regional
      • AMI provides templates to launch EC2 instances
      • AMI is tied to the Region where its files are located with Amazon S3. For using AMI in different regions, the AMI can be copied to other regions
    • Auto Scaling – Regional
      • Auto Scaling spans across multiple Availability Zones within the same region but cannot span across regions
  • Cluster Placement Groups – Availability Zone
    • A cluster placement group spans instances within a single Availability Zone
  • ECS – Regional
  • ECR – Regional
    • Images can be pushed/pulled within the same AWS Region.
    • Images can also be pulled between Regions or out to the internet with additional latency and data transfer costs.

AWS Storage Services

  • S3 – Global but Data is Regional
    • S3 buckets are created within the selected region
    • Objects stored are replicated across Availability Zones to provide high durability but are not cross-region replicated unless done explicitly.
    • S3 cross-region replication can be used to replicate data across regions.
  • DynamoDB – Regional
    • All data objects are stored within the same region and replicated across multiple Availability Zones in the same region
    • Data objects can be explicitly replicated across regions using cross-region replication
  • DynamoDB Global Tables – Across Regions
    • is a new multi-master, cross-region replication capability of DynamoDB to support data access locality and regional fault tolerance for database workloads
  • Storage Gateway – Regional
    • AWS Storage Gateway stores volume, snapshot, and tape data in the AWS region in which the gateway is activated

AWS Identity & Security Services

  • Identity Access Management – IAM
    • Users, Groups, Roles, Accounts – Global
      • Same AWS accounts, users, groups, and roles can be used in all regions
    • Key Pairs – Global or Regional
      • EC2 created key pairs are specific to the region
      • RSA key pair can be created and uploaded that can be used in all regions
  • Web Application Firewall – WAF – Global
    • protect web applications from common web exploits and is offered at AWS edge locations globally.
  • AWS GuardDuty – Regional
    • findings remain in the same Regions where the underlying data was generated.
  • Amazon Detective – Regional
  • Amazon Inspector – Regional
  • Amazon Macie – Regional
    • must be enabled on a region-by-region basis and helps view findings across all the accounts within each Region.
    • verifies that all data analyzed is regionally based and doesn’t cross AWS regional boundaries.
  • AWS Security Hub – Regional.
    • supports cross-region aggregation of findings via the designation of an aggregator region.
  • AWS Migration Hub – Regional.
    • runs in a single home region, however, can collect data from all regions

AWS Management & Governance Tools

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. You would like to create a mirror image of your production environment in another region for disaster recovery purposes. Which of the following AWS resources do not need to be recreated in the second region? (Choose 2 answers)
    1. Route 53 Record Sets
    2. IAM Roles
    3. Elastic IP Addresses (EIP) (are specific to a region)
    4. EC2 Key Pairs (are specific to a region)
    5. Launch configurations
    6. Security Groups (are specific to a region)
  2. When using the following AWS services, which should be implemented in multiple Availability Zones for high availability solutions? Choose 2 answers
    1. Amazon DynamoDB (already replicates across AZs)
    2. Amazon Elastic Compute Cloud (EC2)
    3. Amazon Elastic Load Balancing
    4. Amazon Simple Notification Service (SNS) (Global Managed Service)
    5. Amazon Simple Storage Service (S3) (Global Managed Service)
  3. What is the scope of an EBS volume?
    1. VPC
    2. Region
    3. Placement Group
    4. Availability Zone
  4. What is the scope of AWS IAM?
    1. Global (IAM resources are all global; there is no regional constraint)
    2. Availability Zone
    3. Region
    4. Placement Group
  5. What is the scope of an EC2 EIP?
    1. Placement Group
    2. Availability Zone
    3. Region (An Elastic IP address is tied to a region and can be associated only with an instance in the same region)
    4. VPC
  6. What is the scope of an EC2 security group?
    1. Availability Zone
    2. Placement Group
    3. Region (A security group is tied to a region and can be assigned only to instances in the same region)
    4. VPC

References

AWS Resource-based Policies

AWS Resource-based Policies

  • Resource-based policies allow attaching a policy directly to the resource you want to share, instead of using a role as a proxy.
  • Resource-based policies allow granting usage permission to other AWS accounts or organizations on a per-resource basis.
  • A resource-based policy specifies, via the Principal element (e.g. a list of AWS account IDs), who can access the resource and what actions they can perform.
  • With cross-account access through a resource-based policy, the user still works in the trusted account and does not have to give up their own permissions in exchange for the role permissions.
  • Users can work on the resources from both accounts at the same time and this can be useful for scenarios e.g. copying objects from one bucket to the other bucket in a different AWS account.
  • Resources that you want to share are limited to resources that support resource-based policies
  • Resource-based policies still require the trusted account to create users and grant them permissions to be able to access the shared resources.
  • Only permissions equivalent to, or less than, the permissions granted to the account by the resource owning account can be delegated.

S3 Bucket Policy

  • S3 Bucket policy can be used to grant cross-account access to other AWS accounts or IAM users in other accounts for the bucket and objects in it.
  • Bucket policies provide centralized, access control to buckets and objects based on a variety of conditions, including S3 operations, requesters, resources, and aspects of the request (e.g. IP address).
  • Permissions attached to a bucket apply to all of the objects in that bucket that are created and owned by the bucket owner
  • Policies can either add or deny permissions across all (or a subset) of objects within a bucket
  • Only the bucket owner is allowed to associate a policy with a bucket
  • Bucket policies can cater to multiple use cases
    • Granting permissions to multiple accounts with added conditions
    • Granting read-only permission to an anonymous user
    • Limiting access to specific IP addresses (a minimal sketch of this use case follows the list)
    • Restricting access to a specific HTTP referer
    • Restricting access unless a specific HTTP header is present, e.g. to enforce encryption
    • Granting permission to a CloudFront OAI
    • Adding a bucket policy to require MFA
    • Granting cross-account permissions to upload objects while ensuring the bucket owner has full control
    • Granting permissions for S3 inventory and Amazon S3 analytics
    • Granting permissions for S3 Storage Lens
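
As a minimal sketch of the IP-restriction use case above (the bucket name and CIDR range are hypothetical), a bucket policy can be attached with boto3:

    import json
    import boto3

    bucket = "my-example-bucket"  # hypothetical bucket name
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowReadOnlyFromCorporateCIDR",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket}/*",
                # Condition limits access to requests originating from this IP range
                "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
            }
        ],
    }

    # Only the bucket owner can associate a policy with the bucket.
    boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))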

Glacier Vault Policy

  • S3 Glacier vault access policy is a resource-based policy that can be used to manage permissions to the vault.
  • A Vault Lock policy is a Vault Access policy that can be locked. After you lock a Vault Lock policy, the policy can’t be changed. You can use a Vault Lock Policy to enforce compliance controls.

KMS Key Policy

  • A KMS key policy determines who can use and manage the key and is the primary mechanism for controlling access to it.
  • A KMS key policy can be used alone to control access to the keys.
  • A KMS key policy MUST be used, either alone or in combination with IAM policies or grants, to allow access to a KMS CMK (a minimal key policy sketch follows this list).
  • IAM policies by themselves are not sufficient to allow access to keys, though they can be used in combination with a key policy.
  • The IAM user who creates a KMS key is not considered to be the key owner and does not automatically have permission to use or manage the KMS key they created.
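
A minimal key policy sketch (the account ID, admin role, and key ID are placeholders); note that the statement allowing the account root principal is what lets IAM policies in the account grant access to the key, and omitting it can make the key unmanageable:

    import json
    import boto3

    key_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                # Without this statement, IAM policies in the account cannot grant access to the key.
                "Sid": "EnableIAMPolicies",
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
                "Action": "kms:*",
                "Resource": "*",
            },
            {
                "Sid": "AllowKeyAdministration",
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::111122223333:role/KeyAdminRole"},  # hypothetical admin role
                "Action": ["kms:Describe*", "kms:Enable*", "kms:Disable*", "kms:Put*", "kms:ScheduleKeyDeletion"],
                "Resource": "*",
            },
        ],
    }

    boto3.client("kms").put_key_policy(
        KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",  # placeholder key ID
        PolicyName="default",                          # "default" is the only supported policy name
        Policy=json.dumps(key_policy),
    )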

API Gateway Resource Policy

  • API Gateway resource policies are attached to an API to control whether a specified principal (typically an IAM role or group) can invoke the API.
  • API Gateway resource policies can be used to allow the API to be securely invoked by:
    • Users from a specified AWS account.
    • Specified source IP address ranges or CIDR blocks.
    • Specified virtual private clouds (VPCs) or VPC endpoints (in any account).
  • Resource policies can be used for all API endpoint types in API Gateway: private, edge-optimized, and Regional (a sample policy sketch follows this list).
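
A hedged sketch of a resource policy that only allows invocation through a specific VPC endpoint; the API name, endpoint ID, and account are hypothetical, and the policy can be supplied when creating the API (shown here) or attached to an existing API later:

    import json
    import boto3

    vpce_id = "vpce-0123456789abcdef0"  # placeholder VPC endpoint ID

    resource_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": "*",
                "Action": "execute-api:Invoke",
                "Resource": "execute-api:/*",
            },
            {
                # Deny any call that does not arrive through the expected VPC endpoint
                "Effect": "Deny",
                "Principal": "*",
                "Action": "execute-api:Invoke",
                "Resource": "execute-api:/*",
                "Condition": {"StringNotEquals": {"aws:SourceVpce": vpce_id}},
            },
        ],
    }

    boto3.client("apigateway").create_rest_api(
        name="my-private-api",  # hypothetical API name
        endpointConfiguration={"types": ["PRIVATE"]},
        policy=json.dumps(resource_policy),
    )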

Lambda Function Policy

  • Lambda supports resource-based permissions policies for Lambda functions and layers.
  • A resource-based policy can be used to allow an AWS service (e.g. S3 or API Gateway) to invoke the function on your behalf (a minimal sketch follows this list).
  • Resource-based policies apply to a single function, version, alias, or layer version.
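
For example, to let S3 invoke a function (the function name, bucket, and account ID below are placeholders), a statement can be added to the function's resource-based policy with add_permission — a minimal sketch:

    import boto3

    boto3.client("lambda").add_permission(
        FunctionName="my-function",                  # hypothetical function name
        StatementId="allow-s3-invoke",               # unique statement ID within the function policy
        Action="lambda:InvokeFunction",
        Principal="s3.amazonaws.com",                # the service allowed to invoke the function
        SourceArn="arn:aws:s3:::my-example-bucket",  # restrict to events from this bucket
        SourceAccount="111122223333",                # guard against cross-account bucket spoofing
    )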

EFS File System Policy

  • EFS supports IAM resource policy using file system policy.
  • EFS evaluates file system policy, along with any identity-based IAM policies to determine the appropriate file system access permissions to grant.
  • An “allow” permission on an action in either an IAM identity policy or a file system resource policy allows access for that action.

ECR Repository policy

  • Repository policies are resource-based policies that can help control access to the repositories and the images within them.
  • Repository policies are a subset of IAM policies that are scoped for, and specifically used for, controlling access to individual ECR repositories.
  • A user or role needs to be allowed an action through either a repository policy or an IAM policy, but not both, for the action to be allowed.
  • Resource-based policies also help grant usage permission to other accounts on a per-resource basis.

SNS Policy

  • An SNS policy can be used with a particular topic to restrict who can work with that topic, e.g. who can publish messages to it or subscribe to it.
  • SNS policies can grant access to other AWS accounts, or to users within your own AWS account.

SQS Policy

  • The SQS policy system lets you grant permission to other AWS accounts, whereas IAM alone doesn’t (a cross-account sketch follows).
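
A minimal sketch granting another (hypothetical) AWS account permission to send messages to a queue; the queue URL, queue ARN, and account IDs are placeholders:

    import json
    import boto3

    queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/my-queue"  # placeholder queue URL
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowCrossAccountSend",
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::444455556666:root"},  # the other account
                "Action": "sqs:SendMessage",
                "Resource": "arn:aws:sqs:us-east-1:111122223333:my-queue",
            }
        ],
    }

    # The queue policy is stored as the "Policy" queue attribute.
    boto3.client("sqs").set_queue_attributes(
        QueueUrl=queue_url,
        Attributes={"Policy": json.dumps(policy)},
    )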

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.

References

AWS Application Auto Scaling

AWS Application Auto Scaling

  • Application Auto Scaling is a web service for developers and system administrators who need a solution for automatically scaling their scalable resources for individual AWS services beyond EC2.

DynamoDB Auto Scaling

  • DynamoDB tables and global secondary indexes can be scaled using target tracking scaling policies and scheduled scaling.
  • DynamoDB Auto Scaling helps dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns.
  • DynamoDB Auto Scaling enables a table or a global secondary index to increase its provisioned read and write capacity to handle sudden increases in traffic, without throttling.
  • When the workload decreases, Application Auto Scaling decreases the throughput so that you don’t pay for unused provisioned capacity (a minimal configuration sketch follows this list).
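
A minimal sketch of registering a table’s read capacity with Application Auto Scaling and attaching a target tracking policy; the table name, capacity bounds, and target value are hypothetical:

    import boto3

    aas = boto3.client("application-autoscaling")

    # Register the table's read capacity as a scalable target with min/max bounds.
    aas.register_scalable_target(
        ServiceNamespace="dynamodb",
        ResourceId="table/MyTable",  # hypothetical table
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        MinCapacity=5,
        MaxCapacity=500,
    )

    # Target tracking keeps consumed/provisioned read capacity around 70%.
    aas.put_scaling_policy(
        PolicyName="MyTableReadScaling",
        ServiceNamespace="dynamodb",
        ResourceId="table/MyTable",
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 70.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
            },
        },
    )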


Aurora Auto Scaling

  • Aurora DB clusters can be scaled using target tracking scaling policies, step scaling policies, and scheduled scaling.
  • Aurora Auto Scaling dynamically adjusts the number of Aurora Replicas provisioned for an Aurora DB cluster using single-master replication.
  • Aurora Auto Scaling adds or removes read replicas, within a configured minimum and maximum replica count, based on CloudWatch metrics such as average CPU utilization or average connections of the replicas (a brief sketch follows this list)
  • Aurora Auto Scaling enables the Aurora DB cluster to handle sudden increases in connectivity or workload.
  • As the workload decreases, Aurora Auto Scaling removes unnecessary Aurora Replicas so that you don’t pay for unused provisioned DB instances.
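
The same Application Auto Scaling APIs apply to Aurora; a brief sketch scaling the replica count of a (hypothetical) cluster on average reader CPU:

    import boto3

    aas = boto3.client("application-autoscaling")

    aas.register_scalable_target(
        ServiceNamespace="rds",
        ResourceId="cluster:my-aurora-cluster",  # hypothetical cluster identifier
        ScalableDimension="rds:cluster:ReadReplicaCount",
        MinCapacity=1,
        MaxCapacity=8,
    )

    aas.put_scaling_policy(
        PolicyName="aurora-replica-cpu-scaling",
        ServiceNamespace="rds",
        ResourceId="cluster:my-aurora-cluster",
        ScalableDimension="rds:cluster:ReadReplicaCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 60.0,  # keep average reader CPU around 60%
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
            },
        },
    )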

Lambda Auto Scaling

  • AWS Lambda provisioned concurrency can be scaled using target tracking scaling policies and scheduled scaling.

EC2 Auto Scaling

  • EC2 Auto Scaling ensures a correct number of EC2 instances are always running to handle the load of the application.
  • Auto Scaling helps
    • achieve better fault tolerance, better availability, and better cost management.
    • specify scaling policies that launch and terminate EC2 instances to handle any increase or decrease in demand (a target tracking policy sketch follows this list).
  • Auto Scaling attempts to distribute instances evenly between the AZs that are enabled for the Auto Scaling group.
  • Auto Scaling does this by attempting to launch new instances in the AZ with the fewest instances. If the attempt fails, it attempts to launch the instances in another AZ until it succeeds.
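
A minimal sketch of a target tracking scaling policy for an existing Auto Scaling group; the group name and target value are hypothetical:

    import boto3

    boto3.client("autoscaling").put_scaling_policy(
        AutoScalingGroupName="my-asg",  # hypothetical Auto Scaling group
        PolicyName="keep-cpu-at-50",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 50.0,  # aim for ~50% average CPU across the group
        },
    )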

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.

References

AWS_Application_Auto_Scaling

DynamoDB Table Classes

DynamoDB Table Classes

  • DynamoDB table classes are designed to help you optimize for cost.
  • DynamoDB currently supports two table classes
    • DynamoDB Standard table class is the default, and is recommended for the vast majority of workloads.
    • DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA) table class is optimized for tables where storage is the dominant cost, e.g. tables that store infrequently accessed data such as logs, old social media posts, e-commerce order history, and past gaming achievements
  • Every DynamoDB table is associated with a table class.
  • All secondary indexes associated with the table use the same table class.
  • DynamoDB table class can be
    • set when creating the table (DynamoDB Standard by default), or
    • updated for an existing table using the AWS Management Console, AWS CLI, or AWS SDKs (a brief sketch follows this list).
  • DynamoDB also supports managing the table class using AWS CloudFormation for single-region tables (tables that are not global tables).
  • Each table class offers different pricing for data storage as well as read and write requests.
  • You can select the most cost-effective table class for your table based on its storage and throughput usage patterns.
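
A brief sketch of setting the table class at creation time and switching an existing table to Standard-IA; the table names and key schema are hypothetical:

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Create a table directly in the Standard-IA table class.
    dynamodb.create_table(
        TableName="ArchivedOrders",  # hypothetical table
        AttributeDefinitions=[{"AttributeName": "OrderId", "AttributeType": "S"}],
        KeySchema=[{"AttributeName": "OrderId", "KeyType": "HASH"}],
        BillingMode="PAY_PER_REQUEST",
        TableClass="STANDARD_INFREQUENT_ACCESS",
    )

    # Or switch an existing table's class without any application code changes.
    dynamodb.update_table(
        TableName="OrderHistory",  # hypothetical existing table
        TableClass="STANDARD_INFREQUENT_ACCESS",
    )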

DynamoDB Table Classes Considerations

  • DynamoDB Standard table class offers lower throughput costs than DynamoDB Standard-IA and is the most cost-effective option for tables where throughput is the dominant cost.
  • DynamoDB Standard-IA table class offers lower storage costs than DynamoDB Standard and is the most cost-effective option for tables where storage is the dominant cost.
  • DynamoDB Standard-IA tables offer the same performance, durability, and availability as DynamoDB Standard tables.
  • Switching between the DynamoDB Standard and DynamoDB Standard-IA table classes does not require changing the application code. You use the same DynamoDB APIs and service endpoints regardless of the table class your tables use.
  • DynamoDB Standard-IA tables are compatible with all existing DynamoDB features such as auto-scaling, on-demand mode, time-to-live (TTL), on-demand backups, point-in-time recovery (PITR), and global secondary indexes.
  • The cost-effectiveness of a table class depends on the table’s expected storage and throughput usage patterns. It is recommended to review the table’s historical storage and throughput cost and usage with AWS Cost and Usage Reports and AWS Cost Explorer.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.

References

Amazon_DynamoDB_Table_Classes