SQS offers two types of queues – Standard & FIFO queues
SQS Standard vs FIFO Queue Features
Message Order
Standard queues provide best-effort ordering which ensures that messages are generally delivered in the same order as they are sent. Occasionally (because of the highly-distributed architecture that allows high throughput), more than one copy of a message might be delivered out of order
FIFO queues offer first-in-first-out delivery and exactly-once processing: the order in which messages are sent and received is strictly preserved
Delivery
Standard queues guarantee that a message is delivered at least once and duplicates can be introduced into the queue
FIFO queues ensure a message is delivered exactly once and remains available until a consumer processes and deletes it; duplicates are not introduced into the queue
Transactions Per Second (TPS)
Standard queues allow nearly-unlimited number of transactions per second
FIFO queues are limited to 300 transactions per second (TPS) per API action, which can be increased to 3,000 TPS with batching (up to 10 messages per API call).
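The throughput arithmetic can be sketched as follows (a toy calculation, not an AWS API call; 300 calls per second per action and 10 messages per batch are the documented FIFO limits without high-throughput mode):

```python
# Rough throughput math for FIFO queues (without high-throughput mode):
# each API action is limited to 300 calls/second, and SendMessageBatch
# accepts up to 10 messages per call.
API_CALLS_PER_SECOND = 300
MAX_BATCH_SIZE = 10

def max_messages_per_second(batch_size: int) -> int:
    """Messages/second achievable at a given batch size (1..10)."""
    if not 1 <= batch_size <= MAX_BATCH_SIZE:
        raise ValueError("SQS batch size must be between 1 and 10")
    return API_CALLS_PER_SECOND * batch_size

print(max_messages_per_second(1))   # 300  - without batching
print(max_messages_per_second(10))  # 3000 - with full batches
```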
Regions
Standard & FIFO queues are now available in all regions
SQS Buffered Asynchronous Client
FIFO queues aren’t currently compatible with the SQS Buffered Asynchronous Client, where messages are buffered at the client side and sent as a single request to the SQS queue to reduce cost.
AWS Services Supported
Standard Queues are supported by all AWS services
FIFO queues are currently not supported by all AWS services, such as
CloudWatch Events
S3 Event Notifications
SNS Topic Subscriptions
Auto Scaling Lifecycle Hooks
AWS IoT Rule Actions
AWS Lambda Dead Letter Queues
Use Cases
Standard queues can be used in any scenario, as long as the application can process messages that arrive more than once and out of order
Decouple live user requests from intensive background work: Let users upload media while resizing or encoding it.
Allocate tasks to multiple worker nodes: Process a high number of credit card validation requests.
Batch messages for future processing: Schedule multiple entries to be added to a database.
FIFO queues are designed to enhance messaging between applications when the order of operations and events is critical, or where duplicates can’t be tolerated
Ensure that user-entered commands are executed in the right order.
Display the correct product price by sending price modifications in the right order.
Prevent a student from enrolling in a course before registering for an account.
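As a minimal sketch of how such a FIFO queue would be created, the helper below builds the parameters for boto3's `create_queue` call (the queue name `orders` is a hypothetical example; the boto3 call itself is shown only as a comment):

```python
def fifo_queue_params(name: str, content_based_dedup: bool = True) -> dict:
    """Build the kwargs for sqs.create_queue() for a FIFO queue.

    FIFO queue names must end with the ".fifo" suffix, and the
    FifoQueue attribute must be set to "true" at creation time.
    """
    if not name.endswith(".fifo"):
        name += ".fifo"
    return {
        "QueueName": name,
        "Attributes": {
            "FifoQueue": "true",
            # Optional: let SQS derive the dedup ID from a hash of the body
            "ContentBasedDeduplication": str(content_based_dedup).lower(),
        },
    }

params = fifo_queue_params("orders")
print(params["QueueName"])  # orders.fifo
# boto3 usage (not executed here):
#   import boto3
#   sqs = boto3.client("sqs")
#   sqs.create_queue(**params)
```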
AWS Certification Exam Practice Questions
Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
AWS services are updated every day and both the answers and questions might become outdated soon, so research accordingly.
AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
Open to further feedback, discussion and correction.
A restaurant reservation application needs the ability to maintain a waiting list. When a customer tries to reserve a table, and none are available, the customer must be put on the waiting list, and the application must notify the customer when a table becomes free. What service should the Solutions Architect recommend to ensure that the system respects the order in which the customer requests are put onto the waiting list?
Amazon SNS
AWS Lambda with sequential dispatch
A FIFO queue in Amazon SQS
A standard queue in Amazon SQS
A solutions architect is designing an application for a two-step order process. The first step is synchronous and must return to the user with little latency. The second step takes longer, so it will be implemented in a separate component. Orders must be processed exactly once and in the order in which they are received. How should the solutions architect integrate these components?
Use Amazon SQS FIFO queues.
Use an AWS Lambda function along with Amazon SQS standard queues.
Create an SNS topic and subscribe an Amazon SQS FIFO queue to that topic.
Create an SNS topic and subscribe an Amazon SQS Standard queue to that topic.
Kinesis Data Streams allows real-time processing of streaming big data and the ability to read and replay records to multiple Amazon Kinesis Applications.
Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the same record processor, making it easier to build multiple applications that read from the same Amazon Kinesis stream (for example, to perform counting, aggregation, and filtering).
SQS offers a reliable, highly-scalable hosted queue for storing messages as they travel between applications or microservices.
It moves data between distributed application components and helps decouple these components.
provides common middleware constructs such as dead-letter queues and poison-pill management.
provides a generic web services API and can be accessed by any programming language that the AWS SDK supports.
supports both standard and FIFO queues
Scaling
Kinesis Data Streams is not fully managed and requires manual provisioning and scaling by increasing shards
SQS is fully managed, highly scalable and requires no administrative overhead and little configuration
Ordering
Kinesis provides ordering of records, as well as the ability to read and/or replay records in the same order to multiple Kinesis Applications
SQS Standard Queue does not guarantee data ordering and provides at least once delivery of messages
SQS FIFO Queue guarantees data ordering within the message group
Data Retention Period
Kinesis Data Streams stores the data for 24 hours, by default, which can be extended up to 365 days
SQS stores messages for 4 days, by default, which can be configured from 1 minute to 14 days, but removes the message once deleted by the consumer
Delivery Semantics
Kinesis and SQS Standard queues both guarantee at-least-once delivery of messages.
SQS FIFO queues guarantee exactly-once processing
Parallel Clients
Kinesis supports multiple consumers
SQS allows the messages to be delivered to only one consumer at a time and requires multiple queues to deliver messages to multiple consumers
Use Cases
Kinesis use cases requirements
Ordering of records.
Ability to consume records in the same order a few hours later
Ability for multiple applications to consume the same stream concurrently
Routing related records to the same record processor (as in streaming MapReduce)
SQS uses cases requirements
Messaging semantics like message-level ack/fail and visibility timeout
Leveraging SQS’s ability to scale transparently
Dynamically increasing concurrency/throughput at read time
Individual message delay, where each message can have its own delay
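The individual message delay mentioned above can be sketched as a helper that builds `send_message` parameters with a per-message `DelaySeconds` (the queue URL is a hypothetical example; note that per-message delays apply to standard queues, while FIFO queues only support a queue-level delay):

```python
def delayed_message(queue_url: str, body: str, delay_seconds: int) -> dict:
    """Build kwargs for sqs.send_message() with an individual delay.

    A per-message DelaySeconds (0-900) overrides the queue's default
    delay on standard queues; FIFO queues ignore it in favor of the
    queue-level delay.
    """
    if not 0 <= delay_seconds <= 900:
        raise ValueError("DelaySeconds must be between 0 and 900 (15 minutes)")
    return {
        "QueueUrl": queue_url,
        "MessageBody": body,
        "DelaySeconds": delay_seconds,
    }

kwargs = delayed_message(
    "https://sqs.us-east-1.amazonaws.com/123456789012/queue2", "hello", 120)
print(kwargs["DelaySeconds"])  # 120
# boto3 usage (not executed here): sqs.send_message(**kwargs)
```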
AWS Certification Exam Practice Questions
You are deploying an application to track GPS coordinates of delivery trucks in the United States. Coordinates are transmitted from each delivery truck once every three seconds. You need to design an architecture that will enable real-time processing of these coordinates from multiple consumers. Which service should you use to implement data ingestion?
Amazon Kinesis
AWS Data Pipeline
Amazon AppStream
Amazon Simple Queue Service
Your customer is willing to consolidate their log streams (access logs, application logs, security logs etc.) in one single system. Once consolidated, the customer wants to analyze these logs in real time based on heuristics. From time to time, the customer needs to validate heuristics, which requires going back to data samples extracted from the last 12 hours? What is the best approach to meet your customer’s requirements?
Send all the log events to Amazon SQS. Setup an Auto Scaling group of EC2 servers to consume the logs and apply the heuristics.
Send all the log events to Amazon Kinesis and develop a client process to apply heuristics on the logs (can perform real-time analysis; stores data for 24 hours by default, which can be extended to 365 days)
Configure Amazon CloudTrail to receive custom logs, use EMR to apply heuristics the logs (CloudTrail is only for auditing)
Setup an Auto Scaling group of EC2 syslogd servers, store the logs on S3 use EMR to apply heuristics on the logs (EMR is for batch analysis)
SQS FIFO Queue provides enhanced messaging between applications with the additional features
FIFO (First-In-First-Out) delivery
order in which messages are sent and received is strictly preserved
key when the order of operations & events is critical
Exactly-once processing
a message is delivered once and remains available until a consumer processes and deletes it
key when duplicates can’t be tolerated.
limited to 300 or 3000 (with batching) transactions per second (TPS)
FIFO queues provide all the capabilities of Standard queues, and improve upon and complement them.
FIFO queues support message groups that allow multiple ordered message groups within a single queue.
FIFO queue names must end with the .fifo suffix
SQS Buffered Asynchronous Client doesn’t currently support FIFO queues
Not all AWS services support FIFO queues, such as
Auto Scaling Lifecycle Hooks
AWS IoT Rule Actions
AWS Lambda Dead-Letter Queues
Amazon S3 Event Notifications
SQS FIFO supports one or more producers and messages are stored in the order that they were successfully received by SQS.
SQS FIFO queues don’t serve messages from the same message group to more than one consumer at a time.
Message Deduplication
SQS APIs provide deduplication functionality that prevents message producers from sending duplicates.
Message deduplication ID is the token used for the deduplication of sent messages.
If a message with a particular message deduplication ID is sent successfully, any messages sent with the same message deduplication ID are accepted successfully but aren’t delivered during the 5-minute deduplication interval.
So basically, any duplicates introduced by the message producer are removed within a 5-minute deduplication interval
Message deduplication applies to an entire queue, not to individual message groups
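The 5-minute deduplication interval can be illustrated with a toy model (this simulates the documented behavior locally; it is not the SQS implementation):

```python
DEDUP_INTERVAL = 5 * 60  # seconds

class DedupWindow:
    """Toy model of the FIFO 5-minute deduplication interval."""

    def __init__(self):
        self.seen = {}  # dedup_id -> time it was first accepted for delivery

    def send(self, dedup_id: str, now: float) -> bool:
        """Return True if the message will be delivered, False if it is
        accepted but not delivered as a duplicate within the interval."""
        first = self.seen.get(dedup_id)
        if first is not None and now - first < DEDUP_INTERVAL:
            return False           # duplicate inside the interval: dropped
        self.seen[dedup_id] = now  # new id, or interval expired: delivered
        return True

w = DedupWindow()
print(w.send("msg-1", now=0))    # True  - first send is delivered
print(w.send("msg-1", now=60))   # False - duplicate inside 5 minutes
print(w.send("msg-1", now=400))  # True  - interval expired, delivered again
```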
Message groups
Messages are grouped into distinct, ordered “bundles” within a FIFO queue
Message group ID is the tag that specifies that a message belongs to a specific message group
For each message group ID, all messages are sent and received in strict order
However, messages with different message group ID values might be sent and received out of order.
Every message must be associated with a message group ID; without one, the action fails.
SQS delivers the messages in the order in which they arrive for processing if multiple hosts (or different threads on the same host) send messages with the same message group ID
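A toy model of message-group ordering: strict FIFO within a group ID, with no ordering guarantee across groups (a local simulation for illustration only; real SQS consumers do not pick a group, SQS serves the groups to them):

```python
from collections import defaultdict, deque

class FifoQueueModel:
    """Toy model of FIFO ordering: strict order per message group ID."""

    def __init__(self):
        self.groups = defaultdict(deque)  # group_id -> ordered messages

    def send(self, group_id: str, body: str):
        self.groups[group_id].append(body)

    def receive(self, group_id: str):
        """Return the next message of the group, or None if empty."""
        return self.groups[group_id].popleft() if self.groups[group_id] else None

q = FifoQueueModel()
q.send("user-1", "cmd-A")
q.send("user-2", "cmd-X")   # a different group: may interleave freely
q.send("user-1", "cmd-B")
print(q.receive("user-1"))  # cmd-A (strict order within the group)
print(q.receive("user-1"))  # cmd-B
```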
In relation to Amazon SQS, how can you ensure that messages are delivered in order? Select 2 answers
Increase the size of your queue
Send them with a timestamp
Using FIFO queues
Give each message a unique id
Use sequence number within the messages with Standard queues
A company runs a major auction platform where people buy and sell a wide range of products. The platform requires that transactions from buyers and sellers get processed in exactly the order received. At the moment, the platform is implemented using RabbitMQ, which is a lightweight queue system. The company consulted you to migrate the on-premises platform to AWS. How should you design the migration plan? (Select TWO)
When the bids are received, send the bids to an SQS FIFO queue before they are processed.
When the users have submitted the bids from frontend, the backend service delivers the messages to an SQS standard queue.
Add a message group ID to the messages before they are sent to the SQS queue so that the message processing is in a strict order.
Use an EC2 or Lambda to add a deduplication ID to the messages before the messages are sent to the SQS queue to ensure that bids are processed in the right order.
Visibility timeout defines the period where SQS blocks the visibility of the message and prevents other consuming components from receiving and processing that message.
DLQ Redrive policy specifies the source queue, the dead-letter queue, and the conditions under which messages are moved from the former to the latter if the consumer of the source queue fails to process a message a specified number of times.
Short and Long polling control how the queues are polled, and long polling helps reduce empty responses.
Queue and Message Identifiers
Queue URLs
Queue is identified by a unique queue name within the same AWS account
Each queue is assigned a Queue URL identifier, e.g. http://sqs.us-east-1.amazonaws.com/123456789012/queue2
Queue URL is needed to perform any operation on the Queue.
Message ID
Message IDs are useful for identifying messages
Each message receives a system-assigned message ID that is returned with the SendMessage response.
To delete a message, the message's receipt handle, not the message ID, is needed
A message ID can be up to 100 characters long
Receipt Handle
When a message is received from a queue, a receipt handle is returned with the message which is associated with the act of receiving the message rather than the message itself.
The receipt handle, not the message ID, is required to delete a message or to change its visibility.
If a message is received more than once, a different receipt handle is assigned for each receipt, and the latest one should always be used.
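The receipt-handle behavior can be sketched with a toy model in which every receipt of a message yields a fresh handle and only the latest one is honored (a simplification of the guidance above, not the exact SQS semantics in every edge case):

```python
import uuid

class ReceiptTracker:
    """Toy model: each receive of the same message yields a new receipt
    handle, and only the latest handle is honored for delete calls."""

    def __init__(self, message_id: str):
        self.message_id = message_id
        self.latest_handle = None

    def receive(self) -> str:
        self.latest_handle = uuid.uuid4().hex  # new handle per receipt
        return self.latest_handle

    def delete(self, receipt_handle: str) -> bool:
        """Return True only for the latest receipt handle."""
        return receipt_handle == self.latest_handle

msg = ReceiptTracker("msg-42")
h1 = msg.receive()
h2 = msg.receive()     # received again -> a different handle
print(h1 != h2)        # True
print(msg.delete(h1))  # False - stale handle
print(msg.delete(h2))  # True  - latest handle works
```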
Message Deduplication ID
Message Deduplication ID is used for the deduplication of sent messages.
Message Deduplication ID is applicable for FIFO queues.
If a message with a particular message deduplication ID is sent successfully, any messages sent with the same message deduplication ID are accepted successfully but aren’t delivered during the 5-minute deduplication interval.
Message Group ID
Message Group ID specifies that a message belongs to a specific message group.
Message Group ID is applicable for FIFO queues.
Messages that belong to the same message group are always processed one by one, in a strict order relative to the message group.
However, messages that belong to different message groups might be processed out of order.
Visibility timeout
SQS does not delete a message once it is received by a consumer; because the system is distributed, there's no guarantee that the consumer will actually process the message (the connection could break or the component could fail before processing it)
The consumer should explicitly delete the message from the Queue once it is received and successfully processed.
As the message is still available in the queue, other consumers would be able to receive and process it, which needs to be prevented.
SQS handles the above behavior using Visibility timeout.
SQS blocks the visibility of the message for the Visibility timeout period, which is the time during which SQS prevents other consuming components from receiving and processing that message.
Consumer should delete the message within the Visibility timeout. If the consumer fails to delete the message before the visibility timeout expires, the message is visible again to other consumers.
Once Visible the message is available for other consumers to consume and can lead to duplicate messages.
Visibility timeout considerations
Clock starts ticking once SQS returns the message
should be large enough to take into account the processing time for each message
default Visibility timeout for each Queue is 30 seconds and can be changed at the Queue level
when receiving messages, a special visibility timeout for the returned messages can be set without changing the overall queue timeout using the receipt handle
can be extended by the consumer using ChangeMessageVisibility if the consumer thinks it won't be able to process the message within the current visibility timeout period; SQS restarts the timeout period using the new value
a message’s Visibility timeout extension applies only to that particular receipt of the message and does not affect the timeout for the queue or later receipts of the message
SQS has a limit of 120,000 inflight messages per standard queue (20,000 per FIFO queue) i.e. messages received but not yet deleted; any further receives return an error after the limit is reached
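The considerations above can be illustrated with a toy model of a single message's visibility (a local simulation; note how the expired timeout at the end leads to a redelivery, i.e. a potential duplicate):

```python
class VisibilityModel:
    """Toy model of the visibility timeout for a single message."""

    def __init__(self, visibility_timeout: int = 30):  # SQS default: 30s
        self.visibility_timeout = visibility_timeout
        self.invisible_until = 0.0
        self.deleted = False

    def receive(self, now: float) -> bool:
        """Return True if the message is visible; receiving hides it."""
        if self.deleted or now < self.invisible_until:
            return False
        self.invisible_until = now + self.visibility_timeout
        return True

    def delete(self):
        """The consumer must delete within the timeout to avoid redelivery."""
        self.deleted = True

m = VisibilityModel(visibility_timeout=30)
print(m.receive(now=0))   # True  - consumer 1 receives, message hidden
print(m.receive(now=10))  # False - still within the visibility timeout
print(m.receive(now=31))  # True  - timeout expired, redelivered (duplicate!)
```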
Message Lifecycle
Component 1 sends Message A to a queue, and the message is redundantly distributed across the SQS servers.
When Component 2 is ready to process a message, it retrieves messages from the queue, and Message A is returned. While Message A is being processed, it remains in the queue but is not returned to subsequent receive requests for the duration of the visibility timeout.
Component 2 deletes Message A from the queue to avoid the message being received and processed again once the visibility timeout expires.
SQS Dead Letter Queues – DLQ
SQS supports dead-letter queues (DLQ), which other queues (source queues – Standard and FIFO) can target for messages that can’t be processed (consumed) successfully.
Dead-letter queues are useful for debugging the application or messaging system because a DLQ helps isolate unconsumed messages to determine why their processing doesn't succeed.
DLQ redrive policy
specifies the source queue, the dead-letter queue, and the conditions under which SQS moves messages from the former to the latter if the consumer of the source queue fails to process a message a specified number of times.
specifies which source queues can access the dead-letter queue.
also helps move the messages back to the source queue.
SQS does not create the dead-letter queue automatically. DLQ must first be created before being used.
DLQ for the source queue should be of the same type i.e. Dead-letter queue of a FIFO queue must also be a FIFO queue. Similarly, the dead-letter queue of a standard queue must also be a standard queue.
DLQ should be in the same account and region as the source queue.
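A minimal sketch of configuring a redrive policy as boto3-style queue attributes (the DLQ ARN and `maxReceiveCount` value are hypothetical examples; `RedrivePolicy` is a JSON string with `deadLetterTargetArn` and `maxReceiveCount` fields):

```python
import json

def redrive_policy(dlq_arn: str, max_receive_count: int = 5) -> dict:
    """Build the RedrivePolicy attribute for sqs.set_queue_attributes().

    After a message has been received maxReceiveCount times without
    being deleted, SQS moves it to the dead-letter queue.
    """
    return {
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": dlq_arn,
            # the documented examples pass the count as a string
            "maxReceiveCount": str(max_receive_count),
        })
    }

attrs = redrive_policy("arn:aws:sqs:us-east-1:123456789012:my-dlq", 3)
print(json.loads(attrs["RedrivePolicy"])["maxReceiveCount"])  # 3 (as "3")
# boto3 usage (not executed here):
#   sqs.set_queue_attributes(QueueUrl=source_queue_url, Attributes=attrs)
```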
SQS Delay Queues
Delay queues help postpone the delivery of new messages to consumers for a number of seconds
Messages sent to the delay queue remain invisible to consumers for the duration of the delay period.
Minimum delay is 0 seconds (default) and the Maximum is 15 minutes.
Delay queues are similar to visibility timeouts as both features make messages unavailable to consumers for a specific period of time.
The difference between the two is that, for delay queues, a message is hidden when it is first added to the queue, whereas for visibility timeouts a message is hidden only after it is consumed from the queue.
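The difference can be reduced to two one-line formulas for when a message becomes visible (a sketch of the timing semantics, not API calls):

```python
def first_visible_at(sent_at: float, delay_seconds: int) -> float:
    """Delay queue: the message is hidden from the moment it is sent."""
    return sent_at + delay_seconds

def next_visible_at(received_at: float, visibility_timeout: int) -> float:
    """Visibility timeout: the message is hidden from the moment it is
    received (consumed) from the queue."""
    return received_at + visibility_timeout

print(first_visible_at(0, 120))  # 120 - delay counts from the send
print(next_visible_at(200, 30))  # 230 - timeout counts from the receive
```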
Short and Long polling
SQS provides short polling and long polling to receive messages from a queue.
Short Polling
ReceiveMessage request queries only a subset of the servers (based on a weighted random distribution) to find messages that are available to include in the response.
SQS sends the response right away, even if the query found no messages.
By default, queues use short polling.
Long Polling
ReceiveMessage request queries all of the servers for messages.
SQS sends a response after it collects at least one available message, up to the maximum number of messages specified in the request.
SQS sends an empty response only if the polling wait time expires.
A wait time greater than 0 triggers long polling, with a maximum of 20 seconds.
Long polling helps
reduce the cost of using SQS by reducing the number of empty responses (when there are no messages available for a ReceiveMessage request)
reduce false empty responses (when messages are available but aren’t included in a response).
return messages as soon as they become available.
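A minimal sketch of enabling long polling per request, as boto3-style kwargs for `receive_message` (the validation mirrors the 0-20 second limit described above; the queue URL would be supplied by the caller):

```python
def long_poll_params(queue_url: str, wait_seconds: int = 20) -> dict:
    """Build kwargs for sqs.receive_message() with long polling enabled.

    Any WaitTimeSeconds greater than 0 turns on long polling;
    20 seconds is the maximum.
    """
    if not 0 <= wait_seconds <= 20:
        raise ValueError("WaitTimeSeconds must be between 0 and 20")
    return {
        "QueueUrl": queue_url,
        "WaitTimeSeconds": wait_seconds,  # > 0 => long polling
        "MaxNumberOfMessages": 10,        # up to 10 messages per response
    }

params = long_poll_params("https://sqs.us-east-1.amazonaws.com/123456789012/queue2")
# boto3 usage (not executed here): sqs.receive_message(**params)
```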
AWS Certification Exam Practice Questions
How does Amazon SQS allow multiple readers to access the same message queue without losing messages or processing them many times?
By identifying a user by his unique id
By using unique cryptography
Amazon SQS queue has a configurable visibility timeout
Multiple readers can’t access the same message queue
If a message is retrieved from a queue in Amazon SQS, how long is the message inaccessible to other users by default?
0 seconds
1 hour
1 day
forever
30 seconds
When a Simple Queue Service message triggers a task that takes 5 minutes to complete, which process below will result in successful processing of the message and remove it from the queue while minimizing the chances of duplicate processing?
Retrieve the message with an increased visibility timeout, process the message, delete the message from the queue
Retrieve the message with an increased visibility timeout, delete the message from the queue, process the message
Retrieve the message with increased DelaySeconds, process the message, delete the message from the queue
Retrieve the message with increased DelaySeconds, delete the message from the queue, process the message
You need to process long-running jobs once and only once. How might you do this?
Use an SNS queue and set the visibility timeout to long enough for jobs to process.
Use an SQS queue and set the reprocessing timeout to long enough for jobs to process.
Use an SQS queue and set the visibility timeout to long enough for jobs to process.
Use an SNS queue and set the reprocessing timeout to long enough for jobs to process.
You are getting a lot of empty receive requests when using Amazon SQS. This is making a lot of unnecessary network load on your instances. What can you do to reduce this load?
Subscribe your queue to an SNS topic instead.
Use as long of a poll as possible, instead of short polls.
Alter your visibility timeout to be shorter.
Use sqsd on your EC2 instances.
Company B provides an online image recognition service and utilizes SQS to decouple system components for scalability. The SQS consumers poll the imaging queue as often as possible to keep end-to-end throughput as high as possible. However, Company B is realizing that polling in tight loops is burning CPU cycles and increasing costs with empty responses. How can Company B reduce the number of empty responses?
Set the imaging queue visibility Timeout attribute to 20 seconds
Set the imaging queue ReceiveMessageWaitTimeSeconds attribute to 20 seconds (long polling)
Set the imaging queue MessageRetentionPeriod attribute to 20 seconds
Set the DelaySeconds parameter of a message to 20 seconds
Simple Queue Service – SQS is a highly available distributed queue system
A queue is a temporary repository for messages awaiting processing and acts as a buffer between the component producer and the consumer
is a message queue service used by distributed applications to exchange messages through a polling model, and can be used to decouple sending and receiving components.
is fully managed and requires no administrative overhead and little configuration
offers a reliable, highly-scalable, hosted queue for storing messages in transit between applications.
provides a fault-tolerant, loosely coupled way for distributed application components to send & receive messages without requiring each component to be concurrently available
helps build distributed applications with decoupled components
supports encryption at rest and encryption in transit using the HTTP over SSL (HTTPS) and Transport Layer Security (TLS) protocols for security.
Standard queues support at-least-once message delivery. However, occasionally (because of the highly distributed architecture that allows nearly unlimited throughput), more than one copy of a message might be delivered out of order.
Standard queues support a nearly unlimited number of API calls per second, per API action (SendMessage, ReceiveMessage, or DeleteMessage).
Standard queues provide best-effort ordering which ensures that messages are generally delivered in the same order as they’re sent.
FIFO (First-In-First-Out) queues provide messages in order and exactly once delivery.
FIFO queues have all the capabilities of the standard queues but are designed to enhance messaging between applications when the order of operations and events is critical, or where duplicates can’t be tolerated.
Decoupling
Decouple components of a distributed application that may not all process the same amount of work simultaneously.
Buffer and Batch Operations
Add scalability and reliability to the architecture and smooth out temporary volume spikes without losing messages or increasing latency
Request Offloading
Move slow operations off of interactive request paths by enqueueing the request.
Fan-out
Combine SQS with SNS to send identical copies of a message to multiple queues in parallel for simultaneous processing.
Auto Scaling
SQS queues can be used to determine the load on an application, and combined with Auto Scaling, the EC2 instances can be scaled in or out, depending on the volume of traffic
How SQS Queues Works
SQS allows queues to be created and deleted, and messages to be sent to and received from them
SQS queue retains messages for four days, by default.
Queues can be configured to retain messages for 1 minute to 14 days after the message has been sent.
SQS can delete a queue without notification if any action hasn’t been performed on it for 30 consecutive days.
SQS allows the deletion of the queue with messages in it
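The retention settings above can be sketched as an attributes helper for `create_queue`/`set_queue_attributes` (SQS expects the period in seconds, passed as a string; the 14-day example below is hypothetical):

```python
def retention_attributes(days: int) -> dict:
    """Build Attributes that set the message retention period.

    SQS accepts 60 seconds (1 minute) to 1,209,600 seconds (14 days);
    the default is 345,600 seconds (4 days).
    """
    seconds = days * 24 * 60 * 60
    if not 60 <= seconds <= 1_209_600:
        raise ValueError("retention must be between 1 minute and 14 days")
    return {"MessageRetentionPeriod": str(seconds)}

print(retention_attributes(14))  # {'MessageRetentionPeriod': '1209600'}
# boto3 usage (not executed here):
#   sqs.set_queue_attributes(QueueUrl=url, Attributes=retention_attributes(14))
```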
Visibility timeout defines the period where SQS blocks the visibility of the message and prevents other consuming components from receiving and processing that message.
DLQ Redrive policy specifies the source queue, the dead-letter queue, and the conditions under which SQS moves messages from the former to the latter if the consumer of the source queue fails to process a message a specified number of times.
SQS Short and Long polling control how the queues are polled, and Long polling helps reduce empty responses.
SQS Buffered Asynchronous Client
Amazon SQS Buffered Async Client for Java provides an implementation of the AmazonSQSAsyncClient interface and adds several important features:
Automatic batching of multiple SendMessage, DeleteMessage, or ChangeMessageVisibility requests without any required changes to the application
Prefetching of messages into a local buffer that allows the application to immediately process messages from SQS without waiting for the messages to be retrieved
Working together, automatic batching and prefetching increase the throughput and reduce the latency of the application while reducing the costs by making fewer SQS requests.
SQS Security and reliability
SQS stores all message queues and messages within a single, highly-available AWS region with multiple redundant Availability Zones (AZs)
SQS supports HTTP over SSL (HTTPS) and Transport Layer Security (TLS) protocols.
SQS supports Encryption at Rest. SSE encrypts messages as soon as SQS receives them and decrypts messages only when they are sent to an authorized consumer.
SQS also supports resource-based permissions
SQS Design Patterns
Priority Queue Pattern
Use SQS to prepare multiple queues for the individual priority levels.
Place those processes to be executed immediately (job requests) in the high priority queue.
Prepare a number of batch servers for processing the job requests of the queues, depending on the priority levels.
Queues have a message “Delayed Send” function, which can be used to delay the time for starting a process.
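The priority-queue pattern reduces to a polling order: check the high-priority queue first and fall back to the default queue only when it is empty (a local sketch with in-memory deques standing in for the two SQS queues):

```python
from collections import deque

def poll_with_priority(high: deque, default: deque):
    """Poll the high-priority queue first; fall back to the default queue.
    Returns None when both queues are empty."""
    if high:
        return high.popleft()
    if default:
        return default.popleft()
    return None

high, default = deque(["urgent-1"]), deque(["normal-1", "normal-2"])
print(poll_with_priority(high, default))  # urgent-1
print(poll_with_priority(high, default))  # normal-1
```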
SQS Job Observer Pattern
Enqueue job requests as SQS messages.
Have the batch server dequeue and process messages from SQS.
Set up Auto Scaling to automatically increase or decrease the number of batch servers, using the number of SQS messages, with CloudWatch, as the trigger to do so.
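A sketch of the scaling rule the Job Observer pattern implies: size the worker fleet from the queue backlog (the throughput and drain-time numbers are hypothetical tuning parameters; in practice the backlog would come from the ApproximateNumberOfMessagesVisible CloudWatch metric):

```python
import math

def desired_workers(visible_messages: int, msgs_per_worker_per_min: int,
                    target_drain_minutes: int = 5, max_workers: int = 20) -> int:
    """Size the batch-server fleet from the SQS backlog so that it drains
    within target_drain_minutes, capped at max_workers."""
    if visible_messages == 0:
        return 0
    per_worker = msgs_per_worker_per_min * target_drain_minutes
    return min(math.ceil(visible_messages / per_worker), max_workers)

print(desired_workers(1000, msgs_per_worker_per_min=20))  # 10
```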
Which AWS service can help design architecture to persist in-flight transactions?
Elastic IP Address
SQS
Amazon CloudWatch
Amazon ElastiCache
A company has a workflow that sends video files from their on-premises system to AWS for transcoding. They use EC2 worker instances that pull transcoding jobs from SQS. Why is SQS an appropriate service for this scenario?
SQS guarantees the order of the messages.
SQS synchronously provides transcoding output.
SQS checks the health of the worker instances.
SQS helps to facilitate horizontal scaling of encoding tasks
Which statement best describes an Amazon SQS use case?
Automate the process of sending an email notification to administrators when the CPU utilization reaches 70% on production servers (Amazon EC2 instances) (CloudWatch + SNS + SES)
Create a video transcoding website where multiple components need to communicate with each other, but can’t all process the same amount of work simultaneously (SQS provides loose coupling)
Coordinate work across distributed web services to process employee’s expense reports (SWF – Steps in order and might need manual steps)
Distribute static web content to end users with low latency across multiple countries (CloudFront + S3)
Your application provides data transformation services. Files containing data to be transformed are first uploaded to Amazon S3 and then transformed by a fleet of spot EC2 instances. Files submitted by your premium customers must be transformed with the highest priority. How should you implement such a system?
Use a DynamoDB table with an attribute defining the priority level. Transformation instances will scan the table for tasks, sorting the results by priority level.
Use Route 53 latency based-routing to send high priority tasks to the closest transformation instances.
Use two SQS queues, one for high priority messages, and the other for default priority. Transformation instances first poll the high priority queue; if there is no message, they poll the default priority queue
Use a single SQS queue. Each message contains the priority level. Transformation instances poll high-priority messages first.
Your company plans to host a large donation website on Amazon Web Services (AWS). You anticipate a large and undetermined amount of traffic that will create many database writes. To be certain that you do not drop any writes to a database hosted on AWS, which service should you use?
Amazon RDS with provisioned IOPS up to the anticipated peak write throughput.
Amazon Simple Queue Service (SQS) for capturing the writes and draining the queue to write to the database
Amazon ElastiCache to store the writes until the writes are committed to the database.
Amazon DynamoDB with provisioned write throughput up to the anticipated peak write throughput.
A customer has a 10 Gbps AWS Direct Connect connection to an AWS region where they have a web application hosted on Amazon Elastic Compute Cloud (EC2). The application has dependencies on an on-premises mainframe database that uses a BASE (Basically Available, Soft state, Eventual consistency) rather than an ACID (Atomicity, Consistency, Isolation, Durability) consistency model. The application is exhibiting undesirable behavior because the database is not able to handle the volume of writes. How can you reduce the load on your on-premises database resources in the most cost-effective way?
Use Amazon Elastic MapReduce (EMR) S3DistCp as a synchronization mechanism between the on-premises database and a Hadoop cluster on AWS.
Modify the application to write to an Amazon SQS queue and develop a worker process to flush the queue to the on-premises database
Modify the application to use DynamoDB to feed an EMR cluster which uses a map function to write to the on-premises database.
Provision an RDS read-replica database on AWS to handle the writes and synchronize the two databases using Data Pipeline.
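Both of the last two correct answers rely on the same pattern: buffer writes in SQS and let a worker drain them to the database at a rate the database can sustain. A minimal in-memory sketch (stdlib `queue`, not the boto3 API):

```python
import queue

sqs = queue.Queue()   # in-memory stand-in for the SQS queue
database = []         # stand-in for the backing database

# The application enqueues writes and moves on without waiting on the DB
for record in ("write-1", "write-2", "write-3"):
    sqs.put(record)

# A worker process drains the queue at whatever rate the database absorbs
while not sqs.empty():
    database.append(sqs.get())
```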
An organization has created a Queue named “modularqueue” with SQS. The organization is not performing any operations such as SendMessage, ReceiveMessage, DeleteMessage, GetQueueAttributes, SetQueueAttributes, AddPermission, and RemovePermission on the queue. What can happen in this scenario?
AWS SQS sends a notification after 15 days of inactivity on the queue
AWS SQS can delete the queue after 30 days without notification
AWS SQS marks the queue inactive after 30 days
AWS SQS notifies the user after 2 weeks and deletes the queue after 3 weeks.
A user is using the AWS SQS to decouple the services. Which of the below mentioned operations is not supported by SQS?
SendMessageBatch
DeleteMessageBatch
CreateQueue
DeleteMessageQueue
A user has created a queue named “awsmodule” with SQS. One of the consumers of the queue is down for 3 days and then becomes available. Will that component receive messages from the queue?
Yes, since SQS by default stores messages for 4 days
No, since SQS by default stores messages for 1 day only
No, since SQS sends messages only to consumers that are available at that time
Yes, since SQS will not delete a message until it is delivered to all consumers
A user has created a queue named “queue2” in the US-East region with AWS SQS. The user’s AWS account ID is 123456789012. If the user wants to perform some action on this queue, which of the below Queue URLs should he use?
A user has created a queue named “myqueue” with SQS. Four messages have been published to the queue but have not yet been received by the consumer. If the user tries to delete the queue, what will happen?
A user can never delete a queue manually. AWS deletes it after 30 days of inactivity on queue
It will delete the queue
It will initiate the delete but wait four days, until all messages are deleted automatically, before deleting the queue.
It will ask the user to delete the messages first
A user has developed an application which is required to send data to a NoSQL database. The user wants to decouple the data sending such that the application keeps processing and sending data but does not wait for an acknowledgement from the DB. Which of the below mentioned services helps in this scenario?
AWS Simple Notification Service
AWS Simple Workflow
AWS Simple Queue Service
AWS Simple Query Service
You are building an online store on AWS that uses SQS to process your customer orders. Your backend system needs those messages in the same sequence in which the customer orders were placed. How can you achieve that?
It is not possible to do this with SQS
You can use sequencing information on each message
You can do this with SQS but you also need to use SWF
Messages will arrive in the same order by default
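The "sequencing information" answer can be sketched as follows; note the `seq` attribute is an application-level field the producer adds, not an SQS feature.

```python
# Standard queues may deliver out of order, so the producer stamps each
# message with a sequence number and the consumer re-sorts on it.
received = [
    {"seq": 2, "body": "order-B"},
    {"seq": 3, "body": "order-C"},
    {"seq": 1, "body": "order-A"},
]
in_order = [m["body"] for m in sorted(received, key=lambda m: m["seq"])]
```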
A user has created a photo editing software and hosted it on EC2. The software accepts requests from the user about the photo format and resolution and sends a message to S3 to enhance the picture accordingly. Which of the below mentioned AWS services will help make a scalable software with the AWS infrastructure in this scenario?
AWS Glacier
AWS Elastic Transcoder
AWS Simple Notification Service
AWS Simple Queue Service
Refer to the architecture diagram of a batch processing solution using Simple Queue Service (SQS) to set up a message queue between EC2 instances, which are used as batch processors. CloudWatch monitors the number of job requests (queued messages) and an Auto Scaling group adds or deletes batch servers automatically based on parameters set in CloudWatch alarms. You can use this architecture to implement which of the following features in a cost-effective and efficient manner?
Reduce the overall time for executing jobs through parallel processing by allowing a busy EC2 instance that receives a message to pass it to the next instance in a daisy-chain setup.
Implement fault tolerance against EC2 instance failure, since messages would remain in SQS and work can continue with the recovery of EC2 instances; implement fault tolerance against SQS failure by backing up messages to S3.
Implement message passing between EC2 instances within a batch by exchanging messages through SQS.
Coordinate the number of EC2 instances with the number of job requests automatically, thus improving cost effectiveness
Handle high priority jobs before lower priority jobs by assigning a priority metadata field to SQS messages.
How does Amazon SQS allow multiple readers to access the same message queue without losing messages or processing them many times?
By identifying a user by his unique id
By using unique cryptography
Amazon SQS queue has a configurable visibility timeout
Multiple readers can’t access the same message queue
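The visibility timeout mechanism behind the correct answer can be simulated in a few lines. This is an in-memory sketch of the behavior, not the SQS API; the real default timeout is 30 seconds.

```python
class VisibilityQueue:
    """Sketch of the SQS visibility timeout: a received message is hidden
    from other readers until the timeout elapses or it is deleted."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.messages = {}              # id -> (body, visible_at)

    def send(self, mid, body):
        self.messages[mid] = (body, 0.0)

    def receive(self, now):
        for mid, (body, visible_at) in self.messages.items():
            if now >= visible_at:
                # hide the message for `timeout` seconds
                self.messages[mid] = (body, now + self.timeout)
                return mid, body
        return None

    def delete(self, mid):
        self.messages.pop(mid, None)

q = VisibilityQueue(timeout=30)
q.send("m1", "job")
first = q.receive(now=0)    # reader A gets the message
second = q.receive(now=10)  # reader B sees nothing: message is hidden
third = q.receive(now=31)   # timeout expired, the message reappears
```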
How do you configure SQS to support longer message retention?
Set the MessageRetentionPeriod attribute using the SetQueueAttributes method
Using a Lambda function
You can’t. It is set to 14 days and cannot be changed
You need to request it from AWS
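The valid range for the `MessageRetentionPeriod` attribute can be checked with a small helper. The attribute map mirrors what `SetQueueAttributes` expects, but this sketch does not call the API.

```python
# MessageRetentionPeriod is expressed in seconds; SQS accepts values
# from 60 seconds (1 minute) up to 1,209,600 seconds (14 days).
MIN_RETENTION = 60
MAX_RETENTION = 14 * 24 * 60 * 60   # 1209600 seconds

def valid_retention(seconds: int) -> bool:
    return MIN_RETENTION <= seconds <= MAX_RETENTION

# attribute map as it would be passed to SetQueueAttributes
attributes = {"MessageRetentionPeriod": str(MAX_RETENTION)}
```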
If a message is retrieved from a queue in Amazon SQS, how long is the message inaccessible to other users by default?
0 seconds
1 hour
1 day
forever
30 seconds
Which of the following statements about SQS is true?
Messages will be delivered exactly once and messages will be delivered in First in, First out order
Messages will be delivered exactly once and message delivery order is indeterminate
Messages will be delivered one or more times and messages will be delivered in First in, First out order
Messages will be delivered one or more times and message delivery order is indeterminate (Before the introduction of FIFO queues)
How long can you keep your Amazon SQS messages in Amazon SQS queues?
From 120 secs up to 4 weeks
From 10 secs up to 7 days
From 60 secs up to 2 weeks
From 30 secs up to 1 week
When a Simple Queue Service message triggers a task that takes 5 minutes to complete, which process below will result in successful processing of the message and remove it from the queue while minimizing the chances of duplicate processing?
Retrieve the message with an increased visibility timeout, process the message, delete the message from the queue
Retrieve the message with an increased visibility timeout, delete the message from the queue, process the message
Retrieve the message with increased DelaySeconds, process the message, delete the message from the queue
Retrieve the message with increased DelaySeconds, delete the message from the queue, process the message
You need to process long-running jobs once and only once. How might you do this?
Use an SNS queue and set the visibility timeout to long enough for jobs to process.
Use an SQS queue and set the reprocessing timeout to long enough for jobs to process.
Use an SQS queue and set the visibility timeout to long enough for jobs to process.
Use an SNS queue and set the reprocessing timeout to long enough for jobs to process.
You are getting a lot of empty receive requests when using Amazon SQS, which is creating a lot of unnecessary network load on your instances. What can you do to reduce this load?
Subscribe your queue to an SNS topic instead.
Use as long of a poll as possible, instead of short polls. (Long polling)
Alter your visibility timeout to be shorter.
Use sqsd on your EC2 instances.
You have an asynchronous processing application using an Auto Scaling Group and an SQS Queue. The Auto Scaling Group scales according to the depth of the job queue. The completion velocity of the jobs has gone down, the Auto Scaling Group size has maxed out, but the inbound job velocity did not increase. What is a possible issue?
Some of the new jobs coming in are malformed and unprocessable. (As other options would cause the job to stop processing completely, the only reasonable option seems that some of the recent messages must be malformed and unprocessable)
The routing tables changed and none of the workers can process events anymore. (If changed, none of the jobs would be processed)
Someone changed the IAM Role Policy on the instances in the worker group and broke permissions to access the queue. (If IAM role changed no jobs would be processed)
The scaling metric is not functioning correctly. (scaling metric did work fine as the autoscaling caused the instances to increase)
Company B provides an online image recognition service and utilizes SQS to decouple system components for scalability. The SQS consumers poll the imaging queue as often as possible to keep end-to-end throughput as high as possible. However, Company B is realizing that polling in tight loops is burning CPU cycles and increasing costs with empty responses. How can Company B reduce the number of empty responses?
Set the imaging queue visibility Timeout attribute to 20 seconds
Set the imaging queue ReceiveMessageWaitTimeSeconds attribute to 20 seconds (long polling)
Set the imaging queue MessageRetentionPeriod attribute to 20 seconds
Set the DelaySeconds parameter of a message to 20 seconds
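The long-polling behavior behind the correct answer can be simulated as follows (an in-memory sketch using sub-second waits for brevity; the real `ReceiveMessageWaitTimeSeconds` attribute caps out at 20 seconds):

```python
import time

def receive(messages, wait_time_seconds, poll_interval=0.01):
    """Sketch of long polling: instead of returning an empty response
    immediately, the call waits up to wait_time_seconds for a message."""
    deadline = time.monotonic() + wait_time_seconds
    while True:
        if messages:
            return messages.pop(0)
        if time.monotonic() >= deadline:
            return None      # one empty response per window, not per poll
        time.sleep(poll_interval)

empty_queue = []
start = time.monotonic()
miss = receive(empty_queue, wait_time_seconds=0.05)
waited = time.monotonic() - start

busy_queue = ["img-123"]
hit = receive(busy_queue, wait_time_seconds=0.05)   # returns immediately
```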
Amazon SQS
is a temporary data repository for messages and provides a reliable, highly scalable, hosted message queuing service for temporary storage and delivery of short (up to 256 KB) text-based data messages.
supports a virtually unlimited number of queues and supports unordered, at-least-once delivery of messages.
Ideal Usage patterns
is ideally suited to any scenario where multiple application components must communicate and coordinate their work in a loosely coupled manner, particularly in producer-consumer scenarios
can be used to coordinate a multi-step processing pipeline, where each message is associated with a task that must be processed.
enables the number of worker instances to scale up or down, and also enables the processing power of each worker instance to scale up or down, to suit the total workload, without any application changes.
Anti-Patterns
Binary or Large Messages
SQS is suited for text messages with a maximum size of 256 KB. If the application requires binary messages or messages exceeding this limit, it is best to store the payload in Amazon S3 or RDS and use SQS to hold a pointer to it
Long Term storage
SQS stores messages for a maximum of 14 days; if the application requires a storage period longer than 14 days, Amazon S3 or other storage options should be preferred
High-speed message queuing or very short tasks
If the application requires a very high-speed message send and receive response from a single producer or consumer, use of Amazon DynamoDB or a message-queuing system hosted on Amazon EC2 may be more appropriate.
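The large-message anti-pattern above is usually handled with the pointer pattern; a minimal in-memory sketch, where a dict and a list stand in for S3 and the SQS queue:

```python
MAX_SQS_BYTES = 256 * 1024   # current SQS message size limit

object_store = {}            # in-memory stand-in for Amazon S3
queue = []                   # in-memory stand-in for the SQS queue

def send(body: bytes) -> None:
    """Pointer pattern: oversized or binary payloads go to the object
    store, and only a small reference travels through the queue."""
    if len(body) > MAX_SQS_BYTES:
        key = f"payload-{len(object_store)}"
        object_store[key] = body
        queue.append({"s3_pointer": key})
    else:
        queue.append({"body": body})

send(b"small text job")
send(b"\x00" * (300 * 1024))   # binary blob over the limit
# the queue itself only ever carries small messages
```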
Performance
is a distributed queuing system that is optimized for horizontal scalability, not for single-threaded sending or receiving speeds.
A single client can send or receive Amazon SQS messages at a rate of about 5 to 50 messages per second. Higher receive performance can be achieved by requesting multiple messages (up to 10) in a single call.
Durability & Availability
Messages are highly durable but temporary.
stores all messages redundantly across multiple servers and data centers.
Message retention time is configurable on a per-queue basis, from a minimum of one minute to a maximum of 14 days.
Messages are retained in a queue until they are explicitly deleted, or until they are automatically deleted upon expiration of the retention time.
Cost Model
pricing is based on
number of requests and
the amount of data transferred in and out (priced per GB per month).
Scalability & Elasticity
is both highly elastic and massively scalable.
is designed to enable a virtually unlimited number of computers to read and write a virtually unlimited number of messages at any time.
supports virtually unlimited numbers of queues and messages per queue for any user.
Amazon Redshift
is a fast, fully-managed, petabyte-scale data warehouse service that makes it simple and cost-effective to efficiently analyze all your data using your existing business intelligence tools.
is optimized for datasets that range from a few hundred gigabytes to a petabyte or more.
manages the work needed to set up, operate, and scale a data warehouse, from provisioning the infrastructure capacity to automating ongoing administrative tasks such as backups and patching.
Ideal Usage Pattern
is ideal for analyzing large datasets using the existing business intelligence tools
Common use cases include
Analyze global sales data for multiple products
Store historical stock trade data
Analyze ad impressions and clicks
Aggregate gaming data
Analyze social trends
Measure clinical quality, operation efficiency, and financial performance in the health care space
Anti-Pattern
OLTP workloads
Redshift is a column-oriented database and more suited for data warehousing and analytics. If the application involves online transaction processing, Amazon RDS would be a better choice.
Blob data
For Blob storage, Amazon S3 would be a better choice, with metadata stored in RDS or DynamoDB
Performance
Amazon Redshift delivers very high query performance on datasets ranging in size from hundreds of gigabytes to a petabyte or more.
It uses columnar storage, data compression, and zone maps to reduce the amount of I/O needed to perform queries.
It has a massively parallel processing (MPP) architecture that parallelizes and distributes SQL operations to take advantage of all available resources.
Underlying hardware is designed for high performance data processing that uses local attached storage to maximize throughput.
Durability & Availability
Amazon Redshift stores three copies of your data: all data written to a node in your cluster is automatically replicated to other nodes within the cluster, and all data is continuously backed up to Amazon S3.
Snapshots are automated, incremental, and continuous, and are stored for a user-defined period (1-35 days)
Manual snapshots can be created and are retained until explicitly deleted.
Amazon Redshift also continuously monitors the health of the cluster and automatically re-replicates data from failed drives and replaces nodes as necessary.
Cost Model
has three pricing components:
data warehouse node hours – total number of hours run across all the compute nodes
backup storage – storage cost for automated and manual snapshots
data transfer
There is no data transfer charge for data transferred to or from Amazon Redshift outside of Amazon VPC
Data transfer to or from Amazon Redshift in Amazon VPC accrues standard AWS data transfer charges.
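The node-hour component can be illustrated with a back-of-the-envelope calculation; the hourly rate below is a made-up placeholder, not an actual Redshift price.

```python
# Illustration of the node-hour pricing component only; the rate is
# hypothetical (real prices vary by node type and region).
NODE_HOURLY_RATE = 0.25      # assumed $/node-hour, not a real price
nodes = 4
hours = 24 * 30              # a 30-day month

compute_cost = nodes * hours * NODE_HOURLY_RATE   # node-hours x rate
```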
Scalability & Elasticity
provides push-button scaling, and the number of nodes can be easily scaled in the data warehouse cluster as demand changes.
Redshift places the existing cluster in read-only mode, so existing queries can continue to run, while it provisions a new cluster of the chosen size and copies the data to it. Once the data is copied, it automatically redirects queries to the new cluster