AWS Data Pipeline

  • AWS Data Pipeline is a web service that makes it easy to automate and schedule regular data movement and data processing activities in AWS
  • helps define data-driven workflows
  • integrates with on-premises and cloud-based storage systems
  • helps quickly define a pipeline, which defines a dependent chain of data sources, destinations, and predefined or custom data processing activities
  • supports scheduling where the pipeline regularly performs processing activities such as distributed data copy, SQL transforms, EMR applications, or custom scripts against destinations such as S3, RDS, or DynamoDB.
  • ensures that the pipelines are robust and highly available by executing the scheduling, retry, and failure logic for the workflows as a highly scalable and fully managed service.

AWS Data Pipeline features

  • Distributed, fault-tolerant, and highly available
  • Managed workflow orchestration service for data-driven workflows
  • Infrastructure management service, as it will provision and terminate resources as required
  • Provides dependency resolution
  • Can be scheduled
  • Supports Preconditions for readiness checks.
  • Grants control over retries, including frequency and number
  • Native integration with S3, DynamoDB, RDS, EMR, EC2 and Redshift
  • Support for both AWS-based and external on-premises resources

AWS Data Pipeline Concepts

Pipeline Definition

  • A pipeline definition communicates the business logic to AWS Data Pipeline
  • A pipeline definition specifies the location of data (Data Nodes), the activities to be performed, the schedule, the resources to run the activities, preconditions, and actions to be performed
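
For illustration, a minimal boto3 sketch of pushing and activating such a definition – the pipeline name, roles, bucket path, and object ids below are placeholder assumptions, not a definitive implementation:

    import boto3

    dp = boto3.client("datapipeline")

    # create an empty pipeline shell; uniqueId guards against duplicate creation
    pipeline_id = dp.create_pipeline(name="demo-pipeline",
                                     uniqueId="demo-pipeline-1")["pipelineId"]

    # the definition itself: a default object, a data node, a resource, and an
    # activity, each expressed as key/value fields
    objects = [
        {"id": "Default", "name": "Default", "fields": [
            {"key": "scheduleType", "stringValue": "ondemand"},
            {"key": "role", "stringValue": "DataPipelineDefaultRole"},
            {"key": "resourceRole", "stringValue": "DataPipelineDefaultResourceRole"}]},
        {"id": "MyInput", "name": "MyInput", "fields": [
            {"key": "type", "stringValue": "S3DataNode"},
            {"key": "directoryPath", "stringValue": "s3://example-bucket/input/"}]},
        {"id": "MyEc2", "name": "MyEc2", "fields": [
            {"key": "type", "stringValue": "Ec2Resource"},
            {"key": "instanceType", "stringValue": "t2.micro"}]},
        {"id": "MyActivity", "name": "MyActivity", "fields": [
            {"key": "type", "stringValue": "ShellCommandActivity"},
            {"key": "command", "stringValue": "echo processing"},
            {"key": "input", "refValue": "MyInput"},
            {"key": "runsOn", "refValue": "MyEc2"}]},
    ]
    dp.put_pipeline_definition(pipelineId=pipeline_id, pipelineObjects=objects)
    dp.activate_pipeline(pipelineId=pipeline_id)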

Pipeline Components, Instances, and Attempts

  • Pipeline components represent the business logic of the pipeline and are represented by the different sections of a pipeline definition.
  • Pipeline components specify the data sources, activities, schedule, and preconditions of the workflow
  • When AWS Data Pipeline runs a pipeline, it compiles the pipeline components to create a set of actionable instances; each instance contains all the information needed to perform a specific task
  • Data Pipeline provides durable and robust data management as it retries a failed operation according to the configured retry frequency and count

Task Runners

  • A task runner is an application that polls AWS Data Pipeline for tasks and then performs those tasks
  • When Task Runner is installed and configured,
    • it polls AWS Data Pipeline for tasks associated with activated pipelines
    • after a task is assigned to Task Runner, it performs that task and reports its status back to Pipeline.
  • A task is a discrete unit of work that the Pipeline service shares with a task runner, and differs from a pipeline, which defines activities and resources that usually yield several tasks
  • Tasks can be executed on either AWS Data Pipeline-managed or user-managed resources

Data Nodes

  • Data Node defines the location and type of data that a pipeline activity uses as source (input) or destination (output)
  • supports S3, Redshift, DynamoDB, and SQL data nodes

Databases

  • supports JDBC, RDS, and Redshift databases

Activities

  • An activity is a pipeline component that defines the work to perform
  • Data Pipeline provides pre-defined activities for common scenarios like SQL transforms, data movement, Hive queries, etc.
  • Activities are extensible and can run your own custom scripts to support endless combinations

Preconditions

  • Precondition is a pipeline component containing conditional statements that must be satisfied (evaluated to True) before an activity can run
  • A pipeline supports
    • System-managed preconditions
      • are run by the AWS Data Pipeline web service on your behalf and do not require a computational resource
      • include source data and key checks, e.g. whether DynamoDB data or a table exists, or whether an S3 key exists or a prefix is not empty
    • User-managed preconditions
      • run on user-defined and managed computational resources
      • Can be defined as Exists check or Shell command
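
As a hypothetical fragment extending the pipeline-objects list from the earlier sketch, a system-managed S3KeyExists precondition could look like this (the key path and ids are assumptions), with the dependent activity referencing it by id:

    # precondition object: satisfied only once the marker key exists in S3
    input_ready = {"id": "InputReady", "name": "InputReady", "fields": [
        {"key": "type", "stringValue": "S3KeyExists"},
        {"key": "s3Key", "stringValue": "s3://example-bucket/input/_SUCCESS"}]}

    # appended to the pipeline objects and referenced from the activity's fields:
    wait_on_input = {"key": "precondition", "refValue": "InputReady"}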

Resources

  • A resource is a computational resource that performs the work that a pipeline activity specifies
  • supports AWS Data Pipeline-managed and self-managed resources
  • AWS Data Pipeline-managed resources include EC2 and EMR, which are launched by the Data Pipeline service only when they’re needed
  • Self-managed on-premises resources can also be used, where a Task Runner package is installed that continuously polls the AWS Data Pipeline service for work to perform
  • Resources can run in the same region as their working data set, or even in a different region than AWS Data Pipeline
  • Resources launched by AWS Data Pipeline are counted within the resource limits and should be taken into account

Actions

  • Actions are steps that a pipeline takes when a certain event, like success or failure, occurs.
  • Pipeline supports SNS notifications and termination action on resources

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. An International company has deployed a multi-tier web application that relies on DynamoDB in a single region. For regulatory reasons they need disaster recovery capability in a separate region with a Recovery Time Objective of 2 hours and a Recovery Point Objective of 24 hours. They should synchronize their data on a regular basis and be able to provision the web application rapidly using CloudFormation. The objective is to minimize changes to the existing web application, control the throughput of DynamoDB used for the synchronization of data and synchronize only the modified elements. Which design would you choose to meet these requirements?
    1. Use AWS data Pipeline to schedule a DynamoDB cross region copy once a day. Create a ‘Lastupdated’ attribute in your DynamoDB table that would represent the timestamp of the last update and use it as a filter. (Refer Blog Post)
    2. Use EMR and write a custom script to retrieve data from DynamoDB in the current region using a SCAN operation and push it to DynamoDB in the second region. (No Schedule and throughput control)
    3. Use AWS data Pipeline to schedule an export of the DynamoDB table to S3 in the current region once a day then schedule another task immediately after it that will import data from S3 to DynamoDB in the other region. (With AWS Data pipeline the data can be copied directly to other DynamoDB table)
    4. Send each item into an SQS queue in the second region; use an auto-scaling group behind the SQS queue to replay the write in the second region. (Not Automated to replay the write)
  2. Your company produces customer commissioned one-of-a-kind skiing helmets combining high fashion with custom technical enhancements. Customers can show off their individuality on the ski slopes and have access to head-up-displays, GPS rear-view cams and any other technical innovation they wish to embed in the helmet. The current manufacturing process is data rich and complex, including assessments to ensure that the custom electronics and materials used to assemble the helmets are to the highest standards. Assessments are a mixture of human and automated assessments. You need to add a new set of assessments to model the failure modes of the custom electronics using GPUs with CUDA across a cluster of servers with low latency networking. What architecture would allow you to automate the existing process using a hybrid approach and ensure that the architecture can support the evolution of processes over time?
    1. Use AWS Data Pipeline to manage movement of data & meta-data and assessments. Use an auto-scaling group of G2 instances in a placement group. (Involves mixture of human assessments)
    2. Use Amazon Simple Workflow (SWF) to manage assessments, movement of data & meta-data. Use an autoscaling group of G2 instances in a placement group. (Human and automated assessments with GPU and low latency networking)
    3. Use Amazon Simple Workflow (SWF) to manage assessments movement of data & meta-data. Use an autoscaling group of C3 instances with SR-IOV (Single Root I/O Virtualization). (C3 and SR-IOV won’t provide GPU as well as Enhanced networking needs to be enabled)
    4. Use AWS data Pipeline to manage movement of data & meta-data and assessments use auto-scaling group of C3 with SR-IOV (Single Root I/O virtualization). (Involves mixture of human assessments)

AWS Database Services Cheat Sheet

AWS Database Services

Relational Database Service – RDS

  • provides Relational Database service
  • supports MySQL, MariaDB, PostgreSQL, Oracle, Microsoft SQL Server, and the MySQL-compatible Amazon Aurora DB engine
  • as it is a managed service, shell (root ssh) access is not provided
  • manages backups, software patching, automatic failure detection, and recovery
  • supports user-initiated manual backups and snapshots
  • daily automated backups with database transaction logs enable point-in-time recovery up to the last five minutes of database usage
  • snapshots are user-initiated storage volume snapshots of the DB instance, backing up the entire DB instance and not just individual databases, and can be restored as an independent RDS instance
  • RDS Security
    • support encryption at rest using KMS as well as encryption in transit using SSL endpoints
    • supports IAM database authentication, which prevents the need to store static user credentials in the database, because authentication is managed externally using IAM.
    • supports Encryption only during the creation of an RDS DB instance
    • an existing unencrypted DB cannot be encrypted directly; create a snapshot, create an encrypted copy of the snapshot, and restore it as an encrypted DB (see the sketch after this section)
    • supports Secrets Manager for storing and rotating secrets
    • for encrypted database
      • logs, snapshots, backups, read replicas are all encrypted as well
      • cross-region replicas and snapshots did not work across regions (Note – this is now possible with the latest AWS enhancements)
  • Multi-AZ deployment
    • provides high availability and automatic failover support and is NOT a scaling solution
    • maintains a synchronous standby replica in a different AZ
    • transaction success is returned only if the commit is successful both on the primary and the standby DB
    • Oracle, PostgreSQL, MySQL, and MariaDB DB instances use Amazon technology, while SQL Server DB instances use SQL Server Mirroring
    • snapshots and backups are taken from the standby & eliminate I/O freezes
    • automatic failover is seamless: RDS switches to the standby instance and updates the DNS record to point to the standby
    • failover can be forced with the Reboot with failover option
  • Read Replicas
    • uses the PostgreSQL, MySQL, and MariaDB DB engines’ built-in replication functionality to create a separate Read Only instance
    • updates are asynchronously copied to the Read Replica, and data might be stale
    • can help scale applications and reduce read only load
    • requires automatic backups enabled
    • replicates all databases in the source DB instance
    • for disaster recovery, can be promoted to a full fledged database
    • can be created in a different region for disaster recovery, migration and low latency across regions
    • can’t create encrypted read replicas from unencrypted DB or read replica
  • RDS does not support all the features of underlying databases, and if required the database instance can be launched on an EC2 instance
  • RDS Components
    • DB parameter groups contain engine configuration values that can be applied to one or more DB instances of the same instance type, e.g. SSL, max connections, etc.
    • The default DB parameter group cannot be modified; create a custom one and attach it to the DB
    • Supports static and dynamic parameters
      • changes to dynamic parameters are applied immediately (irrespective of apply immediately setting)
      • changes to static parameters are NOT applied immediately and require a manual reboot.
  • RDS Monitoring & Notification
    • integrates with CloudWatch and CloudTrail
    • CloudWatch provides metrics about CPU utilization from the hypervisor for a DB instance, and Enhanced Monitoring gathers its metrics from an agent on the instance
    • Performance Insights is a database performance tuning and monitoring feature that helps illustrate the database’s performance and help analyze any issues that affect it
    • supports RDS Event Notification, which uses SNS to provide a notification when an RDS event like creation, deletion, or snapshot creation occurs
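
A sketch of the snapshot-copy-restore approach for encrypting an existing unencrypted DB, as described above – the instance and snapshot identifiers and the KMS key below are placeholder assumptions:

    import boto3

    rds = boto3.client("rds")

    # 1. snapshot the existing unencrypted instance
    rds.create_db_snapshot(DBInstanceIdentifier="mydb",
                           DBSnapshotIdentifier="mydb-snap")
    rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="mydb-snap")

    # 2. copy the snapshot with a KMS key - the copy is created encrypted
    rds.copy_db_snapshot(SourceDBSnapshotIdentifier="mydb-snap",
                         TargetDBSnapshotIdentifier="mydb-snap-enc",
                         KmsKeyId="alias/aws/rds")
    rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="mydb-snap-enc")

    # 3. restore the encrypted copy as a new, encrypted instance
    rds.restore_db_instance_from_db_snapshot(DBInstanceIdentifier="mydb-enc",
                                             DBSnapshotIdentifier="mydb-snap-enc")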

Aurora

  • is a relational database engine that combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases
  • is a managed service and handles time-consuming tasks such as provisioning, patching, backup, recovery, failure detection and repair
  • is a proprietary technology from AWS (not open sourced)
  • provides PostgreSQL and MySQL compatibility
  • is “AWS cloud optimized” and claims 5x performance improvement over MySQL on RDS, and over 3x the performance of PostgreSQL on RDS
  • scales storage automatically in increments of 10GB, up to 64 TB with no impact to database performance. Storage is striped across 100s of volumes.
  • no need to provision storage in advance.
  • provides self-healing storage. Data blocks and disks are continuously scanned for errors and repaired automatically.
  • provides instantaneous failover
  • replicates each chunk of the database volume six ways across three Availability Zones, i.e. 6 copies of the data across 3 AZs
    • 4 copies out of 6 are needed for writes
    • 3 copies out of 6 are needed for reads
  • costs more than RDS (20% more) – but is more efficient
  • Read Replicas
    • can have 15 replicas while MySQL has 5, and the replication process is faster (sub 10 ms replica lag)
    • share the same data volume as the primary instance in the same AWS Region, so there is virtually no replication lag
    • supports Automated failover for master in less than 30 seconds
    • supports Cross Region Replication using either physical or logical replication.
  • Security
    • supports Encryption at rest using KMS
    • supports Encryption in flight using SSL (same process as MySQL or Postgres)
    • Automated backups, snapshots and replicas are also encrypted
    • Possibility to authenticate using IAM token (same method as RDS)
    • supports protecting the instance with security groups
    • does not support SSH access to the underlying servers
  • Aurora Serverless
    • provides automated database instantiation and on-demand autoscaling based on actual usage
    • provides a relatively simple, cost-effective option for infrequent, intermittent, or unpredictable workloads
    • automatically starts up, shuts down, and scales capacity up or down based on the application’s needs. No capacity planning needed
    • Pay per second, can be more cost-effective
  • Aurora Global Database
    • allows a single Aurora database to span multiple AWS regions.
    • provides Physical replication, which uses dedicated infrastructure that leaves the databases entirely available to serve the application
    • supports 1 Primary Region (read / write)
    • replicates across up to 5 secondary (read-only) regions, replication lag is less than 1 second
    • supports up to 16 Read Replicas per secondary region
    • recommended for low-latency global reads and disaster recovery with an RTO of < 1 minute
    • failover is not automated; if the primary region becomes unavailable, a secondary region can be manually removed from the Aurora Global Database and promoted to take full reads and writes. The application needs to be updated to point to the newly promoted region.
  • Aurora Backtrack
    • Backtracking “rewinds” the DB cluster to the specified time
    • Backtracking performs an in-place restore and does not create a new instance. There is minimal downtime associated with it (see the sketch below).
  • Aurora Clone feature allows quick and cost-effective creation of Aurora Cluster duplicates
  • supports parallel or distributed query using Aurora Parallel Query, which refers to the ability to push down and distribute the computational load of a single query across thousands of CPUs in Aurora’s storage layer.
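
A minimal Backtrack sketch with boto3, assuming a hypothetical Aurora MySQL cluster that was created with a non-zero backtrack window:

    import boto3
    from datetime import datetime, timedelta, timezone

    rds = boto3.client("rds")

    # rewind the cluster 15 minutes in place - no new instance is created
    rds.backtrack_db_cluster(
        DBClusterIdentifier="my-aurora-cluster",
        BacktrackTo=datetime.now(timezone.utc) - timedelta(minutes=15),
    )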

DynamoDB

  • fully managed NoSQL database service
  • synchronously replicates data across three facilities in an AWS Region, giving high availability and data durability
  • runs exclusively on SSDs to provide high I/O performance
  • provides provisioned table reads and writes
  • automatically partitions, reallocates, and re-partitions the data and provisions additional server capacity as data or throughput changes
  • creates and maintains indexes for the primary key attributes for efficient access to data in the table
  • DynamoDB currently supports two Table classes
    • DynamoDB Standard table class is the default and is recommended for the vast majority of workloads.
    • DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA) table class which is optimized for tables where storage is the dominant cost.
  • supports Secondary Indexes
    • allows querying attributes other than the primary key attributes without impacting performance.
    • are automatically maintained as sparse objects
  • Local secondary index vs Global secondary index
    • shares partition key + different sort key vs different partition + sort key
    • search limited to partition vs across all partition
    • unique attributes vs non-unique attributes
    • linked to the base table vs independent separate index
    • only created during the base table creation vs can be created later
    • cannot be deleted after creation vs can be deleted
    • consumes provisioned throughput capacity of the base table vs independent throughput
    • returns all attributes for item vs only projected attributes
    • Eventually or Strongly vs Only Eventually consistent reads
    • size limited to 10 GB per partition vs unlimited
  • DynamoDB Consistency
    • provides Eventually consistent (by default) or Strongly Consistent option to be specified during a read operation
    • supports Strongly consistent reads for a few operations like Query, GetItem, and BatchGetItem using the ConsistentRead parameter
  • DynamoDB Throughput Capacity
    • supports On-demand and Provisioned read/write capacity modes
    • Provisioned mode requires the number of reads and writes per second as required by the application to be specified
    • On-demand mode provides flexible billing option capable of serving thousands of requests per second without capacity planning
  • DynamoDB Auto Scaling helps dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns.
  • DynamoDB Adaptive capacity is a feature that enables DynamoDB to run imbalanced workloads indefinitely.
  • DynamoDB Global Tables provide multi-master, cross-region replication capability of DynamoDB to support data access locality and regional fault tolerance for database workloads.
  • DynamoDB Streams provides a time-ordered sequence of item-level changes made to data in a table
  • DynamoDB Time to Live (TTL)
    • enables a per-item timestamp to determine when an item expires
    • expired items are deleted from the table without consuming any write throughput (see the sketch after this list)
  • DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from milliseconds to microseconds – even at millions of requests per second.
  • DynamoDB cross-region replication
    • allows identical copies (called replicas) of a DynamoDB table (called master table) to be maintained in one or more AWS regions.
    • using DynamoDB Streams, which leverages Kinesis and provides a time-ordered sequence of item-level changes, and can help achieve lower-RPO, lower-RTO disaster recovery
  • DynamoDB Triggers (just like database triggers) are a feature that allows the execution of custom actions based on item-level updates on a table.
  • VPC Gateway Endpoints provide private access to DynamoDB from within a VPC without the need for an internet gateway or NAT gateway.
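
A short boto3 sketch of a strongly consistent read and of TTL, assuming a hypothetical orders table with an order_id string key and an expires_at epoch-seconds attribute:

    import time
    import boto3

    ddb = boto3.client("dynamodb")

    # GetItem is eventually consistent by default; opt in to a strongly consistent read
    item = ddb.get_item(TableName="orders",
                        Key={"order_id": {"S": "o-123"}},
                        ConsistentRead=True)

    # enable TTL: items whose expires_at (epoch seconds) has passed are deleted
    # in the background without consuming write throughput
    ddb.update_time_to_live(TableName="orders",
                            TimeToLiveSpecification={"Enabled": True,
                                                     "AttributeName": "expires_at"})

    # write an item that expires in about an hour
    ddb.put_item(TableName="orders",
                 Item={"order_id": {"S": "o-124"},
                       "expires_at": {"N": str(int(time.time()) + 3600)}})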

ElastiCache

  • managed web service that provides in-memory caching to deploy and run Memcached or Redis protocol-compliant cache clusters
  • ElastiCache with Redis,
    • like RDS, supports Multi-AZ, Read Replicas and Snapshots
    • Read Replicas are created across AZ within same region using Redis’s asynchronous replication technology
    • Multi-AZ differs from RDS as there is no standby, but if the primary goes down a Read Replica is promoted as primary
    • Read Replicas cannot span across regions, unlike RDS
    • cannot be scaled out, and if scaled up, cannot be scaled down
    • allows snapshots for backup and restore
    • AOF can be enabled for recovery scenarios, to recover the data in case the node fails or service crashes. But it does not help in case the underlying hardware fails
    • enabling Redis Multi-AZ is a better approach to fault tolerance
  • ElastiCache with Memcached
    • can be scaled up by increasing size and scaled out by adding nodes
    • nodes can span across multiple AZs within the same region
    • cached data is spread across the nodes, and a node failure will always result in some data loss from the cluster
    • supports auto discovery
    • every node should be homogeneous and of the same instance type
  • ElastiCache Redis vs Memcached
    • complex data objects vs simple key value storage
    • persistent vs non persistent, pure caching
    • automatic failover with Multi-AZ vs Multi-AZ not supported
    • scaling using Read Replicas vs using multiple nodes
    • backup & restore supported vs not supported
  • can be used for state management to keep the web application stateless (see the sketch below)
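
A minimal sketch of that session-state pattern, assuming the open-source redis-py client and a placeholder ElastiCache Redis endpoint reachable from within the VPC:

    import redis

    # primary endpoint of the ElastiCache Redis cluster (placeholder)
    r = redis.Redis(host="my-cluster.xxxxxx.0001.use1.cache.amazonaws.com", port=6379)

    # store a session with a 30-minute TTL so any web server can serve the user
    r.setex("session:abc123", 1800, '{"user_id": 42, "cart": []}')

    # a different web server reads the same session back
    session = r.get("session:abc123")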

Redshift

  • fully managed, fast and powerful, petabyte scale data warehouse service
  • uses replication and continuous backups to enhance availability and improve data durability and can automatically recover from node and component failures
  • provides Massive Parallel Processing (MPP) by distributing & parallelizing queries across multiple physical resources
  • columnar data storage improves query performance and allows advanced compression techniques
  • only supports Single-AZ deployments and the nodes are available within the same AZ, if the AZ supports Redshift clusters
  • spot instances are NOT an option

AWS Simple Notification Service – SNS

  • Simple Notification Service – SNS is a web service that coordinates and manages the delivery or sending of messages to subscribing endpoints or clients.
  • SNS provides the ability to create a Topic which is a logical access point and communication channel.
  • Each topic has a unique name that identifies the SNS endpoint for publishers to post messages and subscribers to register for notifications.
  • Producers communicate asynchronously with consumers (subscribers) by producing and sending a message to a topic.
  • Producers push messages to a topic they have created or have access to, and SNS matches the topic to a list of subscribers who have subscribed to that topic and delivers the message to each of those subscribers.
  • Subscribers receive all messages published to the topics to which they subscribe, and all subscribers to a topic receive the same messages.
  • Subscribers (i.e., web servers, email addresses, SQS queues, AWS Lambda functions) consume or receive the message or notification over one of the supported protocols (i.e., SQS, HTTP/S, email, SMS, Lambda) when they are subscribed to the topic.
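
A minimal boto3 sketch of this create-subscribe-publish flow (the topic name and email address are placeholder assumptions):

    import boto3

    sns = boto3.client("sns")

    # create (or look up) the topic; the returned ARN is the endpoint publishers use
    topic_arn = sns.create_topic(Name="order-events")["TopicArn"]

    # register a subscriber - an email endpoint must confirm the subscription
    sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops@example.com")

    # publish once; SNS delivers the same message to every confirmed subscriber
    sns.publish(TopicArn=topic_arn, Subject="Order shipped",
                Message="Order o-123 has shipped")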

Accessing SNS

  • AWS Management Console
    • The AWS Management Console is the web-based user interface that can be used to manage SNS
  • AWS Command-line Interface (CLI)
    • Provides commands for a broad set of AWS products, and is supported on Windows, Mac, and Linux.
  • AWS Tools for Windows Powershell
    • Provides commands for a broad set of AWS products for those who script in the PowerShell environment
  • AWS SNS Query API
    • The Query API allows requests to be made as HTTP or HTTPS requests that use the HTTP verbs GET or POST and a Query parameter named Action
  • AWS SDK libraries
    • AWS provides libraries in various languages which provide basic functions that automate tasks such as cryptographically signing your requests, retrying requests, and handling error responses

SNS Supported Transport Protocols

  • HTTP, HTTPS – Subscribers specify a URL as part of the subscription registration; notifications will be delivered through an HTTP POST to the specified URL.
  • Email, Email-JSON – Messages are sent to registered addresses as email. Email-JSON sends notifications as a JSON object, while Email sends text-based email.
  • SQS – Users can specify an SQS queue as the endpoint; SNS will enqueue a notification message to the specified queue (which subscribers can then process using SQS APIs such as ReceiveMessage, DeleteMessage, etc.)
  • SMS – Messages are sent to registered phone numbers as SMS text messages

SNS Supported Endpoints

  • Email Notifications
    • SNS provides the ability to send Email notifications
  • Mobile Push Notifications
    • SNS provides an ability to send push notification messages directly to apps on mobile devices. Push notification messages sent to a mobile endpoint can appear in the mobile app as message alerts, badge updates, or even sound alerts
    • Supported push notification services
      • Amazon Device Messaging (ADM)
      • Apple Push Notification Service (APNS)
      • Google Cloud Messaging (GCM)
      • Windows Push Notification Service (WNS) for Windows 8+ and Windows Phone 8.1+
      • Microsoft Push Notification Service (MPNS) for Windows Phone 7+
      • Baidu Cloud Push for Android devices in China
  • SQS Queues
    • SNS with SQS provides the ability for messages to be delivered to applications that require immediate notification of an event, and also to persist in an SQS queue for other applications to process at a later time (see the fan-out sketch after this list)
    • SNS allows applications to send time-critical messages to multiple subscribers through a “push” mechanism, eliminating the need to periodically check or “poll” for updates.
    • SQS can be used by distributed applications to exchange messages through a polling model, and can be used to decouple sending and receiving components, without requiring each component to be concurrently available.
  • SMS Notifications
    • SNS provides the ability to send and receive Short Message Service (SMS) notifications to SMS-enabled mobile phones and smart phones
  • HTTP/HTTPS Endpoints
    • SNS provides the ability to send notification messages to one or more HTTP or HTTPS endpoints. When you subscribe an endpoint to a topic, you can publish a notification to the topic and Amazon SNS sends an HTTP POST request delivering the contents of the notification to the subscribed endpoint
  • Lambda
    • SNS and Lambda are integrated so Lambda functions can be invoked with SNS notifications.
    • When a message is published to an SNS topic that has a Lambda function subscribed to it, the Lambda function is invoked with the payload of the published message
  • Kinesis Data Firehose
    • Deliver events to delivery streams for archiving and analysis purposes.
    • Through delivery streams, events can be delivered to AWS destinations like S3, Redshift, and OpenSearch Service, or to third-party destinations such as Datadog, New Relic, MongoDB, and Splunk.
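
A sketch of the SNS-to-SQS fan-out mentioned above; names are placeholders, and the queue access policy that authorizes the topic to send messages (sqs:SendMessage with an aws:SourceArn condition) is omitted for brevity:

    import boto3

    sns = boto3.client("sns")
    sqs = boto3.client("sqs")

    topic_arn = sns.create_topic(Name="order-events")["TopicArn"]
    queue_url = sqs.create_queue(QueueName="order-worker")["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url,
        AttributeNames=["QueueArn"])["Attributes"]["QueueArn"]

    # subscribe the queue to the topic (queue policy allowing SNS not shown)
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

    # one publish is pushed to the queue; consumers poll it at their own pace
    sns.publish(TopicArn=topic_arn, Message="Order o-123 created")
    messages = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=5)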

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Which of the following notification endpoints or clients does Amazon Simple Notification Service support? Choose 2 answers
    1. Email
    2. CloudFront distribution
    3. File Transfer Protocol
    4. Short Message Service
    5. Simple Network Management Protocol
  2. What happens when you create a topic on Amazon SNS?
    1. The topic is created, and it has the name you specified for it.
    2. An ARN (Amazon Resource Name) is created
    3. You can create a topic on Amazon SQS, not on Amazon SNS.
    4. This question doesn’t make sense.
  3. A user has deployed an application on his private cloud. The user is using his own monitoring tool. He wants to configure that whenever there is an error, the monitoring tool should notify him via SMS. Which of the below mentioned AWS services will help in this scenario?
    1. None, because the user infrastructure is in the private cloud.
    2. AWS SNS
    3. AWS SES
    4. AWS SMS
  4. A user wants to configure it so that whenever the CPU utilization of the AWS EC2 instance is above 90%, the red light of his bedroom turns on. Which of the below mentioned AWS services is helpful for this purpose?
    1. AWS CloudWatch + AWS SES
    2. AWS CloudWatch + AWS SNS
    3. It is not possible to configure the light with the AWS infrastructure services
    4. AWS CloudWatch and a dedicated software turning on the light
  5. A user is trying to understand AWS SNS. To which of the below mentioned endpoints is SNS unable to send a notification?
    1. Email JSON
    2. HTTP
    3. AWS SQS
    4. AWS SES
  6. A user is running a webserver on EC2. The user wants to receive the SMS when the EC2 instance utilization is above the threshold limit. Which AWS services should the user configure in this case?
    1. AWS CloudWatch + AWS SES
    2. AWS CloudWatch + AWS SNS
    3. AWS CloudWatch + AWS SQS
    4. AWS EC2 + AWS CloudWatch
  7. A user is planning to host a mobile game on EC2 which sends notifications to active users on either high score or the addition of new features. The user should get this notification when he is online on his mobile device. Which of the below mentioned AWS services can help achieve this functionality?
    1. AWS Simple Notification Service
    2. AWS Simple Queue Service
    3. AWS Mobile Communication Service
    4. AWS Simple Email Service
  8. You are providing AWS consulting services for a company developing a new mobile application that will be leveraging Amazon SNS push for push notifications. In order to send direct notification messages to individual devices, each device registration identifier or token needs to be registered with SNS; however, the developers are not sure of the best way to do this. You advise them to:
    1. Bulk upload the device tokens contained in a CSV file via the AWS Management Console
    2. Let the push notification service (e.g. Amazon Device messaging) handle the registration
    3. Implement a token vending service to handle the registration
    4. Call the CreatePlatformEndpoint API function to register multiple device tokens. (Refer documentation)
  9. A company is running a batch analysis every hour on their main transactional DB running on an RDS MySQL instance to populate their central Data Warehouse running on Redshift. During the execution of the batch, their transactional applications are very slow. When the batch completes, they need to update the top management dashboard with the new data. The dashboard is produced by another system running on-premises that is currently started when a manually-sent email notifies that an update is required. The on-premises system cannot be modified because it is managed by another team. How would you optimize this scenario to solve performance issues and automate the process as much as possible?
    1. Replace RDS with Redshift for the batch analysis and SNS to notify the on-premises system to update the dashboard
    2. Replace RDS with Redshift for the batch analysis and SQS to send a message to the on-premises system to update the dashboard
    3. Create an RDS Read Replica for the batch analysis and SNS to notify the on-premises system to update the dashboard
    4. Create an RDS Read Replica for the batch analysis and SQS to send a message to the on-premises system to update the dashboard.
  10. Which of the following are valid SNS delivery transports? Choose 2 answers.
    1. HTTP
    2. UDP
    3. SMS
    4. DynamoDB
    5. Named Pipes
  11. What is the format of structured notification messages sent by Amazon SNS?
    1. An XML object containing MessageId, UnsubscribeURL, Subject, Message and other values
    2. A JSON object containing MessageId, DuplicateFlag, Message and other values
    3. An XML object containing MessageId, DuplicateFlag, Message and other values
    4. A JSON object containing MessageId, unsubscribeURL, Subject, Message and other values
  12. Which of the following are valid arguments for an SNS Publish request? Choose 3 answers.
    1. TopicArn
    2. Subject
    3. Destination
    4. Format
    5. Message
    6. Language

AWS Simple Email Service – SES

  • SES is a fully managed service that provides an email platform with an easy, cost-effective way to send and receive email using your own email addresses and domains.
  • can be used to send both transactional and promotional emails securely, and globally at scale.
  • acts as an outbound email server and eliminates the need to support your own software or applications to do the heavy lifting of email transport.
  • acts as an inbound email server to receive emails that can help develop software solutions such as email autoresponders, email unsubscribe systems, and applications that generate customer support tickets from incoming emails.
  • an existing email server can also be configured to send outgoing emails through SES with no changes to any settings in the email clients
  • Maximum message size including attachments is 10 MB per message (after base64 encoding).
  • integrated with CloudWatch and CloudTrail
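
A minimal outbound-send sketch with boto3 – the addresses are placeholders; the sender identity must be verified, and while the account is in the sandbox the recipient must be verified too:

    import boto3

    ses = boto3.client("ses")

    ses.send_email(
        Source="sender@example.com",  # verified address or domain
        Destination={"ToAddresses": ["recipient@example.com"]},
        Message={
            "Subject": {"Data": "Hello from SES"},
            "Body": {"Text": {"Data": "Transactional email body"}},
        },
    )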

SES Characteristics

  • Compatible with SMTP
  • Applications can send email using a single API call in many supported languages: Java, .NET, PHP, Perl, Ruby, HTTPS, etc.
  • Optimized for the highest levels of uptime, availability, and scales as per the demand
  • Provides sandbox environment for testing
  • provides Reputation dashboard, performance insights, anti-spam feedback
  • provides statistics on email deliveries, bounces, feedback loop results, emails opened, etc.
  • supports DomainKeys Identified Mail (DKIM) and Sender Policy Framework (SPF)
  • supports flexible deployment: shared, dedicated, and customer-owned IPs
  • supports attachments with many popular content formats, including documents, images, audio, and video, and scans every attachment for viruses and malware.
  • integrates with KMS to provide the ability to encrypt the mail that it writes to the S3 bucket.
  • uses client-side encryption to encrypt the mail before writing it to S3.

Sending Limits

  • Production SES has a set of sending limits which include
    • Sending Quota – max number of emails in a 24-hour period
    • Maximum Send Rate – max number of emails per second
  • SES automatically adjusts the limits upward as long as emails are of high quality and are sent in a controlled manner, as any spike in emails sent might be considered spam.
  • Limits can also be raised by submitting a Quota increase request
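
The current limits can be inspected programmatically, e.g. with a small boto3 sketch:

    import boto3

    ses = boto3.client("ses")

    quota = ses.get_send_quota()
    print(quota["Max24HourSend"])    # sending quota: max emails per 24-hour period
    print(quota["MaxSendRate"])      # max emails per second
    print(quota["SentLast24Hours"])  # how much of the quota is already used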

SES Best Practices

  • Send high-quality and real production content that the recipients want
  • Only send to those who have signed up for the mail
  • Unsubscribe recipients who have not interacted with the business recently
  • Have low bounce and complaint rates and remove bounced or complained addresses, using SNS to monitor bounces and complaints, treating them as an opt-out
  • Monitor the sending activity

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. What does Amazon SES stand for?
    1. Simple Elastic Server
    2. Simple Email Service
    3. Software Email Solution
    4. Software Enabled Server
  2. Your startup wants to implement an order fulfillment process for selling a personalized gadget that needs an average of 3-4 days to produce, with some orders taking up to 6 months. You expect 10 orders per day on your first day, 1000 orders per day after 6 months, and 10,000 orders after 12 months. Orders coming in are checked for consistency, then dispatched to your manufacturing plant for production, quality control, packaging, shipment, and payment processing. If the product does not meet the quality standards at any stage of the process, employees may force the process to repeat a step. Customers are notified via email about order status and any critical issues with their orders, such as payment failure. Your base architecture includes AWS Elastic Beanstalk for your website with an RDS MySQL instance for customer data and orders. How can you implement the order fulfillment process while making sure that the emails are delivered reliably? [PROFESSIONAL]
    1. Add a business process management application to your Elastic Beanstalk app servers and re-use the RDS database for tracking order status. Use one of the Elastic Beanstalk instances to send emails to customers.
    2. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1 Use the decider instance to send emails to customers.
    3. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1 use SES to send emails to customers.
    4. Use an SQS queue to manage all process tasks Use an Auto Scaling group of EC2 Instances that poll the tasks and execute them. Use SES to send emails to customers.

AWS Certified Advanced Networking – Speciality (ANS-C00) Exam Learning Path

I recently cleared the AWS Certified Advanced Networking – Speciality (ANS-C00), which was my first, en route to the AWS Speciality certifications. Frankly, I feel the time I gave for preparation was still not enough, but I just about managed to get through. So a word of caution: this exam is in line with, or tougher than, the professional exams, especially because the Networking concepts it covers are not something you can get your hands dirty with easily.

The AWS Certified Advanced Networking – Speciality (ANS-C00) exam focuses on AWS Networking concepts. It basically validates

  • Design, develop, and deploy cloud-based solutions using AWS
  • Implement core AWS services according to basic architecture best practices
  • Design and maintain network architecture for all AWS services
  • Leverage tools to automate AWS networking tasks

Refer to AWS Certified Advanced Networking – Speciality Exam Guide

AWS Certified Advanced Networking – Speciality (ANS-C00) Exam Summary

  • AWS Certified Advanced Networking – Speciality exam covers a lot of Networking concepts like VPC, VPN, Direct Connect, Route 53, ALB, NLB.
  • One of the key tactics I followed when solving the exam questions was to read the question and use paper and pencil to draw a rough architecture and focus on the areas that need improvement. Trust me, you will be able to eliminate 2 answers for sure and then need to focus on only the other two. Read the other 2 answers to check the difference area, and that would help you reach the right answer or at least have a 50% chance of getting it right.
  • Be sure to cover the following topics
    • Networking & Content Delivery
      • You should know everything in Networking.
      • Understand VPC in depth
      • Virtual Private Network to establish connectivity between on-premises data center and AWS VPC
      • Direct Connect to establish connectivity between on-premises data center and AWS VPC and Public Services
        • Make sure you understand Direct Connect in detail, without this you cannot clear the exam
        • Understand Direct Connect connections – Dedicated and Hosted connections
        • Understand how to create a Direct Connect connection (hint: LOA-CFA provides the details for partner to connect to AWS Direct Connect location)
        • Understand virtual interfaces options – Private Virtual Interface for VPC resources and Public Virtual Interface for Public resources
        • Understand setup Private and Public VIF
        • Understand Route Propagation, propagation priority, BGP connectivity
        • Understand High Availability options based on cost and time i.e. Second Direct Connect connection OR VPN connection
        • Understand Direct Connect Gateway – it provides a way to connect to multiple VPCs from on-premises data center using the same Direct Connect connection
      • Route 53
        • Understand Route 53 and Routing Policies and their use cases. Focus on the Weighted and Latency routing policies
        • Understand Route 53 Split View DNS to have the same DNS to access a site externally and internally
      • Understand CloudFront and use cases
      • Load Balancer
        • Understand ELB, ALB and NLB, and the differences between them, esp. ALB provides Content, Host and Path based Routing while NLB provides the ability to have a static IP address
        • Know how to design a VPC CIDR block with NLB (Hint – the minimum number of IPs required is 8)
        • Know how to pass original Client IP to the backend instances (Hint – X-Forwarded-for and Proxy Protocol)
      • Know WorkSpaces requirements and setup
    • Security
      • Know AWS GuardDuty as managed threat detection service
      • Know AWS Shield esp. the Shield Advanced option and the features it provides
      • Know WAF as Web Traffic Firewall – (Hint – WAF can be attached to your CloudFront, Application Load Balancer, API Gateway to dynamically detect and prevent attacks)

AWS Services Overview – Whitepaper – Certification

AWS Services Overview

AWS consists of many cloud services that can be used in combinations tailored to meet business or organizational needs. This section introduces the major AWS services by category.


NOTE – This post provides a brief overview of AWS services. It is a good introduction for starting any certification; however, it is most relevant and most important for the AWS Cloud Practitioner Certification Exam.

Common Features

  • Almost all features can be access-controlled through AWS Identity and Access Management – IAM
  • Services managed by AWS are all made Scalable and Highly Available, without any changes needed from the user

AWS Access

AWS allows accessing its services through unified tools using

  • AWS Management Console – a simple and intuitive user interface
  • AWS Command Line Interface (CLI) – programmatic access through scripts
  • AWS Software Development Kits (SDKs) – programmatic access through an Application Program Interface (API) tailored for a programming language (Java, .NET, Node.js, PHP, Python, Ruby, Go, C++, AWS Mobile SDK) or platform (Android, Browser, iOS)

Security, Identity, and Compliance

Amazon Cloud Directory

  • enables building flexible, cloud-native directories for organizing hierarchies of data along multiple dimensions, whereas traditional directory solutions limit you to a single directory
  • helps create directories for a variety of use cases, such as organizational charts, course catalogs, and device registries.

AWS Identity and Access Management

  • enables you to securely control access to AWS services and resources for the users.
  • allows creation of AWS users, groups, and roles, and the use of permissions to allow and deny their access to AWS resources
  • helps manage IAM users and their access with individual security credentials like access keys, passwords, and multi-factor authentication devices, or request temporary security credentials to provide users
  • helps with role creation & managing permissions to control which operations can be performed by which entity or AWS service that assumes the role
  • enables identity federation to allow existing identities (users, groups, and roles) in the enterprise to access AWS Management Console, call AWS APIs, access resources, without the need to create an IAM user for each identity.

Amazon Inspector

  • is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS.
  • automatically assesses applications for vulnerabilities or deviations from best practices
  • produces a detailed list of security findings prioritized by level of severity.

AWS Certificate Manager

  • helps provision, manage, and deploy Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services like ELB
  • removes the time-consuming manual process of purchasing, uploading, and renewing SSL/TLS certificates.

AWS CloudHSM

  • helps meet corporate, contractual, and regulatory compliance requirements for data security by using dedicated Hardware Security Module (HSM) appliances within the AWS Cloud.
  • allows protection of encryption keys within HSMs, designed and validated to government standards for secure key management.
  • helps comply with strict key management requirements without sacrificing application performance.

AWS Directory Service

  • provides Microsoft Active Directory (Enterprise Edition), also known as AWS Microsoft AD, that enables directory-aware workloads and AWS resources to use managed Active Directory in the AWS Cloud.

AWS Key Management Service

  • is a managed service that makes it easy to create and control the encryption keys used to encrypt your data.
  • uses HSMs to protect the security of your keys.

AWS Organizations

  • allows creation of groups of AWS accounts to more easily manage security and automation settings collectively
  • helps centrally manage multiple accounts to help scale.
  • helps to control which AWS services are available to individual accounts, automate new account creation, and simplify billing.

AWS Shield

  • is a managed Distributed Denial of Service (DDoS) protection service that safeguards web applications running on AWS.
  • provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection.
  • provides two tiers of AWS Shield: Standard and Advanced.

AWS WAF

  • is a web application firewall that helps protect web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources.
  • gives complete control over which traffic to allow or block to web application by defining customizable web security rules.

AWS Compute Services

Amazon Elastic Compute Cloud (EC2)

  • provides secure, resizable compute capacity
  • provide complete control of the computing resources (root access, ability to start, stop, terminate instances etc.)
  • reduces the time required to obtain and boot new instances to minutes
  • allows quick scaling of capacity, both up and down, as the computing requirements changes
  • provides developers and sysadmins tools to build failure resilient applications and isolate themselves from common failure scenarios.
  • Benefits
    • Elastic Web-Scale Computing
      • enables scaling to increase or decrease capacity within minutes, not hours or days.
    • Flexible Cloud Hosting Services
      • flexibility to choose from multiple instance types, operating systems, and software packages.
      • selection of memory configuration, CPU, instance storage, and boot partition size
    • Reliable
      • offers a highly reliable environment where replacement instances can be rapidly and predictably commissioned.
      • runs within AWS’s proven network infrastructure and data centers.
      • EC2 Service Level Agreement (SLA) commitment is 99.95% availability for each Region.
    • Secure
      • works in conjunction with VPC to provide security and robust networking functionality for your compute resources.
      • allows control of IP address, exposure to Internet (using subnets), inbound and outbound access (using Security groups and NACLs)
      • existing IT infrastructure can be connected to the resources in the VPC using industry-standard encrypted IPsec virtual private network (VPN) connections
    • Inexpensive – pay only for the capacity actually used
  • EC2 Purchasing Options and Types
    • On-Demand Instances
      • pay for compute capacity by the hour with no long-term commitments
      • enables increasing or decreasing compute capacity depending on the demands, paying only the specified hourly rate for the used instances
      • frees from the costs and complexities of planning, purchasing, and maintaining hardware and transforms what are commonly large fixed costs into much smaller variable costs.
      • also helps remove the need to buy “safety net” capacity to handle periodic traffic spikes.
    • Reserved Instances
      • provides significant discount (up to 75%) compared to On-Demand instance pricing.
      • provides flexibility to change families, operating system types, and tenancies with Convertible Reserved Instances.
    • Spot Instances
      • allow you to bid on spare EC2 computing capacity.
      • are often available at a discount compared to On-Demand pricing, helping reduce the application cost and grow its compute capacity and throughput for the same budget
    • Dedicated Instances – that run on hardware dedicated to a single customer for additional isolation.
    • Dedicated Hosts
      • are physical servers with EC2 instance capacity fully dedicated to your use.
      • can help you address compliance requirements and reduce costs by allowing you to use your existing server-bound software licenses.

Amazon EC2 Container Service

  • is a highly scalable, high-performance container management service that supports Docker containers.
  • allows running applications on a managed cluster of EC2 instances
  • eliminates the need to install, operate, and scale cluster management infrastructure.
  • can be used to schedule the placement of containers across the cluster based on the resource needs and availability requirements.
  • custom scheduler or third-party schedulers can be integrated to meet business or application-specific requirements.

Amazon EC2 Container Registry

  • is a fully-managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images.
  • is integrated with Amazon EC2 Container Service (ECS), simplifying development to production workflow.
  • eliminates the need to operate container repositories or worry about scaling the underlying infrastructure.
  • hosts images in a highly available and scalable architecture
  • pay only for the amount of data stored and data transferred to the Internet.

Amazon Lightsail

  • is designed to be the easiest way to launch and manage a virtual private server with AWS.
  • plans include everything needed to jumpstart a project – a virtual machine, SSD-based storage, data transfer, DNS management, and a static IP address – for a low, predictable price.

AWS Batch

  • enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS.
  • dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory-optimized instances) based on the volume and specific resource requirements of the batch jobs submitted.
  • plans, schedules, and executes the batch computing workloads across the full range of AWS compute services and features

AWS Elastic Beanstalk

  • is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and Internet Information Services (IIS)
  • automatically handles the deployment, from capacity provisioning, load balancing, and auto scaling to application health monitoring.
  • provides full control over the AWS resources with access to the underlying resources at any time.

AWS Lambda

  • enables running code with zero administration – no provisioning or managing servers – and scaling for high availability
  • pay only for the compute time consumed – there is no charge when the code is not running
  • can be set up to be automatically triggered from other AWS services, or called directly from any web or mobile app.

Auto Scaling

  • helps maintain application availability
  • allows scaling EC2 capacity up or down automatically according to defined conditions or demand spikes to reduce cost
  • helps ensure desired number of EC2 instances are running always
  • well suited both to applications that have stable demand patterns and applications that experience hourly, daily, or weekly variability in usage.

Storage

Simple Storage Service

  • is object storage with a simple web service interface to store and retrieve any amount of data from anywhere on the web.
  • S3 Features
    • Durable
      • designed for durability of 99.999999999% of objects
      • data is redundantly stored across multiple facilities and multiple devices in each facility.
    • Available – designed for up to 99.99% availability (standard) of objects over a given year and is backed by the S3 Service Level Agreement
    • Scalable – can help store virtually unlimited data
    • Secure
      • supports data in motion over SSL and data at rest encryption
      • bucket policies and IAM can help manage object permissions and control access to the data
    • Low Cost
      • provides storage at a very low cost.
      • using lifecycle policies, the data can be automatically tiered into lower cost, longer-term cloud storage classes like S3 Standard – Infrequent Access and Glacier for archiving.

Elastic Block Store (EBS)

  • provides persistent block storage volumes for use with EC2 instance
  • offers the consistent and low-latency performance needed to run workloads.
  • allows scaling up or down within minutes – all while paying a low price for only what is provisioned
  • EBS Features
    • High Performance Volumes – Choose between SSD backed or HDD backed volumes to deliver the performance needed
    • Availability
      • is designed for 99.999% availability
      • automatically replicates within its Availability Zone to protect from component failure, offering high availability and durability.
    • Encryption – provides seamless support for data-at-rest and data-in-transit between EC2 instances and EBS volumes.
    • Snapshots – protect data by creating point-in-time snapshots of EBS volumes, which are backed up to S3 for long-term durability.

Elastic File System (EFS)

  • provides simple, scalable file storage for use with EC2 instances
  • storage capacity is elastic, growing and shrinking automatically as files are added and removed
  • provides a standard file system interface and file system access semantics, when mounted on EC2 instances
  • works in shared mode, where multiple EC2 instances can access an EFS file system at the same time, allowing EFS to provide a common data source for workloads and applications running on more than one EC2 instance.
  • can be mounted on on-premises data center servers when connected to the VPC with AWS Direct Connect.
  • can be mounted on on-premises servers to migrate data sets to EFS, enable cloud bursting scenarios, or backup on-premises data to EFS.
  • is designed for high availability and durability, and provides performance for a broad spectrum of workloads and applications, including big data and analytics, media processing workflows, content management, web serving, and home directories.
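
To illustrate the shared-mode setup, a hedged boto3 sketch that creates a file system and one mount target; one mount target per AZ lets EC2 instances in that AZ mount the file system over NFS. The token, subnet, and security group IDs are placeholders.

import boto3

efs = boto3.client("efs")

fs = efs.create_file_system(
    CreationToken="my-efs",                  # idempotency token (placeholder)
    PerformanceMode="generalPurpose",
)

# Instances in the mount target's AZ can now mount the file system over NFS
efs.create_mount_target(
    FileSystemId=fs["FileSystemId"],
    SubnetId="subnet-0123456789abcdef0",     # placeholder subnet ID
    SecurityGroups=["sg-0123456789abcdef0"], # placeholder security group
)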

Glacier

  • provides secure, durable, and extremely low-cost storage service for data archiving and long-term backup
  • To keep costs low yet suitable for varying retrieval needs, Glacier provides three options for access to archives, from a few minutes to several hours.

AWS Storage Gateway

  • seamlessly enables hybrid storage between on-premises storage environments and the AWS Cloud
  • combines a multi-protocol storage appliance with highly efficient network connectivity to AWS cloud storage services, delivering local
    performance with virtually unlimited scale.
  • use it in remote offices and data centers for hybrid cloud workloads involving migration, bursting, and storage tiering

Databases

Aurora

  • is a MySQL and PostgreSQL compatible relational database engine
  • provides the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases.
  • Benefits
    • Highly Secure
      • provides multiple levels of security, including
        • network isolation using VPC
        • encryption at rest using keys created and controlled through AWS Key Management Service (KMS), and
        • encryption of data in transit using SSL.
      • with an encrypted Aurora instance, automated backups, snapshots, and replicas are also encrypted
    • Highly Scalable – automatically grows storage as needed
    • High Availability and Durability
      • designed to offer greater than 99.99% availability
      • recovery from physical storage failures is transparent, and instance failover typically requires less than 30 seconds
      • is fault-tolerant and self-healing. Six copies of the data are replicated across three AZs and continuously backed up to S3.
      • automatically and continuously monitors and backs up your database to S3, enabling granular point-in-time recovery.
    • Fully Managed – is a fully managed database service; database management tasks such as hardware provisioning, software patching, setup, configuration, monitoring, and backups are taken care of

Relational Database Service (RDS)

  • makes it easy to set up, operate, and scale a relational database
  • provides cost-efficient and resizable capacity while managing time-consuming database administration tasks
  • supports various database engines, including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle, and Microsoft SQL Server
  • Benefits
    • Fast and Easy to Administer – No need for infrastructure provisioning, and no need for installing and maintaining database software.
    • Highly Scalable
      • allows quick and easy scaling of database’s compute and storage resources, often with no downtime.
      • allows offloading read traffic from the primary database using Read Replicas, for a few RDS engine types
    • Available and Durable
      • runs on the same highly reliable infrastructure used by other AWS services
      • allows Multi-AZ DB instance, where RDS synchronously replicates the data to a standby instance in a different Availability Zone (AZ).
      • enhances reliability for critical production databases, by enabling automated backups, database snapshots, and automatic host replacement.
    • Secure
      • provides multiple levels of security, including
        • network isolation using VPC
        • connectivity to existing on-premises IT infrastructure through an industry-standard encrypted IPsec VPN
        • encryption at rest using keys created and controlled through AWS Key Management Service (KMS), and
        • encryption of data in transit using SSL.
      • with an encrypted instance, automated backups, snapshots, and replicas are also encrypted
    • Inexpensive – pay very low rates and only for the consumed resources, while taking advantage of on-demand and reserved instance types
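
A hedged boto3 sketch of the Multi-AZ and Read Replica features above; identifiers and the credential are placeholders.

import boto3

rds = boto3.client("rds")

# Multi-AZ primary: RDS synchronously replicates to a standby in another AZ
rds.create_db_instance(
    DBInstanceIdentifier="mydb",
    DBInstanceClass="db.t3.medium",
    Engine="mysql",
    MasterUsername="admin",
    MasterUserPassword="change-me",     # placeholder credential
    AllocatedStorage=20,
    MultiAZ=True,
)

# Read Replica to offload read traffic from the primary
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-replica",
    SourceDBInstanceIdentifier="mydb",
)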

DynamoDB

  • fully managed, fast and flexible NoSQL database service for applications that need consistent, single-digit millisecond latency at any scale.
  • supports both document and key-value data models.
  • flexible data model and reliable performance make it a great fit for mobile, web, gaming, ad-tech, Internet of Things (IoT), and other applications
  • Benefits
    • Fast, Consistent Performance
      • designed to deliver consistent, fast performance at any scale
      • uses automatic partitioning and SSD technologies to meet throughput requirements and deliver low latencies at any scale.
    • Highly Scalable – it manages all the scaling to achieve the specified throughput capacity requirements
    • Event-Driven Programming – integrates with AWS Lambda to provide Triggers that enable architecting applications that automatically react to data changes.
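
A minimal boto3 sketch of the key-value model, assuming a hypothetical users table with partition key user_id already exists.

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("users")         # placeholder table name

# Writes and reads are addressed by primary key
table.put_item(Item={"user_id": "u1", "name": "Alice", "score": 42})
item = table.get_item(Key={"user_id": "u1"})["Item"]
print(item)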

ElastiCache

  • is a web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud.
  • helps improve the performance of web applications by caching results and allowing retrieval of information from fast, managed, in-memory caches, instead of relying entirely on slower disk-based databases.
  • supports two open-source in-memory caching engines: Redis and Memcached

Migration

AWS Application Discovery Service

  • helps systems integrators quickly and reliably plan application migration projects by automatically identifying applications running in on-premises
    data centers, their associated dependencies, and performance profiles
  • automatically collects configuration and usage data from servers, storage, and networking equipment to develop a list of applications, how they
    perform, and how they are interdependent
  • information is retained in encrypted format in an AWS Application Discovery Service database, which you can export as a CSV or XML file into your preferred visualization tool or cloud migration solution to help reduce the complexity and time in planning your cloud migration.

AWS Database Migration Service

  • helps migrate databases to AWS easily and securely
  • source database remains fully operational during the migration, minimizing downtime to applications that rely on the database.
  • supports homogenous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora or Microsoft SQL Server to MySQL.
  • allows streaming of data to Redshift from any of the supported sources including Aurora, PostgreSQL, MySQL, MariaDB, Oracle, SAP ASE, and SQL Server, enabling consolidation and easy analysis of data in the petabyte-scale data warehouse
  • can also be used for continuous data replication with high availability.

AWS Server Migration Service

  • is an agentless service which makes it easier and faster to migrate thousands of on-premises workloads to AWS

Snowball

  • is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of AWS.
  • addresses common challenges with large-scale data transfers including high network costs, long transfer times, and security concerns.
  • uses multiple layers of security designed to protect the data including tamper resistant enclosures, 256-bit encryption, and an industry-standard Trusted Platform Module (TPM) designed to ensure both security and full chain of custody of your data.
  • performs a software erasure of the Snowball appliance, once the data transfer job has been processed

Snowball Edge

  • is a 100 TB data transfer device with on-board storage and compute capabilities.
  • can be used to move large amounts of data into and out of AWS, as a temporary storage tier for large local datasets, or to support local workloads in remote or offline locations.
  • multiple devices can be clustered together to form a local storage tier and process the data on-premises, helping ensure the applications continue to run even when they are not able to access the cloud

Snowmobile

  • is an exabyte-scale data transfer service used to move extremely large amounts of data to AWS.
  • provides secure, fast, and cost-effective transfer of data
  • once the data is loaded, it can be imported into S3 or Glacier
  • uses multiple layers of security designed to protect the data including dedicated security personnel, GPS tracking, alarm monitoring, 24/7 video surveillance, and an optional escort security vehicle while in transit.
  • all data is encrypted with 256-bit encryption keys managed through KMS and designed to ensure both security and full chain of custody of the data

Networking and Content Delivery

Virtual Private Cloud (VPC)

  • helps provision a logically isolated section of the AWS Cloud where AWS resources can be launched in a virtual network that you define
  • provides complete control over the virtual networking environment, including selection of IP address range, creation of subnets (public and private), and configuration of route tables and network gateways.
  • allows use of both IPv4 and IPv6 for secure and easy access to resources and applications
  • allows multiple layers of security, including security groups and network access control lists, to help control access to resources
  • allows creation of a hardware virtual private network (VPN) connection between the corporate data center and VPC and leverage the AWS Cloud as an extension of corporate data center.

CloudFront

  • is a global content delivery network (CDN) service that accelerates delivery of websites, APIs, video content, or other web assets.
  • can be used to deliver entire website, including dynamic, static, streaming, and interactive content using a global network of edge locations.
  • allows requests for the content to be automatically routed to the nearest edge location, so content is delivered with the best possible performance.
  • is optimized to work with other services in AWS, such as S3, EC2, ELB, and Route 53 as well as with any non-AWS origin server that stores the original, definitive versions of your files.

Route 53

  • is a highly available and scalable Domain Name System (DNS) web service
  • effectively connects user requests to infrastructure running in AWS – such as EC2 instances, ELB, or S3 buckets – and can also be used to route users to infrastructure outside of AWS.
  • helps configure DNS health checks to route traffic to healthy endpoints or to independently monitor the health of your application and its endpoints.
  • allows traffic management globally through a variety of routing types, including latency-based routing, Geo DNS, and weighted round robin – all of which can be combined with DNS Failover in order to enable a variety of low-latency, fault-tolerant architectures.
  • is fully compliant with IPv6 as well
  • offers Domain Name Registration service

Direct Connect

  • makes it easy to establish a dedicated network connection from on-premises to AWS
  • helps establish private connectivity between AWS and a data center, office, or co-location environment
  • helps increase bandwidth throughput, reduce network costs, and provide a more consistent network experience than Internet-based connections

Elastic Load Balancing (ELB)

  • automatically distributes incoming application traffic across multiple EC2 instances
  • enables achieving greater levels of fault tolerance by seamlessly providing the required amount of load balancing capacity needed to distribute application traffic.
  • offers two types of load balancers that both feature high availability, automatic scaling, and robust security.
    • Classic Load Balancer
      • routes traffic based on either application or network level information
      • ideal for simple load balancing of traffic across multiple EC2 instances
    • Application Load Balancer
      • routes traffic based on advanced application-level information that includes the content of the request
      • ideal for applications needing advanced routing capabilities, microservices, and container-based architectures.
      • offers the ability to route traffic to multiple services or load balance
        across multiple ports on the same EC2 instance.

Management Tools

AWS CloudWatch

  • is a monitoring and logging service for AWS Cloud resources and the applications running on AWS.
  • can be used to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in the AWS resources.
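
A hedged boto3 sketch of a metric alarm on EC2 CPU utilization that notifies an SNS topic; the instance ID and topic ARN are placeholders.

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm fires when average CPU stays above 80% for two 5-minute periods
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder ARN
)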

AWS CloudFormation

  • allows developers and systems administrators to implement “Infrastructure as Code”
  • provides an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion
  • handles the order for provisioning AWS services or the subtleties of making those dependencies work.
  • allows applying version control to the AWS infrastructure the same way it's done with software
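
To make "Infrastructure as Code" concrete, a hedged boto3 sketch that creates a stack from a tiny inline template declaring a single S3 bucket; the stack name is a placeholder.

import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        # CloudFormation provisions (and can later update or delete) this bucket
        "LogsBucket": {"Type": "AWS::S3::Bucket"},
    },
}

cloudformation = boto3.client("cloudformation")
cloudformation.create_stack(
    StackName="demo-stack",             # placeholder stack name
    TemplateBody=json.dumps(template),
)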

AWS CloudTrail

  • helps record AWS API calls for the account and delivers log files
  • includes API calls made using the AWS Management Console, AWS SDKs, command line tools, and higher-level AWS services (such as AWS CloudFormation)
  • recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service.
  • enables security analysis, resource change tracking, compliance auditing
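
A minimal boto3 sketch that looks up recently recorded events, filtering on a hypothetical event name.

import boto3

cloudtrail = boto3.client("cloudtrail")

# Who launched instances recently? (caller identity, time, source IP, etc.)
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "RunInstances"}],
    MaxResults=10,
)
for e in events["Events"]:
    print(e["EventTime"], e.get("Username"), e["EventName"])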

AWS Config

  • provides an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance
  • provides the Config Rules feature, which enables creating rules that automatically check the configuration of AWS resources
  • helps discover existing and deleted AWS resources, determine overall compliance against rules, and dive into configuration details of a resource at any point in time.
  • enables compliance auditing, security analysis, resource change tracking, and troubleshooting.

AWS OpsWorks

  • is a configuration management service that uses Chef, an automation platform that treats server configurations as code.
  • uses Chef to automate how servers are configured, deployed, and managed across the EC2 instances or on-premises compute environments.
  • has two offerings, OpsWorks for Chef Automate and OpsWorks Stacks

AWS Service Catalog

  • allows organizations to create and manage catalogs of IT services that are approved for use on AWS.
  • helps centrally manage commonly deployed IT services, achieve consistent governance, and meet compliance requirements, while enabling users to quickly deploy only the approved IT services they need
  • can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures.

AWS Trusted Advisor

  • is an online resource to help reduce cost, increase performance, and improve security by optimizing the AWS environment.
  • provides real-time guidance to help provision the resources following AWS best practices.

AWS Personal Health Dashboard

  • provides alerts and remediation guidance when AWS is experiencing events that might affect you.
  • displays relevant and timely information to help you manage events in progress, and provides proactive notification to help you plan for scheduled activities.
  • alerts are automatically triggered by changes in the health of AWS resources, providing event visibility and guidance to help quickly diagnose and resolve issues.
  • provides a personalized view into the performance and availability of the AWS services underlying the AWS resources.
  • Service Health Dashboard, in contrast, displays the general status of AWS services.

AWS Managed Services

  • provides ongoing management of the AWS infrastructure so the focus can be on applications.
  • helps reduce the operational overhead and risk, by implementing best practices to maintain the infrastructure
  • automates common activities such as change requests, monitoring, patch management, security, and backup services, and provides full-lifecycle services to provision, run, and support the infrastructure.
  • improves agility, reduces cost, and unburdens from infrastructure operations

Developer Tools

AWS CodeCommit

  • is a fully managed source control service that makes it easy to host secure and highly scalable private Git repositories

AWS CodeBuild

  • is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy
  • eliminates the need to provision, manage, and scale your own build servers.
  • scales continuously and processes multiple builds concurrently, so the builds are not left waiting in a queue.

AWS CodeDeploy

  • is a service that automates code deployments to any instance, including EC2 instances and instances running on premises.
  • helps to rapidly release new features, avoid downtime during application deployment, and handles the complexity of updating the applications.

AWS CodePipeline

  • is a continuous integration and continuous delivery service for fast and reliable application and infrastructure updates.
  • builds, tests, and deploys the code every time there is a code change, based on the defined release process models

AWS X-Ray

  • helps developers analyze and debug distributed applications in production or development, such as those built using a microservices architecture
  • provides an end-to-end view of requests as they travel through the application, and shows a map of its underlying components.
  • helps understand how the application and its underlying services are performing, to identify and troubleshoot the root cause of performance issues and errors.

Messaging

Amazon SQS

  • is a fast, reliable, scalable, fully managed message queuing service.
  • makes it simple and cost-effective to decouple the components of a cloud application.
  • includes standard queues with high throughput and at-least-once processing, and FIFO queues
  • FIFO queues provide first-in, first-out delivery and exactly-once processing.
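
A minimal boto3 sketch of decoupling a producer and a consumer through a queue; the queue URL is a placeholder.

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

# Producer and consumer never talk to each other directly, only to the queue
sqs.send_message(QueueUrl=queue_url, MessageBody="order-created:1234")

response = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=10)
for m in response.get("Messages", []):
    print(m["Body"])
    # Delete after successful processing (standard queues are at-least-once)
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=m["ReceiptHandle"])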

Amazon SNS

  • fast, flexible, fully managed push notification service to send individual messages or to fan-out messages to large numbers of recipients.
  • makes it simple and cost effective to send push notifications to mobile device users, email recipients or even send messages to other distributed services
  • notifications can be sent to Apple, Google, Fire OS, and Windows devices, as well as to Android devices in China with Baidu Cloud Push.
  • can also deliver messages to SQS, Lambda functions, or HTTP endpoints
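
A minimal boto3 sketch of publishing to a topic; every subscriber (SQS queue, Lambda function, email, mobile push, HTTP endpoint) receives a copy. The topic ARN is a placeholder.

import boto3

sns = boto3.client("sns")

sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:alerts",  # placeholder ARN
    Subject="Deployment finished",
    Message="Version 1.2.3 is live.",
)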

Amazon SES

  • is a cost-effective email service built on the reliable and scalable infrastructure that Amazon.com developed to serve its own customers
  • can send transactional email, marketing messages, or any other type of high-quality content to the customers.
  • can receive messages and deliver them to an S3 bucket, call your custom code via an AWS Lambda function, or publish notifications to SNS.

Analytics

Amazon Athena

  • is an interactive query service that helps to analyze data in S3 using standard SQL.
  • is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.
  • removes the need for complex extract, transform, and load (ETL) jobs
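
A hedged boto3 sketch of running standard SQL against data in S3; the database, table, and result bucket are placeholders.

import boto3

athena = boto3.client("athena")

# The query runs serverlessly; results are written to the given S3 location
athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM logs GROUP BY status",
    QueryExecutionContext={"Database": "weblogs"},  # placeholder database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # placeholder bucket
)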

Amazon EMR

  • provides a managed Hadoop framework that makes it easy, fast, and cost-effective to process vast amounts of data across dynamically scalable EC2 instances.
  • enables you to run other popular distributed frameworks such as Apache Spark, HBase, Presto, and Flink, and interact with data in other AWS data stores such as S3 and DynamoDB.
  • securely and reliably handles a broad set of big data use cases, including log analysis, web indexing, data transformations (ETL), machine learning, financial analysis, scientific simulation, and bioinformatics.

Amazon CloudSearch

  • is a managed service that makes it simple and cost-effective to set up, manage, and scale a search solution for a website or application.
  • supports 34 languages and popular search features such as highlighting, autocomplete, and geospatial search.

Amazon Elasticsearch Service

  • makes it easy to deploy, operate, and scale Elasticsearch for log analytics, full text search, application monitoring, and more.
  • is a fully managed service that delivers Elasticsearch’s easy-to-use APIs and real-time capabilities along with the availability, scalability, and security required by production workloads.

Amazon Kinesis

  • is a platform for streaming data on AWS, offering powerful services that make it easy to load and analyze streaming data
  • provides the ability to build custom streaming data applications for specialized needs.
  • offers three services:
    • Amazon Kinesis Firehose,
      • helps load streaming data into AWS.
      • can capture, transform, and load streaming data into Amazon Kinesis Analytics, S3, Redshift, and Elasticsearch Service, enabling near real-time analytics with existing business intelligence tools and dashboards
      • helps batch, compress, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security.
    • Amazon Kinesis Analytics
      • helps process streaming data in real time with standard SQL
    • Amazon Kinesis Streams
      • enables you to build custom applications that process or analyze streaming data for specialized needs.
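
A minimal boto3 sketch of writing a record to a hypothetical Kinesis stream; records with the same partition key land on the same shard, preserving their order.

import json
import boto3

kinesis = boto3.client("kinesis")

kinesis.put_record(
    StreamName="clickstream",           # placeholder stream name
    Data=json.dumps({"user": "u1", "page": "/home"}).encode(),
    PartitionKey="u1",                  # keys map records to shards
)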

Amazon Redshift

  • provides a fast, fully managed, petabyte-scale data warehouse that makes it simple and cost-effective to analyze all your data using your existing business intelligence tools.
  • has a massively parallel processing (MPP) data warehouse architecture, parallelizing and distributing SQL operations to take advantage of all available resources.
  • provides underlying hardware designed for high performance data processing, using local attached storage to maximize throughput between the CPUs and drives, and a 10GigE mesh network to maximize throughput between nodes.

Amazon QuickSight

  • provides a fast, cloud-powered business analytics service that makes it easy to build visualizations, perform ad-hoc analysis, and quickly get business insights from your data.

AWS Data Pipeline

  • helps reliably process and move data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals
  • can regularly access your data where it’s stored, transform and process it at scale, and efficiently transfer the results to AWS services such as S3, RDS, DynamoDB, and EMR.
  • helps create complex data processing workloads that are fault tolerant, repeatable, and highly available.
  • also allows you to move and process data that was previously locked up in on-premises data silos.

AWS Glue

  • is a fully managed ETL service that makes it easy to move data between data stores.
  • simplifies and automates the difficult and time-consuming tasks of data discovery, conversion, mapping, and job scheduling
  • schedules ETL jobs, and provisions and scales all the infrastructure required so that ETL jobs run quickly and efficiently at any scale.

Application Services

AWS Step Functions

  • makes it easy to coordinate the components of distributed applications and microservices using visual workflows.
  • automatically triggers and tracks each step, and retries when there are errors, so the application executes in order and as expected.

Amazon API Gateway

  • is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale.
  • handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management.

Amazon Elastic Transcoder

  • is media transcoding in the cloud
  • is designed to be a highly scalable, easy-to-use, and cost-effective way for developers and businesses to convert (or transcode) media files from their source format into versions that will play back on devices like smartphones, tablets, and PCs.

Amazon SWF

  • helps developers build, run, and scale background jobs that have parallel or sequential steps.
  • is a fully-managed state tracker and task coordinator in the cloud.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Which AWS services belong to the Compute services? Choose 2 answers
    1. Lambda
    2. EC2
    3. S3
    4. EMR
    5. CloudFront
  2. Which AWS service provides low cost storage option for archival and long-term backup?
    1. Glacier
    2. S3
    3. EBS
    4. CloudFront
  3. Which AWS services belong to the Storage services? Choose 2 answers
    1. EFS
    2. IAM
    3. EMR
    4. S3
    5. CloudFront
  4. A Company allows users to upload videos on its platform. They want to convert the videos to multiple formats supported on multiple devices and platforms. Which AWS service can they leverage for the requirement?
    1. AWS SWF
    2. AWS Video Converter
    3. AWS Elastic Transcoder
    4. AWS Data Pipeline
  5. Which analytic service helps analyze data in S3 using standard SQL?
    1. Athena
    2. EMR
    3. Elasticsearch
    4. Kinesis
  6. What features does AWS’s Route 53 service provide? Choose the 2 correct answers:
    1. Content Caching
    2. Domain Name System (DNS) service
    3. Database Management
    4. Domain Registration
  7. You are trying to organize and import (to AWS) gigabytes of data that are currently structured in JSON-like, name-value documents. What AWS service would best fit your needs?
    1. Lambda
    2. DynamoDB
    3. RDS
    4. Aurora
  8. What AWS database is primarily used to analyze data using standard SQL formatting with compatibility for your existing business intelligence tools? Choose the correct answer:
    1. Redshift
    2. RDS
    3. DynamoDB
    4. ElastiCache
  9. A company wants their application to use a pre-configured machine image with software installed and configured. Which AWS feature can help with this requirement?
    1. Amazon Machine Image
    2. AWS CloudFormation
    3. AWS Lambda
    4. AWS Lightsail
  10. What AWS service can be used to track API calls for security analysis and resource change tracking?
    1. AWS CloudWatch
    2. AWS CloudFormation
    3. AWS CloudTrail
    4. AWS OpsWorks
  11. Which AWS service can help offload the read traffic from your database in order to reduce latency caused by a read-heavy workload?
    1. ElastiCache
    2. DynamoDB
    3. S3
    4. EFS
  12. What service allows system administrators to run “Infrastructure as code”?
    1. CloudFormation
    2. CloudWatch
    3. CloudTrail
    4. CodeDeploy

References

AWS Overview Whitepaper

AWS Support Plans

AWS Support Plans

AWS provides four support plans, each adding features at additional cost. The plans are listed in order of increasing features; features of a lower support plan are also available in the higher plans and are not repeated.

NOTE – This post is more relevant for AWS Cloud Practitioner Certification

Basic

  • included for all AWS customers at no additional cost
  • 24×7 access to customer service, documentation, whitepapers, and support forums
  • access to the core Trusted Advisor checks and the Personal Health Dashboard

Developer

  • Business hours access to Cloud Support Associates via email
  • One primary contact can open Unlimited cases
  • Case Severity/Response times SLA (in business hours)
    • General guidance < 24 business hours
    • System impaired < 12 business hours
  • General Guidance on Architecture support

Business

  • 24×7 access to Cloud Support Engineers via email, chat & phone
  • Access to Personal Health Dashboard Health API
  • Access to full set of Trusted Advisor checks
  • Allows unlimited contacts to open unlimited cases (IAM supported)
  • Case Severity/Response times SLA (in hours)
    • General guidance < 24 hours
    • System impaired < 12 hours
    • Production system impaired < 4 hours
    • Production system down < 1 hour

Enterprise

  • 24×7 access to Sr. Cloud Support Engineers via email, chat & phone
  • Architecture support with Consultative review and guidance based on your applications
  • Access to a Well-Architected Review delivered by AWS Solution Architects
  • Operations Support for Operational reviews, recommendations, and reporting
  • Access to online self-paced labs
  • Account Assistance by Assigned Support Concierge
  • Proactive Guidance by Designated Technical Account Manager
  • Case Severity/Response times SLA
    • Business-critical system down < 15 minutes

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Which AWS support plan has a dedicated technical account manager assigned for proactive guidance?
    1. AWS Basic support plan
    2. AWS Developer support plan
    3. AWS Business support plan
    4. AWS Enterprise support plan
  2. Which feature is available for all the AWS support plans?
    1. Technical Account Manager
    2. Assigned Support Concierge
    3. 24×7 access to customer service
    4. Access to Cloud Support resources

References

AWS Support Plans

Architecting for the Cloud – AWS Best Practices – Whitepaper – Certification

Architecting for the Cloud – AWS Best Practices

Architecting for the Cloud – AWS Best Practices whitepaper provides architectural patterns and advice on how to design systems that are secure, reliable, high performing, and cost efficient

AWS Design Principles

Scalability

  • While AWS provides virtually unlimited on-demand capacity, the architecture should be designed to take advantage of those resources
  • There are two ways to scale an IT architecture
    • Vertical Scaling
      • takes place through increasing the specifications of an individual resource, for e.g. upgrading an EC2 instance type for more RAM, CPU, IOPS, or networking capabilities
      • will eventually hit a limit, and is not always a cost effective or highly available approach
    • Horizontal Scaling
      • takes place through increasing number of resources for e.g. adding more EC2 instances or EBS volumes
      • can help leverage the elasticity of cloud computing
      • not all the architectures can be designed to distribute their workload to multiple resources
      • applications should be designed to be stateless,
        • that needs no knowledge of previous interactions and stores no session information
        • capacity can be increased and decreased, after running tasks have been drained
      • State, if needed, can be implemented using
        • Low latency external store, for e.g. DynamoDB, Redis, to maintain state information
        • Session affinity, for e.g. ELB sticky sessions, to bind all the transactions of a session to a specific compute resource. However, affinity cannot be guaranteed, and existing sessions do not benefit from newly added resources
      • Load can be distributed across multiple resources using
        • Push model, for e.g. through ELB where it distributes the load across multiple EC2 instances
        • Pull model, for e.g. through SQS or Kinesis where multiple consumers subscribe and consume
      • Distributed processing, for e.g. using EMR or Kinesis, helps process large amounts of data by dividing task and its data into many small fragments of works

Disposable Resources Instead of Fixed Servers

  • Resources need to be treated as temporary, disposable resources rather than as fixed, permanent resources the way traditional on-premises servers are
  • AWS focuses on the concept of Immutable infrastructure
    • a server, once launched, is never updated throughout its lifetime
    • updates are performed on a new server with the latest configurations
    • this ensures resources are always in a consistent (and tested) state and makes rollbacks easier
  • AWS provides multiple ways to instantiate compute resources in an automated and repeatable way
    • Bootstrapping
      • scripts to configure and set up resources, for e.g. using EC2 user data scripts and cloud-init to install software or copy resources and code (see the sketch at the end of this section)
    • Golden Images
      • a snapshot of a particular state of that resource,
      • gives faster start times and removes dependencies on configuration services or third-party repositories
    • Containers
      • AWS support for docker images through Elastic Beanstalk and ECS
      • Docker allows packaging a piece of software in a Docker Image, which is a standardized unit for software development, containing everything the software needs to run: code, runtime, system tools, system libraries, etc
  • Infrastructure as Code
    • AWS assets are programmable, so techniques, practices, and tools from software development can be applied to make the whole infrastructure reusable, maintainable, extensible, and testable.
    • AWS provides services like CloudFormation, OpsWorks for deployment
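
As referenced in the Bootstrapping item above, a hedged boto3 sketch of launching an instance with a user data script that cloud-init runs on first boot; the AMI ID and script contents are placeholders.

import boto3

ec2 = boto3.client("ec2")

# cloud-init executes this shell script once, on first boot
user_data = """#!/bin/bash
yum install -y httpd
systemctl enable --now httpd
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",    # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,                 # boto3 base64-encodes this automatically
)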

Automation

  • AWS provides various automation tools and services which help improve system’s stability, efficiency and time to market.
    • Elastic Beanstalk
      • a PaaS that allows quick application deployment while handling resource provisioning, load balancing, auto scaling, monitoring etc
    • EC2 Auto Recovery
      • creates a CloudWatch alarm that monitors an EC2 instance and automatically recovers it if it becomes impaired.
      • A recovered instance is identical to the original instance, including the instance ID, private & Elastic IP addresses, and all instance metadata.
      • The instance is migrated through a reboot, so in-memory contents are lost.
    • Auto Scaling
      • helps maintain application availability and scales capacity up or down automatically as per defined conditions
    • CloudWatch Alarms
      • allows SNS triggers to be configured when a particular metric goes beyond a specified threshold for a specified number of periods
    • CloudWatch Events
      • allows real-time stream of system events that describe changes in AWS resources
    • OpsWorks
      • allows continuous configuration through lifecycle events that automatically update the instances’ configuration to adapt to environment changes.
      • Events can be used to trigger Chef recipes on each instance to perform specific configuration tasks
    • Lambda Scheduled Events
      • allows creating a Lambda function and directing AWS Lambda to execute it on a regular schedule.

Loose Coupling

  • AWS helps build loosely coupled architectures that reduce interdependencies, where a change or failure in one component does not cascade to other components
    • Asynchronous Integration
      • does not involve direct point-to-point interaction but usually through an intermediate durable storage layer for e.g. SQS, Kinesis
      • decouples the components and introduces additional resiliency
      • suitable for any interaction that doesn't need an immediate response, where an acknowledgment that the request has been registered will suffice
    • Service Discovery
      • allows new resources to be launched or terminated at any point in time and discovered as well for e.g. using ELB as a single point of contact with hiding the underlying instance details or Route 53 zones to abstract load balancer’s endpoint
    • Well-Defined Interfaces
      • allows various components to interact with each other through specific, technology-agnostic interfaces, for e.g. RESTful APIs with API Gateway

Services, Not Servers

Databases

  • AWS provides different categories of database technologies
    • Relational Databases (RDS)
      • normalizes data into well-defined tabular structures known as tables, which consist of rows and columns
      • provide a powerful query language, flexible indexing capabilities, strong integrity controls, and the ability to combine data from multiple tables in a fast and efficient manner
      • allows vertical scalability by increasing resources and horizontal scalability using Read Replicas for read capacity and sharding or data partitioning for write capacity
      • provides High Availability using Multi-AZ deployment, where data is synchronously replicated
    • NoSQL Databases (DynamoDB)
      • provides databases that trade some of the query and transaction capabilities of relational databases for a more flexible data model that seamlessly scales horizontally
      • perform data partitioning and replication to scale both the reads and writes in a horizontal fashion
      • DynamoDB service synchronously replicates data across three facilities in an AWS region to provide fault tolerance in the event of a server failure or Availability Zone disruption
    • Data Warehouse (Redshift)
      • Specialized type of relational database, optimized for analysis and reporting of large amounts of data
      • Redshift achieves efficient storage and optimum query performance through a combination of massively parallel processing (MPP), columnar data storage, and targeted data compression encoding schemes
      • Redshift MPP architecture enables increasing performance by increasing the number of nodes in the data warehouse cluster
  • For more details refer to AWS Storage Options Whitepaper

Removing Single Points of Failure

  • AWS provides ways to implement redundancy, automate recovery and reduce disruption at every layer of the architecture
  • AWS supports redundancy in the following ways
    • Standby Redundancy
      • When a resource fails, functionality is recovered on a secondary resource using a process called failover.
      • Failover will typically require some time before it completes, and during that period the resource remains unavailable.
      • Secondary resource can either be launched automatically only when needed (to reduce cost), or it can be already running idle (to accelerate failover and minimize disruption).
      • Standby redundancy is often used for stateful components such as relational databases.
    • Active Redundancy
      • requests are distributed to multiple redundant compute resources, if one fails, the rest can simply absorb a larger share of the workload.
      • Compared to standby redundancy, it can achieve better utilization and affect a smaller population when there is a failure.
  • AWS supports replication
    • Synchronous replication
      • acknowledges a transaction after it has been durably stored in both the primary location and its replicas.
      • protects data integrity from the event of a primary node failure
      • used to scale read capacity for queries that require the most up-to-date data (strong consistency).
      • compromises performance and availability
    • Asynchronous replication
      • decouples the primary node from its replicas at the expense of introducing replication lag
      • used to horizontally scale the system’s read capacity for queries that can tolerate that replication lag.
    • Quorum-based replication
      • combines synchronous and asynchronous replication to overcome the challenges of large-scale distributed database systems
      • Replication to multiple nodes can be managed by defining a minimum number of nodes that must participate in a successful write operation
  • AWS provides services to reduce or remove single points of failure
    • Regions, Availability Zones with multiple data centers
    • ELB or Route 53 to configure health checks and mask failure by routing traffic to healthy endpoints
    • Auto Scaling to automatically replace unhealthy nodes
    • EC2 auto-recovery to recover unhealthy impaired nodes
    • S3, DynamoDB with data redundantly stored across multiple facilities
    • Multi-AZ RDS and Read Replicas
    • ElastiCache Redis engine supports replication with automatic failover
  • For more details refer to AWS Disaster Recovery Whitepaper

Optimize for Cost

  • AWS can help organizations reduce capital expenses and drive savings as a result of the AWS economies of scale
  • AWS provides different options which should be utilized as per use case –
    • EC2 instance types – On Demand, Reserved and Spot
    • Trusted Advisor or EC2 usage reports to identify the compute resources and their usage
    • S3 storage class – Standard, Reduced Redundancy, and Standard-Infrequent Access
    • EBS volumes – Magnetic, General Purpose SSD, Provisioned IOPS SSD
    • Cost Allocation tags to identify costs based on tags
    • Auto Scaling to horizontally scale the capacity up or down based on demand
    • Lambda based architectures to never pay for idle or redundant resources
    • Utilize managed services where scaling is handled by AWS for e.g. ELB, CloudFront, Kinesis, SQS, CloudSearch etc.

Caching

  • Caching improves application performance and increases the cost efficiency of an implementation
    • Application Data Caching
      • provides services that help store and retrieve information from fast, managed, in-memory caches
      • ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud and supports two open-source in-memory caching engines: Memcached and Redis
    • Edge Caching
      • allows content to be served by infrastructure that is closer to viewers, lowering latency and giving high, sustained data transfer rates needed to deliver large popular objects to end users at scale.
      • CloudFront is a Content Delivery Network (CDN) consisting of multiple edge locations that allows copies of static and dynamic content to be cached

Security

  • AWS works on a shared security responsibility model
    • AWS is responsible for the security of the underlying cloud infrastructure
    • you are responsible for securing the workloads you deploy in AWS
  • AWS also provides ample security features
    • IAM to define a granular set of policies and assign them to users, groups, and AWS resources
    • IAM roles to assign short term credentials to resources, which are automatically distributed and rotated
    • Amazon Cognito, for mobile applications, which allows client devices to get controlled access to AWS resources via temporary tokens.
    • VPC to isolate parts of infrastructure through the use of subnets, security groups, and routing controls
    • WAF to help protect web applications from SQL injection and other vulnerabilities in the application code
    • CloudWatch Logs to collect logs centrally as the servers are temporary
    • CloudTrail for auditing AWS API calls, which delivers a log file to S3 bucket. Logs can then be stored in an immutable manner and automatically processed to either notify or even take action on your behalf, protecting your organization from non-compliance
    • AWS Config, Amazon Inspector, and AWS Trusted Advisor to continually monitor for compliance or vulnerabilities giving a clear overview of which IT resources are in compliance, and which are not
  • For more details refer to AWS Security Whitepaper

References

Architecting for the Cloud: AWS Best Practices – Whitepaper