AWS Certified Developer – Associate DVA-C01 Exam Learning Path

AWS Certified Developer – Associate DVA-C01 exam is the latest AWS exam and replaces the old Developer – Associate exam. It basically validates the following:

  • Demonstrate an understanding of core AWS services, uses, and basic AWS architecture best practices.
  • Demonstrate proficiency in developing, deploying, and debugging cloud-based applications using AWS.

Refer AWS Certified Developer – Associate (Released June 2018) Exam Blue Print

AWS Certified Developer - Associate June 2018 Domains

AWS Certified Developer – Associate DVA-C01 Summary

  • AWS Certified Developer – Associate DVA-C01 exam is quite different from the previous one, with more focus on hands-on development and deployment concepts rather than just architectural concepts
  • AWS Certified Developer – Associate DVA-C01 exam covers a lot of the newer AWS services like Lambda and X-Ray, while still focusing heavily on core services like DynamoDB, Elastic Beanstalk, S3, and EC2

AWS Developer – Associate DVA-C01 Exam Resources

AWS Developer – Associate DVA-C01 Exam Topics

  • Be sure to cover the following topics
    • Compute
      • Understand which AWS services you can use to build a serverless architecture.
      • Make sure you know and understand Lambda and serverless architecture, its features and use cases.
      • Know Lambda limits, e.g. maximum execution time and the deployment package size limits (zipped and unzipped)
      • Be sure to know how to package and deploy Lambda functions.
      • Understand tracing of Lambda functions using X-Ray
      • Understand integration of Lambda with CloudWatch.
      • Understand how to handle multiple releases using Alias
      • Know AWS Step Functions to manage Lambda functions flow
      • Understand Lambda with API Gateway
      • Understand API Gateway stages and their ability to cater to different environments, e.g. dev, test, prod
      • Understand EC2 as a whole
      • Understand EC2 Metadata & Userdata: what each is used for, and how to look up instance data after the instance is launched.
      • Understand EC2 Security and how IAM Roles work with EC2 instances.
      • Understand how EC2 evaluates the order of credentials when multiple are provided. Remember the order – Environment variables -> Java system properties -> Default credential profiles file -> ECS container credentials -> Instance Profile credentials
      • Know Elastic Beanstalk at a high level, what it provides and its ability to get an application running quickly
      • Understand Elastic Beanstalk configurations and deployment types with their advantages and disadvantages
    • Databases
      • Understand relational and NoSQL data storage options, including RDS and DynamoDB, and their use cases
      • Understand DynamoDB Secondary Indexes
      • Make sure you understand DynamoDB provisioned throughput for reads/writes and how it is calculated (a worked sketch follows this topic list)
      • Make sure you understand DynamoDB Consistency Model – difference between Strongly Consistent and Eventual Consistency
      • Understand DynamoDB low-latency performance and DAX
      • Know how to configure fine-grained security for DynamoDB tables, items, and attributes
      • Understand DynamoDB Best Practices regarding
        • table design
        • provisioned throughput
        • Query vs Scan operations
        • improving Scan operation performance
      • Understand RDS features – Read Replicas for scalability, Multi-AZ for High Availability
      • Know ElastiCache use cases, mainly for caching performance
      • Understand ElastiCache Redis vs Memcached
    • Storage
      • Understand S3 storage option
      • Understand S3 Best Practices to improve performance for GET/PUT requests
      • Understand S3 features like different storage classes with lifecycle policies, static website hosting, versioning, Pre-Signed URLs for both upload and download, CORS
    • Security
      • Understand IAM as a whole
      • Focus on IAM role and its use case especially with EC2 instance
      • Know how to test and validate IAM policies
      • Understand IAM identity providers and federation and use cases
      • Understand how AWS Cognito works and what features it provides
      • Understand MFA and how you would implement two-factor authentication for your application
      • Understand KMS for key management and envelope encryption
      • Know what services support KMS
        • Remember SQS, Kinesis now provides SSE support
      • Focus on S3 with SSE-S3, SSE-C, and SSE-KMS: how they work and how they differ
      • Know how you can enforce a bucket to accept only encrypted objects
      • Know the various KMS encryption operations – Encrypt, ReEncrypt, GenerateDataKey, GenerateDataKeyWithoutPlaintext, etc.
      • Know how KMS impacts the performance of the services
    • Management Tools
      • Understand CloudWatch monitoring to provide operational transparency
      • Know which EC2 metrics it can track.
      • Understand CloudWatch is extendable with custom metrics
      • Understand CloudTrail for Audit
    • Integration Tools
      • Understand SQS as message queuing service and SNS as pub/sub notification service
      • Understand SQS features like visibility, long poll vs short poll
      • Focus on SQS as a decoupling service
      • AWS has released SQS FIFO, make sure you know the differences between standard and FIFO
      • Know the different development and deployment tools like CodeCommit, CodeBuild, CodeDeploy, CodePipeline
    • Networking
      • The exam does not cover much on networking or network design, but be sure you understand VPCs, Subnets, Routes, Security Groups, etc.
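
Since the blueprint expects you to be able to work out provisioned throughput numbers, below is a minimal Python sketch of the standard capacity-unit arithmetic (the helper functions are hypothetical, not an AWS API): one RCU covers one strongly consistent read per second of an item up to 4 KB (or two eventually consistent reads), and one WCU covers one write per second of an item up to 1 KB.

```python
import math

# Illustrative DynamoDB provisioned throughput calculation (hypothetical helpers,
# not an AWS API). RCU: 1 strongly consistent read/sec of up to 4 KB
# (or 2 eventually consistent reads/sec). WCU: 1 write/sec of up to 1 KB.

def required_rcu(item_size_kb, reads_per_second, strongly_consistent=True):
    units_per_item = math.ceil(item_size_kb / 4)            # round item size up to 4 KB blocks
    rcu = units_per_item * reads_per_second
    return rcu if strongly_consistent else math.ceil(rcu / 2)  # eventual consistency halves the cost

def required_wcu(item_size_kb, writes_per_second):
    return math.ceil(item_size_kb / 1) * writes_per_second     # round item size up to 1 KB blocks

# Example: 6 KB items, 10 strongly consistent reads/sec and 5 writes/sec
print(required_rcu(6, 10))   # 2 RCU per read * 10 reads/sec = 20 RCU
print(required_wcu(6, 5))    # 6 WCU per write * 5 writes/sec = 30 WCU
```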

AWS Cloud Computing Whitepapers

AWS Certified Developer – Associate DVA-C01 Exam Contents

Domain 1: Deployment

  1. Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns.
  2. Deploy applications using Elastic Beanstalk.
  3. Prepare the application deployment package to be deployed to AWS.
  4. Deploy serverless applications.

Domain 2: Security

  1. Make authenticated calls to AWS services.
  2. Implement encryption using AWS services.
  3. Implement application authentication and authorization.

Domain 3: Development with AWS Services

  1. Write code for serverless applications.
  2. Translate functional requirements into application design.
  3. Implement application design into application code.
  4. Write code that interacts with AWS services by using APIs, SDKs, and AWS CLI.

Domain 4: Refactoring

  1. Optimize application to best use AWS services and features.
  2. Migrate existing application code to run on AWS.

Domain 5: Monitoring and Troubleshooting

  1. Write code that can be monitored.
  2. Perform root cause analysis on faults found in testing or production.

AWS Certified Solutions Architect – Associate SAA-C01 Exam Learning Path (Obsolete)

SAA-C01 is Obsolete now, Please refer SAA-C03 Learning Path

AWS Solutions Architect – Associate SAA-C01 exam replaced the old CSA-Associate exam. It basically validates the ability to effectively demonstrate knowledge of how to architect and deploy secure and robust applications on AWS technologies

  • Define a solution using architectural design principles based on customer requirements.
  • Provide implementation guidance based on best practices to the organization throughout the life cycle of the project.

Refer AWS_Solution_Architect_-_Associate_SAA-C01_Exam_Blue_Print

AWS Certified Solutions Architect - Associate February 2018

AWS Solutions Architect – Associate SAA-C01 Exam Summary

  • AWS has updated the exam focus from individual services to building scalable, highly available, cost-effective, performant, resilient, and operationally effective architectures
  • Although most of the services covered by the old exam are the same, there are a few new additions like API Gateway, Lambda, ECS, and Aurora
  • The exam covers the architecture aspects in depth, so you must be able to visualize the architecture, and even draw it out during the exam, just to understand how it would work and how different services relate.
  • Be sure to cover the following topics
    • Networking
      • Be sure to create a VPC from scratch. This is mandatory (a minimal boto3 sketch follows this topic list).
        • Create a VPC and understand what a CIDR block is.
        • Create public and private subnets, configure proper routes, security groups, NACLs.
        • Create Bastion for communication with instances
        • Create NAT Gateway or Instances for instances in private subnets to interact with internet
        • Create two tier architecture with application in public and database in private subnets
        • Create three tier architecture with web servers in public, application and database servers in private.
        • Make sure to understand how the communication happens between Internet, Public subnets, Private subnets, NAT, Bastion etc.
      • Understand VPC endpoints and which services they can help you reach privately
      • Understand difference between NAT Gateway and NAT Instance
      • Understand how NAT high availability can be achieved
      • Understand CloudFront as CDN and the static and dynamic caching it provides, what can be its origin (it can point to on-premises sources)
      • Understand Route 53 for routing, health checks and various routing policies it provides and their use cases mainly for high availability
      • Be sure to cover ELB in depth. AWS has introduced ALB and NLB, and there are a lot of questions on ALB
      • Understand ALB features with its ability for content based and URL based routing with support for dynamic port mapping with ECS
    • Storage
      • Understand various storage options S3, EBS, Instance store, EFS, Glacier and what are the use cases and anti patterns for each
      • I would recommend referring to the Storage Options whitepaper; although a bit dated, 90% of it still holds true
      • Understand various EBS volume types and their use cases in terms of IOPS and throughput. SSD for IOPS and HDD for throughput
      • Understand Burst performance and I/O credits to handle occasional peaks
      • Understand S3 features like different storage classes with lifecycle policies, static website hosting, versioning, Pre-Signed URLs for both upload and download, CORS
      • Understand Glacier as an archival storage with various retrieval patterns
      • Glacier Expedited retrieval now allows object retrieval within minutes
      • Understand Storage gateway and its different types
    • Compute
      • Understand EC2 as a whole
      • Understand Auto Scaling and ELB, and how they work together to provide a Highly Available and Scalable solution
      • Understand EC2 various purchase types – Reserved, On-demand and Spot and their use cases
      • Understand Reserved purchase types with the introduction of Scheduled and Convertible types
      • Understand Lambda and serverless architecture, its features and use cases. How do you benefit from Lambda?
      • Understand ECS with its ability to deploy containers and micro services architecture
      • Know Elastic Beanstalk at a high level, what it provides and its ability to get an application running quickly
    • Databases
      • Understand relational and NoSQLs data storage options which include RDS, DynamoDB, Aurora and their use cases
      • Aurora has been added to the exam, and most of the time the questions refer to Aurora given its abilities for multiple read replicas and replication of data across AZs
      • Understand that S3 is not a storage option for databases
      • Understand RDS features – Read Replicas for scalability, Multi-AZ for High Availability, Automated Backups, underlying volume types
      • Understand DynamoDB with its low latency performance, DAX
      • Understand DynamoDB provisioned throughput for Read/Writes
      • Know ElastiCache use cases, mainly for caching performance
    • Analytics
      • Not covered in much depth, but understand what the services are and what they can do
      • Understand Redshift as a business intelligence tool
      • Know Kinesis for real time data capture and analytics
      • At least know what AWS Glue does, so you can eliminate answer choices
    • Security
      • Understand IAM as a whole
      • Focus on IAM role and its use case especially with EC2 instance
      • Understand IAM identity providers and federation and use cases
      • Understand MFA and how you would implement two-factor authentication for your application
      • Understand encryption services
      • Refer Disaster Recovery whitepaper, be sure you know the different recovery types with impact on RTO/RPO.
    • Management Tools
      • Understand CloudWatch monitoring to provide operational transparency
      • Know which EC2 metrics it can track. Remember, it cannot track memory and disk space/swap utilization
      • Understand CloudWatch is extendable with custom metrics
      • Understand CloudTrail for Audit
      • Have a basic understanding of CloudFormation, OpsWorks
    • Integration Tools
      • Understand SQS as message queuing service and SNS as pub/sub notification service
      • Understand SQS features like visibility, long poll vs short poll
      • Focus on SQS as a decoupling service
      • AWS has released SQS FIFO, make sure you know the differences between standard and FIFO
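
For the hands-on VPC topics above, here is a minimal boto3 sketch (region, CIDR ranges, and names are illustrative assumptions; error handling and tagging omitted) of the typical sequence for a public subnet: create the VPC, add a subnet, attach an internet gateway, and route 0.0.0.0/0 to it.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Create the VPC with a CIDR block
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
vpc_id = vpc["VpcId"]

# 2. Create a public subnet within the VPC CIDR range
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]

# 3. Create and attach an Internet Gateway so the subnet can reach the internet
igw = ec2.create_internet_gateway()["InternetGateway"]
ec2.attach_internet_gateway(InternetGatewayId=igw["InternetGatewayId"], VpcId=vpc_id)

# 4. Create a route table with a default route to the IGW and associate it with
#    the subnet; the route to the IGW is what makes the subnet "public"
rt = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]
ec2.create_route(RouteTableId=rt["RouteTableId"],
                 DestinationCidrBlock="0.0.0.0/0",
                 GatewayId=igw["InternetGatewayId"])
ec2.associate_route_table(RouteTableId=rt["RouteTableId"], SubnetId=subnet["SubnetId"])
```

A private subnet is the same minus the IGW route; its route table instead points 0.0.0.0/0 at a NAT Gateway, and a Bastion host in the public subnet is used to reach the instances inside it.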

NOTE: I have just marked the topics in line with the AWS Exam Blueprint. Be sure to check it, as it is updated regularly, and go through the Whitepapers, FAQs, and re:Invent videos.

AWS Solutions Architect – Associate SAA-C01 Exam Resources

AWS Cloud Computing Whitepapers

AWS Solutions Architect – Associate Exam Contents

Domain 1: Design Resilient Architectures

  1. Choose reliable/resilient storage.
  2. Determine how to design decoupling mechanisms using AWS services.
  3. Determine how to design a multi-tier architecture solution.
  4. Determine how to design high availability and/or fault tolerant architectures.

Domain 2: Define Performant Architectures

  1. Choose performant storage and databases.
  2. Apply caching to improve performance.
  3. Design solutions for elasticity and scalability.

Domain 3: Specify Secure Applications and Architectures

  1. Determine how to secure application tiers.
  2. Determine how to secure data.
  3. Define the networking infrastructure for a single VPC application.

Domain 4: Design Cost-Optimized Architectures

  1. Determine how to design cost-optimized storage.
  2. Determine how to design cost-optimized compute.

Domain 5: Define Operationally-Excellent Architectures

  1. Choose design features in solutions that enable operational excellence.

Amazon EMR Best Practices

Best Practices for Using Amazon EMR

Amazon has made working with Hadoop a lot easier. You can launch an EMR cluster in minutes for big data processing, machine learning, and real-time stream processing with the Apache Hadoop ecosystem. You can use the Management Console or the command line to start several nodes with ease.

The EMR pricing has now changed from pay-per-hour to pay-per-second, which results in lower costs and you no longer have to worry about the hourly boundary.

EMR makes a whole bunch of the latest versions of open source software available to you. Currently, there are 19 open source projects and new releases are made every 4 to 6 weeks, so the latest versions of the open source projects are available. This is very useful, especially for rapidly evolving open source projects such as Apache Spark where each release contains critical bug fixes and features. However, you are not forced to upgrade; a new release is made available if you choose to use it. With EMR, you can spin up a bunch of instances and you could process massive volumes of data residing on S3 at a reasonable cost.
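
To make the "launch a cluster in minutes" point concrete, here is a minimal boto3 sketch (release label, instance types and counts, roles, and the log bucket are illustrative assumptions) that starts a small Spark cluster:

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Launch a small Spark cluster; values here are purely illustrative
response = emr.run_job_flow(
    Name="example-spark-cluster",
    ReleaseLabel="emr-5.20.0",                # pick a current EMR release
    Applications=[{"Name": "Spark"}, {"Name": "Hadoop"}],
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": True,  # keep the cluster running after steps finish
    },
    JobFlowRole="EMR_EC2_DefaultRole",        # default EMR instance profile
    ServiceRole="EMR_DefaultRole",            # default EMR service role
    LogUri="s3://my-bucket/emr-logs/",        # assumed bucket for cluster logs
    Tags=[{"Key": "project", "Value": "analytics"}],  # tags for cost allocation
)
print(response["JobFlowId"])
```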

A variety of cluster management options are supported, including YARN. You can run the following:

  • HBase
  • Presto (low latency, distributed SQL engine)
  • Spark
  • Tez
  • Ganglia
  • Zeppelin
  • Notebooks
  • SQL editors

AWS Connectors

Additionally, connectors to different AWS services are also available; for example, you can use Spark to load Redshift (using the Redshift connector, which issues Redshift commands under the hood to get good throughput). You can access DynamoDB for analytics applications, use Sqoop to access relational data, and so on.

AWS Glue

One particularly interesting connector is AWS Glue. AWS Glue comprises three main components:

  • ETL service: This lets you drag things around to create serverless ETL pipelines.
  • AWS Glue Data Catalog: This is a fully managed Hive metastore-compliant service. Earlier, systems ran an external Hive metastore database in RDS or Aurora so that, if you shut down your cluster, all your metadata was persisted and you didn’t have to recreate your tables, with extra durability and availability (in case something happened to a metastore running on MySQL on the master node). With Glue, all that is fully managed. You have an intelligent metastore: you don’t have to write DDL to create a table; you can just have Glue crawl your data, infer the schema, and create those tables for you. You can also have it add partitions, which can be painful otherwise; if you are constantly updating your Hive tables, you need a process to load each partition in, and the Glue catalog can do it for you. It also supports a variety of complex data types.
  • Crawlers: The crawlers let you crawl the data to infer the schema.

AWS Glue is a managed service, so you spend less time monitoring. As a fully managed service, it is also responsible for replacing unhealthy nodes and autoscaling. Enabling security options in AWS Glue is pretty easy. It supports full customization and control, and you don’t have to waste time creating and configuring the cluster. In most cases, the default settings are good enough, but even if you wanted to change them or install custom components, you have root access over all the boxes, so you can make any changes you need.
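
To make the crawler workflow above concrete, here is a minimal boto3 sketch (database name, IAM role, and S3 path are assumptions) that creates a crawler to infer the schema of data in S3 and register the tables in the Glue Data Catalog, then runs it once:

```python
import boto3

glue = boto3.client("glue", region_name="us-east-1")

# Create a crawler that scans an S3 prefix, infers the schema, and
# creates/updates tables in the named Data Catalog database
glue.create_crawler(
    Name="clickstream-crawler",                              # illustrative name
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",   # assumed IAM role with S3 access
    DatabaseName="analytics_db",                             # catalog database to hold the tables
    Targets={"S3Targets": [{"Path": "s3://my-bucket/clickstream/"}]},
    Schedule="cron(0 2 * * ? *)",                            # optional: re-crawl nightly to pick up new partitions
)

# Run it once on demand
glue.start_crawler(Name="clickstream-crawler")
```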

Common EMR use cases

Using HBase for random access at massive scale: a lot of customers run HBase with HDFS. Now there is support for HBase using the S3 object store for HFiles. Also, there is the ability to use a Read Replica HBase cluster in another AZ. Shifting to S3 can save you 50% or more on storage costs. Instead of sizing the cluster for HDFS, you can now size it for the amount of processing power required for the HBase Region Servers. The S3 option is also good for load balancing and disaster recovery across AZs. As S3 is available across a region, you don’t have to replicate the data twice; that is, you don’t need two full HDFS clusters. You can now set up a smaller cluster for the Read Replicas that points to the same HFiles and drive the read traffic through there.

Real-time and batch processing involves utilizing EMR; you can use Kinesis for pushing data to Spark. Use Spark Streaming for real-time analytics or processing data on-the-fly and then dump that data into S3. If you don’t have real-time processing use cases, then Kinesis Firehose is a great alternative too. The data can be cataloged in the Glue Data Catalog and then you can have the data accessible via a variety of different analytical engines. EMR supports several analytical engines including Hive, Tez, and Spark. Once the data is in the Data Catalog on S3, you can use Athena (serverless SQL queries), Glue ETL (serverless ETL), and Redshift Spectrum.

Data exploration with Spark using Zeppelin or Jupyter notebook allows you to arm your data scientists with a way to explore large amounts of data (instead of using one node, you can now spread the data across the cluster). This also makes it easier to move it to production.

There is a big rise in the use of Presto for ad hoc SQL queries (in combination with Athena). They approach the same thing from two different angles: Presto gives you advanced configurations and a way to build exactly what you need for your use case, but you have to deal with the cluster management, whereas with Athena you just go to the console and start writing SQL. Many BI tools now support Presto as well for low-latency dashboards. You can also perform traditional batch processing workloads using Spark.

Deep learning with GPU instances is where you can launch GPU hardware for EMR. There’s support for MXNet, and you can do end-to-end data engineering work. Support for TensorFlow is coming.

Typical ML projects implement a multi-step process, including ETL, feature engineering, model training, model evaluation, model deployment, and model scoring and updates. Such pipelines need to support batch model training and real-time ML model serving. Using Apache Spark for implementing ML pipelines is very popular, as it supports each step in an ML pipeline, scales for small and large jobs, has good ML libraries, and has an active user base.

There are several options for deploying Spark on AWS. For example, you can run it on EC2, which supports batch and streaming workloads, integrates with tooling, lets you spin clusters up and down and run larger or smaller clusters, and also supports different versions of Hadoop and Spark. However, using EC2 for Spark deployment places a huge management burden on us; hence, EMR can be a simpler and better alternative here. It is simple to provision, and you can use a wizard (and then generate the commands for the command line from it if required). You can create tags for cost management and send logs to S3.

Lowering EMR costs

If you are paying for Hadoop nodes that are not doing anything, then you are just burning money. There are ways you can batch up your workloads: take an inventory of the jobs you have, tweak them to run in batch mode, and shut down the cluster in between those times. You can separate out clusters with auto-scaling instead of sizing one cluster and running it for all your workloads. You should shut down the cluster when you can, to stop paying for it unnecessarily. You can use the Amazon Linux AMI with preinstalled customizations for faster cluster creation, and use auto-scaling to minimize costs for long-running clusters.

AWS Auto Scaling Lifecycle

Auto Scaling Lifecycle

  • Instances launched through the Auto Scaling group have a different lifecycle than that of other EC2 instances
  • Auto Scaling lifecycle starts when the Auto Scaling group launches an instance and puts it into service.
  • Auto Scaling lifecycle ends when the instance is terminated either by the user, or the Auto Scaling group takes it out of service and terminates it
  • AWS charges for the instances as soon as they are launched, including the time they are not yet in the InService state

Auto Scaling Lifecycle Transition

Auto Scaling Group Lifecycle

Auto Scaling Lifecycle Hooks

  • Auto Scaling Lifecycle hooks enable performing custom actions by pausing instances as an Auto Scaling group launches or terminates them
  • Each Auto Scaling group can have multiple lifecycle hooks. However, there is a limit on the number of hooks per Auto Scaling group
  • Auto Scaling scale out event flow
    • Instances start in the Pending state
    • If an autoscaling:EC2_INSTANCE_LAUNCHING lifecycle hook is added, the state is moved to Pending:Wait
    • After the lifecycle action is completed, instances enter the Pending:Proceed state
    • When the instances are fully configured, they are attached to the Auto Scaling group and moved to the InService state
  • Auto Scaling scale in event flow
    • Instances are detached from the Auto Scaling group and enter the Terminating state.
    • If an autoscaling:EC2_INSTANCE_TERMINATING lifecycle hook is added, the state is moved to Terminating:Wait
    • After the lifecycle action is completed, the instances enter the Terminating:Proceed state.
    • When the instances are fully terminated, they enter the Terminated state.
  • During the scale out and scale in events, instances are put into a wait state (Pending:Wait or Terminating:Wait) and are paused until either a continue action happens or the timeout period ends.
  • By default, the instance remains in a wait state for one hour, which can be extended by restarting the timeout period by recording a heartbeat.
  • If the task finishes before the timeout period ends, the lifecycle action can be marked completed and it continues the launch or termination process.
  • After the wait period, the Auto Scaling group continues the launch or terminate process (Pending:Proceed or Terminating:Proceed)
  • Custom actions for a lifecycle hook can be implemented using
    • a CloudWatch Events target to invoke a Lambda function when a lifecycle action occurs. The event contains information about the instance that is launching or terminating, and a token that can be used to control the lifecycle action.
    • a Notification target (CloudWatch Events, SNS, SQS) for the lifecycle hook, which receives the message from EC2 Auto Scaling. The message contains information about the instance that is launching or terminating, and a token that can be used to control the lifecycle action.
    • a script that runs on the instance as the instance starts. The script can control the lifecycle action using the ID of the instance on which it runs (see the sketch below).
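
As an illustration of the flow above, here is a minimal boto3 sketch (group name, hook name, and timeout are assumptions): create a launch lifecycle hook, and then, from the bootstrap script or a Lambda notification target, signal completion so the instance moves from Pending:Wait to Pending:Proceed.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# 1. Attach a lifecycle hook so new instances pause in Pending:Wait while they bootstrap
autoscaling.put_lifecycle_hook(
    LifecycleHookName="bootstrap-hook",             # illustrative name
    AutoScalingGroupName="my-asg",                  # assumed Auto Scaling group
    LifecycleTransition="autoscaling:EC2_INSTANCE_LAUNCHING",
    HeartbeatTimeout=1800,                          # wait up to 30 minutes
    DefaultResult="ABANDON",                        # terminate the instance if nothing completes the action
)

# 2. From the instance's bootstrap script (or a Lambda notification target),
#    signal that configuration finished so the instance moves to Pending:Proceed
autoscaling.complete_lifecycle_action(
    LifecycleHookName="bootstrap-hook",
    AutoScalingGroupName="my-asg",
    LifecycleActionResult="CONTINUE",               # CONTINUE lets the instance go InService
    InstanceId="i-0123456789abcdef0",               # ID of the instance that just bootstrapped
)
```

If bootstrapping needs more time, record_lifecycle_action_heartbeat can be called with the same hook, group, and instance parameters to restart the timeout period.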

Auto Scaling Lifecycle Hooks Considerations

  • Keeping Instances in a Wait State
    • Instances remain in a wait state for a finite period of time.
    • Default is 1 hour (3600 seconds) with the max being 48 hours or 100 times the heartbeat timeout, whichever is smaller.
    • Time can be adjusted using
      • the complete-lifecycle-action (CompleteLifecycleAction) command to continue to the next state if the task finishes before the timeout period ends
      • the put-lifecycle-hook command with the --heartbeat-timeout parameter to set the heartbeat timeout for the lifecycle hook during its creation
      • Restart the timeout period by recording a heartbeat, using the record-lifecycle-action-heartbeat (RecordLifecycleActionHeartbeat) command
  • Cooldowns and Custom Actions
    • Cooldown period helps ensure that the Auto Scaling group does not launch or terminate more instances than needed
    • Cooldown period starts when the instance enters the InService state. Any suspended scaling actions resume after cooldown period expires
  • Health Check Grace Period
    • Health check grace period does not start until the lifecycle hook completes and the instance enters the InService state
  • Lifecycle Action Result
    • Result of the lifecycle hook is either ABANDON or CONTINUE
    • If the instance is launching,
      • CONTINUE indicates a successful action, and the instance can be put into service.
      • ABANDON indicates the custom actions were unsuccessful, and that the instance can be terminated.
    • If the instance is terminating,
      • ABANDON and CONTINUE allow the instance to terminate.
      • However, ABANDON stops any remaining actions from other lifecycle hooks, while CONTINUE allows them to complete
  • Spot Instances
    • Lifecycle hooks can be used with Spot Instances. However, a lifecycle hook does not prevent an instance from terminating due to a change in the Spot Price, which can happen at any time

Enter and Exit Standby

  • An instance in the InService state can be moved to the Standby state.
  • Standby state enables you to remove the instance from service, troubleshoot or make changes to it, and then put it back into service.
  • Instances in a Standby state continue to be managed by the Auto Scaling group. However, they are not an active part of the application until they are put back into service.

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Your application is running on EC2 in an Auto Scaling group. Bootstrapping is taking 20 minutes to complete. You find out that instances are shown as InService although the bootstrapping has not completed. How can you make sure that new instances are not added until the bootstrapping has finished. Choose the correct answer:
    1. Create a CloudWatch alarm with an SNS topic to send alarms to your DevOps engineer.
    2. Create a lifecycle hook to keep the instance in pending:wait state until the bootstrapping has finished and then put the instance in pending:proceed state.
    3. Increase the number of instances in your Auto Scaling group.
    4. Create a lifecycle hook to keep the instance in standby state until the bootstrapping has finished and then put the instance in pending:proceed state.
  2. When a scale out event occurs, the Auto Scaling group launches the required number of EC2 instances using its assigned launch configuration. What instance state do these instances start in? Choose the correct answer:
    1. pending:wait
    2. InService
    3. Pending
    4. Terminating
  3. With AWS Auto Scaling, once we apply a hook and the action is complete or the default wait state timeout runs out, the state changes to what, depending on which hook we have applied and what the instance is doing? Select two. Choose the 2 correct answers:
    1. pending:proceed
    2. pending:wait
    3. terminating:wait
    4. terminating:proceed
  4. For AWS Auto Scaling, what is the first transition state an existing instance enters after leaving steady state in Standby mode?
    1. Detaching
    2. Terminating:Wait
    3. Pending (You can put any instance that is in an InService state into a Standby state. This enables you to remove the instance from service, troubleshoot or make changes to it, and then put it back into service. Instances in a Standby state continue to be managed by the Auto Scaling group. However, they are not an active part of your application until you put them back into service. Refer link)
    4. EnteringStandby
  5. For AWS Auto Scaling, what is the first transition state an instance enters after leaving steady state when scaling in due to health check failure or decreased load?
    1. Terminating (When Auto Scaling responds to a scale in event, it terminates one or more instances. These instances are detached from the Auto Scaling group and enter the Terminating state. Refer link)
    2. Detaching
    3. Terminating:Wait
    4. EnteringStandby

References

AutoScalingGroupLifecycle

AWS CloudWatch Logs – Certification

AWS CloudWatch Logs

  • CloudWatch Logs can be used to monitor, store, and access log files from EC2 instances, CloudTrail, Route 53, and other sources
  • CloudWatch Logs uses the log data for monitoring only, so no code changes are required
  • CloudWatch Logs requires the CloudWatch Logs agent to be installed on the EC2 instances and on-premises servers.
  • The CloudWatch Logs agent makes it easy to quickly send both rotated and non-rotated log data off of a host and into the log service.
  • A VPC endpoint can be configured to keep traffic between the VPC and CloudWatch Logs from leaving the Amazon network. It doesn’t require an IGW, NAT, VPN connection, or Direct Connect connection
  • CloudWatch Logs allows exporting log data from the log groups to an S3 bucket, which can then be used for custom processing and analysis, or to load onto other systems.
  • Log data is encrypted while in transit and while it is at rest
  • Log data can be encrypted using AWS KMS customer master keys (CMKs).

Required Mainly for SysOps Associate & DevOps Professional Exam

CloudWatch Logs Concepts

Log Events

  • A log event is a record of some activity recorded by the application or resource being monitored.
  • Log event record contains two properties: the timestamp of when the event occurred, and the raw event message

Log Streams

  • A log stream is a sequence of log events that share the same source, e.g. log events from an Apache access log on a specific host.

Log Groups

  • Log groups define groups of log streams that share the same retention, monitoring, and access control settings for e.g. Apache access logs from each host grouped through log streams into a single log group
  • Each log stream has to belong to one log group
  • There is no limit on the number of log streams that can belong to one log group.

Metric Filters

  • Metric filters can be used to extract metric observations from ingested events and transform them to data points in a CloudWatch metric.
  • Metric filters are assigned to log groups, and all of the filters assigned to a log group are applied to their log streams.

Retention Settings

  • Retention settings can be used to specify how long log events are kept in CloudWatch Logs.
  • Expired log events get deleted automatically.
  • Retention settings are assigned to log groups, and the retention assigned to a log group is applied to their log streams.

CloudWatch Logs Use cases

Monitor Logs from EC2 Instances in Real-time

  • can help monitor applications and systems using log data
  • can help track the number of errors, e.g. 404s or 500s, or even specific literal terms like “NullReferenceException”, occurring in the applications, which can then be matched against a threshold to send a notification

Monitor AWS CloudTrail Logged Events

  • can be used to monitor particular API activity as captured by CloudTrail by creating alarms in CloudWatch and receive notifications

Archive Log Data

  • can help store the log data in highly durable storage, an alternative to S3
  • log retention setting can be modified, so that any log events older than this setting are automatically deleted.

Log Route 53 DNS Queries

  • can help log information about the DNS queries that Route 53 receives.

Real-time Processing of Log Data with Subscriptions

  • Subscriptions can help get access to a real-time feed of log events from CloudWatch Logs and have it delivered to other services such as a Kinesis stream, a Kinesis Data Firehose stream, or AWS Lambda for custom processing, analysis, or loading into other systems (see the sketch after this list)
  • A subscription filter defines the filter pattern to use for filtering which log events get delivered to the AWS resource, as well as information about where to send matching log events.
  • A CloudWatch Logs log group can also be configured to stream data to an Elasticsearch Service cluster in near real-time
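
As a concrete illustration, here is a minimal boto3 sketch of a subscription filter (log group name, filter pattern, and Lambda function ARN are assumptions) that delivers matching log events to a Lambda function in near real time; granting CloudWatch Logs permission to invoke the function is a separate step omitted here.

```python
import boto3

logs = boto3.client("logs", region_name="us-east-1")

# Stream events containing ERROR from a log group to a Lambda function
# for custom processing
logs.put_subscription_filter(
    logGroupName="/my-app/application",                      # assumed log group
    filterName="error-events-to-lambda",
    filterPattern="ERROR",                                   # only deliver events containing ERROR
    destinationArn="arn:aws:lambda:us-east-1:123456789012:function:process-errors",
)
```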

Searching and Filtering

  • CloudWatch Logs allows searching and filtering the log data by creating one or more metric filters.
  • Metric filters define the terms and patterns to look for in log data as it is sent to CloudWatch Logs.
  • CloudWatch Logs uses these metric filters to turn log data into numerical CloudWatch metrics that can be graphed or have an alarm set on them (see the sketch below).
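
And a minimal boto3 sketch of a metric filter plus alarm (log group, metric names, threshold, and SNS topic are assumptions), matching the error-tracking use case described earlier:

```python
import boto3

logs = boto3.client("logs", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Turn matching log events into a custom CloudWatch metric
logs.put_metric_filter(
    logGroupName="/my-app/application",           # assumed log group
    filterName="server-errors",
    filterPattern='"Internal Server Error"',      # literal term to look for in each log event
    metricTransformations=[{
        "metricName": "ServerErrorCount",
        "metricNamespace": "MyApp",
        "metricValue": "1",                        # emit 1 per matching event
    }],
)

# Alarm when more than 5 errors occur within a minute and notify via SNS
cloudwatch.put_metric_alarm(
    AlarmName="my-app-server-errors",
    Namespace="MyApp",
    MetricName="ServerErrorCount",
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=5,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:oncall-notifications"],  # assumed SNS topic
)
```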

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Once we have our logs in CloudWatch, we can do a number of things such as: Choose 3. Choose the 3 correct answers:[CDOP]
    1. Send the log data to AWS Lambda for custom processing or to load into other systems
    2. Stream the log data to Amazon Kinesis
    3. Stream the log data into Amazon Elasticsearch in near real-time with CloudWatch Logs subscriptions.
    4. Record API calls for your AWS account and delivers log files containing API calls to your Amazon S3 bucket
  2. You have decided to set the threshold for errors on your application to a certain number and once that threshold is reached you need to alert the Senior DevOps engineer. What is the best way to do this? Choose 3. Choose the 3 correct answers: [CDOP]
    1. Set the threshold your application can tolerate in a CloudWatch Logs group and link a CloudWatch alarm on that threshold.
    2. Use CloudWatch Logs agent to send log data from the app to CloudWatch Logs from Amazon EC2 instances
    3. Pipe data from EC2 to the application logs using AWS Data Pipeline and CloudWatch
    4. Once a CloudWatch alarm is triggered, use SNS to notify the Senior DevOps Engineer.
  3. You are hired as the new head of operations for a SaaS company. Your CTO has asked you to make debugging any part of your entire operation simpler and as fast as possible. She complains that she has no idea what is going on in the complex, service-oriented architecture, because the developers just log to disk, and it’s very hard to find errors in logs on so many services. How can you best meet this requirement and satisfy your CTO? [CDOP]
    1. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Lambda. Use the Lambda to analyze logs as soon as they come in and flag issues. (is not fast in search and introduces delay)
    2. Begin using CloudWatch Logs on every service. Stream all Log Groups into S3 objects. Use AWS EMR cluster jobs to perform adhoc MapReduce analysis and write new queries when needed. (is not fast in search and introduces delay)
    3. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Kinesis. Use Apache Spark on AWS EMR to perform at-scale stream processing queries on the log chunks and flag issues. (is not fast in search and introduces delay)
    4. Begin using CloudWatch Logs on every service. Stream all Log Groups into an AWS Elasticsearch Service Domain running Kibana 4 and perform log analysis on a search cluster. (ELK – Elasticsearch, Kibana stack is designed specifically for real-time, ad-hoc log analysis and aggregation)
  4. You use Amazon CloudWatch as your primary monitoring system for your web application. After a recent software deployment, your users are getting Intermittent 500 Internal Server Errors when using the web application. You want to create a CloudWatch alarm, and notify an on-call engineer when these occur. How can you accomplish this using AWS services? (Choose three.) [CDOP]
    1. Deploy your web application as an AWS Elastic Beanstalk application. Use the default Elastic Beanstalk CloudWatch metrics to capture 500 Internal Server Errors. Set a CloudWatch alarm on that metric.
    2. Install a CloudWatch Logs Agent on your servers to stream web application logs to CloudWatch.
    3. Use Amazon Simple Email Service to notify an on-call engineer when a CloudWatch alarm is triggered.
    4. Create a CloudWatch Logs group and define metric filters that capture 500 Internal Server Errors. Set a CloudWatch alarm on that metric.
    5. Use Amazon Simple Notification Service to notify an on-call engineer when a CloudWatch alarm is triggered.
    6. Use AWS Data Pipeline to stream web application logs from your servers to CloudWatch.

References

AWS_CloudWatch_Logs_User_Guide

AWS OpsWorks Deployment Strategies – Certification

AWS OpsWorks Deployment Strategies

NOTE: Advanced Topic required for DevOps Professional Exam Only

All at Once Deployment

  • OpsWorks Stacks does not automatically deploy updated code to online instances; this needs to be done manually
  • The Deploy command (for apps) or the Update Custom Cookbooks command (for cookbooks) helps deploy the update to every instance concurrently
  • This approach is simple and fast, but leads to downtime in case of an error
  • OpsWorks allows rollback to restore a previously deployed app version
  • By default, AWS OpsWorks Stacks stores the five most recent deployments, which allows you to roll back up to four versions

Rolling Deployment

  • A rolling deployment updates an application on a stack’s online application server instances in multiple phases.
  • With each phase, a subset of the online instances can be updated and verified to be successful before starting the next phase.
  • In case of any issues, the instances running the old app version can continue to handle incoming traffic until the issues are resolved.
  • Steps to perform Rolling deployment
    • Deploy the app on a single application server instance.
    • The instance can be deregistered from the load balancer, to prevent it from serving traffic
    • Verify the app is working fine
    • Deploy the update on the remainder of instances

Blue Green Deployment

  • Blue Green deployment can be achieved using separate stack for each phase of the application’s lifecycle.
  • Different stacks are sometimes referred to as environments like development, staging, production etc.
    • Blue environment is the production stack, which hosts the current application.
    • Green environment is the staging stack, which hosts the updated application.
  • Development and testing can be performed on stacks, which are not publicly accessible, and when ready the traffic can be switched.
  • Steps for a Blue Green deployment with OpsWorks Stacks, in conjunction with Route 53 and a pool of ELB load balancers
    • Attach an unused ELB from the pool to the green stack’s application server layer
    • After all of the green stack’s instances have passed the ELB health check, the weights in Route 53 can be changed to route traffic gradually from the Blue to the Green stack (see the sketch after these steps)
    • Once the Green stack works fine and is ready to handle all traffic
    • Detach the load balancer from the old blue stack’s application server layer and return it to the pool
    • The Blue stack can be retained for some time, so that if any issues arise the update can be rolled back by reversing the procedure to direct incoming traffic back to the old blue stack
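
Here is a minimal boto3 sketch of the weighted Route 53 shift (hosted zone ID, record name, ELB DNS names, and weights are illustrative assumptions); re-running it with adjusted weights gradually moves traffic from the blue to the green load balancer.

```python
import boto3

route53 = boto3.client("route53")

def set_weight(set_identifier, elb_dns_name, elb_hosted_zone_id, weight):
    """Upsert a weighted alias record pointing at one stack's ELB (illustrative helper)."""
    route53.change_resource_record_sets(
        HostedZoneId="Z1EXAMPLE",                      # assumed hosted zone for example.com
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": set_identifier,   # "blue" or "green"
                    "Weight": weight,
                    "AliasTarget": {
                        "HostedZoneId": elb_hosted_zone_id,  # hosted zone ID of the ELB itself
                        "DNSName": elb_dns_name,
                        "EvaluateTargetHealth": True,
                    },
                },
            }],
        },
    )

# Start by sending ~10% of the traffic to the green stack and 90% to blue,
# then keep shifting the weights until green takes 100%.
set_weight("blue", "blue-elb-1234.us-east-1.elb.amazonaws.com", "ZEXAMPLEELB", 90)
set_weight("green", "green-elb-5678.us-east-1.elb.amazonaws.com", "ZEXAMPLEELB", 10)
```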

OpsWorks Blue Green Deployment

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Your company runs a complex customer relations management system that consists of around 10 different software components all backed by the same Amazon Relational Database (RDS) database. You adopted AWS OpsWorks to simplify management and deployment of that application and created an AWS OpsWorks stack with layers for each of the individual components. An internal security policy requires that all instances should run on the latest Amazon Linux AMI and that instances must be replaced within one month after the latest Amazon Linux AMI has been released. AMI replacements should be done without incurring application downtime or capacity problems. You decide to write a script to be run as soon as a new Amazon Linux AMI is released. Which solutions support the security policy and meet your requirements? Choose 2 answers
    1. Assign a custom recipe to each layer, which replaces the underlying AMI. Use AWS OpsWorks life-cycle events to incrementally execute this custom recipe and update the instances with the new AMI. (AMI cannot be updated using recipes)
    2. Create a new stack and layers with identical configuration, add instances with the latest Amazon Linux AMI specified as a custom AMI to the new layer, switch DNS to the new stack, and tear down the old stack. (Blue-Green Deployment)
    3. Identify all Amazon Elastic Compute Cloud (EC2) instances of your AWS OpsWorks stack, stop each instance, replace the AMI ID property with the ID of the latest Amazon Linux AMI ID, and restart the instance. To avoid downtime, make sure not more than one instance is stopped at the same time. (Instances cannot be updated by updating the AMI id and needs to be launched anew)
    4. Specify the latest Amazon Linux AMI as a custom AMI at the stack level, terminate instances of the stack and let AWS OpsWorks launch new instances with the new AMI. (Would result in downtime)
    5. Add new instances with the latest Amazon Linux AMI specified as a custom AMI to all AWS OpsWorks layers of your stack, and terminate the old ones. (Disposable Rolling deployment)
  2. A company has developed a Ruby on Rails content management platform. Currently, OpsWorks with several stacks for dev, staging, and production is being used to deploy and manage the application. Now the company wants to start using Python instead of Ruby. How should the company manage the new deployment?
    1. Update the existing stack with Python application code and deploy the application using the deploy life-cycle action to implement the application code.
    2. Create a new stack that contains a new layer with the Python code. To cut over to the new stack the company should consider using Blue/Green deployment
    3. Create a new stack that contains the Python application code and manage separate deployments of the application via the secondary stack using the deploy lifecycle action to implement the application code.
    4. Create a new stack that contains the Python application code and manages separate deployments of the application via the secondary stack

References

OpsWorks Deployment Best Practices

AWS Certified Cloud Practitioner (CLF-C01) Exam Learning Path

AWS Certified Cloud Practitioner Exam (CLF-C01) Learning Path

  • AWS Certified Cloud Practitioner Exam is mainly a high-level introduction to Cloud Computing, AWS Cloud, its advantages, its services, pricing, and support plans.
  • AWS Certified Cloud Practitioner exam is a good exam to start your AWS journey with, and it also helps non-technical professionals understand what AWS has to offer.
  • AWS Certified Cloud Practitioner Exam has 65 questions to be answered in 100 minutes.
  • AWS Certified Cloud Practitioner Exam is the only exam currently that can be taken online, without having to visit a test center.
  • Be sure to have a good internet connection, comfortable seating, id cards ready and you should be good to go.

AWS Certified Cloud Practitioner exam basically validates the following

  • Define what the AWS Cloud is and the basic global infrastructure
  • Describe basic AWS Cloud architectural principles
  • Describe the AWS Cloud value proposition
  • Describe key services on the AWS platform and their common use cases (for example, compute and analytics)
  • Describe basic security and compliance aspects of the AWS platform and the shared security model
  • Define the billing, account management, and pricing models;
  • Identify sources of documentation or technical assistance (for example, white papers or support tickets); and
  • Describe basic/core characteristics of deploying and operating in the AWS Cloud.

Refer to the AWS Certified Cloud Practitioner Exam guide
AWS Certified Cloud Practitioner (CLF-C01) Exam Domains

AWS Certified Cloud Practitioner Exam Resources

AWS Cloud Computing Whitepapers

AWS Certified Cloud Practitioner Exam Contents

Domain 1: Cloud Concepts

  • 1.1 Define the AWS Cloud and its value proposition
    • Agility – Speed, Experimentation, Innovation
    • Elasticity – Scale on demand, Eliminate wasted capacity
    • Availability – Spread across multiple zones
    • Flexibility – Broad set of products, Low to no cost to entry
    • Security – Compliant with many compliance certifications, Shared Responsibility Model
  • 1.2 Identify aspects of AWS Cloud economics
    • Advantages of Cloud Computing
      • Trade capital expense for variable expense
      • Benefit from massive economies of scale
      • Stop guessing about capacity
      • Increase speed and agility
      • Stop spending money running and maintaining data centers
      • Go global in minutes
    • AWS Well-Architected Framework
      • Pillars include operational excellence, security, reliability, performance efficiency, and cost optimization.
  • 1.3 List the different cloud architecture design principles

Domain 2: Security

  • 2.1 Define the AWS Shared Responsibility model
    • includes having a clear understanding of what AWS and Customer responsibilities are 
  • 2.2 Define AWS Cloud security and compliance concepts
  • 2.3 Identify AWS access management capabilities
    • includes services like IAM
  • 2.4 Identify resources for security support

Domain 3: Technology

  • 3.1 Define methods of deploying and operating in the AWS Cloud
  • 3.2 Define the AWS global infrastructure
    • includes AWS concepts of regions, AZs and edge locations
  • 3.3 Identify the core AWS services
    • Includes the AWS Services Overview and focuses on high-level knowledge of the following (though not in much depth)
      • Compute Services
      • Storage Services –
        • S3 – object storage, static website hosting. Know S3 subresources esp. versioning, server access logging
        • EBS
        • EFS – shared file storage that can be shared between on-premises and AWS resources
        • Glacier – archival long term storage
      • Security, Identity, and Compliance –
        • IAM – 
        • Organizations,
        • WAF
        • AWS Inspector – automated application security assessment 
        • AWS GuardDuty – threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect the AWS accounts and workloads
        • AWS Artifact – On-demand access to AWS’ compliance reports
      • Databases –
      • Migration – Database Migration Service
      • Networking and Content Delivery
        • VPC
        • CloudFront – caching and helps improve performance and reduce latency
        • Route 53 – DNS and domain registration. Also understand the different routing policies, esp. latency-based routing for global traffic
        • VPN & Direct Connect – provides connectivity between on-premises and AWS Cloud
        • ELB – helps distribute load across multiple resources
        • AWS Global Accelerator – Improve application availability and performance
      • Management Tools
      • Messaging – SQS, SNS
  • 3.4 Identify resources for technology support
    • includes AWS Support Models and the key features and benefits the model provides to the customers
      • know only Enterprise support plan provides
        • dedicated TAM (Technical Account Manager)
        • Well-Architected Review delivered by AWS Solution Architects
        • Account Assistance by Assigned Support Concierge
        • SLA of < 15 mins for business critical events
      • know only Business and above support plan provides
        • 24×7 access to Cloud Support Engineers via email, chat & phone
        • Full Access to Trusted Advisor checks

Domain 4: Billing and Pricing

  • 4.1 Compare and contrast the various pricing models for AWS
    • includes AWS Pricing
      • know EC2 pricing models esp. Spot and Reserved
      • know Lambda pricing, which is based on the number of invocations and execution time (a worked example follows this list)
  • 4.2 Recognize the various account structures in relation to AWS billing and pricing
  • 4.3 Identify resources available for billing support
    • includes tools like TCO Calculator which helps compare cost of running applications in an on-premises or colocation environment to AWS
    • includes Cost Explorer, which allows you to view and analyze costs (hint – spend forecasting)
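
As a worked illustration of how invocation count and execution time combine in Lambda pricing (the rates below are placeholder figures, not current AWS prices):

```python
# Illustrative Lambda pricing math -- the rates are example placeholders;
# check the current AWS price list for real numbers.
PRICE_PER_MILLION_REQUESTS = 0.20       # assumed request price (USD per 1M invocations)
PRICE_PER_GB_SECOND = 0.0000166667      # assumed duration price (USD per GB-second)

invocations_per_month = 5_000_000
memory_gb = 0.512                        # 512 MB configured memory
avg_duration_seconds = 0.3               # 300 ms average run time

request_cost = invocations_per_month / 1_000_000 * PRICE_PER_MILLION_REQUESTS
duration_cost = invocations_per_month * memory_gb * avg_duration_seconds * PRICE_PER_GB_SECOND

print(round(request_cost, 2))    # 1.0
print(round(duration_cost, 2))   # ~12.8
```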

AWS Services Overview – Whitepaper – Certification

AWS Services Overview

AWS consists of many cloud services that can be used in combinations tailored to meet business or organizational needs. This section introduces the major AWS services by category.


NOTE – This post provides a brief overview of AWS services. It is a good introduction for starting any of the certifications; however, it is most relevant and important for the AWS Cloud Practitioner Certification Exam.


Common Features

  • Almost all the services allow access control through AWS Identity and Access Management – IAM
  • Services managed by AWS are all made Scalable and Highly Available, without any changes needed from the user

AWS Access

AWS allows accessing its services through unified tools using

  • AWS Management Console – a simple and intuitive user interface
  • AWS Command Line Interface (CLI) – programmatic access through scripts
  • AWS Software Development Kits (SDKs) – programmatic access through Application Program Interfaces (APIs) tailored for a programming language (Java, .NET, Node.js, PHP, Python, Ruby, Go, C++, AWS Mobile SDK) or platform (Android, Browser, iOS)

Security, Identity, and Compliance

Amazon Cloud Directory

  • enables building flexible, cloud-native directories for organizing hierarchies of data along multiple dimensions, whereas traditional directory solutions limit you to a single hierarchy
  • helps create directories for a variety of use cases, such as organizational charts, course catalogs, and device registries.

AWS Identity and Access Management

  • enables you to securely control access to AWS services and resources for the users.
  • allows creation of AWS users, groups, and roles, and the use of permissions to allow and deny their access to AWS resources
  • helps manage IAM users and their access with individual security credentials like access keys, passwords, and multi-factor authentication devices, or request temporary security credentials to provide to users
  • helps with role creation & managing permissions to control which operations can be performed by the entity, or AWS service, that assumes the role
  • enables identity federation to allow existing identities (users, groups, and roles) in the enterprise to access the AWS Management Console, call AWS APIs, and access resources, without the need to create an IAM user for each identity (see the sketch below)
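
As a small illustration of temporary credentials, here is a minimal boto3 sketch (role ARN and session name are assumptions) of assuming a role with STS and using the returned credentials; this is the same mechanism that instance profiles and federation build on:

```python
import boto3

sts = boto3.client("sts")

# Request temporary security credentials by assuming a role
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ReadOnlyAnalyst",  # assumed role ARN
    RoleSessionName="analyst-session",
    DurationSeconds=3600,                                       # credentials valid for 1 hour
)
creds = resp["Credentials"]

# Use the temporary credentials for subsequent calls
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(s3.list_buckets()["Buckets"])
```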

Amazon Inspector

  • is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS.
  • automatically assesses applications for vulnerabilities or deviations from best practices
  • produces a detailed list of security findings prioritized by level of severity.

AWS Certificate Manager

  • helps provision, manage, and deploy Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services like ELB
  • removes the time-consuming manual process of purchasing, uploading, and renewing SSL/TLS certificates.

AWS CloudHSM

  • helps meet corporate, contractual, and regulatory compliance requirements for data security by using dedicated Hardware Security Module (HSM) appliances within the AWS Cloud.
  • allows protection of encryption keys within HSMs, designed and validated to government standards for secure key management.
  • helps comply with strict key management requirements without sacrificing application performance.

AWS Directory Service

  • provides Microsoft Active Directory (Enterprise Edition), also known as AWS Microsoft AD, that enables directory-aware workloads and AWS resources to use managed Active Directory in the AWS Cloud.

AWS Key Management Service

  • is a managed service that makes it easy to create and control the encryption keys used to encrypt your data.
  • uses HSMs to protect the security of your keys.

AWS Organizations

  • allows creation of groups of AWS accounts, to more easily manage security and automation settings collectively
  • helps centrally manage multiple accounts to help scale.
  • helps to control which AWS services are available to individual accounts, automate new account creation, and simplify billing.

AWS Shield

  • is a managed Distributed Denial of Service (DDoS) protection service that safeguards web applications running on AWS.
  • provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection.
  • provides two tiers of AWS Shield: Standard and Advanced.

AWS WAF

  • is a web application firewall that helps protect web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources.
  • gives complete control over which traffic to allow or block to web application by defining customizable web security rules.

AWS Compute Services

Amazon Elastic Compute Cloud (EC2)

  • provides secure, resizable compute capacity
  • provide complete control of the computing resources (root access, ability to start, stop, terminate instances etc.)
  • reduces the time required to obtain and boot new instances to minutes
  • allows quick scaling of capacity, both up and down, as the computing requirements change
  • provides developers and sysadmins tools to build failure resilient applications and isolate themselves from common failure scenarios.
  • Benefits
    • Elastic Web-Scale Computing
      • enables scaling to increase or decrease capacity within minutes, not hours or days.
    • Flexible Cloud Hosting Services
      • flexibility to choose from multiple instance types, operating systems, and software packages.
      • selection of memory configuration, CPU, instance storage, and boot partition size
    • Reliable
      • offers a highly reliable environment where replacement instances can be rapidly and predictably commissioned.
      • runs within AWS’s proven network infrastructure and data centers.
      • EC2 Service Level Agreement (SLA) commitment is 99.95% availability for each Region.
    • Secure
      • works in conjunction with VPC to provide security and robust networking functionality for your compute resources.
      • allows control of IP address, exposure to Internet (using subnets), inbound and outbound access (using Security groups and NACLs)
      • existing IT infrastructure can be connected to the resources in the VPC using industry-standard encrypted IPsec virtual private network (VPN) connections
    • Inexpensive – pay only for the capacity actually used
  • EC2 Purchasing Options and Types
    • On-Demand Instances
      • pay for compute capacity by the hour with no long-term commitments
      • enables you to increase or decrease compute capacity depending on the demands and pay only the specified hourly rate for the instances used
      • frees from the costs and complexities of planning, purchasing, and maintaining hardware and transforms what are commonly large fixed costs into much smaller variable costs.
      • also helps remove the need to buy “safety net” capacity to handle periodic traffic spikes.
    • Reserved Instances
      • provides significant discount (up to 75%) compared to On-Demand instance pricing.
      • provides flexibility to change families, operating system types, and tenancies with Convertible Reserved Instances.
    • Spot Instances
      • allow you to bid on spare EC2 computing capacity.
      • are often available at a discount compared to On-Demand pricing, helping reduce the application cost and grow its compute capacity and throughput for the same budget
    • Dedicated Instances – that run on hardware dedicated to a single customer for additional isolation.
    • Dedicated Hosts
      • are physical servers with EC2 instance capacity fully dedicated to your use.
      • can help you address compliance requirements and reduce costs by allowing you to use your existing server-bound software licenses.

Amazon EC2 Container Service

  • is a highly scalable, high-performance container management service that supports Docker containers.
  • allows running applications on a managed cluster of EC2 instances
  • eliminates the need to install, operate, and scale cluster management infrastructure.
  • can be used to schedule the placement of containers across the cluster based on the resource needs and availability requirements.
  • custom scheduler or third-party schedulers can be integrated to meet business or application-specific requirements.

Amazon EC2 Container Registry

  • is a fully-managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images.
  • is integrated with Amazon EC2 Container Service (ECS), simplifying development to production workflow.
  • eliminates the need to operate container repositories or worry about scaling the underlying infrastructure.
  • hosts images in a highly available and scalable architecture
  • pay only for the amount of data stored and data transferred to the Internet.

Amazon Lightsail

  • is designed to be the easiest way to launch and manage a virtual private server with AWS.
  • plans include everything needed to jumpstart a project – a virtual machine, SSD-based storage, data transfer, DNS management, and a static IP address – for a low, predictable price.

AWS Batch

  • enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS.
  • dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory-optimized instances) based on the volume and specific resource requirements of the batch jobs submitted.
  • plans, schedules, and executes the batch computing workloads across the full range of AWS compute services and features

AWS Elastic Beanstalk

  • is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and Internet Information Services (IIS)
  • automatically handles the deployment, from capacity provisioning, load balancing, and auto scaling to application health monitoring.
  • provides full control over the AWS resources with access to the underlying resources at any time.

AWS Lambda

  • enables running code with zero administration, without provisioning or managing servers, and scales automatically for high availability
  • pay only for the compute time consumed – there is no charge when the code is not running
  • can be set up to be automatically triggered from other AWS services, or called directly from any web or mobile app.
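
A Lambda function is just a handler that receives an event and a context object. The sketch below is a minimal Python handler, assuming an API Gateway proxy integration as the trigger (the event shape differs per event source):

```python
import json

# Entry point configured as the Lambda handler (e.g. "app.lambda_handler").
# For an API Gateway proxy integration the function must return a
# statusCode/headers/body structure.
def lambda_handler(event, context):
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }
```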

Auto Scaling

  • helps maintain application availability
  • allows scaling EC2 capacity up or down automatically according to defined conditions or demand spikes to reduce cost
  • helps ensure that the desired number of EC2 instances are always running
  • well suited both to applications that have stable demand patterns and applications that experience hourly, daily, or weekly variability in usage.

Storage

Simple Storage Service

  • is object storage with a simple web service interface to store and retrieve any amount of data from anywhere on the web.
  • S3 Features
    • Durable
      • designed for durability of 99.999999999% of objects
      • data is redundantly stored across multiple facilities and multiple devices in each facility.
    • Available – designed for up to 99.99% availability (standard) of objects over a given year and is backed by the S3 Service Level Agreement
    • Scalable – can help store virtually unlimited data
    • Secure
      • supports data in motion over SSL and data at rest encryption
      • bucket policies and IAM can help manage object permissions and control access to the data
    • Low Cost
      • provides storage at a very low cost.
      • using lifecycle policies, the data can be automatically tiered into lower cost, longer-term cloud storage classes like S3 Standard – Infrequent Access and Glacier for archiving.
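
As an illustration of lifecycle-based tiering, the following boto3 sketch (with a hypothetical bucket name and prefix) transitions objects to Standard-IA and then to Glacier as they age:

```python
import boto3

s3 = boto3.client("s3")

# Tier objects under the "logs/" prefix to cheaper storage classes over time:
# Standard -> Standard-IA after 30 days -> Glacier after 365 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 365, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```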

Elastic Block Store (EBS)

  • provides persistent block storage volumes for use with EC2 instances
  • offers the consistent and low-latency performance needed to run workloads.
  • allows scaling up or down within minutes – all while paying a low price for only what is provisioned
  • EBS Features
    • High Performance Volumes – Choose between SSD backed or HDD backed volumes to deliver the performance needed
    • Availability
      • is designed for 99.999% availability
      • automatically replicates within its Availability Zone to protect from component failure, offering high availability and durability.
    • Encryption – provides seamless support for data-at-rest and data-in-transit between EC2 instances and EBS volumes.
    • Snapshots – protect data by creating point-in-time snapshots of EBS volumes, which are backed up to S3 for long-term durability.

Elastic File System (EFS)

  • provides simple, scalable file storage for use with EC2 instances
  • storage capacity is elastic, growing and shrinking automatically as files are added and removed
  • provides a standard file system interface and file system access semantics, when mounted on EC2 instances
  • works in shared mode, where multiple EC2 instances can access an EFS file system at the same time, allowing EFS to provide a common data source for workloads and applications running on more than one EC2 instance.
  • can be mounted on on-premises data center servers when connected to the VPC with AWS Direct Connect.
  • can be mounted on on-premises servers to migrate data sets to EFS, enable cloud bursting scenarios, or backup on-premises data to EFS.
  • is designed for high availability and durability, and provides performance for a broad spectrum of workloads and applications, including big data and analytics, media processing workflows, content management, web serving, and home directories.

Glacier

  • provides secure, durable, and extremely low-cost storage service for data archiving and long-term backup
  • To keep costs low yet suitable for varying retrieval needs, Glacier provides three options for access to archives, from a few minutes to several hours.

AWS Storage Gateway

  • seamlessly enables hybrid storage between on-premises storage environments and the AWS Cloud
  • combines a multi-protocol storage appliance with highly efficient network connectivity to AWS cloud storage services, delivering local performance with virtually unlimited scale.
  • use it in remote offices and data centers for hybrid cloud workloads involving migration, bursting, and storage tiering

Databases

Aurora

  • is a MySQL and PostgreSQL compatible relational database engine
  • provides the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases.
  • Benefits
    • Highly Secure
      • provides multiple levels of security, including
        • network isolation using VPC
        • encryption at rest using keys created and controlled through AWS Key Management Service (KMS), and
        • encryption of data in transit using SSL.
      • with an encrypted Aurora instance, automated backups, snapshots, and replicas are also encrypted
    • Highly Scalable – automatically grows storage as needed
    • High Availability and Durability
      • designed to offer greater than 99.99% availability
      • recovery from physical storage failures is transparent, and instance failover typically requires less than 30 seconds
      • is fault-tolerant and self-healing. Six copies of the data are replicated across three AZs and continuously backed up to S3.
      • automatically and continuously monitors and backs up your database to S3, enabling granular point-in-time recovery.
    • Fully Managed – is a fully managed database service, and database management tasks such as hardware provisioning, software patching, setup, configuration, monitoring, or backups is taken care of

Relational Database Service (RDS)

  • makes it easy to set up, operate, and scale a relational database
  • provides cost-efficient and resizable capacity while managing time-consuming database administration tasks
  • supports various database engines, including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle, and Microsoft SQL Server
  • Benefits
    • Fast and Easy to Administer – No need for infrastructure provisioning, and no need for installing and maintaining database software.
    • Highly Scalable
      • allows quick and easy scaling of database’s compute and storage resources, often with no downtime.
      • allows offloading read traffic from the primary database using Read Replicas, for supported RDS engine types
    • Available and Durable
      • runs on the same highly reliable infrastructure used by other AWS services
      • allows Multi-AZ DB instance, where RDS synchronously replicates the data to a standby instance in a different Availability Zone (AZ).
      • enhances reliability for critical production databases, by enabling automated backups, database snapshots, and automatic host replacement.
    • Secure
      • provides multiple levels of security, including
        • network isolation using VPC
        • connect to on-premises existing IT infrastructure through an industry-standard encrypted IPsec VPN
        • encryption at rest using keys created and controlled through AWS Key Management Service (KMS), and
        • encryption of data in transit using SSL.
      • with an encrypted instance, automated backups, snapshots, and replicas are also encrypted
    • Inexpensive – pay very low rates and only for the consumed resources, while taking advantage of on-demand and reserved instance types

DynamoDB

  • fully managed, fast and flexible NoSQL database service for applications that need consistent, single-digit millisecond latency at any scale.
  • supports both document and key-value data models.
  • flexible data model and reliable performance make it a great fit for mobile, web, gaming, ad-tech, Internet of Things (IoT), and other applications
  • Benefits
    • Fast, Consistent Performance
      • designed to deliver consistent, fast performance at any scale
      • uses automatic partitioning and SSD technologies to meet throughput requirements and deliver low latencies at any scale.
    • Highly Scalable – it manages all the scaling to achieve the specified throughput capacity requirements
    • Event-Driven Programming – integrates with AWS Lambda to provide Triggers that enable architecting applications that automatically react to data changes.
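
For the developer exam it helps to have written basic item operations. The following is a minimal boto3 sketch, assuming a hypothetical table with partition key PlayerId, showing a write and a strongly consistent read:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("GameScores")  # hypothetical table with partition key "PlayerId"

# Write an item; DynamoDB is schemaless apart from the key attributes.
table.put_item(Item={"PlayerId": "player-1", "Score": 42, "Level": "easy"})

# Reads are eventually consistent by default; ConsistentRead=True requests a
# strongly consistent read (at twice the read capacity cost).
response = table.get_item(Key={"PlayerId": "player-1"}, ConsistentRead=True)
print(response.get("Item"))
```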

ElastiCache

  • is a web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud.
  • helps improve the performance of web applications by caching results and allowing to retrieve information from fast, managed, in-memory caches, instead of relying entirely on slower disk-based databases.
  • supports two open-source in-memory caching engines: Redis and Memcached

Migration

AWS Application Discovery Service

  • helps systems integrators quickly and reliably plan application migration projects by automatically identifying applications running in on-premises data centers, their associated dependencies, and performance profiles
  • automatically collects configuration and usage data from servers, storage, and networking equipment to develop a list of applications, how they perform, and how they are interdependent
  • information is retained in encrypted format in an AWS Application Discovery Service database, which you can export as a CSV or XML file into your preferred visualization tool or cloud migration solution to help reduce the complexity and time in planning your cloud migration.

AWS Database Migration Service

  • helps migrate databases to AWS easily and securely
  • source database remains fully operational during the migration, minimizing downtime to applications that rely on the database.
  • supports homogenous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora or Microsoft SQL Server to MySQL.
  • allows streaming of data to Redshift from any of the supported sources including Aurora, PostgreSQL, MySQL, MariaDB, Oracle, SAP ASE, and SQL Server, enabling consolidation and easy analysis of data in the petabyte-scale data warehouse
  • can also be used for continuous data replication with high availability.

AWS Server Migration Service

  • is an agentless service which makes it easier and faster to migrate thousands of on-premises workloads to AWS

Snowball

  • is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of AWS.
  • addresses common challenges with large-scale data transfers including high network costs, long transfer times, and security concerns.
  • uses multiple layers of security designed to protect the data including tamper resistant enclosures, 256-bit encryption, and an industry-standard Trusted Platform Module (TPM) designed to ensure both security and full chain of custody of your data.
  • performs a software erasure of the Snowball appliance, once the data transfer job has been processed

Snowball Edge

  • is a 100 TB data transfer device with on-board storage and compute capabilities.
  • can be used to move large amounts of data into and out of AWS, as a temporary storage tier for large local datasets, or to support local workloads in remote or offline locations.
  • multiple devices can be clustered together to form a local storage tier and process the data on-premises, helping ensure the applications continue to run even when they are not able to access the cloud

Snowmobile

  • is an exabyte-scale data transfer service used to move extremely large amounts of data to AWS.
  • provides secure, fast, and cost effective transfer of data
  • data can be imported into S3 or Glacier once the data transfer is complete
  • uses multiple layers of security designed to protect the data including dedicated security personnel, GPS tracking, alarm monitoring, 24/7 video surveillance, and an optional escort security vehicle while in transit.
  • all data is encrypted with 256-bit encryption keys managed through KMS and designed to ensure both security and full chain of custody of the data

Networking and Content Delivery

Virtual Private Cloud (VPC)

  • helps provision a logically isolated section of the AWS Cloud where AWS resources can be launched in a virtual network that you define
  • provides complete control over the virtual networking environment, including selection of IP address range, creation of subnets (public and private), and configuration of route tables and network gateways.
  • allows use of both IPv4 and IPv6 for secure and easy access to resources and applications
  • allows multiple layers of security, including security groups and network access control lists, to help control access to resources
  • allows creation of a hardware virtual private network (VPN) connection between the corporate data center and VPC and leverage the AWS Cloud as an extension of corporate data center.

CloudFront

  • is a global content delivery network (CDN) service that accelerates delivery of websites, APIs, video content, or other web assets.
  • can be used to deliver entire website, including dynamic, static, streaming, and interactive content using a global network of edge locations.
  • allows requests for the content to be automatically routed to the nearest edge location, so content is delivered with the best possible performance.
  • is optimized to work with other services in AWS, such as S3, EC2, ELB, and Route 53 as well as with any non-AWS origin server that stores the original, definitive versions of your files.

Route 53

  • is a highly available and scalable Domain Name System (DNS) web service
  • effectively connects user requests to infrastructure running in AWS – such as EC2 instances, ELB, or S3 buckets – and can also be used to route users to infrastructure outside of AWS.
  • helps configure DNS health checks to route traffic to healthy endpoints or to independently monitor the health of your application and its endpoints.
  • allows traffic management globally through a variety of routing types, including latency-based routing, Geo DNS, and weighted round robin – all of which can be combined with DNS Failover in order to enable a variety of low-latency, fault-tolerant architectures.
  • is fully compliant with IPv6 as well
  • offers Domain Name Registration service

Direct Connect

  • makes it easy to establish a dedicated network connection from on-premises to AWS
  • helps establish private connectivity between AWS and a data center, office, or co-location environment
  • helps increase bandwidth throughput, reduce network costs, and provide a more consistent network experience than Internet-based connections

Elastic Load Balancing (ELB)

  • automatically distributes incoming application traffic across multiple EC2 instances
  • enables achieving greater levels of fault tolerance by seamlessly providing the required amount of load balancing capacity needed to distribute application traffic.
  • offers two types of load balancers that both feature high availability, automatic scaling, and robust security.
    • Classic Load Balancer
      • routes traffic based on either application or network level information
      • ideal for simple load balancing of traffic across multiple EC2 instances
    • Application Load Balancer
      • routes traffic based on advanced application-level information that includes the content of the request
      • ideal for applications needing advanced routing capabilities, microservices, and container-based architectures.
      • offers the ability to route traffic to multiple services or load balance across multiple ports on the same EC2 instance.

Management Tools

AWS CloudWatch

  • is a monitoring and logging service for AWS Cloud resources and the applications running on AWS.
  • can be used to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in the AWS resources.
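
A hedged boto3 sketch of the two common tasks, publishing a custom metric and alarming on it (the namespace, metric, and SNS topic ARN are hypothetical):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish a custom application metric.
cloudwatch.put_metric_data(
    Namespace="MyApp",  # hypothetical namespace
    MetricData=[{"MetricName": "FailedLogins", "Value": 1, "Unit": "Count"}],
)

# Alarm when the metric breaches a threshold; the alarm action could be an
# SNS topic or an Auto Scaling policy ARN.
cloudwatch.put_metric_alarm(
    AlarmName="too-many-failed-logins",
    Namespace="MyApp",
    MetricName="FailedLogins",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=10,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # hypothetical topic
)
```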

AWS CloudFormation

  • allows developers and systems administrators to implement “Infrastructure as Code”
  • provides an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion
  • handles the order for provisioning AWS services or the subtleties of making those dependencies work.
  • allows applying version control to the AWS infrastructure the same way it's done with software
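
As a minimal illustration (not a production template), a stack can be created from an inline template body using boto3 and waited on until provisioning completes:

```python
import boto3

# A tiny template kept inline for illustration; real templates usually live in
# version control alongside the application code.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  AppBucket:
    Type: AWS::S3::Bucket
"""

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="demo-stack", TemplateBody=TEMPLATE)

# Block until provisioning finishes (or fails and rolls back).
cfn.get_waiter("stack_create_complete").wait(StackName="demo-stack")
```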

AWS CloudTrail

  • helps record AWS API calls for the account and delivers log files
  • records API calls made using the AWS Management Console, AWS SDKs, command line tools, and higher-level AWS services (such as AWS CloudFormation)
  • recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service.
  • enables security analysis, resource change tracking, compliance auditing

AWS Config

  • provides an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance
  • provides Config Rules feature, that enables rules creation that automatically check the configuration of AWS resources
  • helps discover existing and deleted AWS resources, determine overall compliance against rules, and dive into configuration details of a resource at any point in time.
  • enables compliance auditing, security analysis, resource change tracking, and troubleshooting.

AWS OpsWorks

  • is a configuration management service that uses Chef, an automation platform that treats server configurations as code.
  • uses Chef to automate how servers are configured, deployed, and managed across the EC2 instances or on-premises compute environments.
  • has two offerings, OpsWorks for Chef Automate and OpsWorks Stacks

AWS Service Catalog

  • allows organizations to create and manage catalogs of IT services that are approved for use on AWS.
  • helps centrally manage commonly deployed IT services and helps to achieve consistent governance and meet compliance requirements, while enabling users to quickly deploy only approved IT services they need
  • can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures.

AWS Trusted Advisor

  • is an online resource to help reduce cost, increase performance, and improve security by optimizing the AWS environment.
  • provides real-time guidance to help provision the resources following AWS best practices.

AWS Personal Health Dashboard

  • provides alerts and remediation guidance when AWS is experiencing events that might affect you.
  • displays relevant and timely information to help you manage events in progress, and provides proactive notification to help you plan for scheduled activities.
  • alerts are automatically triggered by changes in the health of AWS resources, providing event visibility and guidance to help quickly diagnose and resolve issues.
  • provides a personalized view into the performance and availability of the AWS services underlying the AWS resources.
  • the Service Health Dashboard, in contrast, displays the general status of AWS services

AWS Managed Services

  • provides ongoing management of the AWS infrastructure so the focus can be on applications.
  • helps reduce the operational overhead and risk, by implementing best practices to maintain the infrastructure
  • automates common activities such as change requests, monitoring, patch management, security, and backup services, and provides full-lifecycle services to provision, run, and support the infrastructure.
  • improves agility, reduces cost, and unburdens from infrastructure operations

Developer Tools

AWS CodeCommit

  • is a fully managed source control service that makes it easy to host secure and highly scalable private Git repositories

AWS CodeBuild

  • is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy
  • eliminates the need to provision, manage, and scale your own build servers, as CodeBuild handles this.
  • scales continuously and processes multiple builds concurrently, so the builds are not left waiting in a queue.

AWS CodeDeploy

  • is a service that automates code deployments to any instance, including EC2 instances and instances running on premises.
  • helps to rapidly release new features, avoid downtime during application deployment, and handles the complexity of updating the applications.

AWS CodePipeline

  • is a continuous integration and continuous delivery service for fast and reliable application and infrastructure updates.
  • builds, tests, and deploys the code every time there is a code change, based on the defined release process models

AWS X-Ray

  • helps developers analyze and debug distributed applications in production or development, such as those built using a microservices architecture
  • provides an end-to-end view of requests as they travel through the application, and shows a map of its underlying components.
  • helps understand how the application and its underlying services are performing, to identify and troubleshoot the root cause of performance issues and errors.
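
A minimal instrumentation sketch, assuming the aws-xray-sdk Python library inside a Lambda function with active tracing enabled; patch_all() captures downstream AWS/HTTP calls and the decorator records a custom subsegment:

```python
import boto3
from aws_xray_sdk.core import xray_recorder, patch_all

# Patch supported libraries (boto3, requests, ...) so downstream AWS and HTTP
# calls appear as subsegments on the X-Ray service map.
patch_all()

dynamodb = boto3.resource("dynamodb")

@xray_recorder.capture("load_user")  # records a custom subsegment
def load_user(user_id):
    table = dynamodb.Table("Users")  # hypothetical table
    return table.get_item(Key={"UserId": user_id}).get("Item")

def lambda_handler(event, context):
    return load_user(event["userId"])
```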

Messaging

Amazon SQS

  • is a fast, reliable, scalable, fully managed message queuing service.
  • makes it simple and cost-effective to decouple the components of a cloud application.
  • includes standard queues with high throughput and at-least-once processing, and FIFO queues that provide first-in, first-out delivery and exactly-once processing (see the sketch below)
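
A minimal producer/consumer sketch using boto3 against a hypothetical FIFO queue; note the MessageGroupId and deduplication id that FIFO queues require, and the long polling on the consumer side:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo"  # hypothetical queue

# Producer: FIFO queues additionally require MessageGroupId (ordering scope)
# and a deduplication id (or content-based deduplication on the queue).
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"orderId": "o-1001"}',
    MessageGroupId="customer-42",
    MessageDeduplicationId="o-1001",
)

# Consumer: long polling (WaitTimeSeconds) reduces empty responses and cost.
messages = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
).get("Messages", [])

for msg in messages:
    body = msg["Body"]  # process the message, then delete it
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```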

Amazon SNS

  • fast, flexible, fully managed push notification service to send individual messages or to fan-out messages to large numbers of recipients.
  • makes it simple and cost effective to send push notifications to mobile device users, email recipients or even send messages to other distributed services
  • notifications can be sent to Apple, Google, Fire OS, and Windows devices, as well as to Android devices in China with Baidu Cloud Push.
  • can also deliver messages to SQS, Lambda functions, or HTTP endpoint
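
A minimal fan-out publish sketch using boto3 (the topic ARN and payload are hypothetical); every subscription on the topic receives the message:

```python
import boto3

sns = boto3.client("sns")

# Publishing once to a topic fans the message out to every subscription
# (SQS queues, Lambda functions, HTTP endpoints, email, mobile push, ...).
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:order-events",  # hypothetical topic
    Subject="OrderCreated",
    Message='{"orderId": "o-1001", "total": 25.0}',
)
```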

Amazon SES

  • is a cost-effective email service built on the reliable and scalable infrastructure that Amazon.com developed to serve its own customers
  • can send transactional email, marketing messages, or any other type of high-quality content to the customers.
  • can receive messages and deliver them to an S3 bucket, call your custom code via an AWS Lambda function, or publish notifications to SNS.

Analytics

Amazon Athena

  • is an interactive query service that helps to analyze data in S3 using standard SQL.
  • is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.
  • removes the need for complex extract, transform, and load (ETL) jobs
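
A minimal boto3 sketch of running an Athena query (the database, table, and results bucket are hypothetical); queries run asynchronously and results land in the configured S3 output location:

```python
import boto3
import time

athena = boto3.client("athena")

# Start the query; Athena writes results to the S3 output location.
query = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM access_logs GROUP BY status",
    QueryExecutionContext={"Database": "weblogs"},                        # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},    # hypothetical bucket
)
query_id = query["QueryExecutionId"]

# Poll until the query finishes, then fetch the result set.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(rows)
```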

Amazon EMR

  • provides a managed Hadoop framework that makes it easy, fast, and cost-effective to process vast amounts of data across dynamically scalable EC2 instances.
  • enables you to run other popular distributed frameworks such as Apache Spark, HBase, Presto, and Flink, and interact with data in other AWS data stores such as S3 and DynamoDB.
  • securely and reliably handles a broad set of big data use cases, including log analysis, web indexing, data transformations (ETL), machine learning, financial analysis, scientific simulation, and bioinformatics.

Amazon CloudSearch

  • is a managed service that makes it simple and cost-effective to set up, manage, and scale a search solution for a website or application.
  • supports 34 languages and popular search features such as highlighting, autocomplete, and geospatial search.

Amazon Elasticsearch Service

  • makes it easy to deploy, operate, and scale Elasticsearch for log analytics, full text search, application monitoring, and more.
  • is a fully managed service that delivers Elasticsearch’s easy-to-use APIs and real-time capabilities along with the availability, scalability, and security required by production workloads.

Amazon Kinesis

  • is a platform for streaming data on AWS, offering powerful services to make it easy to load and analyze streaming data
  • provides the ability to build custom streaming data applications for specialized needs.
  • offers three services:
    • Amazon Kinesis Firehose,
      • helps load streaming data into AWS.
      • can capture, transform, and load streaming data into Amazon Kinesis Analytics, S3, Redshift, and Elasticsearch Service, enabling near real-time analytics with existing business intelligence tools and dashboards
      • helps batch, compress, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security.
    • Amazon Kinesis Analytics
      • helps process streaming data in real time with standard SQL
    • Amazon Kinesis Streams
      • enables you to build custom applications that process or analyze streaming data for specialized needs.
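
A minimal boto3 sketch of writing to a hypothetical Kinesis stream; the partition key determines the shard, and records sharing a key keep their relative order:

```python
import boto3
import json

kinesis = boto3.client("kinesis")

# Records with the same partition key land on the same shard, preserving their
# relative order; use a high-cardinality key to spread load across shards.
kinesis.put_record(
    StreamName="clickstream",  # hypothetical stream
    Data=json.dumps({"userId": "u-1", "page": "/home"}).encode("utf-8"),
    PartitionKey="u-1",
)
```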

Amazon Redshift

  • provides a fast, fully managed, petabyte-scale data warehouse that makes it simple and cost-effective to analyze all your data using your existing business intelligence tools.
  • has a massively parallel processing (MPP) data warehouse architecture, parallelizing and distributing SQL operations to take advantage of all available resources.
  • provides underlying hardware designed for high performance data processing, using local attached storage to maximize throughput between the CPUs and drives, and a 10GigE mesh network to maximize throughput between nodes.

Amazon QuickSight

  • provides fast, cloud-powered business analytics service that makes it easy to build visualizations, perform ad-hoc analysis, and quickly get business insights from your data.

AWS Data Pipeline

  • helps reliably process and move data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals
  • can regularly access your data where it’s stored, transform and process it at scale, and efficiently transfer the results to AWS services such as S3, RDS, DynamoDB, and EMR.
  • helps create complex data processing workloads that are fault tolerant, repeatable, and highly available.
  • also allows you to move and process data that was previously locked up in on-premises data silos.

AWS Glue

  • is a fully managed ETL service that makes it easy to move data between data stores.
  • helps simplify and automate the difficult and time-consuming tasks of data discovery, conversion, mapping, and job scheduling.
  • helps schedule ETL jobs, and provisions and scales all the infrastructure required so that ETL jobs run quickly and efficiently at any scale.

Application Services

AWS Step Functions

  • makes it easy to coordinate the components of distributed applications and microservices using visual workflows.
  • automatically triggers and tracks each step, and retries when there are errors, so the application executes in order and as expected.

Amazon API Gateway

  • is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale.
  • handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management.

Amazon Elastic Transcoder

  • provides media transcoding in the cloud
  • is designed to be a highly scalable, easy-to-use, and cost-effective way for developers and businesses to convert (or transcode) media files from their source format into versions that will play back on devices like smartphones, tablets, and PCs.

Amazon SWF

  • helps developers build, run, and scale background jobs that have parallel or sequential steps.
  • is a fully-managed state tracker and task coordinator in the cloud.

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Which AWS services belong to the Compute services? Choose 2 answers
    1. Lambda
    2. EC2
    3. S3
    4. EMR
    5. CloudFront
  2. Which AWS service provides low cost storage option for archival and long-term backup?
    1. Glacier
    2. S3
    3. EBS
    4. CloudFront
  3. Which AWS services belong to the Storage services? Choose 2 answers
    1. EFS
    2. IAM
    3. EMR
    4. S3
    5. CloudFront
  4. A Company allows users to upload videos on its platform. They want to convert the videos to multiple formats supported on multiple devices and platforms. Which AWS service can they leverage for the requirement?
    1. AWS SWF
    2. AWS Video Converter
    3. AWS Elastic Transcoder
    4. AWS Data Pipeline
  5. Which analytic service helps analyze data in S3 using standard SQL?
    1. Athena
    2. EMR
    3. Elasticsearch
    4. Kinesis
  6. What features does AWS’s Route 53 service provide? Choose the 2 correct answers:
    1. Content Caching
    2. Domain Name System (DNS) service
    3. Database Management
    4. Domain Registration
  7. You are trying to organize and import (to AWS) gigabytes of data that are currently structured in JSON-like, name-value documents. What AWS service would best fit your needs?
    1. Lambda
    2. DynamoDB
    3. RDS
    4. Aurora
  8. What AWS database is primarily used to analyze data using standard SQL formatting with compatibility for your existing business intelligence tools? Choose the correct answer:
    1. Redshift
    2. RDS
    3. DynamoDB
    4. ElastiCache
  9. A company wants their application to use pre-configured machine image with software installed and configured. which AWS feature can help for the same?
    1. Amazon Machine Image
    2. AWS CloudFormation
    3. AWS Lambda
    4. AWS Lightsail
  10. What AWS service can be used for track API event calls for security analysis, resource change tracking?
    1. AWS CloudWatch
    2. AWS CloudFormation
    3. AWS CloudTrail
    4. AWS OpsWorks
  11. Which AWS service can help Offload the read traffic from your database in order to reduce latency caused by read-heavy workload?
    1. ElastiCache
    2. DynamoDB
    3. S3
    4. EFS
  12. What service allows system administrators to run “Infrastructure as code”?
    1. CloudFormation
    2. CloudWatch
    3. CloudTrail
    4. CodeDeploy

References

AWS_Overview_Whitepaper

AWS Support Plans

AWS Support Plans

AWS provides 4 support plans with additional features at extra cost. The plans are listed in order of increasing features; features available in a lower support plan are also available in the higher plans and are not repeated.

NOTE – This post is more relevant for AWS Cloud Practitioner Certification

Basic

  • 24×7 access to customer service, documentation, whitepapers, and support forums
  • Access to the core Trusted Advisor checks

Developer

  • Business hours access to Cloud Support Associates via email
  • One primary contact can open Unlimited cases
  • Case Severity/Response times SLA (response times are in business hours)
    • General guidance < 24 business hours
    • System impaired < 12 business hours
  • General Guidance on Architecture support

Business

  • 24×7 access to Cloud Support Engineers via email, chat & phone
  • Access to Personal Health Dashboard Health API
  • Access to full set of Trusted Advisor checks
  • Allows Unlimited contacts/Unlimited cases (IAM supported) to open cases
  • Case Severity/Response times SLA (response times are in hours)
    • General guidance < 24 hours
    • System impaired < 12 hours
    • Production system impaired < 4 hours
    • Production system down < 1 hour

Enterprise

  • 24×7 access to Sr. Cloud Support Engineers via email, chat & phone
  • Architecture support with Consultative review and guidance based on your applications
  • Access to a Well-Architected Review delivered by AWS Solution Architects
  • Operations Support for Operational reviews, recommendations, and reporting
  • Access to online self-paced labs
  • Account Assistance by Assigned Support Concierge
  • Proactive Guidance by Designated Technical Account Manager
  • Case Severity/Response times SLA
    • Business-critical system down < 15 minutes

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Which AWS support plan has a dedicated technical account manager assigned for proactive guidance?
    1. AWS Basic support plan
    2. AWS Developer support plan
    3. AWS Business support plan
    4. AWS Enterprise support plan
  2. Which feature is available for all the AWS support plans?
    1. Technical Account Manager
    2. Assigned Support Concierge
    3. 24×7 access to customer service
    4. Access to Cloud Support resources

References

AWS_Support_Plans

Architecting for the Cloud – AWS Best Practices – Whitepaper – Certification

Architecting for the Cloud – AWS Best Practices

Architecting for the Cloud – AWS Best Practices whitepaper provides architectural patterns and advice on how to design systems that are secure, reliable, high performing, and cost efficient

AWS Design Principles

Scalability

  • While AWS provides virtually unlimited on-demand capacity, the architecture should be designed to take advantage of those resources
  • There are two ways to scale an IT architecture
    • Vertical Scaling
      • takes place through increasing specifications of an individual resource for e.g. updating EC2 instance type with increasing RAM, CPU, IOPS, or networking capabilities
      • will eventually hit a limit, and is not always a cost effective or highly available approach
    • Horizontal Scaling
      • takes place through increasing number of resources for e.g. adding more EC2 instances or EBS volumes
      • can help leverage the elasticity of cloud computing
      • not all the architectures can be designed to distribute their workload to multiple resources
      • applications designed for horizontal scaling should be stateless,
        • needing no knowledge of previous interactions and storing no session information
        • capacity can be increased and decreased, after running tasks have been drained
      • State, if needed, can be implemented using
        • Low latency external store, for e.g. DynamoDB, Redis, to maintain state information
        • Session affinity, for e.g. ELB sticky sessions, to bind all the transactions of a session to a specific compute resource. However, affinity cannot be guaranteed, and existing sessions do not take advantage of newly added resources
      • Load can be distributed across multiple resources using
        • Push model, for e.g. through ELB where it distributes the load across multiple EC2 instances
        • Pull model, for e.g. through SQS or Kinesis where multiple consumers subscribe and consume
      • Distributed processing, for e.g. using EMR or Kinesis, helps process large amounts of data by dividing task and its data into many small fragments of works

Disposable Resources Instead of Fixed Servers

  • Resources need to be treated as temporary disposable resources rather than fixed permanent on-premises resources
  • AWS focuses on the concept of Immutable Infrastructure
    • a server, once launched, is never updated throughout its lifetime
    • updates can be performed by launching a new server with the latest configuration
    • this ensures resources are always in a consistent (and tested) state and makes rollbacks easier
  • AWS provides multiple ways to instantiate compute resources in an automated and repeatable way
    • Bootstrapping
      • scripts to configure and set up resources, for e.g. using user data scripts and cloud-init to install software or copy resources and code
    • Golden Images
      • a snapshot of a particular state of that resource,
      • faster start times and removes dependencies to configuration services or third-party repositories
    • Containers
      • AWS support for docker images through Elastic Beanstalk and ECS
      • Docker allows packaging a piece of software in a Docker Image, which is a standardized unit for software development, containing everything the software needs to run: code, runtime, system tools, system libraries, etc
  • Infrastructure as Code
    • AWS assets are programmable, so techniques, practices, and tools from software development can be applied to make the whole infrastructure reusable, maintainable, extensible, and testable.
    • AWS provides services like CloudFormation, OpsWorks for deployment
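
As a small illustration of bootstrapping (assuming boto3 and a hypothetical AMI id), a user data script can be passed at launch so that cloud-init configures the instance on first boot:

```python
import boto3

ec2 = boto3.client("ec2")

# Shell script executed by cloud-init on the first boot of the instance.
user_data = """#!/bin/bash
yum install -y httpd
systemctl enable --now httpd
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical Amazon Linux AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,               # boto3 base64-encodes this automatically
)
```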

Automation

  • AWS provides various automation tools and services which help improve the system's stability, efficiency, and time to market.
    • Elastic Beanstalk
      • a PaaS that allows quick application deployment while handling resource provisioning, load balancing, auto scaling, monitoring etc
    • EC2 Auto Recovery
      • creates CloudWatch alarm that monitors an EC2 instance and automatically recovers it if it becomes impaired.
      • A recovered instance is identical to the original instance, including the instance ID, private & Elastic IP addresses, and all instance metadata.
      • The instance is migrated through a reboot, and in-memory contents are lost.
    • Auto Scaling
      • allows maintain application availability and scale the capacity up or down automatically as per defined conditions
    • CloudWatch Alarms
      • allows SNS triggers to be configured when a particular metric goes beyond a specified threshold for a specified number of periods
    • CloudWatch Events
      • allows real-time stream of system events that describe changes in AWS resources
    • OpsWorks
      • allows continuous configuration through lifecycle events that automatically update the instances’ configuration to adapt to environment changes.
      • Events can be used to trigger Chef recipes on each instance to perform specific configuration tasks
    • Lambda Scheduled Events
      • allows Lambda function creation and direct AWS Lambda to execute it on a regular schedule.
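
A minimal boto3 sketch of wiring a scheduled CloudWatch Events rule to a hypothetical Lambda function, including the resource permission that allows the rule to invoke it:

```python
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

function_arn = "arn:aws:lambda:us-east-1:123456789012:function:nightly-cleanup"  # hypothetical

# Create a schedule rule and point it at the Lambda function.
rule = events.put_rule(
    Name="nightly-cleanup-schedule",
    ScheduleExpression="rate(1 day)",
    State="ENABLED",
)
events.put_targets(
    Rule="nightly-cleanup-schedule",
    Targets=[{"Id": "1", "Arn": function_arn}],
)

# Allow CloudWatch Events to invoke the function.
lambda_client.add_permission(
    FunctionName="nightly-cleanup",
    StatementId="allow-events-invoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)
```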

Loose Coupling

  • AWS helps build loosely coupled architectures that reduce interdependencies, so that a change or failure in one component does not cascade to other components
    • Asynchronous Integration
      • does not involve direct point-to-point interaction but usually through an intermediate durable storage layer for e.g. SQS, Kinesis
      • decouples the components and introduces additional resiliency
      • suitable for any interaction that doesn't need an immediate response, where an acknowledgment that the request has been registered will suffice
    • Service Discovery
      • allows new resources to be launched or terminated at any point in time and discovered as well for e.g. using ELB as a single point of contact with hiding the underlying instance details or Route 53 zones to abstract load balancer’s endpoint
    • Well-Defined Interfaces
      • allows various components to interact with each other through specific, technology-agnostic interfaces, for e.g. RESTful APIs with API Gateway

Services, Not Servers

Databases

  • AWS provides different categories of database technologies
    • Relational Databases (RDS)
      • normalizes data into well-defined tabular structures known as tables, which consist of rows and columns
      • provide a powerful query language, flexible indexing capabilities, strong integrity controls, and the ability to combine data from multiple tables in a fast and efficient manner
      • allows vertical scalability by increasing resources and horizontal scalability using Read Replicas for read capacity and sharding or data partitioning for write capacity
      • provides High Availability using Multi-AZ deployment, where data is synchronously replicated
    • NoSQL Databases (DynamoDB)
      • provides databases that trade some of the query and transaction capabilities of relational databases for a more flexible data model that seamlessly scales horizontally
      • perform data partitioning and replication to scale both the reads and writes in a horizontal fashion
      • DynamoDB service synchronously replicates data across three facilities in an AWS region to provide fault tolerance in the event of a server failure or Availability Zone disruption
    • Data Warehouse (Redshift)
      • Specialized type of relational database, optimized for analysis and reporting of large amounts of data
      • Redshift achieves efficient storage and optimum query performance through a combination of massively parallel processing (MPP), columnar data storage, and targeted data compression encoding schemes
      • Redshift MPP architecture enables increasing performance by increasing the number of nodes in the data warehouse cluster
  • For more details refer to AWS Storage Options Whitepaper

Removing Single Points of Failure

  • AWS provides ways to implement redundancy, automate recovery and reduce disruption at every layer of the architecture
  • AWS supports redundancy in the following ways
    • Standby Redundancy
      • When a resource fails, functionality is recovered on a secondary resource using a process called failover.
      • Failover will typically require some time before it completes, and during that period the resource remains unavailable.
      • Secondary resource can either be launched automatically only when needed (to reduce cost), or it can be already running idle (to accelerate failover and minimize disruption).
      • Standby redundancy is often used for stateful components such as relational databases.
    • Active Redundancy
      • requests are distributed to multiple redundant compute resources, if one fails, the rest can simply absorb a larger share of the workload.
      • Compared to standby redundancy, it can achieve better utilization and affect a smaller population when there is a failure.
  • AWS supports replication
    • Synchronous replication
      • acknowledges a transaction after it has been durably stored in both the primary location and its replicas.
      • protects data integrity from the event of a primary node failure
      • used to scale read capacity for queries that require the most up-to-date data (strong consistency).
      • compromises performance and availability
    • Asynchronous replication
      • decouples the primary node from its replicas at the expense of introducing replication lag
      • used to horizontally scale the system’s read capacity for queries that can tolerate that replication lag.
    • Quorum-based replication
      • combines synchronous and asynchronous replication to overcome the challenges of large-scale distributed database systems
      • Replication to multiple nodes can be managed by defining a minimum number of nodes that must participate in a successful write operation
  • AWS provide services to reduce or remove single point of failure
    • Regions, Availability Zones with multiple data centers
    • ELB or Route 53 to configure health checks and mask failure by routing traffic to healthy endpoints
    • Auto Scaling to automatically replace unhealthy nodes
    • EC2 auto-recovery to recover unhealthy impaired nodes
    • S3, DynamoDB with data redundantly stored across multiple facilities
    • Multi-AZ RDS and Read Replicas
    • ElastiCache Redis engine supports replication with automatic failover
  • For more details refer to AWS Disaster Recovery Whitepaper

Optimize for Cost

  • AWS can help organizations reduce capital expenses and drive savings as a result of the AWS economies of scale
  • AWS provides different options which should be utilized as per use case –
    • EC2 instance types – On Demand, Reserved and Spot
    • Trusted Advisor or EC2 usage reports to identify the compute resources and their usage
    • S3 storage class – Standard, Reduced Redundancy, and Standard-Infrequent Access
    • EBS volumes – Magnetic, General Purpose SSD, Provisioned IOPS SSD
    • Cost Allocation tags to identify costs based on tags
    • Auto Scaling to horizontally scale the capacity up or down based on demand
    • Lambda based architectures to never pay for idle or redundant resources
    • Utilize managed services where scaling is handled by AWS for e.g. ELB, CloudFront, Kinesis, SQS, CloudSearch etc.

Caching

  • Caching improves application performance and increases the cost efficiency of an implementation
    • Application Data Caching
      • provides services that help store and retrieve information from fast, managed, in-memory caches
      • ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud and supports two open-source in-memory caching engines: Memcached and Redis
    • Edge Caching
      • allows content to be served by infrastructure that is closer to viewers, lowering latency and giving high, sustained data transfer rates needed to deliver large popular objects to end users at scale.
      • CloudFront is a Content Delivery Network (CDN) consisting of multiple edge locations that allows copies of static and dynamic content to be cached
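
For application data caching, the following is a minimal cache-aside sketch, assuming the redis-py client, a hypothetical ElastiCache Redis endpoint, and a hypothetical DynamoDB table as the backing store:

```python
import json
import boto3
import redis  # redis-py client, assumed to be installed

cache = redis.Redis(host="my-cluster.abc123.0001.use1.cache.amazonaws.com", port=6379)  # hypothetical endpoint
table = boto3.resource("dynamodb").Table("Products")  # hypothetical table

def get_product(product_id):
    # Cache-aside: try the in-memory cache first, fall back to the database,
    # then populate the cache with a TTL so stale entries expire on their own.
    cached = cache.get(f"product:{product_id}")
    if cached:
        return json.loads(cached)

    item = table.get_item(Key={"ProductId": product_id}).get("Item")
    if item:
        cache.setex(f"product:{product_id}", 300, json.dumps(item, default=str))
    return item
```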

Security

  • AWS works on a shared security responsibility model
    • AWS is responsible for the security of the underlying cloud infrastructure
    • you are responsible for securing the workloads you deploy in AWS
  • AWS also provides ample security features
    • IAM to define a granular set of policies and assign them to users, groups, and AWS resources
    • IAM roles to assign short term credentials to resources, which are automatically distributed and rotated
    • Amazon Cognito, for mobile applications, which allows client devices to get controlled access to AWS resources via temporary tokens.
    • VPC to isolate parts of infrastructure through the use of subnets, security groups, and routing controls
    • WAF to help protect web applications from SQL injection and other vulnerabilities in the application code
    • CloudWatch logs to collect logs centrally as the servers are temporary
    • CloudTrail for auditing AWS API calls, which delivers a log file to S3 bucket. Logs can then be stored in an immutable manner and automatically processed to either notify or even take action on your behalf, protecting your organization from non-compliance
    • AWS Config, Amazon Inspector, and AWS Trusted Advisor to continually monitor for compliance or vulnerabilities giving a clear overview of which IT resources are in compliance, and which are not
  • For more details refer to AWS Security Whitepaper

References

Architecting for the Cloud: AWS Best Practices – Whitepaper