AWS Certified SysOps Administrator – Associate (SOA-C01) Exam Learning Path

The AWS Certified SysOps Administrator – Associate (SOA-C01) exam is the latest AWS exam and has already replaced the old SysOps Administrator – Associate exam as of 24th September 2018. It validates the ability to

  • Deploy, manage, and operate scalable, highly available, and fault tolerant systems on AWS
  • Implement and control the flow of data to and from AWS
  • Select the appropriate AWS service based on compute, data, or security requirements
  • Identify appropriate use of AWS operational best practices
  • Estimate AWS usage costs and identify operational cost control mechanisms
  • Migrate on-premises workloads to AWS

Refer AWS Certified SysOps – Associate Exam Guide Sep 18

AWS Certified SysOps Administrator - Associate Content Outline

AWS Certified SysOps Administrator – Associate (SOA-C01) Exam Summary

  • AWS Certified SysOps Administrator – Associate exam is quite different from the previous one, with more focus on error handling, deployment, and monitoring.
  • AWS Certified SysOps Administrator – Associate exam covers a lot of the latest AWS services like ALB, Lambda, AWS Config, AWS Inspector, and AWS Shield, while focusing majorly on core services like CloudWatch (including metrics from various services) and CloudTrail.
  • Be sure to cover the following topics
    •  Monitoring & Management Tools
      • Understand CloudWatch monitoring to provide operational transparency
        • Know which EC2 metrics it can track (disk, network, CPU, status checks) and which would need custom metrics (memory, disk swap, disk storage etc.)
        • Know ELB monitoring
          • Classic Load Balancer metrics SurgeQueueLength and SpilloverCount
          • Reasons for 4XX and 5XX errors
      • Understand CloudTrail for audit and governance
      • Understand AWS Config and its use cases
      • Understand AWS Systems Manager and its various services like parameter store, patch manager
      • Understand AWS Trusted Advisor and what it provides
      • Very important to understand AWS CloudWatch vs AWS CloudTrail vs AWS Config
      • Very important to understand Trusted Advisor vs Systems Manager vs Inspector
      • Know Personal Health Dashboard & Service Health Dashboard
      • Deployment tools
        • Know AWS OpsWorks and its ability to support Chef & Puppet
        • Know Elastic Beanstalk and its advantages
        • Understand AWS CloudFormation
          • Know stacks, templates, nested stacks
          • Know how to wait for resource setup to be completed before proceeding, esp. using cfn-signal
          • Know how to retain resources (RDS, S3), prevent rollback in case of a failure
    • Networking & Content Delivery
      • Understand VPC in depth
        • Understand the difference between
          • Bastion host – allow access to instances in private subnet
          • NAT – route traffic from private subnets to internet
          • NAT instance vs NAT Gateway
          • Internet Gateway – Access to internet
          • Virtual Private Gateway – Connectivity between on-premises and VPC
          • Egress-Only Internet Gateway – relevant to IPv6 only to allow egress traffic from private subnet to internet, without allowing ingress traffic
        • Understand how VPC Peering works and limitations
        • Understand VPC Endpoints and supported services
        • Ability to debug networking issues, e.g. EC2 instances not accessible or reachable, or instances in subnets not able to communicate with other subnets or the Internet.
      • Understand Route 53 and Routing Policies and their use cases
        • Focus on Weighted, Latency routing policies
      • Understand VPN and Direct Connect and their use cases
      • Understand CloudFront and use cases
      • Understand ELB, ALB and NLB and what features they provide like
        • ALB provides content and path routing
        • NLB provides the ability to assign static IPs to the load balancer.
    • Compute
      • Understand EC2 in depth
        • Understand EC2 instance types
        • Understand EC2 purchase options, esp. Spot instances and the improved Reserved Instance options.
        • Understand how IO Credits work and T2 burstable performance and T2 unlimited
        • Understand EC2 Metadata & Userdata. What's the use of each? How do you look up instance data after it is launched?
        • Understand EC2 Security. 
          • How IAM Roles work with EC2 instances
          • IAM Roles can now be attached to stopped and running instances
        • Understand AMIs; remember they are regional, and know how they can be shared with others.
        • Troubleshoot issues with launching EC2 esp. RequestLimitExceeded, InstanceLimitExceeded etc.
        • Troubleshoot connectivity, lost ssh keys issues
      • Understand Auto Scaling
      • Understand Lambda and its use cases
      • Understand Lambda with API Gateway
    • Storage
    • Databases
    • Security
      • Understand IAM as a whole
      • Understand KMS for key management and envelope encryption
      • Understand CloudHSM and KMS vs CloudHSM esp. support for symmetric and asymmetric keys
      • Know AWS Inspector and its use cases
      • Know AWS GuardDuty as a managed threat detection service. This will help eliminate it as an option
      • Know AWS Shield esp. the Shield Advanced option and the features it provides
      • Know WAF as Web Traffic Firewall
      • Know AWS Artifact as on-demand access to compliance reports
    • Integration Tools
      • Understand SQS as message queuing service and SNS as pub/sub notification service
        • Focus on SQS as a decoupling service
        • Understand SQS FIFO, make sure you know the differences between standard and FIFO
      • Understand CloudWatch integration with SNS for notification
    • Cost management

AWS Certified SysOps Administrator – Associate (SOA-C01) Exam Resources

AWS Cloud Computing Whitepapers

AWS Certified SysOps Administrator – Associate (SOA-C01) Exam Contents

Domain 1: Monitoring and Reporting

  1. Create and maintain metrics and alarms utilizing AWS monitoring services
  2. Recognize and differentiate performance and availability metrics
  3. Perform the steps necessary to remediate based on performance and availability metrics

Domain 2: High Availability

  1. Implement scalability and elasticity based on use case
  2. Recognize and differentiate highly available and resilient environments on AWS

Domain 3: Deployment and Provisioning

  1. Identify and execute steps required to provision cloud resources
  2. Identify and remediate deployment issues

Domain 4: Storage and Data Management

  1. Create and manage data retention
  2. Identify and implement data protection, encryption, and capacity planning needs

Domain 5: Security and Compliance

  1. Implement and manage security policies on AWS
  2. Implement access controls when using AWS
  3. Differentiate between the roles and responsibility within the shared responsibility model

Domain 6: Networking

  1. Apply AWS networking features
  2. Implement connectivity services of AWS
  3. Gather and interpret relevant information for network troubleshooting

Domain 7: Automation and Optimization

  1. Use AWS services and features to manage and assess resource utilization
  2. Employ cost-optimization strategies for efficient resource utilization
  3. Automate manual or repeatable processes to minimize management overhead

AWS Certified Developer – Associate June 2018 Exam Learning Path

The AWS Certified Developer – Associate June 2018 exam is the latest AWS exam and would replace the old Developer – Associate exam. It validates the ability to

  • Demonstrate an understanding of core AWS services, uses, and basic AWS architecture best practices.
  • Demonstrate proficiency in developing, deploying, and debugging cloud-based applications using AWS.

Refer AWS Certified Developer – Associate (Released June 2018) Exam Blue Print

AWS Certified Developer - Associate June 2018 Domains

AWS Certified Developer – Associate June 2018 Summary

  • AWS Certified Developer – Associate June 2018 exam is quite different from the previous one, with more focus on hands-on development and deployment concepts rather than just architectural concepts
  • AWS Certified Developer – Associate June 2018 exam covers a lot of the latest AWS services like Lambda and X-Ray, while focusing majorly on core services like DynamoDB, Elastic Beanstalk, S3, and EC2
  • Be sure to cover the following topics
    • Compute
      • Understand which AWS services you can use to build a serverless architecture
      • Make sure you know and understand Lambda and serverless architecture, its features and use cases.
      • Know Lambda limits, e.g. execution time and the deployment package size limits (zipped and unzipped)
      • Be sure to know how to package and deploy Lambda functions.
      • Understand tracing of Lambda functions using X-Ray
      • Understand integration of Lambda with CloudWatch.
      • Understand how to handle multiple releases using Alias
      • Know AWS Step Functions to manage Lambda functions flow
      • Understand Lambda with API Gateway
      • Understand API Gateway stages, ability to cater to different environments for e.g. dev, test, prod
      • Understand EC2 as a whole
      • Understand EC2 Metadata & Userdata. What's the use of each? How do you look up instance data after it is launched?
      • Understand EC2 Security. How IAM Roles work with EC2 instances.
      • Understand how EC2 evaluates the order of credentials when multiple are provided. Remember the order – Environment variables -> Java system properties -> Default credential profiles file -> ECS container credentials -> Instance Profile credentials
      • Know Elastic Beanstalk at a high level, what it provides and its ability to get an application running quickly
      • Understand Elastic Beanstalk configurations and deployment types with their advantages and disadvantages
    • Databases
      • Understand relational and NoSQL data storage options, which include RDS, DynamoDB and their use cases
      • Understand DynamoDB Secondary Indexes
      • Make sure you understand DynamoDB provisioned throughput for Read/Writes and its calculations
      • Make sure you understand DynamoDB Consistency Model – difference between Strongly Consistent and Eventual Consistency
      • Understand DynamoDB with its low latency performance, DAX
      • Know how to configure fine grained security for DynamoDB table, items, attributes
      • Understand DynamoDB Best Practices regarding
        • table design
        • provisioned throughput
        • Query vs Scan operations
        • improving Scan operation performance
      • Understand RDS features – Read Replicas for scalability, Multi-AZ for High Availability
      • Know ElastiCache use cases, mainly for caching performance
      • Understand ElastiCache Redis vs Memcached
    • Storage
      • Understand S3 storage option
      • Understand S3 Best Practices to improve performance for GET/PUT requests
      • Understand S3 features like different storage classes with lifecycle policies, static website hosting, versioning, Pre-Signed URLs for both upload and download, CORS
    • Security
      • Understand IAM as a whole
      • Focus on IAM role and its use case especially with EC2 instance
      • Know how to test and validate IAM policies
      • Understand IAM identity providers and federation and use cases
      • Understand how AWS Cognito works and what features it provides
      • Understand MFA and how you would implement two-factor authentication for your application
      • Understand KMS for key management and envelope encryption
      • Know what services support KMS
        • Remember that SQS and Kinesis now provide SSE support
      • Focus on S3 with SSE, SSE-C, SSE-KMS. How do they work and differ?
      • Know how you can enforce a bucket to accept only encrypted objects
      • Know the various KMS encryption operations like Encrypt, ReEncrypt, GenerateDataKey etc.
      • Know how KMS impacts performance of the services
    • Management Tools
      • Understand CloudWatch monitoring to provide operational transparency
      • Know which EC2 metrics it can track.
      • Understand CloudWatch is extendable with custom metrics
      • Understand CloudTrail for Audit
    • Integration Tools
      • Understand SQS as message queuing service and SNS as pub/sub notification service
      • Understand SQS features like visibility, long poll vs short poll
      • Focus on SQS as a decoupling service
      • AWS has released SQS FIFO, make sure you know the differences between standard and FIFO
      • Know the different development and deployment tools like CodeCommit, CodeBuild, CodeDeploy, CodePipeline
    • Networking
      • Does not cover much on networking or designing of networks, but be sure you understand VPC, Subnets, Routes, Security Groups etc.

AWS Certified Developer – Associate June 2018 Exam Resources

AWS Cloud Computing Whitepapers

AWS Certified Developer – Associate June 2018 Exam Contents

Domain 1: Deployment

  1. Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns.
  2. Deploy applications using Elastic Beanstalk.
  3. Prepare the application deployment package to be deployed to AWS.
  4. Deploy serverless applications.

Domain 2: Security

  1. Make authenticated calls to AWS services.
  2. Implement encryption using AWS services.
  3. Implement application authentication and authorization.

Domain 3: Development with AWS Services

  1. Write code for serverless applications.
  2. Translate functional requirements into application design.
  3. Implement application design into application code.
  4. Write code that interacts with AWS services by using APIs, SDKs, and AWS CLI.

Domain 4: Refactoring

  1. Optimize application to best use AWS services and features.
  2. Migrate existing application code to run on AWS.

Domain 5: Monitoring and Troubleshooting

  1. Write code that can be monitored.
  2. Perform root cause analysis on faults found in testing or production.

AWS RDS Aurora – Certification

AWS Aurora

  • AWS Aurora is a relational database engine that combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases.
  • Amazon Aurora (Aurora) is a fully managed, MySQL- and PostgreSQL-compatible, relational database engine i.e. applications developed with MySQL can switch to Aurora with little or no changes
  • Aurora delivers up to 5x performance of MySQL without requiring any changes to most MySQL applications
  • Aurora PostgreSQL delivers up to 3x performance of PostgreSQL.
  • RDS manages the Aurora databases, handling time-consuming tasks such as provisioning, patching, backup, recovery, failure detection and repair.
  • Based on the database usage, Aurora storage will automatically grow, from 10GB to 64TB in 10GB increments with no impact to database performance

NOTE – AWS Aurora is covered in the latest Solution Architect – Associate Feb 2018 extensively, so be sure to cover the same.

AWS Aurora Architecture

High Availability and Replication

  • Aurora provides data durability and reliability by replicating the database volume six ways across three Availability Zones in a single region
    • Aurora automatically divides the database volume into 10GB segments spread across many disks.
    • Each 10GB chunk of your database volume is replicated six ways, across three Availability Zones
  • RDS databases, e.g. MySQL, Oracle etc., have their data in a single AZ
  • Aurora is designed to transparently handle the loss of up to two copies of data without affecting database write availability and up to three copies without affecting read availability.
  • Aurora storage is also self-healing. Data blocks and disks are continuously scanned for errors and repaired automatically.
  • Aurora Replicas share the same underlying volume as the primary instance. Updates made by the primary are visible to all Aurora Replicas
  • As Aurora Replicas share the same data volume as the primary instance, there is virtually no replication lag
  • Any Aurora Replica can be promoted to become primary without any data loss and therefore can be used for enhancing fault tolerance in the event of a primary DB Instance failure.
  • To increase database availability, 1 to 15 replicas can be created across the 3 AZs, and RDS will automatically include them in failover primary selection in the event of a database outage.
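
A minimal boto3 sketch of this setup, creating an Aurora cluster with a couple of Aurora Replicas that can serve as failover targets (all identifiers, the password, and the instance class are placeholder assumptions, not values from the post):

```python
import boto3

rds = boto3.client('rds', region_name='us-east-1')

# Create the Aurora (MySQL-compatible) cluster; its storage volume is
# automatically replicated six ways across three AZs
rds.create_db_cluster(
    DBClusterIdentifier='demo-aurora-cluster',        # placeholder name
    Engine='aurora-mysql',
    MasterUsername='admin',
    MasterUserPassword='ChangeMe12345',                # placeholder secret
)

# The first instance becomes the writer; the others are Aurora Replicas
# sharing the same storage volume (so virtually no replication lag)
for idx in ('1', '2', '3'):
    rds.create_db_instance(
        DBInstanceIdentifier=f'demo-aurora-instance-{idx}',
        DBInstanceClass='db.r4.large',                 # placeholder instance class
        Engine='aurora-mysql',
        DBClusterIdentifier='demo-aurora-cluster',
    )
```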

Security

  • Aurora uses SSL (AES-256) to secure the connection between the database instance and the application
  • Aurora allows database encryption using keys managed through AWS Key Management Service (KMS).
  • Encryption and decryption are handled seamlessly.
  • With Aurora encryption, data stored at rest in the underlying storage is encrypted, as are its automated backups, snapshots, and replicas in the same cluster.
  • Encryption of existing unencrypted Aurora instance is not supported. Create a new encrypted Aurora instance and migrate the data

Backup and Restore

  • Automated backups are always enabled on Aurora DB Instances.
  • Backups do not impact database performance.
  • Aurora also allows creation of manual snapshots
  • Aurora automatically maintains 6 copies of your data across 3 AZs and will automatically attempt to recover your database in a healthy AZ with no data loss.
  • If in any case the data is unavailable within Aurora storage,
    • DB Snapshot can be restored or
    • point-in-time restore operation can be performed to a new instance. Latest restorable time for a point-in-time restore operation can be up to 5 minutes in the past.
  • Restoring a snapshot creates a new Aurora DB instance
  • Deleting Aurora database deletes all the automated backups (with an option to create a final snapshot), but would not remove the manual snapshots.
  • Snapshots (including encrypted ones) can be shared with another AWS accounts
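
A hedged boto3 sketch of the restore and snapshot-sharing operations described above (cluster, snapshot, and account identifiers are placeholders):

```python
import boto3

rds = boto3.client('rds', region_name='us-east-1')

# Point-in-time restore always creates a new Aurora cluster
# (DB instances then have to be added to the restored cluster)
rds.restore_db_cluster_to_point_in_time(
    DBClusterIdentifier='demo-aurora-restored',
    SourceDBClusterIdentifier='demo-aurora-cluster',
    UseLatestRestorableTime=True,          # or RestoreToTime=<a datetime up to ~5 minutes in the past>
)

# Share a manual cluster snapshot (including an encrypted one) with another AWS account
rds.modify_db_cluster_snapshot_attribute(
    DBClusterSnapshotIdentifier='demo-aurora-manual-snap',
    AttributeName='restore',
    ValuesToAdd=['111122223333'],          # placeholder account ID
)
```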

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. Company wants to use MySQL compatible relational database with greater performance. Which AWS service can be used?
    1. Aurora
    2. RDS
    3. SimpleDB
    4. DynamoDB
  2. An application requires a highly available relational database with an initial storage capacity of 8 TB. The database will grow by 8 GB every day. To support expected traffic, at least eight read replicas will be required to handle database reads. Which option will meet these requirements?
    1. DynamoDB
    2. Amazon S3
    3. Amazon Aurora
    4. Amazon Redshift
  3. A company is migrating their on-premise 10TB MySQL database to AWS. As a compliance requirement, the company wants to have the data replicated across three availability zones. Which Amazon RDS engine meets the above business requirement?
    1. Use Multi-AZ RDS
    2. Use RDS
    3. Use Aurora
    4. Use DynamoDB

AWS X-Ray – Certification

AWS X-Ray

  • AWS X-Ray helps developers analyze and debug production, distributed applications, e.g. those built using a microservices (Lambda) architecture
  • X-Ray provides an end-to-end view of requests as they travel through the application, and shows a map of the application’s underlying components.
  • X-Ray helps to understand how the application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors.
  • X-Ray can be used to analyze applications both in development and in production, from simple three-tier applications to complex microservices applications consisting of thousands of services.
  • X-Ray can be used with distributed applications of any size to trace and debug both synchronous requests and asynchronous events.
  • X-Ray can be used to track requests flowing through applications or services across multiple regions. X-Ray data is stored locally in the processing region, and customers can build a solution over it to combine the data
  • Trace data sent to X-Ray is generally available for retrieval and filtering within 30 seconds of it being received by the service.
  • X-Ray stores trace data for the last 30 days. This enables you to query trace data going back 30 days.
  • Integration
    • X-Ray integrates with applications running on EC2, ECS, Lambda, and Elastic Beanstalk.
    • X-Ray SDK automatically captures metadata for API calls made to AWS services using the AWS SDK
    • X-Ray SDK provides add-ons for MySQL and PostgreSQL drivers.
    • For Elastic Beanstalk, include the language-specific X-Ray libraries in your application code.
    • Applications running on other AWS services, such as EC2 or ECS, install the X-Ray agent and instrument the application code
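
As an illustration of the SDK integration, a minimal Python sketch using the X-Ray SDK; the function and table names are placeholders, and it assumes either Lambda with active tracing enabled or the X-Ray agent/daemon running on EC2/ECS/Beanstalk:

```python
import boto3
from aws_xray_sdk.core import xray_recorder, patch_all

# Patch supported libraries (boto3, requests, etc.) so that downstream
# AWS SDK calls automatically show up as subsegments in the trace
patch_all()

@xray_recorder.capture('get_user')        # custom subsegment around this function
def get_user(user_id):
    table = boto3.resource('dynamodb').Table('users')   # placeholder table name
    return table.get_item(Key={'id': user_id})

def handler(event, context):
    # On Lambda with active tracing, the segment is created for you;
    # elsewhere the X-Ray agent/daemon must be running to relay the data
    return get_user(event['userId'])
```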

X-Ray Core Concepts

  • Trace
    • An X-Ray trace is a set of data points that share the same trace ID.
    • Trace helps track the request, which is assigned a unique trace id, while it navigates through services
    • Piece of information relayed by each service in the application to X-Ray is a segment, and a trace is a collection of segments.
  • Segment
    • An X-Ray segment encapsulates all the data points for a single component of the distributed application for e.g. authorization component
    • Segments include system-defined and user-defined data in the form of annotations and are composed of one or more sub-segments that represent remote calls made from the service, e.g. a database call and its result within the overall request/response
  • Annotation
    • An X-Ray annotation is system-defined or user-defined data
    • Annotation is associated with a segment and a segment can contain multiple annotations.
    • System-defined annotations include data added to the segment by AWS services
    • User-defined annotations are metadata added to a segment by a developer
  • Errors
    • X-Ray errors are system annotations associated with a segment for a call that results in an error response.
    • Error includes the error message, stack trace, and any additional information, e.g. a version, to associate the error with a source file.
  • Sampling
    • X-Ray collects data for a significant sample of requests, instead of each request sent to an application, to be performant and cost-effective
    • X-Ray should not be used as an audit or compliance tool because it does not guarantee data completeness.
  • X-Ray agent
    • X-Ray agent helps collect data from log files and sends them to the X-Ray service for aggregation, analysis, and storage.
    • Agent makes it easier for you to send data to the X-Ray service, instead of using the APIs directly.
    • Agent is available for Amazon Linux AMI, Red Hat Enterprise Linux (RHEL), and Windows Server 2012 R2 or later operating systems.
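
To tie the concepts together, a small hedged sketch (assuming a recent aws-xray-sdk version) showing a segment with a user-defined annotation and a sub-segment representing a downstream call; all names are illustrative:

```python
from aws_xray_sdk.core import xray_recorder

# A trace is a collection of segments that share the same trace ID
with xray_recorder.in_segment('checkout-service') as segment:
    segment.put_annotation('customer_tier', 'premium')   # user-defined annotation

    # A subsegment represents a remote call made from the service,
    # e.g. a database call and its result
    with xray_recorder.in_subsegment('orders-db-query') as subsegment:
        subsegment.put_annotation('table', 'orders')
        # ... perform the actual database call here ...
```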

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. A company is facing performance issues with their microservices architecture deployed on AWS. Which service can help them debug and analyze the issue? [CCP]
    1. AWS Inspector
    2. CodeDeploy
    3. X-Ray
    4. AWS Config

AWS DynamoDB Advanced – Certification

AWS DynamoDB Advanced Features

This post covers DynamoDB advanced features

DynamoDB Secondary Indexes

DynamoDB supports Local and Global Secondary Indexes.

Refer to My Blog Post about AWS DynamoDB Secondary Indexes

DynamoDB Cross-region Replication

  • DynamoDB cross-region replication allows identical copies (called replicas) of a DynamoDB table (called master table) to be maintained in one or more AWS regions
  • Writes to the table will be automatically propagated to all replicas
  • Cross-region replication currently supports single master mode. A single master has one master table and one or more replica tables
  • Read replicas are updated asynchronously as DynamoDB acknowledges a write operation as successful once it has been accepted by the master table. The write will then be propagated to each replica with a slight delay.
  • Cross-region replication can be helpful in scenarios
    • Efficient disaster recovery, in case a data center failure occurs.
    • Faster reads, for customers in multiple regions by delivering data faster by reading a DynamoDB table from the closest AWS data center.
    • Easier traffic management, to distribute the read workload across tables and thereby consume less read capacity in the master table.
    • Easy regional migration, by promoting a read replica to master
    • Live data migration, to replicate data and when the tables are in sync, switch the application to write to the destination region
  • Cross-region replication costing depends on
    • Provisioned throughput (Writes and Reads)
    • Storage for the replica tables.
    • Data Transfer across regions
    • Reading data from DynamoDB Streams to keep the tables in sync.
    • Cost of EC2 instances provisioned, depending upon the instance types and region, to host the replication process.
  • NOTE: Before DynamoDB Streams and the out-of-the-box cross-region replication support, cross-region replication on DynamoDB was performed by defining an AWS Data Pipeline job, which used EMR internally to transfer the data

DynamoDB Global Tables

  • DynamoDB Global Tables is a new multi-master, cross-region replication capability of DynamoDB to support data access locality and regional fault tolerance for database workloads.
  • Applications can now perform reads and writes to DynamoDB in AWS regions around the world, with changes in any region propagated to every region where a table is replicated.
  • Global Tables help in building applications to advantage of data locality to reduce overall latency.
  • Global Tables ensure eventual consistency
  • Global Tables replicate data among regions within a single AWS account, and currently do not support cross-account access
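
A hedged boto3 sketch of creating a Global Table; it assumes identically named tables already exist in each listed region with DynamoDB Streams enabled (NEW_AND_OLD_IMAGES view type), and the table name and regions are placeholders:

```python
import boto3

dynamodb = boto3.client('dynamodb', region_name='us-east-1')

# Turns the identical regional tables into one multi-master Global Table
dynamodb.create_global_table(
    GlobalTableName='game-scores',                 # placeholder table name
    ReplicationGroup=[
        {'RegionName': 'us-east-1'},
        {'RegionName': 'eu-west-1'},
        {'RegionName': 'ap-southeast-1'},
    ],
)
```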

DynamoDB Streams

  • DynamoDB Streams provides a time-ordered sequence of item-level changes made to data in a table in the last 24 hours, after which they are erased
  • DynamoDB Streams maintains an ordered sequence of the events per item; however, ordering across items is not maintained
  • Example
    • For e.g., suppose that you have a DynamoDB table tracking high scores for a game and that each item in the table represents an individual player. If you make the following three updates in this order:
      • Update 1: Change Player 1’s high score to 100 points
      • Update 2: Change Player 2’s high score to 50 points
      • Update 3: Change Player 1’s high score to 125 points
    • DynamoDB Streams will maintain the order for the Player 1 score events. However, it would not maintain the order across players, so the position of the Player 2 score event relative to the two Player 1 events is not guaranteed
  • DynamoDB streams can be used for multi-region replication to keep other data stores up-to-date with the latest changes to DynamoDB or to take actions based on the changes made to the table
  • DynamoDB Streams APIs help developers consume updates and receive the item-level data before and after items are changed
  • DynamoDB Streams allows read at up to twice the rate of the provisioned write capacity of the DynamoDB table
  • DynamoDB Streams have to be enabled on a per-table basis
  • DynamoDB Streams is designed so that every update made to the table will be represented exactly once in the stream, with no duplicates
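
A minimal sketch, assuming boto3, of enabling a stream on a table and then reading the item-level change records through the DynamoDB Streams API (the table name is a placeholder, and a real consumer would iterate over all shards rather than just the first):

```python
import boto3

dynamodb = boto3.client('dynamodb', region_name='us-east-1')
streams = boto3.client('dynamodbstreams', region_name='us-east-1')

# Streams are enabled per table; records are retained for 24 hours
resp = dynamodb.update_table(
    TableName='game-scores',                       # placeholder table name
    StreamSpecification={'StreamEnabled': True, 'StreamViewType': 'NEW_AND_OLD_IMAGES'},
)
stream_arn = resp['TableDescription']['LatestStreamArn']

# Read change records from the first shard only (illustrative)
shard_id = streams.describe_stream(StreamArn=stream_arn)['StreamDescription']['Shards'][0]['ShardId']
iterator = streams.get_shard_iterator(
    StreamArn=stream_arn, ShardId=shard_id, ShardIteratorType='TRIM_HORIZON'
)['ShardIterator']

for record in streams.get_records(ShardIterator=iterator)['Records']:
    print(record['eventName'], record['dynamodb'].get('Keys'))
```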

DynamoDB Triggers

  • DynamoDB Triggers (just like database triggers) is a feature which allows execution of custom actions based on item-level updates on a table
  • DynamoDB triggers can be used in scenarios like sending notifications, updating an aggregate table, and connecting DynamoDB tables to other data sources
  • DynamoDB Trigger flow
    • Custom logic for a DynamoDB trigger is stored in an AWS Lambda function as code.
    • A trigger for a given table can be created by associating an AWS Lambda function to the stream (via DynamoDB Streams) on a table.
    • When the table is updated, the updates are published to DynamoDB Streams.
    • In turn, AWS Lambda reads the updates from the associated stream and executes the code in the function.
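
A hedged sketch of the Lambda side of such a trigger: a handler that reacts to the item-level updates delivered via DynamoDB Streams (the aggregation/notification logic is a placeholder):

```python
def handler(event, context):
    # Each record is one item-level change published to the table's stream
    for record in event['Records']:
        action = record['eventName']              # INSERT, MODIFY or REMOVE
        keys = record['dynamodb']['Keys']

        if action in ('INSERT', 'MODIFY'):
            new_image = record['dynamodb'].get('NewImage', {})
            # e.g. send a notification or update an aggregate table here (placeholder)
            print(f'{action} on {keys}: {new_image}')
        elif action == 'REMOVE':
            print(f'Item removed: {keys}')

    return {'processed': len(event['Records'])}
```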

DynamoDB Accelerator DAX

  • DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from milliseconds to microseconds – even at millions of requests per second.
  • DAX does all the heavy lifting required to add in-memory acceleration to the tables, without requiring developers to manage cache invalidation, data population, or cluster management.
  • DAX is fault-tolerant and scalable.
  • DAX cluster has a primary node and zero or more read-replica nodes. Upon a failure for a primary node, DAX will automatically fail over and elect a new primary. For scaling, add or remove read replicas
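
A hedged boto3 sketch of provisioning a small DAX cluster (node type, IAM role, subnet group, and security group are placeholder assumptions); the application would then use the DAX SDK client pointed at the cluster endpoint instead of the regular DynamoDB endpoint:

```python
import boto3

dax = boto3.client('dax', region_name='us-east-1')

# One primary node plus read replicas; DAX handles cache invalidation,
# data population, and cluster management for you
dax.create_cluster(
    ClusterName='demo-dax',
    NodeType='dax.r4.large',                                        # placeholder node type
    ReplicationFactor=3,                                            # 1 primary + 2 read replicas
    IamRoleArn='arn:aws:iam::111122223333:role/DaxToDynamoDB',      # placeholder role
    SubnetGroupName='demo-dax-subnets',                             # placeholder subnet group
    SecurityGroupIds=['sg-0123456789abcdef0'],                      # placeholder security group
)
```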

VPC Endpoints

  • VPC endpoints for DynamoDB improve privacy and security, especially those dealing with sensitive workloads with compliance and audit requirements, by enabling private access to DynamoDB from within a VPC without the need for an internet gateway or NAT gateway.
  • VPC endpoints for DynamoDB support IAM policies to simplify DynamoDB access control, where access can be restricted to a specific VPC endpoint.
  • VPC endpoints can be created only for Amazon DynamoDB tables in the same AWS Region as the VPC
  • DynamoDB Streams cannot be accessed using VPC endpoints for DynamoDB
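
A minimal sketch, assuming boto3, of creating a gateway VPC endpoint for DynamoDB and restricting it to a single table via an endpoint policy (the VPC, route table, account, and table identifiers are placeholders):

```python
import boto3, json

ec2 = boto3.client('ec2', region_name='us-east-1')

# Optional endpoint policy restricting access through this endpoint to one table
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "dynamodb:*",
        "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/game-scores",   # placeholder
    }],
}

# Gateway endpoint: adds a DynamoDB route to the given route tables, so no IGW or NAT is needed
ec2.create_vpc_endpoint(
    VpcEndpointType='Gateway',
    VpcId='vpc-0123456789abcdef0',                     # placeholder VPC
    ServiceName='com.amazonaws.us-east-1.dynamodb',
    RouteTableIds=['rtb-0123456789abcdef0'],           # placeholder route table
    PolicyDocument=json.dumps(endpoint_policy),
)
```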

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. What are the services supported by VPC endpoints, using Gateway endpoint type? Choose 2 answers
    1. Amazon S3
    2. Amazon EFS
    3. Amazon DynamoDB
    4. Amazon Glacier
    5. Amazon SQS
  2. A company has setup an application in AWS that interacts with DynamoDB. DynamoDB is currently responding in milliseconds, but the application response guidelines require it to respond within microseconds. How can the performance of DynamoDB be further improved? [SAA-C01]
    1. Use ElastiCache in front of DynamoDB
    2. Use DynamoDB inbuilt caching
    3. Use DynamoDB Accelerator
    4. Use RDS with ElastiCache instead

AWS Certified Solutions Architect – Associate Feb 2018 Exam Learning Path

The AWS Solutions Architect – Associate Feb 2018 exam is the latest AWS exam and would replace the old CSA-Associate exam. It validates the ability to effectively demonstrate knowledge of how to architect and deploy secure and robust applications on AWS technologies

  • Define a solution using architectural design principles based on customer requirements.
  • Provide implementation guidance based on best practices to the organization throughout the life cycle of the project.

Refer AWS_Solution_Architect_-_Associate_Feb_2018_Exam_Blue_Print

AWS Certified Solutions Architect - Associate February 2018

AWS Solutions Architect – Associate (Feb 2018) Exam Summary

  • AWS has updated the exam concepts from focusing on individual services to building scalable, highly available, cost-effective, performant, resilient and operationally effective architectures
  • Although most of the services covered by the old exam are the same, there are a few new additions like API Gateway, Lambda, ECS, and Aurora
  • The exam surely covers the architecture aspects in depth, so you must be able to visualize the architecture, even draw it out during the exam, just to understand how it would work and how the different services relate.
  • Be sure to cover the following topics
    • Networking
      • Be sure to create VPC from scratch. This is mandatory.
        • Create a VPC and understand what a CIDR is.
        • Create public and private subnets, configure proper routes, security groups, NACLs.
        • Create Bastion for communication with instances
        • Create NAT Gateway or Instances for instances in private subnets to interact with internet
        • Create two tier architecture with application in public and database in private subnets
        • Create three tier architecture with web servers in public, application and database servers in private.
        • Make sure to understand how the communication happens between Internet, Public subnets, Private subnets, NAT, Bastion etc.
      • Understand VPC endpoints and which services they can help interact with
      • Understand difference between NAT Gateway and NAT Instance
      • Understand how NAT high availability can be achieved
      • Understand CloudFront as CDN and the static and dynamic caching it provides, what can be its origin (it can point to on-premises sources)
      • Understand Route 53 for routing, health checks and various routing policies it provides and their use cases mainly for high availability
      • Be sure to cover ELB in depth. AWS has introduced ALB and NLB, and there are a lot of questions on ALB
      • Understand ALB features with its ability for content-based and URL-based routing, with support for dynamic port mapping with ECS
    • Storage
      • Understand various storage options S3, EBS, Instance store, EFS, Glacier and what are the use cases and anti patterns for each
      • Would recommend referring to the Storage Options whitepaper; although a bit dated, 90% of it still holds true
      • Understand various EBS volume types and their use cases in terms of IOPS and throughput. SSD for IOPS and HDD for throughput
      • Understand Burst performance and I/O credits to handle occasional peaks
      • Understand S3 features like different storage classes with lifecycle policies, static website hosting, versioning, Pre-Signed URLs for both upload and download, CORS
      • Understand Glacier as an archival storage with various retrieval patterns
      • Glacier Expedited retrieval now allows object retrieval within mins
      • Understand Storage gateway and its different types
    • Compute
      • Understand EC2 as a whole
      • Understand Auto Scaling and ELB, how they work together to provide High Available and Scalable solution
      • Understand EC2 various purchase types – Reserved, On-demand and Spot and their use cases
      • Understand Reserved purchase types with the introduction of Scheduled and Convertible types
      • Understand Lambda and serverless architecture, its features and use cases. How do you benefit from Lambda?
      • Understand ECS with its ability to deploy containers and micro services architecture
      • Know Elastic Beanstalk at a high level, what it provides and its ability to get an application running quickly
    • Databases
      • Understand relational and NoSQLs data storage options which include RDS, DynamoDB, Aurora and their use cases
      • Aurora has been added to the exam and most of the time the questions refer to Aurora given its abilities for multiple read replicas and replication of data across AZs
      • Understand that S3 is not a storage option for databases
      • Understand RDS features – Read Replicas for scalability, Multi-AZ for High Availability, Automated Backups, underlying volume types
      • Understand DynamoDB with its low latency performance, DAX
      • Understand DynamoDB provisioned throughput for Read/Writes
      • Know ElastiCache use cases, mainly for caching performance
    • Analytics
      • Not much in depth, but understand what the services are and what they can do
      • Understand Redshift as a business intelligence tool
      • Know Kinesis for real time data capture and analytics
      • At least know what AWS Glue does, so you can eliminate it as an answer
    • Security
      • Understand IAM as a whole
      • Focus on IAM role and its use case especially with EC2 instance
      • Understand IAM identity providers and federation and use cases
      • Understand MFA and how you would implement two-factor authentication for your application
      • Understand encryption services
      • Refer Disaster Recovery whitepaper, be sure you know the different recovery types with impact on RTO/RPO.
    • Management Tools
      • Understand CloudWatch monitoring to provide operational transparency
      • Know which EC2 metrics it can track. Remember, it cannot track memory and disk space/swap utilization
      • Understand CloudWatch is extendable with custom metrics
      • Understand CloudTrail for Audit
      • Have a basic understanding of CloudFormation, OpsWorks
    • Integration Tools
      • Understand SQS as message queuing service and SNS as pub/sub notification service
      • Understand SQS features like visibility, long poll vs short poll
      • Focus on SQS as a decoupling service
      • AWS has released SQS FIFO, make sure you know the differences between standard and FIFO

NOTE: I have just marked the topics in line with the AWS Exam Blue Print. So be sure to check the same, as it is updated regularly, and go through Whitepapers, FAQs and re:Invent videos.

AWS Solutions Architect – Associate (Feb 2018) Exam Resources

AWS Cloud Computing Whitepapers

AWS Solutions Architect – Associate Exam Contents

Domain 1: Design Resilient Architectures

  1. Choose reliable/resilient storage.
  2. Determine how to design decoupling mechanisms using AWS services.
  3. Determine how to design a multi-tier architecture solution.
  4. Determine how to design high availability and/or fault tolerant architectures.

Domain 2: Define Performant Architectures

  1. Choose performant storage and databases.
  2. Apply caching to improve performance.
  3. Design solutions for elasticity and scalability.

Domain 3: Specify Secure Applications and Architectures

  1. Determine how to secure application tiers.
  2. Determine how to secure data.
  3. Define the networking infrastructure for a single VPC application.

Domain 4: Design Cost-Optimized Architectures

  1. Determine how to design cost-optimized storage.
  2. Determine how to design cost-optimized compute.

Domain 5: Define Operationally-Excellent Architectures

  1. Choose design features in solutions that enable operational excellence.

AWS ELB Network Load Balancer – Certification

AWS ELB Network Load Balancer

  • Network Load Balancer operates at the connection level (Layer 4), routing connections to targets – EC2 instances, containers and IP addresses based on IP protocol data.
  • Network Load Balancer is suited for load balancing of TCP traffic
  • Network Load Balancer is capable of handling millions of requests per second while maintaining ultra-low latencies.
  • Network Load Balancer is optimized to handle sudden and volatile traffic patterns while using a single static IP address per Availability Zone.
  • NLB is integrated with other AWS services such as Auto Scaling, EC2 Container Service (ECS), and CloudFormation.
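
A minimal sketch, assuming boto3, of creating an internet-facing NLB with an Elastic IP (static IP) per subnet/AZ and a TCP listener; the subnet, allocation, and VPC IDs are placeholders:

```python
import boto3

elbv2 = boto3.client('elbv2', region_name='us-east-1')

# One static IP (here an Elastic IP) is associated per Availability Zone / subnet
nlb = elbv2.create_load_balancer(
    Name='demo-nlb',
    Type='network',
    Scheme='internet-facing',
    SubnetMappings=[
        {'SubnetId': 'subnet-0aaa1111', 'AllocationId': 'eipalloc-0aaa1111'},   # placeholders
        {'SubnetId': 'subnet-0bbb2222', 'AllocationId': 'eipalloc-0bbb2222'},
    ],
)['LoadBalancers'][0]

# Layer 4 (TCP) target group and listener
tg = elbv2.create_target_group(
    Name='demo-tcp-targets', Protocol='TCP', Port=443,
    VpcId='vpc-0123456789abcdef0', TargetType='instance',           # placeholder VPC
)['TargetGroups'][0]

elbv2.create_listener(
    LoadBalancerArn=nlb['LoadBalancerArn'], Protocol='TCP', Port=443,
    DefaultActions=[{'Type': 'forward', 'TargetGroupArn': tg['TargetGroupArn']}],
)
```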

Network Load Balancer Features

Connection-based Load Balancing

  • Allows load balancing of  TCP traffic, routing connections to targets – EC2 instances, microservices and containers, and IP addresses.

High Availability

  • is highly available.
  • accepts incoming traffic from clients and distributes this traffic across the targets within the same Availability Zone.
  • monitors the health of its registered targets and routes the traffic only to healthy targets
  • if a health check fails and an unhealthy target is detected, it stops routing traffic to that target and reroutes traffic to remaining healthy targets.
  • if configured with multiple AZs and if all the targets in a single AZ fail, it routes traffic to healthy targets in the other AZs

High Throughput

  • is designed to handle traffic as it grows and can load balance millions of requests/sec.
  • can also handle sudden volatile traffic patterns.

Low Latency

  • offers extremely low latencies for latency-sensitive applications.

Preserve source IP address

  • preserves client side source IP allowing the back-end to see client IP address

Static IP support

  • automatically provides a static IP per Availability Zone (subnet) that can be used by applications as the front-end IP of the load balancer.

Elastic IP support

  • an Elastic IP per Availability Zone (subnet) can also be assigned, optionally, thereby providing a fixed IP.

Health Checks

  • supports both network and application target health checks.
  • Network-level health check
    • is based on the overall response of the underlying target (instance or a container) to normal traffic.
    • target is marked unavailable if it is slow or unable to respond to new connection requests
  • Application-level health check
    • is based on a specific URL on a given target to test the application health deeper

DNS Fail-over

  • integrates with Route 53
  • Route 53 will direct traffic to load balancer nodes in other AZs, if there are no healthy targets with NLB or if the NLB itself is unhealthy
  • if NLB is unresponsive, Route 53 will remove the unavailable load balancer IP address from service and direct traffic to an alternate Network Load Balancer in another region.

Integration with AWS Services

  • is integrated with other AWS services such as Auto Scaling, EC2 Container Service (ECS), CloudFormation, CodeDeploy, and AWS Config.

Long-lived TCP Connections

  • supports long-lived TCP connections ideal for WebSocket type of applications

Central API Support

  • uses the same API as Application Load Balancer.
  • enables you to work with target groups, health checks, and load balance across multiple ports on the same EC2 instance to support containerized applications.

Robust Monitoring and Auditing

  • integrated with CloudWatch to report Network Load Balancer metrics.
  • CloudWatch provides metrics such as Active Flow count, Healthy Host Count, New Flow Count, Processed bytes, and more.
  • integrated with CloudTrail to track API calls to the NLB

Enhanced Logging

  • use the Flow Logs feature to record all requests sent to the load balancer.
  • Flow Logs capture information about the IP traffic going to and from network interfaces in the VPC
  • Flow log data is stored using CloudWatch Logs

Zonal Isolation

  • is designed for application architectures in a single zone.
  • can be enabled in a single AZ to support architectures that require zonal isolation
  • automatically fails over to other healthy AZs, if something fails in an AZ
  • it's recommended to configure the load balancer and targets in multiple AZs for achieving high availability

Load Balancing using IP addresses as Targets

  • allows load balancing of any application hosted in AWS or on-premises using IP addresses of the application backends as targets.
  • allows load balancing to an application backend hosted on any IP address and any interface on an instance.
  • ability to load balance across AWS and on-premises resources helps in migrate-to-cloud, burst-to-cloud or failover-to-cloud scenarios
  • applications hosted in on-premises locations can be used as targets over a Direct Connect connection and EC2-Classic (using ClassicLink).

Advantages over Classic Load Balancer

  • Ability to handle volatile workloads and scale to millions of requests per second, without the need of pre-warming
  • Support for static IP/Elastic IP addresses for the load balancer
  • Support for registering targets by IP address, including targets outside the VPC (on-premises) for the load balancer.
  • Support for routing requests to multiple applications on a single EC2 instance. Single instance or IP address can be registered with the same target group using multiple ports.
  • Support for containerized applications. Using Dynamic port mapping, ECS can select an unused port when scheduling a task and register the task with a target group using this port.
  • Support for monitoring the health of each service independently, as health checks are defined at the target group level and many CloudWatch metrics are reported at the target group level. Attaching a target group to an Auto Scaling group enables scaling each service dynamically based on demand
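
As a hedged illustration of the IP-address and multi-port target support mentioned above, the sketch below registers a VPC IP (on two ports) and an on-premises IP reachable over Direct Connect with the same target group; all addresses and IDs are placeholders:

```python
import boto3

elbv2 = boto3.client('elbv2', region_name='us-east-1')

tg = elbv2.create_target_group(
    Name='demo-ip-targets', Protocol='TCP', Port=443,
    VpcId='vpc-0123456789abcdef0',                 # placeholder VPC
    TargetType='ip',                               # register targets by IP address
)['TargetGroups'][0]

elbv2.register_targets(
    TargetGroupArn=tg['TargetGroupArn'],
    Targets=[
        {'Id': '10.0.1.15', 'Port': 443},          # instance IP inside the VPC
        {'Id': '10.0.1.15', 'Port': 8443},         # same IP, second application port
        {'Id': '192.168.10.5', 'Port': 443, 'AvailabilityZone': 'all'},   # on-premises IP over Direct Connect
    ],
)
```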

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.

References

ELB_Network_Load_Balancer

AWS Organizations – Certification

AWS Organizations

  • AWS Organizations is an account management service that enables consolidating multiple AWS accounts into an organization that can be created and centrally managed.
  • AWS Organizations includes consolidated billing and account management capabilities that enable you to better meet the budgetary, security, and compliance needs of your business.
  • As an administrator of an organization, you can create new accounts in the organization and invite existing accounts to join it.
  • AWS Organizations enables you to
    • Centrally manage policies across multiple AWS accounts
    • Control access to AWS services
    • Automate AWS account creation and management
    • Consolidate billing across multiple AWS accounts
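
A hedged boto3 sketch of the basic account-management calls run from the master account (the email address, account name, and account ID are placeholders):

```python
import boto3

org = boto3.client('organizations')

# Create the organization with all features (SCPs etc.), not just consolidated billing
org.create_organization(FeatureSet='ALL')

# Programmatically create a new member account ...
org.create_account(Email='dev-team@example.com', AccountName='dev')        # placeholders

# ... or invite an existing AWS account to join the organization
org.invite_account_to_organization(Target={'Id': '111122223333', 'Type': 'ACCOUNT'})
```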

AWS Organizations

AWS Organization Features

Centralized management of all of your AWS accounts

  • Combine existing accounts into an organization, or create new ones within it, enabling them to be managed centrally
  • Policies can be attached to accounts that affect some or all of the accounts

Consolidated billing for all member accounts

  • Consolidated billing is a feature of AWS Organizations.
  • Master account of the organization can be used to consolidate and pay for all member accounts.

Hierarchical grouping of accounts to meet budgetary, security, or compliance needs

  • Accounts can be grouped into organizational units (OUs), and different access policies can be attached to each OU.
  • OUs can also be nested to a depth of five levels, providing flexibility in how you structure your account groups.

Control over AWS services and API actions that each account can access

  • As an administrator of the master account of an organization, you can restrict which AWS services and individual API actions the users and roles in each member account can access
  • Organization permissions overrule account permissions.
  • This restriction even overrides the administrators of member accounts in the organization.
  • When AWS Organizations blocks access to a service or API action for a member account, a user or role in that account can’t access any prohibited service or API action, even if an administrator of a member account explicitly grants such permissions in an IAM policy.

Integration and support for AWS IAM

  • IAM provides granular control over users and roles in individual accounts.
  • AWS Organizations expands that control to account level by giving control over what users and roles in an account or a group of accounts can do
  • User can access only what is allowed by both the AWS Organizations policies and IAM policies.
  • Resulting permissions are the logical intersection of what is allowed by AWS Organizations at the account level, and what permissions are explicitly granted by IAM at the user or role level within that account.
  • If either blocks an operation, the user can’t access that operation.

Integration with other AWS services

  • Select AWS services can be enabled to access accounts in the organization and perform actions on the resources in the accounts.
  • When another service is configured and authorized to work with the organization, AWS Organizations creates an IAM service-linked role for that service in each member account.
  • Service-linked role has predefined IAM permissions that allow the other AWS service to perform specific tasks in the organization and its accounts
  • All accounts in an organization automatically have a service-linked role created, which enables the AWS Organizations service to create the service-linked roles required by AWS services for which you enable trusted access
  • These additional service-linked roles come with policies that enable the specified service to perform only those required tasks

Data replication that is eventually consistent

  • AWS Organizations is eventually consistent.
  • AWS Organizations achieves high availability by replicating data across multiple servers in AWS data centers within its region.
  • If a request to change some data is successful, the change is committed and safely stored.
  • However, the change must then be replicated across the multiple servers.

AWS Organizations Terminology and Concepts

AWS Organizations Concepts

Organization

  • An entity created to consolidate AWS accounts.
  • An organization has one master account along with zero or more member accounts.
  • An organization has the functionality that is determined by the feature set that you enable i.e. All features or Consolidated Billing only

Root

  • Parent container for all the accounts for the organization.
  • Policy applied to the root is applied to all the organizational units (OUs) and accounts in the organization.
  • There can be only one root currently, and AWS Organizations automatically creates it when an organization is created

Organization unit (OU)

  • A container for accounts within a root.
  • An OU also can contain other OUs, enabling hierarchy creation that resembles an upside-down tree, with a root at the top and branches of OUs that reach down, ending in accounts that are the leaves of the tree.
  • A policy attached to one of the nodes in the hierarchy, flows down and affects all branches (OUs) and leaves (accounts) beneath it
  • An OU can have exactly one parent, and currently each account can be a member of exactly one OU.

Account

  • A standard AWS account that contains AWS resources.
  • Each account can be directly in the root, or placed in one of the OUs in the hierarchy.
  • Policy can be attached to an account to apply controls to only that one account.
  • Accounts can be organized in a hierarchical, tree-like structure with a root at the top and organizational units nested under the root.
  • Master account
    • Primary account which creates the organization
    • can create new accounts in the organization, invite existing accounts, remove accounts, manage invitations, apply policies to entities within the organization.
    • has the responsibilities of a payer account and is responsible for paying all charges that are accrued by the member accounts.
  • Member account
    • Rest of the accounts within the organization are member accounts.
    • An account can be a member of only one organization at a time.

Invitation

  • Process of asking another account to join an organization.
  • An invitation can be issued only by the organization’s master account and is extended to either the account ID or the email address that is associated with the invited account.
  • Invited account becomes a member account in the organization, after it accepts the invitation.
  • Invitations can be sent to existing member accounts as well, to approve the change from supporting only consolidated billing feature to supporting all features
  • Invitations work by accounts exchanging handshakes.

Handshake

  • A multi-step process of exchanging information between two parties
  • Primary use in AWS Organizations is to serve as the underlying implementation for invitations.
  • Handshake messages are passed between and responded to by the handshake initiator (master account) and the recipient (member account) in such a way that it ensures that both parties always know what the current status is.

Available feature sets

Consolidated billing

  • provides shared billing functionality

All features

  • includes all the functionality of consolidated billing,
  • includes advanced features that give more control over accounts in the organization.
  • allows master account to have full control over what member accounts can do
  • master account can apply SCPs to restrict the services and actions that users (including the root user) and roles in an account can access, and it can prevent member accounts from leaving the organization.

Service control policy (SCP)

  • Service control policy specifies the services and actions that users and roles can use in the accounts that the SCP affects.
  • SCPs are similar to IAM permission policies except that they don’t grant any permissions.
  • SCPs are filters that allow only the specified services and actions to be used in affected accounts.
  • SCPs override IAM permission policy. So even if a user is granted full administrator permissions with an IAM permission policy, any access that is not explicitly allowed or that is explicitly denied by the SCPs affecting that account is blocked.
  • For e.g., if you assign an SCP that allows only database service access to your “database” account, then any user, group, or role in that account is denied access to any other service’s operations.
  • SCP can be attached to
    • A root, which affects all accounts in the organization
    • An OU, which affects all accounts in that OU and all accounts in any OUs in that OU subtree
    • An individual account
  • Master account of the organization is not affected by any SCPs that are attached either to it or to any root or OU the master account might be in.
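
A minimal sketch, assuming boto3, of creating an SCP and attaching it to an OU; the policy content and the OU ID are placeholder assumptions:

```python
import boto3, json

org = boto3.client('organizations')

# Whitelist-style SCP: only DynamoDB and RDS actions remain usable in affected accounts
scp = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": ["dynamodb:*", "rds:*"], "Resource": "*"}],
}

policy = org.create_policy(
    Name='database-only',
    Description='Allow only database services',
    Type='SERVICE_CONTROL_POLICY',
    Content=json.dumps(scp),
)['Policy']['PolicySummary']

# Attaching to an OU affects every account in that OU subtree
org.attach_policy(PolicyId=policy['Id'], TargetId='ou-examplerootid111-exampleouid111')   # placeholder OU ID
```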

Whitelisting vs. blacklisting

Whitelisting and blacklisting are complementary techniques used to apply SCPs to filter the permissions available to accounts.

Whitelisting

  • Explicitly specify the access that is allowed.
  • All other access is implicitly blocked.
  • By default, all permissions are whitelisted.
  • AWS Organizations attaches an AWS managed policy called FullAWSAccess to all roots, OUs, and accounts, which ensures the organization can be built out without anything being blocked initially.
  • For restricting permissions, replace the FullAWSAccess policy with one that allows only the more limited, desired set of permissions.
  • Users and roles in the affected accounts can then exercise only that level of access, even if their IAM policies allow all actions.
  • If you replace the default policy on the root, all accounts in the organization are affected by the restrictions.
  • Permissions removed at the root can't be added back at a lower level in the hierarchy, because an SCP never grants permissions; it only filters them.

Blacklisting

  • Default behavior of AWS Organizations.
  • Explicitly specify the access that is not allowed.
  • Explicit deny of a service action overrides any allow of that action.
  • All other permissions are allowed unless explicitly blocked
  • By default, AWS Organizations attaches an AWS managed policy called FullAWSAccess to all roots, OUs, and accounts. This allows any account to access any service or operation with no AWS Organizations–imposed restrictions.
  • With blacklisting, additional policies are attached that explicitly deny access to the unwanted services and actions
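
For the blacklisting approach, a hedged example of the shape of a deny-only SCP that would be attached alongside the default FullAWSAccess policy (the blocked actions are purely illustrative):

```python
import json

# Explicit deny overrides any allow; everything else stays permitted via FullAWSAccess
deny_scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["redshift:*", "dynamodb:DeleteTable"],   # illustrative blocked actions
        "Resource": "*",
    }],
}

print(json.dumps(deny_scp, indent=2))
```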

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.
  1. An organization that is currently using consolidated billing has recently acquired another company that already has a number of AWS accounts. How could an Administrator ensure that all AWS accounts, from both the existing company and the acquired company, are billed to a single account?
    1. Merge the two companies’ AWS accounts by going to the AWS console and selecting the “Merge accounts” option.
    2. Invite the acquired company’s AWS account to join the existing company’s organization using AWS Organizations.
    3. Migrate all AWS resources from the acquired company’s AWS account to the master payer account of the existing company.
    4. Create a new AWS account and set it up as the master payer. Move the AWS resources from both the existing and acquired companies’ AWS accounts to the new account.
  2. Which of the following are the benefits of AWS Organizations? Choose the 2 correct answers:
    1. Centrally manage access policies across multiple AWS accounts.
    2. Automate AWS account creation and management.
    3. Analyze costs across multiple AWS accounts.
    4. Provide technical help (by AWS) for issues in your AWS account.
  3. A company has several departments with separate AWS accounts. Which feature would allow the company to enable consolidated billing?
    1. AWS Inspector
    2. AWS Shield
    3. AWS Organizations
    4. AWS Lightsail


AWS Auto Scaling Lifecycle – Certification

Auto Scaling Lifecycle

  • Instances launched through an Auto Scaling group have a different lifecycle than that of other EC2 instances
  • Auto Scaling lifecycle starts when the Auto Scaling group launches an instance and puts it into service.
  • Auto Scaling lifecycle ends when the instance is terminated either by the user, or when the Auto Scaling group takes it out of service and terminates it
  • AWS charges for the instances as soon as they are launched, including the time they are not yet in the InService state

Auto Scaling Lifecycle Transition

(Figure: Auto Scaling Group Lifecycle diagram)

Auto Scaling Lifecycle Hooks

  • Auto Scaling Lifecycle hooks enable you to perform custom actions by pausing instances as an Auto Scaling group launches or terminates them
  • Each Auto Scaling group can have multiple lifecycle hooks. However, there is a limit on the number of hooks per Auto Scaling group
  • Auto Scaling scale out event flow
    • Instances start in the Pending state
    • If an autoscaling:EC2_INSTANCE_LAUNCHING lifecycle hook is added, the state is moved to Pending:Wait
    • After the lifecycle action is completed, the instances enter the Pending:Proceed state
    • When the instances are fully configured, they are attached to the Auto Scaling group and moved to the InService state
  • Auto Scaling scale in event flow
    • Instances are detached from the Auto Scaling group and enter the Terminating state.
    • If an autoscaling:EC2_INSTANCE_TERMINATING lifecycle hook is added, the state is moved to Terminating:Wait
    • After the lifecycle action is completed, the instances enter the Terminating:Proceed state.
    • When the instances are fully terminated, they enter the Terminated state.
  • During the scale out and scale in events, instances are put into a wait state (Pending:Wait or Terminating:Wait) and remain paused until either a continue action occurs or the timeout period ends.
  • By default, an instance remains in a wait state for one hour, which can be extended by restarting the timeout period by recording a heartbeat. If the task finishes before the timeout period ends, the lifecycle action can be marked as complete and the launch or termination process continues.
  • After the wait period the Auto Scaling group continues the launch or terminate process (Pending:Proceed or Terminating:Proceed)
  • Custom actions can be implemented using
    • A CloudWatch Events target to invoke a Lambda function when a lifecycle action occurs. The event contains information about the instance that is launching or terminating, and a token that can be used to control the lifecycle action.
    • A notification target (CloudWatch Events, SNS, SQS) for the lifecycle hook, which receives the message from EC2 Auto Scaling. The message contains information about the instance that is launching or terminating, and a token that can be used to control the lifecycle action.
    • A script that runs on the instance as the instance starts. The script can control the lifecycle action using the ID of the instance on which it runs.
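
A minimal boto3 sketch of the scale-out flow described above: a launch lifecycle hook pauses new instances in Pending:Wait, and the bootstrap script (or a Lambda target) signals CONTINUE once configuration finishes. The group name, hook name, SNS topic, role ARN, and instance ID are hypothetical placeholders.

```python
# Hedged sketch: create a launch lifecycle hook, then complete the action after bootstrapping.
import boto3

autoscaling = boto3.client("autoscaling")

# 1. New instances now pause in Pending:Wait for up to 10 minutes
autoscaling.put_lifecycle_hook(
    LifecycleHookName="bootstrap-hook",                       # hypothetical
    AutoScalingGroupName="my-asg",                            # hypothetical
    LifecycleTransition="autoscaling:EC2_INSTANCE_LAUNCHING",
    NotificationTargetARN="arn:aws:sns:us-east-1:123456789012:asg-hooks",
    RoleARN="arn:aws:iam::123456789012:role/asg-notification-role",
    HeartbeatTimeout=600,
    DefaultResult="ABANDON",  # terminate the instance if nothing responds in time
)

# 2. From the bootstrap script or a Lambda target, signal that the custom action finished
autoscaling.complete_lifecycle_action(
    LifecycleHookName="bootstrap-hook",
    AutoScalingGroupName="my-asg",
    InstanceId="i-0123456789abcdef0",   # the paused instance
    LifecycleActionResult="CONTINUE",   # moves it from Pending:Wait to Pending:Proceed
)
```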

Auto Scaling Lifecycle Hooks Considerations

  • Keeping Instances in a Wait State
    • Instances remain in a wait state for a finite period of time. Default being 1 hour (3600 seconds) with max being 48 hours or 100 times the heartbeat timeout, whichever is smaller.
    • Time can be adjusted using (see the sketch after this list)
      • the complete-lifecycle-action (CompleteLifecycleAction) command, to continue to the next state if the action finishes before the timeout period ends
      • the put-lifecycle-hook command’s --heartbeat-timeout parameter, to set the heartbeat timeout for the lifecycle hook during its creation
      • the record-lifecycle-action-heartbeat (RecordLifecycleActionHeartbeat) command, to restart the timeout period by recording a heartbeat
  • Cooldowns and Custom Actions
    • Cooldown period helps ensure that the Auto Scaling group does not launch or terminate more instances than needed
    • Cooldown period starts when the instance enters the InService state. Any suspended scaling actions resume after the cooldown period expires
  • Health Check Grace Period
    • Health check grace period does not start until the lifecycle hook completes and the instance enters the InService state
  • Lifecycle Action Result
    • Result of lifecycle hook is either ABANDON or CONTINUE
    • If the instance is launching,
      • CONTINUE indicates a successful action, and the instance can be put into service.
      • ABANDON indicates the custom actions were unsuccessful, and that the instance can be terminated.
    • If the instance is terminating,
      • ABANDON and CONTINUE allow the instance to terminate.
      • However, ABANDON stops any remaining actions from other lifecycle hooks, while CONTINUE allows them to complete
  • Spot Instances
    • Lifecycle hooks can be used with Spot Instances. However, a lifecycle hook does not prevent an instance from terminating due to a change in the Spot Price, which can happen at any time
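
As referenced in the wait-state considerations above, here is a hedged boto3 sketch of keeping an instance in the wait state by recording heartbeats and then completing the lifecycle action; the hook name, group name, and the long_task helper are hypothetical.

```python
# Hedged sketch: restart the lifecycle hook timeout with heartbeats while a long task runs.
import time
import boto3

autoscaling = boto3.client("autoscaling")

def bootstrap_with_heartbeats(instance_id, long_task, heartbeat_interval=240):
    """Run long_task() in chunks; record a heartbeat between chunks so the timeout restarts."""
    done = False
    while not done:
        done = long_task()  # hypothetical unit of work; returns True when finished
        if not done:
            autoscaling.record_lifecycle_action_heartbeat(
                LifecycleHookName="bootstrap-hook",   # hypothetical
                AutoScalingGroupName="my-asg",        # hypothetical
                InstanceId=instance_id,
            )
            time.sleep(heartbeat_interval)
    # Finished before the (restarted) timeout ran out: continue the launch
    autoscaling.complete_lifecycle_action(
        LifecycleHookName="bootstrap-hook",
        AutoScalingGroupName="my-asg",
        InstanceId=instance_id,
        LifecycleActionResult="CONTINUE",
    )
```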

Enter and Exit Standby

  • An instance in the InService state can be moved to the Standby state.
  • Standby state enables you to remove the instance from service, troubleshoot or make changes to it, and then put it back into service.
  • Instances in a Standby state continue to be managed by the Auto Scaling group. However, they are not an active part of the application until they are put back into service.
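
A minimal boto3 sketch of the Standby workflow; the group name and instance ID are hypothetical placeholders.

```python
# Hedged sketch: move an InService instance to Standby, work on it, then put it back.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.enter_standby(
    AutoScalingGroupName="my-asg",
    InstanceIds=["i-0123456789abcdef0"],
    ShouldDecrementDesiredCapacity=True,  # avoid launching a replacement while troubleshooting
)

# ... troubleshoot or update the instance here ...

autoscaling.exit_standby(
    AutoScalingGroupName="my-asg",
    InstanceIds=["i-0123456789abcdef0"],
)
```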

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Your application is running on EC2 in an Auto Scaling group. Bootstrapping is taking 20 minutes to complete. You find out that instances are shown as InService although the bootstrapping has not completed. How can you make sure that new instances are not added until the bootstrapping has finished. Choose the correct answer:
    1. Create a CloudWatch alarm with an SNS topic to send alarms to your DevOps engineer.
    2. Create a lifecycle hook to keep the instance in pending:wait state until the bootstrapping has finished and then put the instance in pending:proceed state.
    3. Increase the number of instances in your Auto Scaling group.
    4. Create a lifecycle hook to keep the instance in standby state until the bootstrapping has finished and then put the instance in pending:proceed state.
  2. When a scale out event occurs, the Auto Scaling group launches the required number of EC2 instances using its assigned launch configuration. What instance state do these instances start in? Choose the correct answer:
    1. pending:wait
    2. InService
    3. Pending
    4. Terminating
  3. With AWS Auto Scaling, once we apply a hook and the action is complete or the default wait state timeout runs out, the state changes to what, depending on which hook we have applied and what the instance is doing? Select two. Choose the 2 correct answers:
    1. pending:proceed
    2. pending:wait
    3. terminating:wait
    4. terminating:proceed
  4. For AWS Auto Scaling, what is the first transition state an existing instance enters after leaving steady state in Standby mode?
    1. Detaching
    2. Terminating:Wait
    3. Pending (You can put any instance that is in an InService state into a Standby state. This enables you to remove the instance from service, troubleshoot or make changes to it, and then put it back into service. Instances in a Standby state continue to be managed by the Auto Scaling group. However, they are not an active part of your application until you put them back into service. Refer link)
    4. EnteringStandby
  5. For AWS Auto Scaling, what is the first transition state an instance enters after leaving steady state when scaling in due to health check failure or decreased load?
    1. Terminating (When Auto Scaling responds to a scale in event, it terminates one or more instances. These instances are detached from the Auto Scaling group and enter the Terminating state. Refer link)
    2. Detaching
    3. Terminating:Wait
    4. EnteringStandby

References

AutoScalingGroupLifecycle

 

AWS CloudWatch Logs – Certification

AWS CloudWatch Logs

  • CloudWatch Logs can be used to monitor, store, and access log files from EC2 instances, CloudTrail, Route 53, and other sources
  • CloudWatch Logs uses the log data for monitoring; so, no code changes are required
  • CloudWatch Logs requires the CloudWatch Logs agent to be installed on the EC2 instances and on-premises servers.
  • CloudWatch Logs agent makes it easy to quickly send both rotated and non-rotated log data off of a host and into the log service.
  • A VPC endpoint can be configured to keep traffic between the VPC and CloudWatch Logs from leaving the Amazon network. It doesn’t require an IGW, NAT, VPN connection, or Direct Connect connection
  • CloudWatch Logs allows exporting log data from the log groups to an S3 bucket, which can then be used for custom processing and analysis, or to load onto other systems.
  • Log data is encrypted while in transit and while it is at rest
  • Log data can be encrypted using an AWS KMS customer master key (CMK).
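
As a rough illustration of the encryption and export capabilities above, the boto3 sketch below creates a KMS-encrypted log group and exports the last 24 hours of its log data to an S3 bucket; the log group name, KMS key ARN, and bucket are hypothetical, and the bucket policy must allow CloudWatch Logs to write to it.

```python
# Hedged sketch: KMS-encrypted log group plus an export task to S3.
import time
import boto3

logs = boto3.client("logs")

logs.create_log_group(
    logGroupName="/myapp/production",   # hypothetical
    kmsKeyId="arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",
)

# Export the last 24 hours of log events for custom processing and analysis
now_ms = int(time.time() * 1000)
logs.create_export_task(
    taskName="daily-export",
    logGroupName="/myapp/production",
    fromTime=now_ms - 24 * 60 * 60 * 1000,
    to=now_ms,
    destination="my-log-archive-bucket",   # hypothetical bucket allowing CloudWatch Logs writes
    destinationPrefix="myapp/production",
)
```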

Required Mainly for SysOps Associate & DevOps Professional Exam

CloudWatch Logs Concepts

Log Events

  • A log event is a record of some activity recorded by the application or resource being monitored.
  • Log event record contains two properties: the timestamp of when the event occurred, and the raw event message

Log Streams

  • A log stream is a sequence of log events that share the same source, e.g. log events from an Apache access log on a specific host.

Log Groups

  • Log groups define groups of log streams that share the same retention, monitoring, and access control settings for e.g. Apache access logs from each host grouped through log streams into a single log group
  • Each log stream has to belong to one log group
  • There is no limit on the number of log streams that can belong to one log group.

Metric Filters

  • Metric filters can be used to extract metric observations from ingested events and transform them to data points in a CloudWatch metric.
  • Metric filters are assigned to log groups, and all of the filters assigned to a log group are applied to their log streams.
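
A hedged boto3 sketch of a metric filter that counts HTTP 500 responses in an Apache-style access log, plus an alarm on the resulting metric; the log group, filter, metric, and SNS topic names are all hypothetical.

```python
# Hedged sketch: metric filter for 500s in a space-delimited access log, with an alarm.
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

logs.put_metric_filter(
    logGroupName="/myapp/access-logs",   # hypothetical
    filterName="http-500-errors",
    filterPattern="[ip, user, username, timestamp, request, status_code=500, bytes]",
    metricTransformations=[{
        "metricName": "Http500Count",
        "metricNamespace": "MyApp",
        "metricValue": "1",     # each matching log event counts as 1
    }],
)

# Notify an SNS topic when more than 10 such errors occur within 5 minutes
cloudwatch.put_metric_alarm(
    AlarmName="myapp-http-500",
    Namespace="MyApp",
    MetricName="Http500Count",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=10,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:oncall-alerts"],  # hypothetical topic
)
```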

Retention Settings

  • Retention settings can be used to specify how long log events are kept in CloudWatch Logs.
  • Expired log events get deleted automatically.
  • Retention settings are assigned to log groups, and the retention assigned to a log group is applied to their log streams.
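
For example, a single boto3 call sets a 30-day retention on a (hypothetical) log group; events older than that are then deleted automatically.

```python
# Hedged sketch: retention settings are applied per log group.
import boto3

boto3.client("logs").put_retention_policy(
    logGroupName="/myapp/production",   # hypothetical
    retentionInDays=30,
)
```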

CloudWatch Logs Use cases

Monitor Logs from EC2 Instances in Real-time

  • can help monitor applications and systems using log data
  • can help track the number of errors, e.g. 404s, 500s, or even specific literal terms like “NullReferenceException”, occurring in the applications, which can then be matched against a threshold to send a notification

Monitor AWS CloudTrail Logged Events

  • can be used to monitor particular API activity as captured by CloudTrail by creating alarms in CloudWatch and receive notifications

Archive Log Data

  • can help store the log data in highly durable storage, as an alternative to S3
  • log retention setting can be modified, so that any log events older than this setting are automatically deleted.

Log Route 53 DNS Queries

  • can help log information about the DNS queries that Route 53 receives.

Real-time Processing of Log Data with Subscriptions

  • Subscriptions can help get access to a real-time feed of log events from CloudWatch Logs and have it delivered to other services such as a Kinesis stream, a Kinesis Data Firehose stream, or AWS Lambda for custom processing, analysis, or loading to other systems
  • A subscription filter defines the filter pattern to use for filtering which log events get delivered to the AWS resource, as well as information about where to send matching log events to.
  • A CloudWatch Logs log group can also be configured to stream data to an Amazon Elasticsearch Service cluster in near real-time
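
A minimal boto3 sketch of a subscription filter that delivers matching log events to a Lambda function in near real-time; the log group, filter name, and function ARN are hypothetical, and the function must separately grant CloudWatch Logs permission to invoke it.

```python
# Hedged sketch: subscribe a Lambda function to a log group for real-time processing.
import boto3

logs = boto3.client("logs")

logs.put_subscription_filter(
    logGroupName="/myapp/production",   # hypothetical
    filterName="errors-to-lambda",
    filterPattern="ERROR",              # deliver only events containing ERROR
    destinationArn="arn:aws:lambda:us-east-1:123456789012:function:process-log-events",
)
```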

Searching and Filtering

  • CloudWatch Logs allows searching and filtering the log data by creating one or more metric filters.
  • Metric filters define the terms and patterns to look for in log data as it is sent to CloudWatch Logs.
  • CloudWatch Logs uses these metric filters to turn log data into numerical CloudWatch metrics that can be graphed or have an alarm set on them.

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated everyday and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep up the pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Once we have our logs in CloudWatch, we can do a number of things such as: Choose 3. Choose the 3 correct answers:[CDOP]
    1. Send the log data to AWS Lambda for custom processing or to load into other systems
    2. Stream the log data to Amazon Kinesis
    3. Stream the log data into Amazon Elasticsearch in near real-time with CloudWatch Logs subscriptions.
    4. Record API calls for your AWS account and delivers log files containing API calls to your Amazon S3 bucket
  2. You have decided to set the threshold for errors on your application to a certain number and once that threshold is reached you need to alert the Senior DevOps engineer. What is the best way to do this? Choose 3. Choose the 3 correct answers: [CDOP]
    1. Set the threshold your application can tolerate in a CloudWatch Logs group and link a CloudWatch alarm on that threshold.
    2. Use CloudWatch Logs agent to send log data from the app to CloudWatch Logs from Amazon EC2 instances
    3. Pipe data from EC2 to the application logs using AWS Data Pipeline and CloudWatch
    4. Once a CloudWatch alarm is triggered, use SNS to notify the Senior DevOps Engineer.
  3. You are hired as the new head of operations for a SaaS company. Your CTO has asked you to make debugging any part of your entire operation simpler and as fast as possible. She complains that she has no idea what is going on in the complex, service-oriented architecture, because the developers just log to disk, and it’s very hard to find errors in logs on so many services. How can you best meet this requirement and satisfy your CTO? [CDOP]
    1. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the <code>PutBucket</code> event and publish events to AWS Lambda. Use the Lambda to analyze logs as soon as they come in and flag issues. (is not fast in search and introduces delay)
    2. Begin using CloudWatch Logs on every service. Stream all Log Groups into S3 objects. Use AWS EMR cluster jobs to perform adhoc MapReduce analysis and write new queries when needed. (is not fast in search and introduces delay)
    3. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the <code>PutBucket</code> event and publish events to AWS Kinesis. Use Apache Spark on AWS EMR to perform at-scale stream processing queries on the log chunks and flag issues. (is not fast in search and introduces delay)
    4. Begin using CloudWatch Logs on every service. Stream all Log Groups into an AWS Elasticsearch Service Domain running Kibana 4 and perform log analysis on a search cluster. (ELK – Elasticsearch, Kibana stack is designed specifically for real-time, ad-hoc log analysis and aggregation)
  4. You use Amazon CloudWatch as your primary monitoring system for your web application. After a recent software deployment, your users are getting Intermittent 500 Internal Server Errors when using the web application. You want to create a CloudWatch alarm, and notify an on-call engineer when these occur. How can you accomplish this using AWS services? (Choose three.) [CDOP]
    1. Deploy your web application as an AWS Elastic Beanstalk application. Use the default Elastic Beanstalk CloudWatch metrics to capture 500 Internal Server Errors. Set a CloudWatch alarm on that metric.
    2. Install a CloudWatch Logs Agent on your servers to stream web application logs to CloudWatch.
    3. Use Amazon Simple Email Service to notify an on-call engineer when a CloudWatch alarm is triggered.
    4. Create a CloudWatch Logs group and define metric filters that capture 500 Internal Server Errors. Set a CloudWatch alarm on that metric.
    5. Use Amazon Simple Notification Service to notify an on-call engineer when a CloudWatch alarm is triggered.
    6. Use AWS Data Pipeline to stream web application logs from your servers to CloudWatch.

References

AWS_CloudWatch_Logs_User_Guide