AWS Certified SysOps Administrator – Associate (SOA-C02) Exam Learning Path

AWS SysOps Administrator – Associate SOA-C02 Certification


  • I recently recertified for the AWS Certified SysOps Administrator – Associate (SOA-C02) exam.
  • SOA-C02 is the updated version of the SOA-C01 exam and was the first AWS exam to include hands-on labs.

NOTE: As of March 28, 2023, the AWS Certified SysOps Administrator – Associate exam will not include exam labs until further notice. This removal of exam labs is temporary while we evaluate the exam labs and make improvements to provide an optimal candidate experience.

AWS Certified SysOps Administrator – Associate (SOA-C02) Exam Content

  • AWS SysOps Administrator – Associate SOA-C02 is intended for system administrators in a cloud operations role.
  • SOA-C02 validates a candidate’s ability to deploy, manage, and operate workloads on AWS, which includes the ability to:
    • Deploy, manage, and operate workloads on AWS
    • Support and maintain AWS workloads according to the AWS Well-Architected Framework
    • Perform operations by using the AWS Management Console and the AWS CLI
    • Implement security controls to meet compliance requirements
    • Monitor, log, and troubleshoot systems
    • Apply networking concepts (for example, DNS, TCP/IP, firewalls)
    • Implement architectural requirements (for example, high availability, performance, capacity)
    • Perform business continuity and disaster recovery procedures
    • Identify, classify, and remediate incidents

Refer to the AWS Certified SysOps Administrator – Associate (SOA-C02) Exam Guide

SOA-C02 Exam Domains

AWS Certified SysOps Administrator – Associate (SOA-C02) Exam Summary

  • SOA-C02 was the first AWS exam to include two sections:
    • Objective questions
    • Hands-on labs
  • With Labs
    • SOA-C02 Exam consists of around 50 objective-type questions and 3 Hands-on labs to be answered in 190 minutes.
    • Labs are performed in a separate instance. Copy-paste works, so make sure you copy the exact names on resource creation.
    • Labs are pretty easy if you have worked on AWS.
    • Plan to leave 20 minutes to complete each exam lab.
    • NOTE: Once you complete a section and click Next, you cannot go back to it. The same applies to the labs – once a lab is completed, you cannot return to it.
    • Practice the Sample Lab provided when you book the exam, which would give you a feel of how the hands-on exam would actually be.
  • Without Labs
    • SOA-C02 exam consists of 65 questions in 130 minutes, and the time is more than sufficient if you are well-prepared.
  • SOA-C02 exam includes two types of questions, multiple-choice and multiple-response.
  • SOA-C02 has a scaled score between 100 and 1,000. The scaled score needed to pass the exam is 720.
  • Associate exams currently cost $150 + tax.
  • You can get an additional 30 minutes if English is your second language by requesting Exam Accommodations. It might not be needed for Associate exams but is helpful for Professional and Specialty ones.
  • AWS exams can be taken either at a testing center or online. I prefer to take them online as it provides a lot of flexibility. Just make sure you have a proper place to take the exam with no disturbance and nothing around you.
  • Also, if you are taking an AWS online exam for the first time, try to join at least 30 minutes before the scheduled time, as I have had long wait times with both PSI and Pearson.

AWS Certified SysOps Administrator – Associate (SOA-C02) Exam Resources

AWS Certified SysOps Administrator – Associate (SOA-C02) Exam Topics

SOA-C02 mainly focuses on SysOps and DevOps tools in AWS and the ability to deploy, manage, operate, and automate workloads on AWS.

Management & Governance Tools

  • CloudFormation
    • provides an easy way to create and manage a collection of related AWS resources, provision and update them in an orderly and predictable fashion.
    • CloudFormation Concepts cover
      • Templates act as a blueprint for provisioning of AWS resources
      • Stacks are collections of related resources managed as a single unit that can be created, updated, and deleted together.
      • Change Sets present a summary or preview of the proposed changes that CloudFormation will make when a stack is updated.
      • Nested stacks are stacks created as part of other stacks.
    • CloudFormation template anatomy consists of resources, parameters, outputs, and mappings.
    • CloudFormation supports multiple features
      • Drift detection enables you to detect whether a stack’s actual configuration differs, or has drifted, from its expected configuration.
      • Termination protection helps prevent a stack from being accidentally deleted.
      • Stack policy can prevent stack resources from being unintentionally updated or deleted during a stack update.
      • StackSets help create, update, or delete stacks across multiple accounts and Regions with a single operation.
      • Helper scripts with creation policies can help wait for the completion of events before provisioning or marking resources complete.
      • DependsOn attribute can specify the resource creation order and ensure the creation of a specific resource follows another.
      • Update policy supports rolling and replacing updates with AutoScaling.
      • Deletion policies to help retain or backup resources during stack deletion.
      • Custom resources can be configured for use cases not natively supported, e.g. retrieving AMI IDs or interacting with external services.
    • Understand CloudFormation Best Practices esp. Nested Stacks and logical grouping
  • Elastic Beanstalk helps to quickly deploy and manage applications in the AWS Cloud without having to worry about the infrastructure that runs those applications. 
  • OpsWorks is a configuration management service that helps to configure and operate applications in a cloud enterprise by using Chef.
  • Understand CloudFormation vs Elastic Beanstalk vs OpsWorks
  • AWS Organizations
    • Difference between Service Control Policies and IAM Policies
    • SCPs define the maximum permissions that users in member accounts can have; however, users still need to be explicitly granted permissions through IAM policies.
    • Consolidated billing enables consolidating payments from multiple AWS accounts and includes combined usage and volume discounts including sharing of Reserved Instances across accounts.
  • Systems Manager is the operations hub and provides various services like parameter store, patch manager
    • Parameter Store provides secure, scalable, centralized, hierarchical storage for configuration data and secret management. Does not support secrets rotation. Use Secrets Manager instead
    • Session Manager provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys.
    • Patch Manager helps automate the process of patching managed instances with both security-related and other types of updates.
  • CloudWatch
    • collects monitoring and operational data in the form of logs, metrics, and events, and visualizes it.
      • EC2 default metrics cover CPU, network, disk I/O, and status checks, but do not capture metrics like memory, disk swap, disk space utilization, etc.
      • CloudWatch unified agent can be used to gather custom metrics like memory, disk swap, disk storage, etc.
      • CloudWatch Alarm actions can be configured to perform actions based on various metrics for e.g. CPU below 5%
      • CloudWatch alarm can monitor the StatusCheckFailed_System status of an EC2 instance and automatically recover the instance if it becomes impaired due to an underlying hardware failure or a problem that requires AWS involvement to repair (see the sketch at the end of this section).
      • Know ELB monitoring
        • Load Balancer metrics SurgeQueueLength and SpilloverCount
        • HealthyHostCount, UnHealthyHostCount determines the number of healthy and unhealthy instances registered with the load balancer.
        • Reasons for 4XX and 5XX errors
    • CloudWatch logs can be used to monitor, store, and access log files from EC2 instances, CloudTrail, Route 53, and other sources. You can create metric filters over the logs.
    • CloudWatch Subscription Filters can be used to send logs to Kinesis Data Streams, Lambda, or Kinesis Data Firehose.
    • EventBridge (CloudWatch Events) is a serverless event bus service that makes it easy to connect applications with data from a variety of sources.
    • EventBridge or CloudWatch events can be used as a trigger for periodically scheduled events.
    • CloudWatch unified agent helps collect metrics and logs from EC2 instances and on-premises servers and push them to CloudWatch.
  • CloudTrail for audit and governance
    • With Organizations, the trail can be configured to log CloudTrail from all accounts to a central account.
    • CloudTrail log file integrity validation can be used to check whether a log file was modified, deleted, or unchanged after being delivered.
  • AWS Config is a fully managed service that provides AWS resource inventory, configuration history, and configuration change notifications to enable security, compliance, and governance.
    • supports managed as well as custom rules that can be evaluated on periodic basis or as the event occurs for compliance and trigger automatic remediation
    • Conformance pack is a collection of AWS Config rules and remediation actions that can be easily deployed as a single entity in an account and a Region or across an organization in AWS Organizations.
  • Control Tower
    • to setup, govern, and secure a multi-account environment
    • strongly recommended guardrails cover EBS encryption
  • Service Catalog
    • allows organizations to create and manage catalogs of IT services that are approved for use on AWS, and lets end users provision them with minimal permissions.
  • Trusted Advisor provides recommendations that help follow AWS best practices covering security, performance, cost, fault tolerance & service limits.
  • AWS Health Dashboard is the single place to learn about the availability and operations of AWS services.
  • Cost allocation tags can be used to differentiate resource costs and analyzed using Cost Explorer or on a Cost Allocation report.
  • Understand how to setup Billing Alerts using CloudWatch
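
The CloudWatch-alarm-based EC2 auto-recovery mentioned above is a common exam scenario. Below is a minimal sketch of such an alarm, written in Terraform HCL (the tool covered later in this post) rather than the console or CLI; the instance ID and Region are placeholders, not values from the exam guide.

```hcl
# Recover an EC2 instance when the system status check fails.
resource "aws_cloudwatch_metric_alarm" "ec2_auto_recover" {
  alarm_name          = "ec2-system-status-check-recover"
  namespace           = "AWS/EC2"
  metric_name         = "StatusCheckFailed_System"
  statistic           = "Maximum"
  comparison_operator = "GreaterThanThreshold"
  threshold           = 0
  period              = 60
  evaluation_periods  = 2

  dimensions = {
    InstanceId = "i-0123456789abcdef0" # placeholder
  }

  # The EC2 recover action moves the instance to healthy hardware while
  # keeping the instance ID, private IPs, Elastic IPs, and instance metadata.
  alarm_actions = ["arn:aws:automate:us-east-1:ec2:recover"]
}
```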

Networking & Content Delivery

  • VPC – Virtual Private Cloud is a virtual network in AWS
    • Understand Public Subnet (has access to the Internet) vs Private Subnet (no access to the Internet)
    • Route table defines rules, termed as routes, which determine where network traffic from the subnet would be routed
    • Internet Gateway enables access to the internet
    • Bastion host – allow access to instances in the private subnet without directly exposing them to the internet.
    • NAT helps route traffic from private subnets to the internet (see the VPC sketch at the end of this section)
    • NAT instance vs NAT Gateway
    • Virtual Private Gateway – Connectivity between on-premises and VPC
    • Egress-Only Internet Gateway – relevant to IPv6 only to allow egress traffic from private subnet to internet, without allowing ingress traffic
    • VPC Flow Logs enables you to capture information about the IP traffic going to and from network interfaces in the VPC and can help in monitoring the traffic or troubleshooting any connectivity issues
    • Security Groups vs NACLs esp. Security Groups are stateful and NACLs are stateless.
    • VPC Peering provides a connection between two VPCs that enables routing of traffic between them using private IP addresses.
    • VPC Endpoints enables the creation of a private connection between VPC to supported AWS services and VPC endpoint services powered by PrivateLink using its private IP address
    • Ability to debug networking issues, e.g. an EC2 instance not being accessible or reachable, or not being able to communicate with other instances or the Internet.
  • Route 53 provides a scalable DNS system
    • supports the ALIAS record type, which helps map zone apex records to ELB, CloudFront, and S3 endpoints.
    • Understand Routing Policies and their use cases
      • Failover routing policy helps to configure active-passive failover.
      • Geolocation routing policy helps route traffic based on the location of the users.
      • Geoproximity routing policy helps route traffic based on the location of the resources and, optionally, shift traffic from resources in one location to resources in another.
      • Latency routing policy helps when resources are in multiple AWS Regions and you want to route traffic to the Region that provides the lowest latency (round-trip time).
      • Weighted routing policy helps route traffic to multiple resources in specified proportions.
    • Focus on Weighted, Latency routing policies
  • Understand ELB, ALB, and NLB and what features they provide like
    • Understand the key differences between ELB, ALB, and NLB
    • ALB provides content and path routing
    • NLB provides the ability to give static IPs to the load balancer esp. if there is a requirement to whitelist IPs.
    • LB access logs provide the source IP address
    • supports Sticky sessions to enable the load balancer to bind a user’s session to a specific target.
  • Understand CloudFront and use cases
    • CloudFront can be used with S3 to expose static data and website
  • Know VPN and Direct Connect to provide AWS to on-premises connectivity. Not covered in detail.
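
As a reference for the public/private subnet and NAT routing bullets above, here is a minimal Terraform sketch of a private subnet whose internet-bound traffic goes out through a NAT gateway; the CIDR ranges and AZ are hypothetical.

```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

# Public subnet hosts the NAT gateway; private subnet has no direct internet route.
resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true
}

resource "aws_subnet" "private" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-east-1a"
}

resource "aws_eip" "nat" {
  domain = "vpc"
}

resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public.id
}

# Public route table sends 0.0.0.0/0 to the internet gateway.
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table_association" "public" {
  subnet_id      = aws_subnet.public.id
  route_table_id = aws_route_table.public.id
}

# Private route table sends 0.0.0.0/0 to the NAT gateway (egress only).
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat.id
  }
}

resource "aws_route_table_association" "private" {
  subnet_id      = aws_subnet.private.id
  route_table_id = aws_route_table.private.id
}
```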

Compute

  • Understand EC2 in depth
    • Understand EC2 instance types and use cases.
    • Understand EC2 purchase options esp. spot instances and improved reserved instances options.
    • Understand EC2 Metadata & Userdata.
    • Understand EC2 Security. 
      • Use IAM Roles with EC2 instances to access AWS services securely
      • IAM Roles can now be attached to stopped and running instances
    • AMIs provide the information required to launch an instance, which is a virtual server in the cloud.
      • AMIs are regional and can be shared publicly or with other accounts
      • Only AMIs backed by unencrypted volumes or volumes encrypted with a customer managed key (CMK) can be shared.
      • The best practice is to use prebaked or golden images to reduce startup time for the applications. Leverage EC2 Image Builder.
    • Troubleshooting EC2 issues
      • RequestLimitExceeded
      • InstanceLimitExceeded – Concurrent running instance limit, default is 20, has been reached in a region. Request increase in limits.
      • InsufficientInstanceCapacity – AWS does not currently have enough available capacity to service the request. Change AZ or Instance Type.
    • Monitoring EC2 instances
      • System status checks failure – Stop and Start
      • Instance status checks failure – Reboot
    • EC2 supports Instance Recovery where the recovered instance is identical to the original instance, including the instance ID, private IP addresses, Elastic IP addresses, and all instance metadata.
    • EC2 Image Builder can be used to build pre-baked images with the required software to speed up boot and launch times.
  • Understand Placement groups
    • Cluster Placement Group provides low latency and high-performance computing by logically grouping instances within a single AZ
    • Spread Placement Group is a group of instances that are each placed on distinct underlying hardware, i.e. each instance on a distinct rack, across AZs
    • Partition Placement Group is a group of instances spread across partitions, i.e. groups of instances spread across racks, across AZs
  • Understand Auto Scaling
    • Auto Scaling can be configured with multiple AZs for high availability to launch instances across multiple AZs
    • Auto Scaling attempts to distribute instances evenly between the AZs that are enabled for the Auto Scaling group
    • Auto Scaling supports
      • Dynamic scaling, which allows you to scale automatically in response to changing demand (see the sketch at the end of this section)
      • Scheduled scaling, which allows you to scale the application in response to predictable load changes
      • Manual scaling can be performed by changing the desired capacity or adding and removing instances
    • Auto Scaling life cycle hooks can be used to perform activities before instance termination.
  • Understand Lambda and its use cases
    • Lambda functions can be hosted in a VPC with internet access routed through a NAT gateway or NAT instance.
    • RDS Proxy acts as an intermediary between the application and an RDS database. RDS Proxy establishes and manages the necessary connection pools to the database so that the application creates fewer database connections.
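
The dynamic scaling sketch referenced above: an Auto Scaling group spanning multiple AZs with a target tracking policy on average CPU, expressed in Terraform. The AMI and subnet IDs are placeholders.

```hcl
resource "aws_launch_template" "app" {
  name_prefix   = "app-"
  image_id      = "ami-0123456789abcdef0" # placeholder AMI
  instance_type = "t3.micro"
}

resource "aws_autoscaling_group" "app" {
  name             = "app-asg"
  min_size         = 2
  max_size         = 6
  desired_capacity = 2

  # Spread instances across subnets in different AZs for high availability.
  vpc_zone_identifier = ["subnet-aaaa1111", "subnet-bbbb2222"] # placeholders

  launch_template {
    id      = aws_launch_template.app.id
    version = "$Latest"
  }
}

# Dynamic scaling: keep average CPU around 50%.
resource "aws_autoscaling_policy" "cpu_target" {
  name                   = "cpu-target-tracking"
  autoscaling_group_name = aws_autoscaling_group.app.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 50
  }
}
```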

Storage

  • S3 provides an object storage service
    • Understand storage classes with lifecycle policies
    • S3 data protection provides encryption at rest and encryption in transit
      • S3 default encryption can be used to encrypt objects at rest, and S3 bucket policies can be used to prevent or reject unencrypted object uploads (see the S3 sketch at the end of this section).
    • Multi-part handling for fault-tolerant and performant large file uploads
    • static website hosting, CORS
    • S3 Versioning can help recover from accidental deletes and overwrites.
    • Pre-Signed URLs for both upload and download
    • S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between the client and an S3 bucket using globally distributed edge locations in CloudFront.
  • Understand Glacier as archival storage. Glacier does not provide immediate access to the data even with expedited retrievals.
  • Understand EBS storage option
  • Storage Gateway allows storage of data in the AWS cloud for scalable and cost-effective storage while maintaining data security.
    • Gateway-cached volumes store data in S3 and retain a copy of recently read data locally for low-latency access to frequently accessed data
    • Gateway-stored volumes maintain the entire data set locally to provide low-latency access
  • EFS is a cost-optimized, serverless, scalable, and fully managed file storage for use with AWS Cloud and on-premises resources.
    • supports encryption at rest only at creation time. An existing unencrypted file system cannot be encrypted in place and must be copied over to a new encrypted file system.
    • supports General purpose and Max I/O performance mode.
    • If hitting the PercentIOLimit metric limit, move to the Max I/O performance mode.
  • FSx makes it easy and cost-effective to launch, run, and scale feature-rich, high-performance file systems in the cloud
  • FSx for Windows supports SMB protocol and a Multi-AZ file system to provide high availability across multiple AZs.
  • AWS Backup can be used to automate backup for EC2 instances and EFS file systems
  • Data Lifecycle Manager to automate the creation, retention, and deletion of EBS snapshots and EBS-backed AMIs.
  • AWS DataSync automates moving data between on-premises storage and S3 or Elastic File System (EFS).
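
A minimal Terraform sketch tying together the S3 versioning, default encryption, and lifecycle points above; the bucket name is a placeholder.

```hcl
resource "aws_s3_bucket" "data" {
  bucket = "example-sysops-data-bucket" # placeholder, bucket names are global
}

resource "aws_s3_bucket_versioning" "data" {
  bucket = aws_s3_bucket.data.id
  versioning_configuration {
    status = "Enabled"
  }
}

# Default encryption: every new object is encrypted at rest with SSE-S3.
resource "aws_s3_bucket_server_side_encryption_configuration" "data" {
  bucket = aws_s3_bucket.data.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

# Lifecycle: transition to Standard-IA after 30 days, expire after 365 days.
resource "aws_s3_bucket_lifecycle_configuration" "data" {
  bucket = aws_s3_bucket.data.id

  rule {
    id     = "archive-and-expire"
    status = "Enabled"

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    expiration {
      days = 365
    }
  }
}
```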

Databases

  • RDS provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks.
    • Understand RDS Multi-AZ vs Read Replicas and use cases
    • Multi-AZ deployment provides high availability, durability, and failover support (see the sketch at the end of this section)
    • Read replicas enable increased scalability and database availability in the case of an AZ failure.
    • Automated backups and database change logs enable point-in-time recovery of the database during the backup retention period, up to the last five minutes of database usage.
  • Aurora is a fully managed, MySQL- and PostgreSQL-compatible, relational database engine
    • Backtracking “rewinds” the DB cluster to the specified time and performs in-place restore and does not create a new instance.
    • Automated Backups that help restore the DB as a new instance
  • Know ElastiCache use cases, mainly for caching performance
    • Understand ElastiCache Redis vs Memcached
    • Redis provides Multi-AZ support for high availability across AZs and online resharding to scale dynamically.
    • ElastiCache can be used as a caching layer for RDS.
  • Know DynamoDB. Not covered in detail
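
A minimal Terraform sketch of the RDS Multi-AZ, automated backup, and read replica points above; the engine, instance sizes, and credentials are placeholders (in practice, keep the password in Secrets Manager).

```hcl
resource "aws_db_instance" "primary" {
  identifier              = "app-db"           # placeholder
  engine                  = "mysql"
  instance_class          = "db.t3.medium"
  allocated_storage       = 50
  username                = "admin"
  password                = "change-me-please" # placeholder only
  multi_az                = true               # synchronous standby in another AZ
  backup_retention_period = 7                  # enables point-in-time recovery
  skip_final_snapshot     = true
}

# A read replica offloads read traffic and improves scalability.
resource "aws_db_instance" "replica" {
  identifier          = "app-db-replica"
  replicate_source_db = aws_db_instance.primary.identifier
  instance_class      = "db.t3.medium"
  skip_final_snapshot = true
}
```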

Security

  • IAM provides Identity and Access Management services.
  • S3 Encryption supports data at rest and in transit encryption
    • Understand S3 with SSE, SSE-C, SSE-KMS
    • S3 default encryption can help encrypt objects; however, it does not encrypt objects that existed before the setting was enabled. You can use S3 Inventory to list the objects and S3 Batch Operations to encrypt them.
  • Understand KMS for key management and envelope encryption
    • KMS keys with imported key material do not support automatic rotation; rotation has to be done manually (see the sketch at the end of this section).
  • AWS WAF – Web Application Firewall helps protect the applications against common web exploits like XSS or SQL Injection and bots that may affect availability, compromise security, or consume excessive resources
  • AWS GuardDuty is a threat detection service that continuously monitors the AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation.
  • AWS Secrets Manager can help securely expose credentials as well as rotate them.
    • Secrets Manager integrates with Lambda and supports credentials rotation
  • AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS
  • Amazon Inspector
    • is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS.
    • automatically assesses applications for exposure, vulnerabilities, and deviations from best practices.
  • AWS Certificate Manager (ACM) handles the complexity of creating, storing, and renewing public and private SSL/TLS X.509 certificates and keys that protect the AWS websites and applications.
  • Know AWS Artifact as on-demand access to compliance reports
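
For contrast with the imported-key-material case above, a minimal Terraform sketch of a customer managed KMS key with automatic rotation enabled; names are placeholders.

```hcl
# Customer managed key with AWS-generated key material: automatic annual
# rotation can simply be turned on.
resource "aws_kms_key" "app" {
  description             = "Application data key"
  enable_key_rotation     = true
  deletion_window_in_days = 30
}

resource "aws_kms_alias" "app" {
  name          = "alias/app-data" # placeholder alias
  target_key_id = aws_kms_key.app.key_id
}
```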

Analytics

  • Amazon Athena can be used to query data in S3 using standard SQL, without moving or duplicating the data
  • OpenSearch (Elasticsearch) service is a distributed search and analytics engine built on Apache Lucene.
    • An OpenSearch production setup would typically be 3 AZs, 3 dedicated master nodes, and 6 data nodes with two replicas in each AZ.

Integration Tools

  • Understand SQS as a message queuing service and SNS as pub/sub notification service
    • Focus on SQS as a decoupling service
    • Understand SQS FIFO and make sure you know the differences between standard and FIFO queues (see the sketch after this list)
  • Understand CloudWatch integration with SNS for notification
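
A minimal Terraform sketch contrasting a standard and a FIFO queue, as referenced above; queue names are placeholders.

```hcl
# Standard queue: nearly unlimited throughput, at-least-once delivery,
# best-effort ordering.
resource "aws_sqs_queue" "standard" {
  name                       = "orders-standard"
  visibility_timeout_seconds = 30
}

# FIFO queue: strict ordering and exactly-once processing, lower throughput.
# The name must end in ".fifo".
resource "aws_sqs_queue" "fifo" {
  name                        = "orders.fifo"
  fifo_queue                  = true
  content_based_deduplication = true
}
```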

Practice Labs

  • Create IAM users, IAM roles with specific limited policies.
  • Create a private S3 bucket
    • enable versioning
    • enable default encryption
    • enable lifecycle policies to transition and expire the objects
    • enable same region replication
  • Create a public S3 bucket with static website hosting
  • Set up a VPC with public and private subnets with Routes, SGs, NACLs.
  • Set up a VPC with public and private subnets and enable communication from private subnets to the Internet using NAT gateway
  • Create EC2 instance, create a Snapshot and restore it as a new instance.
  • Set up Security Groups for ALB and Target Groups, and create ALB, Launch Template, Auto Scaling Group, and target groups with sample applications. Test the flow.
  • Create a Multi-AZ RDS instance and force an instance failover.
  • Set up an SNS topic. Use CloudWatch Metrics to create a CloudWatch alarm on specific thresholds and send notifications to the SNS topic.
  • Set up an SNS topic. Use CloudWatch Logs to create a CloudWatch alarm on log patterns and send notifications to the SNS topic.
  • Update a CloudFormation template and re-run the stack and check the impact.
  • Use AWS Data Lifecycle Manager to define snapshot lifecycle.
  • Use AWS Backup to define EFS backup with hourly and daily backup rules.

AWS Certified SysOps Administrator – Associate (SOA-C02) Exam Day

  • Make sure you are relaxed and get a good night’s sleep. The exam is not tough if you are well-prepared.
  • If you are taking the AWS Online exam
    • Try to join at least 30 minutes before the scheduled time, as I have had long wait times with both PSI and Pearson.
    • The online verification process does take some time and there are usually glitches.
    • Remember, you would not be allowed to take the exam if you are late by more than 30 minutes.
    • Make sure your desk is clear, with no watches or external monitors, keep your phone away, and ensure nobody can enter the room.

Finally, All the Best 🙂

 

AWS Certified Developer – Associate DVA-C02 Exam Learning Path

AWS Certified Developer - Associate Certification


  • AWS Certified Developer – Associate DVA-C02 exam is the latest AWS exam released on 27th February 2023 and has replaced the previous AWS Developer – Associate DVA-C01 certification exam.
  • I passed the AWS Developer – Associate DVA-C02 exam with a score of 835/1000.

AWS Certified Developer – Associate DVA-C02 Exam Content

  • DVA-C02 validates a candidate’s ability to demonstrate proficiency in developing, testing, deploying, and debugging AWS cloud-based applications.
  • DVA-C02 also validates a candidate’s ability to complete the following tasks:
    • Develop and optimize applications on AWS
    • Package and deploy by using continuous integration and continuous delivery (CI/CD) workflows
    • Secure application code and data
    • Identify and resolve application issues

Refer to the AWS Certified Developer – Associate Exam Blueprint

AWS Certified Developer - Associate Domains

AWS Certified Developer – Associate DVA-C02 Summary

  • DVA-C02 exam consists of 65 questions in 130 minutes, and the time is more than sufficient if you are well-prepared.
  • DVA-C02 exam includes two types of questions, multiple-choice and multiple-response.
  • DVA-C02 has a scaled score between 100 and 1,000. The scaled score needed to pass the exam is 720.
  • Associate exams currently cost $150 + tax.
  • You can get an additional 30 minutes if English is your second language by requesting Exam Accommodations. It might not be needed for Associate exams but is helpful for Professional and Specialty ones.
  • AWS exams can be taken either at a testing center or online. I prefer to take them online as it provides a lot of flexibility. Just make sure you have a proper place to take the exam with no disturbance and nothing around you.
  • Also, if you are taking an AWS online exam for the first time, try to join at least 30 minutes before the scheduled time, as I have had long wait times with both PSI and Pearson.

AWS Certified Developer – Associate DVA-C02 Exam Resources

AWS Certified Developer – Associate DVA-C02 Exam Topics

  • AWS DVA-C02 exam concepts cover solutions that fall within AWS Well-Architected framework to cover scalable, highly available, cost-effective, performant, and resilient pillars.
  • AWS Certified Developer – Associate DVA-C02 exam covers a lot of the latest AWS services like Amplify, X-Ray while focusing majorly on other services like Lambda, DynamoDB, Elastic Beanstalk, S3, EC2
  • AWS Certified Developer – Associate DVA-C02 exam is similar to DVA-C01 with more focus on the hands-on development and deployment concepts rather than just the architectural concepts.
  • If you had been preparing for the DVA-C01, DVA-C02 is pretty much similar except for the addition of some new services covering Amplify, X-Ray, etc.

Compute

  • Elastic Compute Cloud – EC2
  • Auto Scaling and ELB
    • Auto Scaling provides the ability to ensure a correct number of EC2 instances are always running to handle the load of the application
    • Elastic Load Balancer allows the incoming traffic to be distributed automatically across multiple healthy EC2 instances
  • Autoscaling & ELB
    • work together to provide High Availability and Scalability.
    • Span both ELB and Auto Scaling across Multi-AZs to provide High Availability
    • Do not span across regions. Use Route 53 or Global Accelerator to route traffic across regions.
  • Lambda and serverless architecture, its features, and use cases (a configuration sketch follows this list).
    • Lambda integrated with API Gateway to provide a serverless, highly scalable, cost-effective architecture.
    • Lambda execution role needs the required permissions to integrate with other AWS services.
    • Environment variables to keep functions configurable.
    • Lambda Layers provide a convenient way to package libraries and other dependencies that you can use with your Lambda functions.
    • Function versions can be used to manage the deployment of the functions.
    • Function Alias supports creating aliases, which are mutable, for each function version.
    • provides /tmp ephemeral scratch storage.
    • Integrates with X-Ray for distributed tracing.
    • Use RDS proxy for connection pooling.
  • Elastic Container Service – ECS with its ability to deploy containers and microservices architecture.
    • ECS role for tasks can be provided through taskRoleArn
    • ALB provides dynamic port mapping to allow multiple tasks of the same service to run on the same node.
  • Elastic Kubernetes Service – EKS
    • managed Kubernetes service to run Kubernetes in the AWS cloud and on-premises data centers
    • ideal for migration of an existing workload on Kubernetes
  • Elastic Beanstalk
    • at a high level, what it provides, and its ability to get an application running quickly.
    • Deployment types with their advantages and disadvantages
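
The Lambda configuration sketch referenced above, combining environment variables, a layer, and active X-Ray tracing in Terraform; the role ARN, layer ARN, and deployment package path are placeholders.

```hcl
resource "aws_lambda_function" "api" {
  function_name = "orders-api"                                      # placeholder
  role          = "arn:aws:iam::123456789012:role/orders-api-role"  # placeholder execution role
  runtime       = "python3.12"
  handler       = "app.handler"
  filename      = "build/orders-api.zip"                            # placeholder package

  # Keep the function configurable without redeploying code.
  environment {
    variables = {
      TABLE_NAME = "orders"
      LOG_LEVEL  = "INFO"
    }
  }

  # Shared dependencies packaged once and reused across functions.
  layers = ["arn:aws:lambda:us-east-1:123456789012:layer:common-libs:3"] # placeholder

  # Active tracing sends segments to X-Ray for distributed tracing.
  tracing_config {
    mode = "Active"
  }
}
```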

Databases

Storage

  • Simple Storage Service – S3
    • S3 storage classes with lifecycle policies
      • Understand the difference between S3 Standard vs S3 Standard-IA vs S3 One Zone-IA in terms of cost and durability
    • S3 Data Protection
      • S3 Client-side encryption encrypts data before storing it in S3
      • S3 encryption in transit can be enforced with S3 bucket policies using the aws:SecureTransport condition (see the sketch at the end of this section).
      • S3 encryption at rest can be enforced with S3 bucket policies using the x-amz-server-side-encryption condition.
    • S3 features including
      • S3 provides cost-effective static website hosting. However, it does not support HTTPS endpoint. Can be integrated with CloudFront for HTTPS, caching, performance, and low-latency access.
      • S3 versioning provides protection against accidental overwrites and deletions and can be combined with the MFA Delete feature.
      • S3 Pre-Signed URLs for both upload and download provide access without needing AWS credentials.
      • S3 CORS allows cross-domain calls
      • S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket.
      • S3 Event Notifications to trigger events on various S3 events like objects added or deleted. Supports SQS, SNS, and Lambda functions.
      • Integrates with Amazon Macie to detect PII data
      • Replication, both same-region and cross-region, requires versioning to be enabled on the source and destination buckets.
      • Integrates with Athena to analyze data in S3 using standard SQL.
  • Instance Store
    •  is physically attached  to the EC2 instance and provides the lowest latency and highest IOPS
  • Elastic Block Storage – EBS
    • EBS volume types and their use cases in terms of IOPS and throughput. SSD for IOPS and HDD for throughput
  • Elastic File System – EFS
    • simple, fully managed, scalable, serverless, and cost-optimized file storage for use with AWS Cloud and on-premises resources.
    • provides a shared volume across multiple EC2 instances, while an EBS volume can be attached to a single instance (or to multiple instances within the same AZ with EBS Multi-Attach)
    • can be mounted with Lambda functions
    • supports the NFS protocol, and is compatible with Linux-based AMIs
    • supports cross-region replication and storage classes for cost management.
  • Difference between EBS vs S3 vs EFS
  • Difference between EBS vs Instance Store
  • Would recommend referring to the Storage Options whitepaper; although a bit dated, most of it still holds true
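
A minimal Terraform sketch of a bucket policy enforcing the aws:SecureTransport and x-amz-server-side-encryption conditions mentioned above; the bucket name is a placeholder and SSE-KMS is assumed as the required encryption.

```hcl
resource "aws_s3_bucket" "secure" {
  bucket = "example-secure-bucket" # placeholder
}

resource "aws_s3_bucket_policy" "secure" {
  bucket = aws_s3_bucket.secure.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "DenyInsecureTransport" # reject non-TLS requests
        Effect    = "Deny"
        Principal = "*"
        Action    = "s3:*"
        Resource = [
          aws_s3_bucket.secure.arn,
          "${aws_s3_bucket.secure.arn}/*",
        ]
        Condition = {
          Bool = { "aws:SecureTransport" = "false" }
        }
      },
      {
        Sid       = "DenyUnencryptedUploads" # reject uploads without SSE-KMS header
        Effect    = "Deny"
        Principal = "*"
        Action    = "s3:PutObject"
        Resource  = "${aws_s3_bucket.secure.arn}/*"
        Condition = {
          StringNotEquals = { "s3:x-amz-server-side-encryption" = "aws:kms" }
        }
      },
    ]
  })
}
```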

Security & Identity

  • Identity Access Management – IAM
    • IAM role
      • provides permissions that are not associated with a particular user, group, or service and are intended to be assumable by anyone who needs it.
      • can be used for EC2 application access and cross-account access (see the sketch after this list)
    • IAM Best Practices
  • Cognito
    • provides authentication, authorization, and user management for the web and mobile apps.
    • User pools are user directories that provide sign-up and sign-in options for the app users.
    • Identity pools enable you to grant the users access to other AWS services.
  • Key Management Services – KMS encryption service
    • for key management and envelope encryption
    • provides encryption at rest and does not handle encryption in transit.
  • AWS Certificate Manager – ACM
    • helps easily provision, manage, and deploy public and private SSL/TLS certificates for use with AWS services and internally connected resources.
  • AWS Secrets Manager
    • helps protect secrets needed to access applications, services, and IT resources.
    • supports automatic rotations of secrets
  • Secrets Manager vs Systems Manager Parameter Store for secrets management
    • Secrets Manager supports automatic credentials rotation and is integrated with Lambda and other services like RDS, and DynamoDB.
    • Systems Manager Parameter Store provides free standard parameters and is cost-effective as compared to Secrets Manager.
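
The IAM role sketch referenced above: a role assumable by EC2, a least-privilege inline policy, and the instance profile that attaches it to an instance; the bucket ARN is a placeholder.

```hcl
# Role that EC2 instances can assume instead of using long-lived credentials.
resource "aws_iam_role" "app_instance" {
  name = "app-instance-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

# Least-privilege inline policy: read-only access to a single bucket.
resource "aws_iam_role_policy" "s3_read" {
  name = "s3-read-only"
  role = aws_iam_role.app_instance.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = ["s3:GetObject", "s3:ListBucket"]
      Resource = [
        "arn:aws:s3:::example-app-bucket",   # placeholder
        "arn:aws:s3:::example-app-bucket/*",
      ]
    }]
  })
}

# The instance profile is what actually gets attached to the EC2 instance.
resource "aws_iam_instance_profile" "app_instance" {
  name = "app-instance-profile"
  role = aws_iam_role.app_instance.name
}
```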

Front-end Web and Mobile

  • API Gateway
    • is a fully managed service that makes it easy for developers to publish, maintain, monitor, and secure APIs at any scale.
    • Powerful, flexible authentication mechanisms, such as AWS IAM policies, Lambda authorizer functions, and Amazon Cognito user pools.
    • supports Canary release deployments for safely rolling out changes.
    • usage plans can be defined to meter and restrict third-party developer access, and to configure throttling and quota limits on a per-API-key basis
    • integrates with AWS X-Ray for understanding and triaging performance latencies.
    • API Gateway CORS allows cross-domain calls
  • Amplify
    • is a complete solution that lets frontend web and mobile developers easily build, ship, and host full-stack applications on AWS, with the flexibility to leverage the breadth of AWS services as use cases evolve.

Management Tools

  • CloudWatch
    • monitoring to provide operational transparency
    • is extendable with custom metrics
    • does not capture memory metrics by default; they can be collected using the CloudWatch agent.
  • EventBridge
    • is a serverless event bus service that makes it easy to connect applications with data from a variety of sources.
    • enables building loosely coupled and distributed event-driven architectures (see the sketch after this list).
  • CloudTrail
    • helps enable governance, compliance, and operational and risk auditing of the AWS account.
    • helps to get a history of AWS API calls and related events for the AWS account.
  • CloudFormation
    • easy way to create and manage a collection of related AWS resources, and provision and update them in an orderly and predictable fashion.
    • Supports Serverless Application Model – SAM for the deployment of serverless applications including Lambda.
    • CloudFormation StackSets extends the functionality of stacks by enabling you to create, update, or delete stacks across multiple accounts and Regions with a single operation.
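
A minimal Terraform sketch of an EventBridge scheduled rule invoking a Lambda function, as referenced above; the function name and ARN are placeholders.

```hcl
# Run a Lambda function every 15 minutes.
resource "aws_cloudwatch_event_rule" "every_15_minutes" {
  name                = "every-15-minutes"
  schedule_expression = "rate(15 minutes)"
}

resource "aws_cloudwatch_event_target" "invoke_lambda" {
  rule = aws_cloudwatch_event_rule.every_15_minutes.name
  arn  = "arn:aws:lambda:us-east-1:123456789012:function:housekeeping" # placeholder
}

# Allow EventBridge to invoke the function.
resource "aws_lambda_permission" "allow_eventbridge" {
  statement_id  = "AllowEventBridgeInvoke"
  action        = "lambda:InvokeFunction"
  function_name = "housekeeping" # placeholder
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.every_15_minutes.arn
}
```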

Integration Tools

  • Simple Queue Service
    • is a message queuing service, while SNS is a pub/sub notification service
    • acts as a decoupling service and provides resiliency
    • know SQS features like visibility timeout, and long polling vs short polling
    • can drive scaling of an Auto Scaling group based on the queue size.
    • SQS Standard vs SQS FIFO difference
      • FIFO provides ordering and exactly-once processing but with limited throughput compared to standard queues
  • Simple Notification Service – SNS
    • is a web service that coordinates and manages the delivery or sending of messages to subscribing endpoints or clients
    • Fanout pattern can be used to push messages to multiple subscribers (see the sketch after this list).
  • Understand SQS as a message queuing service and SNS as a pub/sub notification service.
  • Know AWS Developer tools
    • CodeCommit is a secure, scalable, fully-managed source control service that helps to host secure and highly scalable private Git repositories.
    • CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy.
    • CodeDeploy helps automate code deployments to any instance, including EC2 instances and instances running on-premises.
    • CodePipeline is a fully managed continuous delivery service that helps automate the release pipelines for fast and reliable application and infrastructure updates.
    • CodeArtifact is a fully managed artifact repository service that makes it easy for organizations of any size to securely store, publish, and share software packages used in their software development process.
  • X-Ray
    • helps developers analyze and debug production, distributed applications, e.g. those built using a microservices or serverless (Lambda) architecture
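
The SNS fanout sketch referenced above: one topic delivering every message to multiple SQS queues, expressed in Terraform; names are placeholders.

```hcl
resource "aws_sns_topic" "orders" {
  name = "order-events"
}

resource "aws_sqs_queue" "billing" {
  name = "order-events-billing"
}

resource "aws_sqs_queue" "shipping" {
  name = "order-events-shipping"
}

# Each subscription receives a copy of every message published to the topic.
resource "aws_sns_topic_subscription" "billing" {
  topic_arn = aws_sns_topic.orders.arn
  protocol  = "sqs"
  endpoint  = aws_sqs_queue.billing.arn
}

resource "aws_sns_topic_subscription" "shipping" {
  topic_arn = aws_sns_topic.orders.arn
  protocol  = "sqs"
  endpoint  = aws_sqs_queue.shipping.arn
}

# Note: in practice each queue also needs an aws_sqs_queue_policy that allows
# sqs:SendMessage from the topic ARN; omitted here for brevity.
```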

Analytics

  • Redshift as a data warehouse and business intelligence tool
  • Kinesis
    • for real-time data capture and analytics.
    • Integrates with Lambda functions to perform transformations
  • AWS Glue
    • fully-managed, ETL service that automates the time-consuming steps of data preparation for analytics

Networking

  • Does not cover much networking or designing networks, but be sure you understand VPC, Subnets, Routes, Security Groups, etc.

AWS Cloud Computing Whitepapers

On the Exam Day

  • Make sure you are relaxed and get a good night’s sleep. The exam is not tough if you are well-prepared.
  • If you are taking the AWS Online exam
    • Try to join at least 30 minutes before the scheduled time, as I have had long wait times with both PSI and Pearson.
    • The online verification process does take some time and there are usually glitches.
    • Remember, you would not be allowed to take the exam if you are late by more than 30 minutes.
    • Make sure your desk is clear, with no watches or external monitors, keep your phone away, and ensure nobody can enter the room.

Finally, All the Best 🙂

HashiCorp Certified Terraform Associate Learning Path

If you are working in a multi-cloud environment and focusing on automation, you have surely used Terraform or considered it at some point. I have been using Terraform for over two years now for provisioning infrastructure on AWS, GCP, and AliCloud, right through development to production, and it has been a wonderful DevOps journey. It was good to validate the Terraform skills through the Terraform Associate certification.

The Terraform Associate certification is for Cloud Engineers specializing in operations, IT, or development who know the basic concepts and skills associated with open source HashiCorp Terraform.

HashiCorp Certified Terraform Associate Exam Summary

  • HashiCorp Certified Terraform Associate exam focuses on Terraform as an Infrastructure as Code tool
  • HashiCorp Certified Terraform Associate exam has 57 questions with a time limit of 60 minutes
  • The exam has multiple-answer, multiple-choice, fill-in-the-blank, and True/False questions
  • Questions and answer options are pretty short, and if you have experience with Terraform they are pretty easy and the time is more than sufficient.

HashiCorp Certified Terraform Associate Exam Topic Summary

Refer to the Terraform Cheat Sheet for details

Understand Infrastructure as Code (IaC) concepts

  • Explain what IaC is
    • Infrastructure is described using a high-level configuration syntax
    • IaC allows Infrastructure to be versioned and treated as you would any other code.
    • Infrastructure can be shared and re-used.
  • Describe advantages of IaC patterns
    • makes Infrastructure more reliable
    • makes Infrastructure more manageable
    • makes Infrastructure more automated and less error prone

Understand Terraform’s purpose (vs other IaC)

  • Explain multi-cloud and provider-agnostic benefits
    • using multi-cloud setup increases fault tolerance and reduces dependency on a single Cloud
    • Terraform provides a cloud-agnostic framework and allows a single configuration to be used to manage multiple providers, and to even handle cross-cloud dependencies.
    • Terraform simplifies management and orchestration, helping operators build large-scale multi-cloud infrastructures.
  • Explain the benefits of state
    • State is a necessary requirement for Terraform to function.
    • Terraform requires some sort of database to map Terraform config to the real world.
    • Terraform uses its own state structure for mapping configuration to resources in the real world
    • Terraform state helps
      • track metadata such as resource dependencies.
      • improves performance as it stores a cache of the attribute values for all resources in the state
      • aids syncing when working in a team with multiple users

Understand Terraform basics

  • Handle Terraform and provider installation and versioning
    • Providers provide an abstraction above the upstream API and are responsible for understanding API interactions and exposing resources.
    • Terraform configurations must declare which providers they require, so that Terraform can install and use them
    • Provider requirements are declared in a required_providers block.
  • Describe plugin based architecture
    • Terraform relies on plugins called “providers” to interact with remote systems.
  • Demonstrate using multiple providers
    • supports multiple provider instances using alias, e.g. multiple aws providers with different regions (see the sketch after this list)
  • Describe how Terraform finds and fetches providers
    • Terraform finds and installs providers when initializing a working directory. It can automatically download providers from a Terraform registry, or load them from a local mirror or cache.
    • Each Terraform module must declare which providers it requires, so that Terraform can install and use them.
  • Explain when to use and not use provisioners and when to use local-exec or remote-exec
    • Terraform provides local-exec and remote-exec to execute tasks not provided by Terraform
      • local-exec executes code on the machine running Terraform
      • remote-exec executes on the provisioned resource and supports SSH and WinRM
    • Provisioners should only be used as a last resort.
    • are defined within the resource block.
    • support types – Create and Destroy
      • if a creation-time provisioner fails, the resource is tainted by default (it will be re-created on the next apply)
      • the behavior can be overridden by setting on_failure to continue, which means ignore the error and continue
      • if a destroy-time provisioner fails, the resources are not removed
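
A minimal sketch of provider requirements and multiple provider instances using alias, as described above; the versions, regions, and bucket name are examples only.

```hcl
terraform {
  required_version = ">= 1.0"

  # Declare which providers this configuration needs so Terraform can
  # download and install them during terraform init.
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

# Default provider instance.
provider "aws" {
  region = "us-east-1"
}

# Additional instance of the same provider, selected with provider = aws.west.
provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

resource "aws_s3_bucket" "replica" {
  provider = aws.west
  bucket   = "example-replica-bucket" # placeholder
}
```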

Use the Terraform CLI (outside of core workflow)

  • Given a scenario: choose when to use terraform fmt to format code
    • terraform fmt helps format code into the canonical standard format; it usually aligns the spaces and the = signs
  • Given a scenario: choose when to use terraform taint to taint Terraform resources
    • terraform taint marks a Terraform-managed resource as tainted, forcing it to be destroyed and recreated on the next apply.
    • will not modify infrastructure, but does modify the state file in order to mark a resource as tainted.
    • Infrastructure and state are changed in next apply.
    • can be used to taint a resource within a module
  • Given a scenario: choose when to use terraform import to import existing infrastructure into your Terraform state
    • terraform import helps import already-existing external resources, not managed by Terraform, into Terraform state and allow it to manage those resources
    • Terraform is not able to auto-generate configurations for the imported resources, for now, and requires you to first write the resource definition in Terraform and then import the resource
  • Given a scenario: choose when to use terraform workspace to create workspaces
    • Terraform workspace helps manage multiple distinct sets of infrastructure resources or environments with the same code.
    • state files for each workspace are stored in the directory terraform.tfstate.d
    • terraform workspace new dev creates a new workspace with name dev and switches to it as well
    • does not provide strong separation as it uses the same backend
  • Given a scenario: choose when to use terraform state to view Terraform state
    • state helps keep track of the infrastructure Terraform manages
    • stored locally in the terraform.tfstate
    • recommended not to edit the state manually
    • Use terraform state command
      • mv – to move/rename modules
      • rm – to safely remove a resource from the state without destroying it (the real resource is retained)
      • pull – to observe the current remote state
      • list & show – to list resources in the state and show their attributes, useful when writing/debugging modules
  • Given a scenario: choose when to enable verbose logging and what the outcome/value is
    • debugging can be controlled using TF_LOG, which can be configured for the levels TRACE, DEBUG, INFO, WARN, or ERROR, with TRACE being the most verbose.
    • the log path can be controlled using TF_LOG_PATH; TF_LOG also needs to be set for logging to be enabled.

Interact with Terraform modules

  • Contrast module source options
    • Terraform Module Registry allows you to browse, filter and search for modules
  • Interact with module inputs and outputs
    • Input variables serve as parameters for a Terraform module, allowing aspects of the module to be customized without altering the module’s own source code, and allowing modules to be shared between different configurations.
    • Resources defined in a module are encapsulated, so the calling module cannot access their attributes directly.
    • Child modules can declare output values to selectively export certain values, accessed by the calling module as module.<module_name>.<output_value> (see the sketch after this list)
  • Describe variable scope within modules/child modules
    • Modules are called from within other modules using module blocks
    • All modules require a source argument, which is a meta-argument defined by Terraform
    • To call a module means to include the contents of that module into the configuration with specific values for its input variables.
  • Discover modules from the public Terraform Module Registry
    • Terraform Module Registry allows you to browse, filter and search for modules
  • Defining module version
    • must be on GitHub and must be a public repo, if using public registry.
    • must be named terraform-<PROVIDER>-<NAME>, where <NAME> reflects the type of infrastructure the module manages and <PROVIDER> is the main provider where it creates that infrastructure. for e.g. terraform-google-vault or terraform-aws-ec2-instance.
    • must maintain x.y.z tags for releases to identify module versions. and can optionally be prefixed with a v for example, v1.0.4 and 0.9.2. Tags that don’t look like version numbers are ignored.
    • must maintain a Standard module structure, which allows the registry to inspect the module and generate documentation, track resource usage, parse submodules and examples, and more.
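
The module sketch referenced above, calling the public terraform-aws-modules VPC module and consuming one of its outputs; the inputs shown are examples.

```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws" # public registry module
  version = "~> 5.0"

  # Input variables customize the module without touching its source.
  name            = "demo"
  cidr            = "10.0.0.0/16"
  azs             = ["us-east-1a", "us-east-1b"]
  public_subnets  = ["10.0.1.0/24", "10.0.2.0/24"]
  private_subnets = ["10.0.11.0/24", "10.0.12.0/24"]
}

# Child module outputs are accessed as module.<name>.<output>.
output "vpc_id" {
  value = module.vpc.vpc_id
}
```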

Navigate Terraform workflow

  • Describe Terraform workflow ( Write -> Plan -> Create )
    • Core Terraform workflow has three steps:
      • Write – Author infrastructure as code.
      • Plan – Preview changes before applying.
      • Apply – Provision reproducible infrastructure.
  • Initialize a Terraform working directory terraform init
    • initializes a working directory containing Terraform configuration files.
    • performs backend initialization, modules and plugins installation.
    • plugins are downloaded in the sub-directory of the present working directory at the path of .terraform/plugins
    • does not delete the existing configuration or state
  • Validate a Terraform configuration terraform validate
    • validates the configuration files in a directory, referring only to the configuration and not accessing any remote services such as remote state, provider APIs, etc.
    • verifies whether a configuration is syntactically valid and internally consistent, regardless of any provided variables or existing state.
    • useful for general verification of reusable modules, including the correctness of attribute names and value types.
  • Generate and review an execution plan for Terraform terraform plan
    • terraform plan creates an execution plan, traversing each vertex of the resource graph and querying providers in parallel
    • calculates the difference between the last-known state and the current configuration and presents this difference as the output of the terraform plan operation to the user in their terminal
    • does not modify the infrastructure or state.
    • allows a user to see which actions Terraform will perform prior to making any changes to reach the desired state
    • performs refresh for each resource and might hit rate limiting issues as it calls provider APIs
    • refreshing all the resources can be disabled or avoided using
      • -refresh=false or
      • -target=xxxx or
      • breaking resources into different directories.
  • Execute changes to infrastructure with Terraform terraform apply
    • will always ask for confirmation before executing unless passed the -auto-approve flag.
    • if a resource successfully creates but fails during provisioning, Terraform will error and mark the resource as “tainted”. Terraform does not roll back the changes
  • Destroy Terraform managed infrastructure terraform destroy
    • will always ask for confirmation before executing unless passed the -auto-approve flag.

Implement and maintain state

  • Describe default local backend
    • A “backend” in Terraform determines how state is loaded and how an operation such as apply is executed. This abstraction enables non-local file state storage, remote execution, etc.
    • is responsible for storing state and providing an API for optional state locking
    • needs to be initialized
    • helps
      • collaboration and working as a team, with the state maintained remotely and state locking
      • can provide enhanced security for sensitive data
      • support remote operations
    • local (default) backend stores state in a local JSON file on disk
  • Outline state locking
    • happens automatically for all operations that could write state, if supported by the backend, e.g. S3 with DynamoDB, Consul, etc.
    • prevents others from acquiring the lock & potentially corrupting the state
    • use the force-unlock command to manually unlock the state if automatic unlocking failed
    • backends which support state locking are
      • azurerm
      • Hashicorp consul
      • Tencent Cloud Object Storage (COS)
      • etcdv3
      • Google Cloud Storage GCS
      • HTTP endpoints
      • Kubernetes Secret with locking done using a Lease resource
      • AliCloud Object Storage OSS with locking via TableStore
      • PostgreSQL
      • AWS S3 with locking via DynamoDB
      • Terraform Enterprise
    • Backends which do not support state locking are
      • artifactory
      • etcd
  • Handle backend authentication methods
    • each remote backend supports different authentication mechanisms, which can be configured as part of the backend configuration
  • Describe remote state storage mechanisms and supported standard backends
    • remote backends store state remotely, e.g. in S3, OSS, GCS, or Consul, and support features like remote operations, state locking, encryption, versioning, etc. (see the backend sketch after this list)
    • github is not a supported backend type.
  • Describe effect of Terraform refresh on state
    • terraform refresh is used to reconcile the state Terraform knows about (via its state file) with the real-world infrastructure.
    • can be used to detect any drift from the last-known state, and to update the state file.
    • does not modify infrastructure but does modify the state file.
  • Describe backend block in configuration and best practices for partial configurations
    • Backend configuration doesn’t support interpolations.
    • supports partial configuration with remaining configuration arguments provided as part of the initialization process
    • when switching backends, Terraform provides an option to migrate the existing state during re-initialization
  • Understand secret management in state files
    • terraform state command is used for advanced state management
    • Terraform has no mechanism to redact or protect secrets that are returned via data sources, so secrets read via this provider will be persisted into the Terraform state, into any plan files, and in some cases in the console output produced while planning and applying.
    • secrets can be protected by using Vault and/or remote backends with encryption and proper access control
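
The backend sketch referenced above: a remote S3 backend with DynamoDB state locking and encryption; the bucket, key, and table names are placeholders. The same arguments can also be supplied at terraform init time as a partial configuration.

```hcl
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"         # placeholder, must already exist
    key            = "prod/network/terraform.tfstate"  # path of the state object
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"                 # placeholder, enables state locking
    encrypt        = true                              # server-side encryption of the state
  }
}
```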

Read, generate, and modify configuration

  • Demonstrate use of variables and outputs
    • Variables
      • serve as parameters for a Terraform module and
      • act like function arguments
      • count is a reserved word and cannot be used as variable name
    • Output
      • are like function return values.
      • can be marked as sensitive, which prevents their values from being shown in the list of outputs; however, they are still stored in the state as plain text.
  • Describe secure secret injection best practice
  • Understand the use of collection and structural types
    • supports primitive data types of
      • string, number, and bool
      • Terraform automatically converts number and bool values to strings when needed
    • supports complex data types of
      • list – sequence of values identified by consecutive whole numbers starting with zero.
      • map – collection of values where each is identified by a string label
      • set – collection of unique values that do not have any secondary identifiers or ordering.
    • supports structural data types of
      • object – a collection of named attributes with their own type
      • tuple – a sequence of elements identified by consecutive whole numbers starting with zero, where each element has its own type.
  • Create and differentiate resource and data configuration
    • Resources describe one or more infrastructure objects, such as virtual networks, instances, or higher-level components such as DNS records.
    • Data sources allow data to be fetched or computed for use elsewhere in Terraform configuration. Use of data sources allows a Terraform configuration to make use of information defined outside of Terraform, or defined by another separate Terraform configuration.
  • Use resource addressing and resource parameters to connect resources together
  • Use Terraform built-in functions to write configuration
    • lookup retrieves the value of a single element from a map, given its key. If the given key does not exist, the given default value is returned instead: lookup(map, key, default)
    • zipmap constructs a map from a list of keys and a corresponding list of values. A map is denoted by { } whereas a list is denoted by [ ], e.g. zipmap(["a", "b"], [1, 2]) results in {"a" = 1, "b" = 2}
  • Configure resource using a dynamic block
    • dynamic acts much like a for expression, but produces nested blocks instead of a complex typed value. It iterates over a given complex value, and generates a nested block for each element of that complex value.
    • Overuse of dynamic blocks is not recommended as they make the code hard to understand and debug (see the sketch after this list)
  • Describe built-in dependency management (order of execution based)
    • Terraform analyses any expressions within a resource block to find references to other objects and treats those references as implicit ordering requirements when creating, updating, or destroying resources.
    • Explicit dependencies can be defined using the depends_on meta-argument for dependencies between resources that are not visible to Terraform
  • supports comments using #, // and /* */
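
A short sketch combining the variable, sensitive output, and dynamic block concepts above; the security group, VPC ID, and ports are hypothetical.

```hcl
variable "ingress_ports" {
  description = "Ports to open on the web security group"
  type        = list(number)
  default     = [80, 443]
}

resource "aws_security_group" "web" {
  name   = "web-sg"
  vpc_id = "vpc-0123456789abcdef0" # placeholder

  # dynamic generates one ingress block per element of var.ingress_ports.
  dynamic "ingress" {
    for_each = var.ingress_ports
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }
}

output "security_group_id" {
  value = aws_security_group.web.id
}

# Sensitive outputs are hidden from the CLI output but still stored in the
# state as plain text.
output "admin_password" {
  value     = "example-only" # placeholder; never hard-code real secrets
  sensitive = true
}
```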

Understand Terraform Cloud and Enterprise capabilities

  • Describe the benefits of Sentinel, registry, and workspaces
    • Terraform Cloud provides private module registry for storing modules private to be used within the organization
  • Differentiate OSS and TFE workspaces
  • Summarize features of Terraform Cloud
    • Terraform Enterprise currently supports running under the following operating systems for a Clustered deployment:
      • Ubuntu 16.04.3 – 16.04.5 / 18.04
      • Red Hat Enterprise Linux 7.4 through 7.7
      • CentOS 7.4 – 7.7
      • Amazon Linux
      • Oracle Linux
      • Clusters currently don’t support other Linux variants.
    • Terraform Enterprise install that is provisioned on a network that does not have Internet access is generally known as an air-gapped install.

HashiCorp Certified Terraform Associate Exam Resources

Terraform Cheat Sheet

  • An open-source, declarative provisioning tool based on the Infrastructure as Code (IaC) paradigm
  • designed on immutable infrastructure principles
  • Written in Go and uses its own syntax – HCL (HashiCorp Configuration Language) – but also supports JSON
  • Helps to evolve the infrastructure, safely and predictably
  • Applies graph theory to IaC and provides Automation, Versioning and Reusability
  • Terraform is a multipurpose composition tool:
    ○ Composes multiple tiers (SaaS/PaaS/IaaS)
    ○ A plugin-based architecture model
  • Terraform is a cloud-agnostic tool. It embraces all major Cloud Providers and provides a common language to orchestrate the infrastructure resources
  • Terraform is not a configuration management tool; other tools like Chef and Ansible exist in the market for that purpose.

Terraform Architecture

Terraform Architecture

Terraform Providers (Plugins)

  • provide an abstraction above the upstream API and are responsible for understanding API interactions and exposing resources
  • invoke only upstream APIs for the basic CRUD operations
  • are unaware of anything related to configuration loading, graph theory, etc.
  • support multiple provider instances using alias, e.g. multiple aws providers with different regions (see the sketch after this list)
  • can be integrated with any API using providers framework
  • Most providers configure a specific infrastructure platform (either cloud or self-hosted).
  • can also offer local utilities for tasks like generating random numbers for unique resource names.
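
A minimal sketch of multiple provider instances distinguished by an alias (regions and AMI IDs are hypothetical):

    provider "aws" {
      region = "us-east-1"              # default provider instance
    }

    provider "aws" {
      alias  = "west"                   # additional instance of the same provider
      region = "us-west-2"
    }

    resource "aws_instance" "primary" {
      ami           = "ami-11111111"    # hypothetical AMI ID; uses the default aws provider
      instance_type = "t3.micro"
    }

    resource "aws_instance" "secondary" {
      provider      = aws.west          # explicitly selects the aliased provider
      ami           = "ami-22222222"    # hypothetical AMI ID
      instance_type = "t3.micro"
    }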

Terraform Provisioners

  • run code locally or remotely on resource creation
    • local-exec executes code on the machine running Terraform (see the sketch after this list)
    • remote-exec
      • runs on the provisioned resource
      • supports ssh and winrm
      • requires an inline list of commands
  • should be used as a last resort
  • are defined within the resource block.
  • support two types – creation-time (default) and destroy-time provisioners
    • if a creation-time provisioner fails, the resource is marked as tainted by default (it will be destroyed and re-created on the next apply)
    • this behavior can be overridden by setting on_failure to continue, which means ignore the error and continue
    • if a destroy-time provisioner fails, the resource is not removed
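
A minimal sketch of creation-time provisioners (the AMI ID, user, and key path are hypothetical); remember provisioners are a last resort:

    resource "aws_instance" "web" {
      ami           = "ami-11111111"   # hypothetical AMI ID
      instance_type = "t3.micro"

      # local-exec runs on the machine executing terraform
      provisioner "local-exec" {
        command = "echo ${self.private_ip} >> private_ips.txt"
      }

      # remote-exec runs on the newly created resource over SSH
      provisioner "remote-exec" {
        inline = [
          "sudo yum install -y nginx",
          "sudo systemctl start nginx",
        ]

        connection {
          type        = "ssh"
          host        = self.public_ip
          user        = "ec2-user"               # hypothetical user
          private_key = file("~/.ssh/id_rsa")    # hypothetical key path
        }

        on_failure = continue                    # ignore failures instead of tainting the resource
      }
    }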

Terraform Workspaces

  • helps manage multiple distinct sets of infrastructure resources or environments with the same code.
  • just create the needed workspaces and use them, instead of maintaining a directory per environment; the workspace name can also be interpolated inside the configuration (see the sketch after this list)
  • state files for each workspace are stored in the directory terraform.tfstate.d
  • terraform workspace new dev creates a new workspace and switches to it as well
  • terraform workspace select dev helps select workspace
  • terraform workspace list lists the workspaces and shows the current active one with *
  • does not provide strong separation as it uses the same backend
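
A minimal sketch of using the current workspace name inside a configuration (AMI ID and sizing are hypothetical):

    resource "aws_instance" "app" {
      ami           = "ami-11111111"                                          # hypothetical AMI ID
      instance_type = terraform.workspace == "prod" ? "m5.large" : "t3.micro" # size per environment

      tags = {
        Name = "app-${terraform.workspace}"   # e.g. app-dev, app-prod
      }
    }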

Terraform Workflow

Terraform Workflow

init

  • initializes a working directory containing Terraform configuration files.
  • performs
    • backend initialization – storage for the Terraform state file
    • module installation – modules are downloaded from the source, e.g. the Terraform Registry, to a local path
    • provider plugin installation – plugins are downloaded into the .terraform/plugins sub-directory of the present working directory
  • supports -upgrade to update all previously installed plugins to the newest version that complies with the configuration’s version constraints
  • is safe to run multiple times, to bring the working directory up to date with changes in the configuration
  • does not delete the existing configuration or state

validate

  • validates the Terraform files in the directory for format and correctness
  • verifies whether a configuration is syntactically valid and internally consistent, regardless of any provided variables or existing state
  • a syntax check is done on all the Terraform files in the directory, and an error is displayed if any of the files fails validation

plan

  • creates an execution plan
  • traverses each vertex of the graph and requests each provider using parallelism
  • calculates the difference between the last-known state and the current state and presents this difference as the output of the terraform plan operation to the user in their terminal
  • does not modify the infrastructure or state.
  • allows a user to see which actions Terraform will perform prior to making any changes to reach the desired state
  • will scan all *.tf  files in the directory and create the plan
  • will perform refresh for each resource and might hit rate limiting issues as it calls provider APIs
  • refresh of all resources can be disabled or avoided using
    • -refresh=false or
    • -target=xxxx or
    • breaking resources into different directories
  • supports -out to save the plan

apply

  • applies the changes required to reach the desired state.
  • scans the current directory for the configuration and applies the changes appropriately.
  • can be provided with an explicit plan, saved as output from terraform plan
  • If no explicit plan file is given on the command line, terraform apply will create a new plan automatically and prompt for approval to apply it
  • will modify the infrastructure and the state.
  • if a resource successfully creates but fails during provisioning,
    • Terraform will error and mark the resource as “tainted”.
    • A resource that is tainted has been physically created, but can’t be considered safe to use since provisioning failed.
    • Terraform also does not automatically roll back and destroy the resource during the apply when the failure happens, because that would go against the execution plan: the execution plan would’ve said a resource will be created, but does not say it will ever be deleted.
  • does not import any resource.
  • supports -auto-approve to apply the changes without asking for a confirmation
  • supports -target to apply a specific module

refresh

  • used to reconcile the state Terraform knows about (via its state file) with the real-world infrastructure
  • does not modify infrastructure, but does modify the state file

destroy

  • destroys the infrastructure and all resources managed by the configuration
  • modifies both state and infrastructure
  • terraform destroy -target can be used to destroy targeted resources
  • terraform plan -destroy allows creation of destroy plan

import

  • helps import already-existing external resources, not managed by Terraform, into the Terraform state and allows it to manage those resources
  • Terraform is not able to auto-generate configurations for the imported resources, for now, and requires you to first write the resource definition in Terraform and then import the resource

taint

  • marks a Terraform-managed resource as tainted, forcing it to be destroyed and recreated on the next apply.
  • will not modify infrastructure, but does modify the state file in order to mark a resource as tainted. Infrastructure and state are changed in next apply.
  • can be used to taint a resource within a module

fmt

  • formats the configuration files into a canonical, standard format and style

console

  • command provides an interactive console for evaluating expressions.

Terraform Modules

  • enables code reuse
  • supports versioning to maintain compatibility
  • stores code remotely
  • enables easier testing
  • enables encapsulation with all the separate resources under one configuration block
  • modules can be nested inside other modules, allowing you to quickly spin up whole separate environments.
  • can be referred to using the source attribute (see the sketch after this list)
  • supports Local and Remote modules
    • Local modules are stored alongside the Terraform configuration (in a separate directory, outside of each environment but in the same repository) with source path ./ or ../
    • Remote modules are stored externally in a separate repository, and supports versioning
  • supports the following module sources
    • Local paths
    • Terraform Registry
    • GitHub
    • Bitbucket
    • Generic Git, Mercurial repositories
    • HTTP URLs
    • S3 buckets
    • GCS buckets
  • Module requirements
    • must be on GitHub and must be a public repo, if using public registry.
    • must be named terraform-<PROVIDER>-<NAME>, where <NAME> reflects the type of infrastructure the module manages and <PROVIDER> is the main provider where it creates that infrastructure. for e.g. terraform-google-vault or terraform-aws-ec2-instance.
    • must maintain x.y.z tags for releases to identify module versions. Release tag names must be a semantic version, which can optionally be prefixed with a v for example, v1.0.4 and 0.9.2. Tags that don’t look like version numbers are ignored.
    • must maintain a Standard module structure, which allows the registry to inspect the module and generate documentation, track resource usage, parse submodules and examples, and more.
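
A minimal sketch of consuming a registry module and a local module (the registry module name and its inputs are assumptions based on the public terraform-aws-modules/vpc/aws module):

    module "vpc" {
      source  = "terraform-aws-modules/vpc/aws"   # registry source <NAMESPACE>/<NAME>/<PROVIDER>
      version = "~> 3.0"                          # version constraints work for registry/remote sources

      name = "demo-vpc"
      cidr = "10.0.0.0/16"
    }

    module "network" {
      source = "../modules/network"               # local module referenced by a relative path
    }

    # outputs of a child module are referenced as module.<MODULE NAME>.<OUTPUT NAME>
    output "vpc_id" {
      value = module.vpc.vpc_id
    }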

Terraform Read and write configuration

terraform_sample

  • Resources
    • resources are the most important element in the Terraform language and describe one or more infrastructure objects, such as compute instances (a consolidated sketch covering these configuration elements follows this list)
    • resource type and local name together serve as an identifier for a given resource and must be unique within a module for e.g.  aws_instance.local_name
  • Data Sources
    • data allow data to be fetched or computed for use elsewhere in Terraform configuration
    • allows a Terraform configuration to make use of information defined outside of Terraform, or defined by another separate Terraform configuration
  • Variables
    • variables serve as parameters for a Terraform module and act like function arguments
    • allow aspects of the module to be customized without altering the module’s own source code, and allow modules to be shared between different configurations
    • can be defined in multiple ways
      • command line, e.g. -var="image_id=ami-abc123"
      • variable definition files .tfvars or .tfvars.json; by default, Terraform automatically loads
        • files named exactly terraform.tfvars or terraform.tfvars.json
        • any files with names ending in .auto.tfvars or .auto.tfvars.json
        • a file can also be passed explicitly with -var-file
      • environment variables using the format TF_VAR_name
    • Terraform loads variables in the following order, with later sources taking precedence over earlier ones:
      • environment variables
      • terraform.tfvars file, if present
      • terraform.tfvars.json file, if present
      • any *.auto.tfvars or *.auto.tfvars.json files, processed in lexical order of their filenames
      • any -var and -var-file options on the command line, in the order they are provided
  • Local Values
    • locals assigns a name to an expression, allowing it to be used multiple times within a module without repeating it.
    • are like a function’s temporary local variables.
    • helps to avoid repeating the same values or expressions multiple times in a configuration.
  • Output
    • are like function return values.
    • output can be marked as containing sensitive material using the optional sensitive argument, which prevents Terraform from showing its value in the list of outputs. However, they are still stored in the state as plain text.
    • In a parent module, outputs of child modules are available in expressions as module.<MODULE NAME>.<OUTPUT NAME>.
  • Named Values
    • is an expression that references the associated value for e.g. aws_instance.local_name, data.aws_ami.centos, var.instance_type etc.
    • support Local named values for e.g count.index
  • Dependencies
    • identifies implicit dependencies as Terraform automatically infers when one resource depends on another by studying the resource attributes used in interpolation expressions for e.g aws_eip on resource aws_instance
    • explicit dependencies can be defined using depends_on for dependencies between resources that are not visible to Terraform
  • Data Types
    • supports primitive data types of
      • string, number and bool
      • Terraform language will automatically convert number and bool values to string values when needed
    • supports complex data types of
      • list – a sequence of values identified by consecutive whole numbers starting with zero.
      • map – a collection of values where each is identified by a string label.
      • set –  a collection of unique values that do not have any secondary identifiers or ordering.
    • supports structural data types of
      • object – a collection of named attributes that each have their own type
      • tuple – a sequence of elements identified by consecutive whole numbers starting with zero, where each element has its own type.
  • Built-in Functions
    • includes a number of built-in functions that can be called from within expressions to transform and combine values for e.g. min, max, file, concat, element, index, lookup etc.
    • does not support user-defined functions
  • Dynamic Blocks
    • acts much like a for expression, but produces nested blocks instead of a complex typed value. It iterates over a given complex value, and generates a nested block for each element of that complex value.
  • Terraform Comments
    • supports three different syntaxes for comments:
      • #
      • //
      • /* and */
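
A consolidated sketch tying the above elements together (the AMI filter, ports, and tags are hypothetical):

    variable "instance_type" {
      type        = string
      default     = "t3.micro"
      description = "EC2 instance type"
    }

    locals {
      common_tags = {
        Project = "demo"
        Owner   = "ops"
      }
    }

    # data source – information defined outside this configuration
    data "aws_ami" "amazon_linux" {
      most_recent = true
      owners      = ["amazon"]

      filter {
        name   = "name"
        values = ["amzn2-ami-hvm-*-x86_64-gp2"]
      }
    }

    resource "aws_security_group" "web" {
      name = "web-sg"

      # dynamic block – generates one ingress block per element of the list
      dynamic "ingress" {
        for_each = [80, 443]
        content {
          from_port   = ingress.value
          to_port     = ingress.value
          protocol    = "tcp"
          cidr_blocks = ["0.0.0.0/0"]
        }
      }
    }

    resource "aws_instance" "web" {
      ami           = data.aws_ami.amazon_linux.id   # implicit dependency on the data source
      instance_type = var.instance_type
      tags          = local.common_tags

      depends_on = [aws_security_group.web]          # explicit dependency, shown purely to illustrate depends_on
    }

    output "instance_ip" {
      value     = aws_instance.web.private_ip
      sensitive = true   # hidden from CLI output, but still plain text in the state
    }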

Terraform Backends

  • determines how state is loaded and how an operation such as apply is executed
  • are responsible for storing state and providing an API for optional state locking
  • needs to be initialized
  • when the backend is changed after the first-time setup, Terraform provides an option to migrate the existing state to the new backend
  • helps
    • collaboration and working as a team, with the state maintained remotely and state locking
    • can provide enhanced security for sensitive data
    • support remote operations
  • supports local vs remote backends
    • local (default) backend stores state in a local JSON file on disk
    • remote backend stores state remotely like S3, OSS, GCS, Consul and support features like remote operation, state locking, encryption, versioning etc.
  • supports partial configuration, with the remaining configuration arguments provided as part of the initialization process (see the sketch after this list)
  • Backend configuration doesn’t support interpolations.
  • GitHub is not a supported backend type in Terraform.
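
A minimal sketch of a remote S3 backend with DynamoDB state locking (bucket and table names are hypothetical); the same arguments could instead be supplied as a partial configuration via terraform init -backend-config=... :

    terraform {
      backend "s3" {
        bucket         = "my-terraform-state"      # hypothetical bucket name
        key            = "prod/terraform.tfstate"
        region         = "us-east-1"
        encrypt        = true                      # encrypt the state at rest in S3
        dynamodb_table = "terraform-locks"         # hypothetical table, enables state locking
      }
    }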

Terraform State Management

  • state helps keep track of the infrastructure Terraform manages
  • stored locally in the terraform.tfstate
  • recommended not to edit the state manually
  • Use terraform state command
    • mv – to move/rename modules
    • rm – to safely remove a resource from the state; the real infrastructure is retained, not destroyed
    • pull – to observe current remote state
    • list & show – to write/debug modules

State Locking

  • happens for all operations that could write state, if supported by backend
  • prevents others from acquiring the lock & potentially corrupting the state
  • backends which support state locking are
    • azurerm
    • Hashicorp consul
    • Tencent Cloud Object Storage (COS)
    • etcdv3
    • Google Cloud Storage GCS
    • HTTP endpoints
    • Kubernetes Secret with locking done using a Lease resource
    • AliCloud Object Storage OSS with locking via TableStore
    • PostgreSQL
    • AWS S3 with locking via DynamoDB
    • Terraform Enterprise
  • Backends which do not support state locking are
    • artifactory
    • etcd
  • can be disabled for most commands with the -lock=false flag
  • use force-unlock command to manually unlock the state if unlocking failed

State Security

  • can contain sensitive data, depending on the resources in use for e.g passwords and keys
  • using local state, data is stored in plain-text JSON files
  • using remote state, state is held in memory when used by Terraform. It may be encrypted at rest, if supported by backend for e.g. S3, OSS

Terraform Logging

  • debugging can be controlled using the TF_LOG environment variable, which can be configured with the levels TRACE, DEBUG, INFO, WARN or ERROR, with TRACE being the most verbose.
  • the log path can be controlled using TF_LOG_PATH; TF_LOG also needs to be specified for the path to take effect.

Terraform Cloud and Terraform Enterprise

  • Terraform Cloud provides Cloud Infrastructure Automation as a Service. It is offered as a multi-tenant SaaS platform and is designed to suit the needs of smaller teams and organizations. Its smaller plans default to one run at a time, which prevents users from executing multiple runs concurrently.
  • Terraform Enterprise is a private install for organizations who prefer to self-manage. It is designed to suit the needs of organizations with specific requirements for security, compliance and custom operations.
  • Terraform Cloud provides features
    • Remote Terraform Execution – supports Remote Operations for Remote Terraform execution which helps provide consistency and visibility for critical provisioning operations.
    • Workspaces – organizes infrastructure with workspaces instead of directories. Each workspace contains everything necessary to manage a given collection of infrastructure, and Terraform uses that content whenever it executes in the context of that workspace.
    • Remote State Management – acts as a remote backend for the Terraform state. State storage is tied to workspaces, which helps keep state associated with the configuration that created it.
    • Version Control Integration – is designed to work directly with the version control system (VCS) provider.
    • Private Module Registry – provides a private and central library of versioned & validated modules to be used within the organization
    • Team based Permission System – can define groups of users that match the organization’s real-world teams and assign them only the permissions they need
    • Sentinel Policies – embeds the Sentinel policy-as-code framework, which lets you define and enforce granular policies for how the organization provisions infrastructure. Helps eliminate provisioned resources that don’t follow security, compliance, or operational policies.
    • Cost Estimation – can display an estimate of its total cost, as well as any change in cost caused by the proposed updates
    • Security – encrypts state at rest and protects it with TLS in transit.
  • Terraform Enterprise features
    • includes all the Terraform Cloud features with
    • Audit – supports detailed audit logging and tracks the identity of the user requesting state and maintains a history of state changes.
    • SSO/SAML – SAML for SSO provides the ability to govern user access to your applications.
  • Terraform Enterprise currently supports running under the following operating systems for a Clustered deployment:
    • Ubuntu 16.04.3 – 16.04.5 / 18.04
    • Red Hat Enterprise Linux 7.4 through 7.7
    • CentOS 7.4 – 7.7
    • Amazon Linux
    • Oracle Linux
    • Clusters currently don’t support other Linux variants.
  • Terraform Cloud currently supports the following VCS providers
    • GitHub.com
    • GitHub.com (OAuth)
    • GitHub Enterprise
    • GitLab.com
    • GitLab EE and CE
    • Bitbucket Cloud
    • Bitbucket Server
    • Azure DevOps Server
    • Azure DevOps Services
  • A Terraform Enterprise install that is provisioned on a network that does not have Internet access is generally known as an air-gapped install. These types of installs require you to pull updates, providers, etc. from external sources vs. being able to download them directly.

AWS Certified Solutions Architect – Associate SAA-C02 Exam Learning Path

SAA-C02 Certification

AWS Certified Solutions Architect – Associate SAA-C02 Exam Learning Path

AWS Solutions Architect – Associate SAA-C02 exam is the latest AWS exam that has replaced the previous SAA-C01 certification exam. It basically validates the ability to effectively demonstrate knowledge of how to architect and deploy secure and robust applications on AWS technologies

  • Define a solution using architectural design principles based on customer requirements.
  • Provide implementation guidance based on best practices to the organization throughout the life cycle of the project.

Refer AWS_Solution_Architect_-_Associate_SAA-C02_Exam_Blue_Print

AWS Solutions Architect – Associate SAA-C02 Exam Summary

  • SAA-C02 exam consists of 65 questions in 130 minutes, and the time is more than sufficient if you are well prepared.
  • SAA-C02 Exam covers the architecture aspects in deep, so you must be able to visualize the architecture, even draw them out in the exam just to understand how it would work and how different services relate.
  • AWS has updated the exam concepts from the focus being on individual services to more building of scalable, highly available, cost-effective, performant, and resilient architectures.
  • If you had been preparing for the SAA-C01 –
    • SAA-C02 is pretty much similar to SAA-C01 except the operational effective architecture domain has been dropped
    • Although most of the services and concepts covered by SAA-C01 remain the same, there are a few new additions like Aurora Serverless, AWS Global Accelerator, FSx for Windows, and FSx for Lustre
  • AWS exams are available online, and I took the online one. Just make sure you have a proper place to take the exam with no disturbance and nothing around you.
  • Also, if you are taking the AWS Online exam for the first time try to join at least 30 minutes before the actual time.

AWS Solutions Architect – Associate SAA-C02 Exam Resources

AWS Solutions Architect – Associate SAA-C02 Exam Topics

Make sure you go through all the topics and focus on hints in italics

Networking

  • Be sure to create VPC from scratch. This is mandatory.
    • Create a VPC and understand what a CIDR is and addressing patterns
    • Create public and private subnets, configure proper routes, security groups, NACLs. (hint: Subnets are public or private depending on whether they can route traffic directly through an Internet Gateway; see the sketch after this list)
    • Create Bastion for communication with instances
    • Create NAT Gateway or Instances for instances in private subnets to interact with internet
    • Create two tier architecture with application in public and database in private subnets
    • Create three tier architecture with web servers in public, application and database servers in private. (hint: focus on security group configuration with least privilege)
    • Make sure to understand how the communication happens between Internet, Public subnets, Private subnets, NAT, Bastion etc.
  • Understand difference between Security Groups and NACLs (hint: Security Groups are Stateful vs NACLs are stateless. Also only NACLs provide an ability to deny or block IPs)
  • Understand VPC endpoints and what services it can help interact (hint: VPC Endpoints routes traffic internally without Internet)
    • VPC Gateway Endpoints supports S3 and DynamoDB.
    • VPC Interface Endpoints OR Private Links supports others
  • Understand difference between NAT Gateway and NAT Instance (hint: NAT Gateway is AWS managed and is scalable and highly available)
  • Understand how NAT high availability can be achieved (hint: provision NAT in each AZ and route traffic from subnets within that AZ through that NAT Gateway)
  • Understand VPN and Direct Connect for on-premises to AWS connectivity
    • VPN provides quick connectivity, cost-effective, secure channel, however routes through internet and does not provide consistent throughput
    • Direct Connect provides consistent dedicated throughput without Internet, however requires time to setup and is not cost-effective
  • Understand Data Migration techniques
    • Choose Snowball vs Snowmobile vs Direct Connect vs VPN depending on the bandwidth available, data transfer needed, time available, encryption requirement, one-time or continuous requirement
    • Snowball, SnowMobile are for one-time data, cost-effective, quick and ideal for huge data transfer
    • Direct Connect, VPN are ideal for continuous or frequent data transfers
  • Understand CloudFront as CDN and the static and dynamic caching it provides, what can be its origin (hint: CloudFront can point to on-premises sources and its usecases with S3 to reduce load and cost)
  • Understand Route 53 for routing
    • Understand Route 53 health checks and failover routing
    • Understand  Route 53 Routing Policies it provides and their use cases mainly for high availability (hint: focus on weighted, latency, geolocation, failover routing)
  • Be sure to cover ELB concepts in deep.
    • SAA-C02 focuses on ALB and NLB and does not cover CLB
    • Understand differences between  CLB vs ALB vs NLB
      • ALB is layer 7 while NLB is layer 4
      • ALB provides content based, host based, path based routing
      • ALB provides dynamic port mapping which allows same tasks to be hosted on ECS node
      • NLB provides low latency and ability to scale
      • NLB provides static IP address
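
Purely as a hands-on aid, a minimal Terraform sketch (hypothetical CIDRs) showing that a subnet is public only because its route table sends 0.0.0.0/0 to an Internet Gateway:

    resource "aws_vpc" "main" {
      cidr_block = "10.0.0.0/16"                # hypothetical CIDR
    }

    resource "aws_internet_gateway" "igw" {
      vpc_id = aws_vpc.main.id
    }

    # public subnet – its associated route table routes 0.0.0.0/0 to the Internet Gateway
    resource "aws_subnet" "public" {
      vpc_id     = aws_vpc.main.id
      cidr_block = "10.0.1.0/24"
    }

    resource "aws_route_table" "public" {
      vpc_id = aws_vpc.main.id

      route {
        cidr_block = "0.0.0.0/0"
        gateway_id = aws_internet_gateway.igw.id
      }
    }

    resource "aws_route_table_association" "public" {
      subnet_id      = aws_subnet.public.id
      route_table_id = aws_route_table.public.id
    }

    # private subnet – no route to the Internet Gateway; instances reach the
    # internet only through a NAT Gateway placed in a public subnet
    resource "aws_subnet" "private" {
      vpc_id     = aws_vpc.main.id
      cidr_block = "10.0.2.0/24"
    }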

Security

  • Understand IAM as a whole
    • Focus on IAM role (hint: can be used for EC2 application access and Cross-account access)
    • Understand IAM identity providers and federation and use cases
    • Understand MFA and how would implement two factor authentication for an application
    • Understand IAM Policies (hint: expect couple of questions with policies defined and you need to select correct statements)
  • Understand encryption services
  • AWS WAF integrates with CloudFront to provide protection against Cross-site scripting (XSS) attacks. It also provides IP blocking and geo-protection.
  • AWS Shield integrates with CloudFront to provide protection against DDoS.
  • Refer Disaster Recovery whitepaper, be sure you know the different recovery types with impact on RTO/RPO.

Storage

  • Understand various storage options S3, EBS, Instance store, EFS, Glacier, FSx and what are the use cases and anti patterns for each
  • Instance Store
    • Understand Instance Store (hint: it is physically attached  to the EC2 instance and provides the lowest latency and highest IOPS)
  • Elastic Block Storage – EBS
    • Understand various EBS volume types and their use cases in terms of IOPS and throughput. SSD for IOPS and HDD for throughput
    • Understand Burst performance and I/O credits to handle occasional peaks
    • Understand EBS Snapshots (hint: backups are automated, snapshots are manual)
  • Simple Storage Service – S3
    • Cover S3 in depth
    • Understand S3 storage classes with lifecycle policies (see the sketch after this list)
      • Understand the difference between S3 Standard vs S3 Standard-IA vs S3 One Zone-IA in terms of cost and durability
    • Understand S3 Data Protection (hint: S3 Client side encryption encrypts data before storing it in S3)
    • Understand S3 features including
      • S3 provides a cost effective static website hosting
      • S3 versioning provides protection against accidental overwrites and deletions
      • S3 Pre-Signed URLs for both upload and download provides access without needing AWS credentials
      • S3 CORS allows cross domain calls
      • S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket.
    • Understand Glacier as an archival storage with various retrieval patterns
    • Glacier Expedited retrieval now allows object retrieval within mins
  • Understand Storage gateway and its different types.
    • Cached Volume Gateway provides access to frequently accessed data, while using AWS as the actual storage
    • Stored Volume gateway uses AWS as a backup, while the data is being stored on-premises as well
    • File Gateway supports NFS and SMB protocols
  • Understand FSx, which makes it easy and cost-effective to launch and run popular file systems.
  • Understand the difference between EBS vs S3 vs EFS
    • EFS provides a shared volume across multiple EC2 instances, while an EBS volume can be attached to a single EC2 instance within the same AZ.
  • Understand the difference between EBS vs Instance Store
  • Would recommend referring to the Storage Options whitepaper; although a bit dated, 90% of it still holds true
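
Purely as a hands-on aid, a minimal Terraform sketch (hypothetical bucket name, using the inline lifecycle_rule syntax of AWS provider v3) of versioning plus lifecycle transitions across storage classes:

    resource "aws_s3_bucket" "logs" {
      bucket = "example-logs-bucket"     # hypothetical bucket name

      versioning {
        enabled = true                   # protects against accidental overwrites and deletions
      }

      lifecycle_rule {
        id      = "archive"
        enabled = true

        transition {
          days          = 30
          storage_class = "STANDARD_IA"  # infrequent access after 30 days
        }
        transition {
          days          = 90
          storage_class = "GLACIER"      # archive after 90 days
        }
        expiration {
          days = 365                     # delete after a year
        }
      }
    }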

Compute

  • Understand Elastic Cloud Compute – EC2
  • Understand Auto Scaling and ELB, how they work together to provide High Available and Scalable solution. (hint: Span both ELB and Auto Scaling across Multi-AZs to provide High Availability)
  • Understand EC2 Instance Purchase Types – Reserved, Scheduled Reserved, On-demand and Spot and their use cases
    • Choose Reserved Instances for continuous persistent load
    • Choose Scheduled Reserved Instances for load with fixed scheduled and time interval
    • Choose Spot instances for fault tolerant and Spiky loads
    • Reserved instances provide cost benefits for long-term requirements over On-demand instances
    • Spot instances provide cost benefits for temporary, fault-tolerant, spiky loads
  • Understand EC2 Placement Groups (hint: Cluster placement groups provide low latency and high throughput communication, while Spread placement group provides high availability)
  • Understand Lambda and serverless architecture, its features and use cases. (hint: Lambda integrated with API Gateway to provide a serverless, highly scalable, cost-effective architecture)
  • Understand ECS with its ability to deploy containers and micro services architecture.
    • ECS role for tasks can be provided through taskRoleArn
    • ALB provides dynamic port mapping to allow multiple same tasks on the same node
  • Know Elastic Beanstalk at a high level, what it provides and its ability to get an application running quickly.

Databases

  • Understand relational and NoSQLs data storage options which include RDS, DynamoDB, Aurora and their use cases
  • RDS
    • Understand RDS features – Read Replicas vs Multi-AZ
      • Read Replicas for scalability, Multi-AZ for High Availability
      • Multi-AZ is regional only (within the same region)
      • Read Replicas can span across regions and can be used for disaster recovery
    • Understand Automated Backups, underlying volume types
  • Aurora
    • Understand Aurora
      • provides multiple read replicas and replicates 6 copies of data across AZs
    • Understand Aurora Serverless provides a highly scalable cost-effective database solution
  • DynamoDB
    • Understand DynamoDB with its low latency performance, key-value store (hint: DynamoDB is not a relational database)
    • DynamoDB DAX provides caching for DynamoDB
    • Understand DynamoDB provisioned throughput for Read/Writes (it is covered more in the Developer exam though)
  • Know ElastiCache use cases, mainly for caching performance

Integration Tools

  • Understand SQS as message queuing service and SNS as pub/sub notification service
  • Understand SQS features like visibility, long poll vs short poll
  • Focus on SQS as a decoupling service
  • Understand SQS Standard vs SQS FIFO difference (hint: FIFO provides exactly-once delivery but lower throughput); see the sketch after this list
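
Purely as a hands-on aid, a minimal Terraform sketch (queue names are hypothetical) contrasting a standard and a FIFO queue:

    # standard queue – at-least-once delivery, best-effort ordering, very high throughput
    resource "aws_sqs_queue" "standard" {
      name = "orders-standard"
    }

    # FIFO queue – exactly-once processing and strict ordering, at lower throughput
    resource "aws_sqs_queue" "fifo" {
      name                        = "orders.fifo"   # FIFO queue names must end in .fifo
      fifo_queue                  = true
      content_based_deduplication = true
    }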

Analytics

  • Know Redshift as a business intelligence tool
  • Know Kinesis for real time data capture and analytics
  • At least know what AWS Glue does, so you can eliminate the answer

Management Tools

  • Understand CloudWatch monitoring to provide operational transparency
  • Know which EC2 metrics it can track. Remember, it cannot track memory and disk space/swap utilization
  • Understand CloudWatch is extendable with custom metrics
  • Understand CloudTrail for Audit
  • Have a basic understanding of CloudFormation, OpsWorks

AWS Whitepapers & Cheat sheets

AWS Solutions Architect – Associate Exam Domains

Domain 1: Design Resilient Architectures

  1. Design a multi-tier architecture solution
  2. Design highly available and/or fault-tolerant architectures
  3. Design decoupling mechanisms using AWS services
  4. Choose appropriate resilient storage

Domain 2: Define High-Performing Architectures

  1. Identify elastic and scalable compute solutions for a workload
  2. Select high-performing and scalable storage solutions for a workload
  3. Select high-performing networking solutions for a workload
  4. Choose high-performing database solutions for a workload

Domain 3: Specify Secure Applications and Architectures

  1. Design secure access to AWS resources
  2. Design secure application tiers
  3. Select appropriate data security options

Domain 4: Design Cost-Optimized Architectures

  1. Determine how to design cost-optimized storage.
  2. Determine how to design cost-optimized compute.

Google Cloud – Associate Cloud Engineer Certification learning path

Google Cloud - Associate Cloud Engineer

Google Cloud – Associate Cloud Engineer Certification learning path

Google Cloud – Associate Cloud Engineer certification exam is basically for one who works day in, day out with the Google Cloud services. It targets a Cloud Engineer who deploys applications, monitors operations, and manages enterprise solutions. The exam makes sure it covers the gamut of services and concepts. The exam is not that tough, and the available time of 2 hours is quite plenty if you are well prepared.

Google Cloud – Associate Cloud Engineer Certification Summary

  • Has 50 questions to be answered in 2 hours.
  • Covers a wide range of Google Cloud services and what they actually do. It focuses heavily on IAM, Compute, and Storage, with a little bit of Network but hardly any data services.
  • Hands-on is a must. Covers Cloud SDK, CLI commands and Console operations that you would use for day-to-day work. If you have not worked on GCP before, make sure you do a lot of labs, else you would be absolutely clueless for some of the questions and commands.
  • Once again, be sure that NO online course or practice test is going to cover everything. I did the ACloud Guru – LA course, which covered maybe 60-70%, but hands-on or practical knowledge is a MUST.

Google Cloud – Associate Cloud Engineer Certification Topics

General Services

  • Cloud Billing
    • understand how Cloud Billing works. Monthly vs Threshold and which has priority
    • Budgets can be set to alert for projects
    • how to change a billing account for a project and what roles you need. Hint – Project Owner and Billing Administrator for the billing account
    • Cloud Billing can be exported to BigQuery and Cloud Storage
  • Resource Manager
    • Understand Resource Manager the hierarchy Organization -> Folders -> Projects -> Resources
    • IAM Policy inheritance is transitive and resources inherit the policies of all of their parent resources.
    • Effective policy for a resource is the union of the policy set on that resource and the policies inherited from higher up in the hierarchy.
  • Cloud SDK
    • understand gcloud commands esp. when dealing with
      • configurations i.e. gcloud config
        • activate profiles – gcloud config configurations activate
        • GKE setting default cluster i.e. gcloud config set container/cluster CLUSTER_NAME
        • set project gcloud config set project mygcp-demo
        • set region gcloud config set compute/region us-west1
        • set zone gcloud config set compute/zone us-west1-a
      • Get project list and ids gcloud projects list
      • Auth i.e gcloud auth
        • Auth login using user gcloud auth login
        • Auth login using service account gcloud auth activate-service-account --key-file=sa_key.json
      • deployment manager i.e. gcloud deployment-manager
      • VPC firewalls i.e. gcloud compute firewall-rules

Network Services

  • Virtual Private Cloud
    • Understand Virtual Private Cloud (VPC), subnets and host applications within them. Hint – a VPC is global and spans regions, while subnets are regional
    • Understand how Firewall rules works and how they are configured. Hint – Focus on Network Tags. Also, there are 2 implicit firewall rules – default ingress deny and default egress allow
    • Understand VPC Peering and Shared VPC
    • Understand the concept internal and external IPs and difference between static and ephemeral IPs
    • Primary IP range of an existing subnet can be expanded by modifying its subnet mask, setting the prefix length to a smaller number.
  • Cloud Load Balancing

Identity Services

  • Identity and Access Management – IAM 
    • Identity and Access Management – IAM provides administrators the ability to manage cloud resources centrally by controlling who can take what action on specific resources.
    • Understand how IAM works and how rules apply esp. the hierarchy from Organization -> Folder -> Project -> Resources
    • Understand the difference between Primitive, Pre-defined and Custom roles and their use cases
    • IAM Policy inheritance is transitive and resources inherit the policies of all of their parent resources.
    • Effective policy for a resource is the union of the policy set on that resource and the policies inherited from higher up in the hierarchy.
    • Basically  Permissions -> Roles -> (IAM Policy) -> Members
    • Need to know and understand the roles for the following services at least
      • Cloud Storage – Admin vs Creator vs Viewer
      • Compute Engine – Admin vs Instance Admin
      • Spanner – Viewer vs Database User
      • BigQuery – User vs JobUser
    • Know how to copy roles to different projects or organization. Hint – gcloud iam roles copy
    • Know how to use service accounts with applications
  • Cloud Identity
    • Cloud Identity provides IDaaS (Identity as a Service) and provides single sign-on functionality and federation with external identity providers like Active Directory.

Compute Services

  • Make sure you know all the compute services Google Compute Engine, Google App Engine and Google Kubernetes Engine, they are heavily covered in the exam.
  • Google Compute Engine
    • Google Compute Engine is the best IaaS option for compute and provides fine grained control
    • Know how to create a Compute Engine instance, connect to it using Cloud shell or ssh keys
    • Difference between backups and images and how to create instances from the same.
    • Instance templates with managed instance groups. Instance template cannot be edited, create a new one and attach.
    • Difference between managed vs unmanaged instance groups and auto-healing feature
    • Preemptible VMs and their use cases. HINT – can be terminated any time and supports max 24 hours.
    • Upgrade an instance without downtime using Live Migration
    • Managing access using OS Login or project and instance metadata
    • Prevent accidental deletion using deletion protection flag
    • In case of any issues or errors, how to debug the same
  • Google App Engine
    • Google App Engine is mainly the best option for PaaS with platforms supported and features provided.
    • Deploy an application with App Engine and understand how versioning and rolling deployments can be done
    • Understand how to keep auto scaling and traffic splitting and migration.
    • Know App Engine is a regional resource and understand the steps to migrate or deploy application to different region and project.
    • Know the difference between App Engine Flexible vs Standard
  • Google Kubernetes Engine
    • Google Container Engine is now officially Google Kubernetes Engine and the questions refer to the same
    • Google Kubernetes Engine, powered by the open source container scheduler Kubernetes, enables you to run containers on Google Cloud Platform.
    • Kubernetes Engine takes care of provisioning and maintaining the underlying virtual machine cluster, scaling your application, and operational logistics such as logging, monitoring, and cluster health management.
    • Be sure to Create a Kubernetes Cluster and configure it to host an application
    • Understand how to make the cluster auto repairable and upgradable. Hint – Node auto-upgrades and auto-repairing feature
    • Very important to understand where to use gcloud commands (to create a cluster) and kubectl commands (manage the cluster components)
    • Very important to understand how to increase cluster size and enable autoscaling for the cluster
    • know how to manage secrets like database passwords

Storage Services

  • Understand each storage service options and their use cases.
  • Cloud Storage
    • Cloud Storage is cost-effective object storage for unstructured data.
    • very important to know the different storage classes and their use cases esp. Regional and Multi-Regional (frequent access), Nearline (monthly access) and Coldline (yearly access)
    • Understand life cycle management. HINT – Changes are in accordance with the object creation date
    • Understand Signed URL to give temporary access and the users do not need to be GCP users
    • Understand access control and permissions – IAM vs ACLs (fine grained control)
    • Understand best practices esp. uploading and downloading the data. HINT using parallel composite uploads
  • Relational Databases
    • Cloud SQL
      • Cloud SQL is a fully-managed service that provides MySQL, PostgreSQL and MS SQL Server
      • storage is limited to 64TB and it is a regional service.
      • Difference between Failover and Read replicas. Failover provides High Availability and almost zero downtime while Read replicas provide scalability. Cross region Read Replicas are supported
      • Perform Point-In-Time recovery. Hint – requires binary logging and backups
    • Cloud Spanner
      • is a fully managed, mission-critical relational database service.
      • provides a scalable online transaction processing (OLTP) database with high availability and strong consistency at global scale.
      • globally distributed and can scale and handle more than 10TB.
      • not a direct replacement and would need migration
    • There is no direct option for Oracle yet.
  • Data Warehousing
    • BigQuery
      • provides scalable, fully managed enterprise data warehouse (EDW) with SQL and fast ad-hoc queries.
      • Remember it is most suitable for historical analysis.
      • know how to perform a preview or dry run. Hint – price is determined by bytes read not bytes returned.
      • supports federated tables or external tables that can support Cloud Storage, BigTable, Google Drive and Cloud SQL.

Data Services

  • Although there were only a couple of reference of big data services in the exam, it is important to know (DO NOT DEEP DIVE) the Big Data stack (esp. IoT gateway, Pub/Sub, Bigtable vs BigQuery) to understand which service fits the different layers of ingest, store, process, analytics, use
    • Cloud Storage as the medium to store data as data lake
    • Cloud Pub/Sub as the messaging service to capture real time data esp. IoT
    • Cloud Pub/Sub is designed to provide reliable, many-to-many, asynchronous messaging between applications esp. real time IoT data capture
    • Cloud Dataflow to process, transform, transfer data and the key service to integrate store and analytics.
    • Cloud BigQuery for storage and analytics. Remember BigQuery provides the same cost-effective option for storage as Cloud Storage
    • Cloud Dataprep to clean and prepare data. Hint – It can be used for anomaly detection.
    • Cloud Dataproc to handle existing Hadoop/Spark jobs. Hint – Use it to replace existing hadoop infra.
    • Cloud Datalab is an interactive tool for exploration, transformation, analysis and visualization of your data on Google Cloud Platform

Monitoring

  • Google Cloud Monitoring or Stackdriver
    • provides everything from monitoring, alert, error reporting, metrics, diagnostics, debugging, trace.
    • remember that audit-related questions mainly point to checking Stackdriver (audit logs)

DevOps services

  • Deployment Manager 
  • Google Marketplace (Cloud Launcher)
    • provides a way to launch common software packages e.g. Jenkins or WordPress and stacks on Google Compute Engine with just a few clicks like a prepackaged solution.
    • It can help minimize deployment time and can be used without any knowledge about the product

Google Cloud – Associate Cloud Engineer Certification Resources

AWS Certified SysOps Administrator – Associate (SOA-C01) Exam Learning Path

AWS Certified SysOps Administrator – Associate (SOA-C01) Exam Learning Path

AWS Certified SysOps Administrator – Associate (SOA-C01) exam is the latest AWS exam and has already replaced the old SysOps Administrator – Associate exam from 24th Sept 2018. It basically validates

  • Deploy, manage, and operate scalable, highly available, and fault tolerant systems on AWS
  • Implement and control the flow of data to and from AWS
  • Select the appropriate AWS service based on compute, data, or security requirements
  • Identify appropriate use of AWS operational best practices
  • Estimate AWS usage costs and identify operational cost control mechanisms
  • Migrate on-premises workloads to AWS

Refer AWS Certified SysOps – Associate Exam Guide Sep 18

AWS Certified SysOps Administrator - Associate Content Outline

AWS Certified SysOps Administrator – Associate (SOA-C01) Exam Summary

  • AWS Certified SysOps Administrator – Associate exam is quite different from the previous one with more focus on the error handling, deployment, monitoring.
  • AWS Certified SysOps Administrator – Associate exam covers a lot of latest AWS services like ALB, Lambda, AWS Config, AWS Inspector, AWS Shield while focusing majorly on other services like CloudWatch, Metrics from various services, CloudTrail.
  • Be sure to cover the following topics
    •  Monitoring & Management Tools
      • Understand CloudWatch monitoring to provide operational transparency
        • Know which EC2 metrics it can track (disk, network, CPU, status checks) and which would need custom metrics (memory, disk swap, disk storage etc.)
        • Know ELB monitoring
          • Classic Load Balancer metrics SurgeQueueLength and SpilloverCount
          • Reasons for 4XX and 5XX errors
      • Understand CloudTrail for audit and governance
      • Understand AWS Config and its use cases
      • Understand AWS Systems Manager and its various services like parameter store, patch manager
      • Understand AWS Trusted Advisor and what it provides
      • Very important to understand AWS CloudWatch vs AWS CloudTrail vs AWS Config
      • Very important to understand Trusted Advisor vs Systems Manager vs Inspector
      • Know Personal Health Dashboard & Service Health Dashboard
      • Deployment tools
        • Know AWS OpsWorks and its ability to support chef & puppet
        • Know Elastic Beanstalk and its advantages
        • Understand AWS CloudFormation
          • Know stacks, templates, nested stacks
          • Know how to wait for resources setup to be completed before proceeding esp. cfn-signal
          • Know how to retain resources (RDS, S3), prevent rollback in case of a failure
    • Networking & Content Delivery
      • Understand VPC in depth
        • Understand the difference between
          • Bastion host – allow access to instances in private subnet
          • NAT – route traffic from private subnets to internet
          • NAT instance vs NAT Gateway
          • Internet Gateway – Access to internet
          • Virtual Private Gateway – Connectivity between on-premises and VPC
          • Egress-Only Internet Gateway – relevant to IPv6 only to allow egress traffic from private subnet to internet, without allowing ingress traffic
        • Understand
        • Understand how VPC Peering works and limitations
        • Understand VPC Endpoints and supported services
        • Ability to debug networking issues like EC2 not accessible, EC2 instances not reachable, Instances in subnets not able to communicate with others or Internet.
      • Understand Route 53 and Routing Policies and their use cases
        • Focus on Weighted, Latency routing policies
      • Understand VPN and Direct Connect and their use cases
      • Understand CloudFront and use cases
      • Understand ELB, ALB and NLB and what features they provide like
        • ALB provides content and path routing
        • NLB provides ability to give static IPs to load balancer.
    • Compute
      • Understand EC2 in depth
        • Understand EC2 instance types
        • Understand EC2 purchase options esp. spot instances and improved reserved instances options.
        • Understand how IO Credits work and T2 burstable performance and T2 unlimited
        • Understand EC2 Metadata & Userdata. What's the use of each? How to look up instance data after it is launched.
        • Understand EC2 Security. 
          • How IAM Role work with EC2 instances
          • IAM Role can now be attached to stopped and running instances
        • Understand AMIs and remember they are regional and how can they be shared with others.
        • Troubleshoot issues with launching EC2 esp. RequestLimitExceeded, InstanceLimitExceeded etc.
        • Troubleshoot connectivity, lost ssh keys issues
      • Understand Auto Scaling
      • Understand Lambda and its use cases
      • Understand Lambda with API Gateway
    • Storage
    • Databases
    • Security
      • Understand IAM as a whole
      • Understand KMS for key management and envelope encryption
      • Understand CloudHSM and KMS vs CloudHSM esp. support for symmetric and asymmetric keys
      • Know AWS Inspector and its use cases
      • Know AWS GuardDuty as managed threat detection service. Will help eliminate as the option
      • Know AWS Shield esp. the Shield Advanced option and the features it provides
      • Know WAF as Web Traffic Firewall
      • Know AWS Artifact as on-demand access to compliance reports
    • Integration Tools
      • Understand SQS as message queuing service and SNS as pub/sub notification service
        • Focus on SQS as a decoupling service
        • Understand SQS FIFO, make sure you know the differences between standard and FIFO
      • Understand CloudWatch integration with SNS for notification
    • Cost management

AWS Certified SysOps Administrator – Associate (SOA-C01) Exam Resources

AWS Cloud Computing Whitepapers

AWS Certified SysOps Administrator – Associate (SOA-C01) Exam Contents

Domain 1: Monitoring and Reporting

  1. Create and maintain metrics and alarms utilizing AWS monitoring services
  2. Recognize and differentiate performance and availability metrics
  3. Perform the steps necessary to remediate based on performance and availability metrics

Domain 2: High Availability

  1. Implement scalability and elasticity based on use case
  2. Recognize and differentiate highly available and resilient environments on AWS

Domain 3: Deployment and Provisioning

  1. Identify and execute steps required to provision cloud resources
  2. Identify and remediate deployment issues

Domain 4: Storage and Data Management

  1. Create and manage data retention
  2. Identify and implement data protection, encryption, and capacity planning needs

Domain 5: Security and Compliance

  1. Implement and manage security policies on AWS
  2. Implement access controls when using AWS
  3. Differentiate between the roles and responsibility within the shared responsibility model

Domain 6: Networking

  1. Apply AWS networking features
  2. Implement connectivity services of AWS
  3. Gather and interpret relevant information for network troubleshooting

Domain 7: Automation and Optimization

  1. Use AWS services and features to manage and assess resource utilization
  2. Employ cost-optimization strategies for efficient resource utilization
  3. Automate manual or repeatable process to minimize management overhead

AWS Certified Developer – Associate DVA-C01 Exam Learning Path

AWS Certified Developer – Associate DVA-C01 Exam Learning Path

AWS Certified Developer – Associate DVA-C01 exam is the latest AWS exam and would replace the old Developer – Associate exam. It basically validates

  • Demonstrate an understanding of core AWS services, uses, and basic AWS architecture best practices.
  • Demonstrate proficiency in developing, deploying, and debugging cloud-based applications using AWS.

Refer AWS Certified Developer – Associate (Released June 2018) Exam Blue Print

AWS Certified Developer - Associate June 2018 Domains

AWS Certified Developer – Associate DVA-C01 Summary

  • AWS Certified Developer – Associate DVA-C01 exam is quite different from the previous one with more focus on the hands-on development and deployment concepts rather than just the architectural concepts
  • AWS Certified Developer – Associate DVA-C01 exam covers a lot of latest AWS services like Lambda, X-Ray while focusing majorly on other services like DynamoDB, Elastic Beanstalk, S3, EC2

AWS Developer – Associate DVA-C01 Exam Resources

AWS Developer – Associate DVA-C01 Exam Topics

  • Be sure to cover the following topics
    • Compute
      • Understand what AWS services you can use to build a serverless architecture?
      • Make sure you know and understand Lambda and serverless architecture, its features and use cases.
      • Know Lambda limits for e.g. execution time, deployable zipped and unzipped package limit
      • Be sure to know how to deploy, package using Lambda.
      • Understand tracing of Lambda functions using X-Ray
      • Understand integration of Lambda with CloudWatch.
      • Understand how to handle multiple releases using Alias
      • Know AWS Step Functions to manage Lambda functions flow
      • Understand Lambda with API Gateway
      • Understand API Gateway stages, ability to cater to different environments for e.g. dev, test, prod
      • Understand EC2 as a whole
      • Understand EC2 Metadata & Userdata. What's the use of each? How to look up instance data after it is launched.
      • Understand EC2 Security. How IAM Role work with EC2 instances.
      • Understand how EC2 evaluates the order of credentials when multiple are provided. Remember the order – Environment variables -> Java system properties -> Default credential profiles file -> ECS container credentials -> Instance Profile credentials
      • Know Elastic Beanstalk at a high level, what it provides and its ability to get an application running quickly
      • Understand Elastic Beanstalk configurations and deployment types with their advantages and disadvantages
    • Databases
      • Understand relational and NoSQLs data storage options which include RDS, DynamoDB and their use cases
      • Understand DynamoDB Secondary Indexes
      • Make sure you understand DynamoDB provisioned throughput for Read/Writes and its calculations
      • Make sure you understand DynamoDB Consistency Model – difference between Strongly Consistent and Eventual Consistency
      • Understand DynamoDB with its low latency performance, DAX
      • Know how to configure fine grained security for DynamoDB table, items, attributes
      • Understand DynamoDB Best Practices regarding
        • table design
        • provisioned throughput
        • Query vs Scan operations
        • improving Scan operation performance
      • Understand RDS features – Read Replicas for scalability, Multi-AZ for High Availability
      • Know ElastiCache use cases, mainly for caching performance
      • Understand ElastiCache Redis vs Memcached
    • Storage
      • Understand S3 storage option
      • Understand S3 Best Practices to improve performance for GET/PUT requests
      • Understand S3 features like different storage classes with lifecycle policies, static website hosting, versioning, Pre-Signed URLs for both upload and download, CORS
    • Security
      • Understand IAM as a whole
      • Focus on IAM role and its use case especially with EC2 instance
      • Know how to test and validate IAM policies
      • Understand IAM identity providers and federation and use cases
      • Understand how AWS Cognito works and what features it provides
      • Understand MFA and How would implement two factor authentication for your application
      • Understand KMS for key management and envelope encryption
      • Know what services support KMS
        • Remember SQS, Kinesis now provides SSE support
      • Focus on S3 with SSE, SSE-C, SSE-KMS. How they work and differ?
      • Know how you can enforce buckets to only accept encrypted objects
      • Know various KMS encryption operations – Encrypt, ReEncrypt, GenerateDataKey, GenerateDataKeyWithoutPlaintext, etc.
      • Know how KMS impacts the performance of the services
    • Management Tools
      • Understand CloudWatch monitoring to provide operational transparency
      • Know which EC2 metrics it can track.
      • Understand CloudWatch is extendable with custom metrics
      • Understand CloudTrail for Audit
    • Integration Tools
      • Understand SQS as message queuing service and SNS as pub/sub notification service
      • Understand SQS features like visibility, long poll vs short poll
      • Focus on SQS as a decoupling service
      • AWS has released SQS FIFO, make sure you know the differences between standard and FIFO
      • Know the different development and deployment tools like CodeCommit, CodeBuild, CodeDeploy, CodePipeline
    • Networking
      • Does not cover much on networking or designing of networks, but be sure you understand VPC, Subnets, Routes, Security Groups etc.

AWS Cloud Computing Whitepapers

AWS Certified Developer – Associate DVA-C01 Exam Contents

Domain 1: Deployment

  1. Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns.
  2. Deploy applications using Elastic Beanstalk.
  3. Prepare the application deployment package to be deployed to AWS.
  4. Deploy serverless applications.

Domain 2: Security

  1. Make authenticated calls to AWS services.
  2. Implement encryption using AWS services.
  3. Implement application authentication and authorization.

Domain 3: Development with AWS Services

  1. Write code for serverless applications.
  2. Translate functional requirements into application design.
  3. Implement application design into application code.
  4. Write code that interacts with AWS services by using APIs, SDKs, and AWS CLI.

Domain 4: Refactoring

  1. Optimize application to best use AWS services and features.
  2. Migrate existing application code to run on AWS.

Domain 5: Monitoring and Troubleshooting

  1. Write code that can be monitored.
  2. Perform root cause analysis on faults found in testing or production.