AWS Systems Manager Overview

AWS Systems Manager

  • provides visibility and control of the infrastructure on AWS
  • helps to view operational data from multiple AWS services and automate operational tasks across AWS resources.
  • works with managed instances, which are configured for use with Systems Manager
  • helps configure and maintain managed instances.
  • helps maintain security and compliance by scanning the managed instances and reporting on (or taking corrective action on) any policy violations it detects.
  • Supported machine types include EC2 instances, on-premises servers, and virtual machines (VMs), including VMs in other cloud environments. Supported operating system types include Windows Server, multiple distributions of Linux, and Raspbian.

Systems Manager Capabilities

Operations Management

Capabilities that help manage the AWS resources

  • Trusted Advisor is an online tool that provides real-time guidance to help you provision your resources following AWS best practices
  • AWS Personal Health Dashboard provides information about AWS Health events that can affect your account
  • OpsCenter provides a central location where operations engineers and IT professionals can view, investigate, and resolve operational work items (OpsItems) related to AWS resources

Actions & Change

Capabilities for taking action against or changing the AWS resources

Systems Manager Automation

  • helps automate common maintenance and deployment tasks, e.g. creating and updating AMIs, applying driver and agent updates, resetting passwords on Windows instances, resetting SSH keys on Linux instances, and applying OS patches or application updates.
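
As a rough boto3 sketch (assuming appropriate SSM permissions; the instance ID is a placeholder), an Automation execution can be started against an AWS-managed runbook such as AWS-RestartEC2Instance:

  import boto3

  ssm = boto3.client("ssm")

  # Start an Automation execution using an AWS-managed runbook;
  # the instance ID below is a placeholder.
  response = ssm.start_automation_execution(
      DocumentName="AWS-RestartEC2Instance",
      Parameters={"InstanceId": ["i-0123456789abcdef0"]},
  )
  print(response["AutomationExecutionId"])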

Maintenance Windows

  •  helps set up recurring schedules for managed instances to run administrative tasks like installing patches and updates without interrupting business-critical operations.
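
A minimal sketch (the window name and schedule are placeholders) of creating a weekly maintenance window with boto3:

  import boto3

  ssm = boto3.client("ssm")

  # Create a maintenance window that opens every Sunday at 02:00 UTC
  # for 3 hours and stops starting new tasks 1 hour before it closes.
  window = ssm.create_maintenance_window(
      Name="weekly-patching",
      Schedule="cron(0 2 ? * SUN *)",
      Duration=3,
      Cutoff=1,
      AllowUnassociatedTargets=False,
  )
  print(window["WindowId"])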

Instances & Nodes

Capabilities for managing the EC2 instances, on-premises servers and virtual machines (VMs) in the hybrid environment, and other types of AWS resources (nodes)

Systems Manager Configuration Compliance

  • helps scan the fleet of managed instances for patch compliance and configuration inconsistencies.
  • helps collect and aggregate data from multiple AWS accounts and Regions, and then drill down into specific resources that aren’t compliant.
  • displays, by default, compliance data about Patch Manager patching and State Manager associations, but can be customized

Session Manager

  • helps manage EC2 instances through an interactive one-click browser-based shell or through the AWS CLI.
  • provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys.
  • helps comply with corporate policies that require controlled access to instances, strict security practices, and fully auditable logs with instance access details, while still providing end users with simple one-click cross-platform access to the EC2 instances.

Systems Manager Run Command

  • helps to remotely and securely manage the configuration of the managed instances at scale.
  • helps perform on-demand changes like updating applications or running Linux shell scripts and Windows PowerShell commands on a target set of dozens or hundreds of instances.
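
For illustration, a hedged boto3 sketch (the tag key and values are placeholders) of running a shell command across all instances carrying a given tag:

  import boto3

  ssm = boto3.client("ssm")

  # Run shell commands on every managed instance with a matching tag.
  response = ssm.send_command(
      Targets=[{"Key": "tag:Environment", "Values": ["Dev"]}],
      DocumentName="AWS-RunShellScript",
      Parameters={"commands": ["uptime", "df -h"]},
  )
  print(response["Command"]["CommandId"])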

Patch Manager

  • helps automate the process of patching managed instances with both security-related and other types of updates.
  • helps apply patches for both operating systems and applications. (On Windows Server, application support is limited to updates for Microsoft applications.)
  • scans instances for missing patches and applies them individually or to large groups of instances by using EC2 instance tags.
  • uses patch baselines, which can include rules for auto-approving patches within days of their release, as well as a list of approved and rejected patches.
  • helps install security patches on a regular basis by scheduling patching to run as a Systems Manager maintenance window task.
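
One common way to trigger a scan or install (sketch only; the tag value is a placeholder) is to run the AWS-RunPatchBaseline document via Run Command against tagged instances:

  import boto3

  ssm = boto3.client("ssm")

  # Scan tagged instances against their patch baseline; change
  # "Scan" to "Install" to actually apply the missing patches.
  response = ssm.send_command(
      Targets=[{"Key": "tag:PatchGroup", "Values": ["web-servers"]}],
      DocumentName="AWS-RunPatchBaseline",
      Parameters={"Operation": ["Scan"]},
  )
  print(response["Command"]["CommandId"])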

Systems Manager Inventory

  • provides visibility into your Amazon EC2 and on-premises computing environment
  • collects metadata about applications, files, components, patches, and more from the managed instances

Systems Manager State Manager

  • helps automate the process of keeping the managed instances in a defined state.
  • helps ensure that the instances are bootstrapped with specific software at startup, joined to a Windows domain (Windows instances only), or patched with specific software updates.
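
A minimal State Manager association sketch (tag values and schedule are placeholders) that keeps the SSM Agent updated on tagged instances:

  import boto3

  ssm = boto3.client("ssm")

  # Re-apply the AWS-UpdateSSMAgent document every 7 days so the
  # tagged instances stay in the desired state.
  association = ssm.create_association(
      Name="AWS-UpdateSSMAgent",
      Targets=[{"Key": "tag:Environment", "Values": ["Dev"]}],
      ScheduleExpression="rate(7 days)",
  )
  print(association["AssociationDescription"]["AssociationId"])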

Shared Resources

Capabilities for managing and configuring the AWS resources

Systems Manager document (SSM document)

  • defines the actions that Systems Manager performs.
  • SSM document types include 
    • Command documents, which are used by State Manager and Run Command, and 
    • Automation documents, which are used by Systems Manager Automation.
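
For illustration, a minimal custom Command document (schema version 2.2, shown as a Python dict; the document name is a placeholder) can be registered with boto3:

  import json
  import boto3

  ssm = boto3.client("ssm")

  # A minimal schema 2.2 Command document that runs a shell script.
  content = {
      "schemaVersion": "2.2",
      "description": "Print disk usage",
      "mainSteps": [
          {
              "action": "aws:runShellScript",
              "name": "diskUsage",
              "inputs": {"runCommand": ["df -h"]},
          }
      ],
  }

  ssm.create_document(
      Content=json.dumps(content),
      Name="Custom-DiskUsage",
      DocumentType="Command",
  )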

Parameter Store

  • provides secure, hierarchical storage for configuration data and secrets management.
  • can store data such as passwords, database strings, and license codes as parameter values.
  • supports values as plain text or encrypted data, referenced by using the specified unique name
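
A short boto3 sketch (the parameter name and value are placeholders) of storing a secret as an encrypted SecureString and reading it back by its unique name:

  import boto3

  ssm = boto3.client("ssm")

  # Store a secret as a KMS-encrypted SecureString parameter ...
  ssm.put_parameter(
      Name="/myapp/dev/db-password",
      Value="placeholder-password",
      Type="SecureString",
      Overwrite=True,
  )

  # ... and read it back decrypted, referencing it by name.
  value = ssm.get_parameter(
      Name="/myapp/dev/db-password",
      WithDecryption=True,
  )["Parameter"]["Value"]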

Systems Manager Agent

  • is software that can be installed and configured on an EC2 instance, an on-premises server, or a virtual machine (VM)
  • makes it possible for Systems Manager to update, manage, and configure these resources
  • must be installed on each instance to use with Systems Manager
  • usually comes preinstalled on many Amazon Machine Images (AMIs), while it must be installed manually on other AMIs, and on on-premises servers and virtual machines for a hybrid environment

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Which of the following tools from AWS allows the automatic collection of software inventory from EC2 instances and helps apply OS patches?
    1. AWS Code Deploy 
    2. Systems Manager
    3. EC2 AMIs
    4. AWS Code Pipeline
  2. A Developer is writing several Lambda functions that each access data in a common RDS DB instance. They must share a connection string that contains the database credentials, which are a secret. A company policy requires that all secrets be stored encrypted. Which solution will minimize the amount of code the Developer must write?
    1. Use common DynamoDB table to store settings
    2. Use AWS Lambda environment variables
    3. Use Systems Manager Parameter Store secure strings
    4. Use a table in a separate RDS database
  3. A company has a fleet of EC2 instances and needs to remotely execute scripts for all of the instances. Which Amazon EC2 systems Manager feature allows this?
    1. Systems Manager Automation
    2. Systems Manager Run Command
    3. Systems Manager Parameter Store
    4. Systems Manager Inventory
  4. As a part of a compliance check it was found that EC2 instances launched by the deployment team were not in compliance with the latest security patches. The team had tagged all the resources. Which AWS service can help make the instances compliant?
    1. AWS Inspector
    2. AWS GuardDuty
    3. AWS Systems Manager
    4. AWS Shield

References

AWS Certified Solution Architect – Professional (SAP-C01) Exam Learning Path

AWS Certified Solutions Architect – Professional (SAP-C01) Exam Learning Path

AWS Certified Solutions Architect – Professional (SAP-C01) exam is the upgraded pattern of the previous Solutions Architect – Professional exam, which was released last year (2018) and upgraded this year. I recently passed the latest pattern, and the difference between the previous pattern and the latest one is quite significant. The amount of overlap between the Associate and Professional exams, and even between the Solutions Architect and DevOps exams, has drastically reduced.

AWS Certified Solutions Architect – Professional (SAP-C01) exam basically validates

  • Design and deploy dynamically scalable, highly available, fault-tolerant, and reliable applications on AWS
  • Select appropriate AWS services to design and deploy an application based on given requirements
  • Migrate complex, multi-tier applications on AWS
  • Design and deploy enterprise-wide scalable operations on AWS
  • Implement cost-control strategies

Refer to AWS Certified Solutions Architect – Professional Exam Guide

AWS Certified Solutions Architect – Professional (SAP-C01) Exam Summary

  • AWS Certified Solutions Architect – Professional (SAP-C01) exam was for a total of 170 minutes but it had 75 questions. The questions and answer options are quite long and there is a lot of reading that needs to be done, so be sure you are prepared and manage your time well. As always, mark the questions for review, move on, and come back to them after you are done with all.
  • One of the key tactics I followed when solving any question was to read the question and use paper and pencil to draw a rough architecture and focus on the areas that need improvement. Trust me, you will be able to eliminate 2 answers for sure and then need to focus on only the other two. Read the other 2 answers to check where they differ, and that would help you reach the right answer or at least have a 50% chance of getting it right.
  • AWS Certified Solutions Architect – Professional (SAP-C01) focuses a lot on concepts and services related to Scalability, High Availability, Disaster Recovery, Migration, Security and Cost Control.
  • Be sure to cover the following topics
    • Analytics
      • Understand Kinesis
        • Understand the difference between Kinesis Data Streams and Kinesis Firehose
      • Know Amazon Elasticsearch provides a managed solution
    • Integration Tools
      • Understand SQS in terms of loose coupling and scaling.
      • Know how CloudWatch integration with SNS and Lambda can help in notification

AWS Certified Solutions Architect – Professional (SAP-C01) Exam Resources

AWS Organizations Service Control Policies – Certification

AWS Organizations Service Control Policies

  • are one type of policy that helps manage the organization.
  • offer central control over the maximum available permissions for all accounts in the organization, ensuring member accounts stay within the organization’s access control guidelines
  • are available only in an organization that has all features enabled
  • are NOT sufficient for granting access to the accounts in the organization.
  • define a guardrail for what actions accounts within the organization root or OU can do, but IAM policies need to be attached to the users and roles in the organization’s accounts to grant permissions to them
  • with an SCP attached to member accounts, identity-based and resource-based policies grant permissions to entities only if those policies and the SCP allow the action

Effects on Permissions

  • SCP never grants permissions
  • limits permissions for entities in member accounts, including each AWS account root user
  • does not limit actions performed by the master account.
  • does not affect any service-linked role. Service-linked roles enable other AWS services to integrate with AWS Organizations and can’t be restricted by SCPs.
  • affect only principals that are managed by accounts that are part of the organization. They don’t affect users or roles from accounts outside the organization
  • Users and roles must still be granted permissions with appropriate IAM permission policies. A user without any IAM permission policies has no access at all, even if the applicable SCPs allow all services and all actions.

Strategies for Using SCPs

  • By default, an SCP named FullAWSAccess is attached to every root, OU, and account, which allows all actions and all services.
  • Blacklist Strategy
    • actions are allowed by default, and specify what services and actions are prohibited
    • blacklist permissions using deny statements can be assigned in combination with the default FullAWSAccess SCP
    • using deny statements in SCPs requires less maintenance, because they don’t need to be updated when AWS adds new services (see the example SCP after this list)
    • deny statements usually use less space, thus making it easier to stay within SCP size limits.
  • Whitelist Strategy
    • actions are prohibited by default, and you specify what services and actions are allowed
    • whitelist permissions can be assigned by removing the default FullAWSAccess SCP
    • attach an SCP that explicitly permits only the allowed services and actions
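
As a sketch of the blacklist (deny) strategy referenced above (assuming boto3 with Organizations permissions; the policy name and OU ID are placeholders), a deny-only SCP can be created and attached alongside the default FullAWSAccess policy:

  import json
  import boto3

  org = boto3.client("organizations")

  # A deny-only SCP: accounts keep FullAWSAccess, but CloudTrail
  # trails can no longer be stopped or deleted anywhere in the OU.
  scp = {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Effect": "Deny",
              "Action": ["cloudtrail:StopLogging", "cloudtrail:DeleteTrail"],
              "Resource": "*",
          }
      ],
  }

  policy = org.create_policy(
      Content=json.dumps(scp),
      Description="Prevent disabling CloudTrail",
      Name="DenyCloudTrailChanges",
      Type="SERVICE_CONTROL_POLICY",
  )

  org.attach_policy(
      PolicyId=policy["Policy"]["PolicySummary"]["Id"],
      TargetId="ou-examplerootid111-exampleouid111",  # placeholder OU ID
  )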

Testing Effects of SCPs

  • don’t attach SCPs to the root of the organization without thoroughly testing the impact that the policy has on accounts.
  • Create an OU that the accounts can be moved into one at a time, or at least in small numbers, to ensure that users are not inadvertently locked out of key services.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Your company is planning on setting up multiple accounts in AWS. The IT Security department has a requirement to ensure that certain services and actions are not allowed across all accounts. How would the system admin achieve this in the most EFFECTIVE way possible?
    1. Create a common IAM policy that can be applied across all accounts
    2. Create an IAM policy per account and apply them accordingly
    3. Deny the services to be used across accounts by contacting AWS support
    4. Use AWS Organizations and Service Control Policies
  2. You are in the process of implementing AWS Organizations for your company. At your previous company, you saw an Organizations implementation go bad when an SCP (Service Control Policy) was applied at the root of the organization before being thoroughly tested. In what way can an SCP be properly tested and implemented?
    1. Back up your entire Organization to S3 and restore rollback and restore if something goes wrong
    2. The SCP must be verified with AWS before it is implemented to avoid any problems.
    3. Mirror your Organizational Unit in another region. Apply the SCP and test it. Once testing is complete, attach the SCP to the root of your organization.
    4. Create an Organizational Unit (OU). Attach the SCP to this new OU. Move your accounts in one at a time to ensure that you don’t inadvertently lock users out of key services.

AWS Cloud Migration – Certification

AWS Cloud Migration

Some of the key drivers for moving to the cloud are

  • Operational Costs – Key components of operational costs are the unit price of infrastructure, the ability to match supply and demand, finding a pathway to optionality, employing an elastic cost base, and transparency
  • Workforce Productivity – getting up and running in seconds, with a wide range of services readily available
  • Cost Avoidance – eliminating the need for hardware refresh programs and constant maintenance programs
  • Operational Resilience – increases resilience, thereby reducing the organization’s risk profile
  • Business Agility – react to market conditions more quickly 

Cloud Stages of Adoption


PROJECT

  • In the project phase, execute projects to get familiar and experience benefits from the cloud.

FOUNDATION

  • After experiencing the benefits of cloud, build the foundation to scale the cloud adoption.
  • This includes creating a landing zone (a pre-configured, secure, multi-account AWS environment), Cloud Center of Excellence (CCoE), operations model, as well as assuring security and compliance readiness.

MIGRATION

  • Migrate existing applications including mission-critical applications or entire data centers to the cloud as you scale your adoption across a growing portion of the IT portfolio. 

REINVENTION

  • Now that the operations are in the cloud, focus on reinvention by taking advantage of the flexibility and capabilities of AWS to transform business by speeding time to market and increasing the attention on innovation.

Migration Process


Phase 1: Migration Preparation and Business Planning

  • Determine the right objectives and begin to get an idea of the types of benefits you will see.
  • Starts with some foundational experience and developing a preliminary business case for a migration, which requires taking objectives into account, along with the age and architecture of the existing applications, and their constraints.

Phase 2: Portfolio Discovery and Planning

  • Understand the IT portfolio, the dependencies between applications, and begin to consider what types of migration strategies needed to meet the business case objectives.
  • With the portfolio discovery and migration approach, you are in a good position to build a full business case.

Phase 3 & Phase 4: Designing, Migrating, and Validating Application

  • Move focus from the portfolio level to the individual application level and design, migrate, and validate each application.
  • Each application is designed, migrated, and validated according to one of the six common application strategies (“The 6 R’s”).
  • Once you have some foundational experience from migrating a few apps and a plan in place that the organization can get behind – it’s time to accelerate the migration and achieve scale.
  • AWS provides migration services that help move applications and data from on-premises to AWS – AWS Server Migration Service (SMS) and AWS Database Migration Service (DMS)

Phase 5: Operate

  • Once applications are migrated, iterate on the new foundation, turn off old systems, and constantly iterate toward a modern operating model.
  • Operating model becomes an evergreen set of people, process, and technology that constantly improves as you migrate more applications.

Application Migration Strategies

Migration strategies depend upon what is in your environment and what is suitable for the portfolio, taking into account the business and technical requirements.

Below are the six common migration strategies employed, which build upon “The 5 R’s” that Gartner outlined in 2011.


1. Rehost (“lift and shift”)

  • Moving your application as is to the Cloud.
  • helps to quickly implement the migration and scale to meet a business case
  • provides a better opportunity to re-architect the applications later, once they are already running in the cloud, the organization has developed cloud skills, and the application with its data has been migrated and is handling traffic.
  • Rehosting can be automated with tools such as AWS Server Migration Service, or can be done manually

2. Replatform (“lift, tinker and shift”)

  • Moving your application to the Cloud with optimizations, without any major changes.
  • Replatform helps achieve some tangible benefit without changing the core architecture of the application, e.g. using RDS for the database or Elastic Beanstalk for the application.

3. Repurchase (“drop and shop”)

  • Dropping the application and moving to a completely new solution
  • More of a Buy in a Build vs Buy model; it might be expensive in the short term but offers a faster time to market.
  • Move to a different product, which likely means the organization is willing to change the existing licensing model

4. Refactor / Re-architect

  • Moving the application to Cloud, with major changes.
  • More of a Build in a Build vs Buy model, and would take time.
  • driven by a strong business need to add features, scale, or performance with agility and improvement in business continuity that would otherwise be difficult to achieve in the application’s existing environment.

5. Retire

  • Decommission the applications, not needed anymore.
  • Identifying IT assets that are no longer useful and can be turned off will help boost your business case and direct your attention towards maintaining the resources that are widely used.

6. Retain

  • Keep the applications as is in the current environment
  • Retain portions of the IT portfolio that have tight dependencies, are difficult to migrate, or are not a priority or ready for migration

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. A company is planning the migration of several lab environments used for software testing. An assortment of custom tooling is used to manage the test runs for each lab. The labs use immutable infrastructure for the software test runs, and the results are stored in a highly available SQL database cluster. Although completely rewriting the custom tooling is out of scope for the migration project, the company would like to optimize workloads during the migration. Which application migration strategy meets this requirement?
    1. Re-host
    2. Re-platform
    3. Re-factor/re-architect
    4. Retire

References

AWS Certified Security – Speciality (SCS-C01) Exam Learning Path

I recently cleared the AWS Certified Security – Speciality (SCS-C01) with a score of 939/1000. Compared with the Advanced Networking – Speciality exam, the Security – Speciality was not as tough, mainly because it covers features and services which you would have used in your day-to-day work on AWS, or services which have a clear demarcation of their purpose.

AWS Certified Security – Speciality (SCS-C01) exam focuses on the AWS Security and Compliance concepts. It basically validates

  • An understanding of specialized data classifications and AWS data protection mechanisms.
  • An understanding of data-encryption methods and AWS mechanisms to implement them.
  • An understanding of secure Internet protocols and AWS mechanisms to implement them.
  • A working knowledge of AWS security services and features of services to provide a secure production environment.
  • Competency gained from two or more years of production deployment experience using AWS security services and features.
  • The ability to make tradeoff decisions with regard to cost, security, and deployment complexity given a set of application requirements.
  • An understanding of security operations and risks

Refer to AWS Certified Security – Speciality Exam Guide

AWS Certified Security – Speciality (SCS-C01) Exam Summary

  • AWS Certified Security – Speciality exam, as its name suggests, covers a lot of Security and compliance concepts for VPC, EBS, S3, IAM, KMS services
  • One of the key tactics I followed when solving any AWS exam question was to read the question and use paper and pencil to draw a rough architecture and focus on the areas that need improvement. Trust me, you will be able to eliminate 2 answers for sure and then need to focus on only the other two. Read the other 2 answers to check where they differ, and that would help you reach the right answer or at least have a 50% chance of getting it right.
  • Be sure to cover the following topics
    • Security, Identity & Compliance
      • Make sure you know all the services and deep dive into IAM, KMS.
      • Identity and Access Management (IAM)
      • Deep dive into Key Management Service (KMS). There would be quite a few questions on this.
      • Understand AWS Cognito esp. User Pools
      • Know AWS GuardDuty as managed threat detection service
      • Know AWS Inspector as automated security assessment service that helps improve the security and compliance of applications deployed on AWS
      • Know Amazon Macie as a security service that uses machine learning to automatically discover, classify, and protect sensitive data in AWS
      • Know AWS Artifact as a central resource for compliance-related information that provides on-demand access to AWS’ security and compliance reports and select online agreements
      • Know AWS Certificate Manager (ACM) for certificate management. (hint : To use an ACM Certificate with Amazon CloudFront, you must request or import the certificate in the US East (N. Virginia) region)
      • Know Cloud HSM as a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud
      • Know AWS Secrets Manager to protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle
      • Know AWS Shield esp. the Shield Advanced option and the features it provides
      • Know WAF as Web Traffic Firewall – (Hint – WAF can be attached to your CloudFront, Application Load Balancer, API Gateway to dynamically detect and prevent attacks)
    • Networking & Content Delivery
      • Understand VPC
        • Understand VPC Endpoints esp. services supported by Gateway and Interface Endpoints. Interface Endpoints are also called Private Links. (hint: application endpoints can be exposed using private links)
        • Understand VPC Flow Logs to capture information about the IP traffic going to and from network interfaces in the VPC (hint: can help in port scans but not in packet inspection)
      • Know Virtual Private Network & Direct Connect to establish secured, low-latency connectivity between an on-premises data center and an AWS VPC
      • Understand CloudFront esp. with S3 (hint: Origin Access Identity to restrict direct access to S3 content)
      • Know Elastic Load Balancer at high level esp. End to End encryption.
    • Management & Governance Tools
      • Understand AWS CloudWatch for Logs and Metrics. Also, CloudWatch Events provides more real-time alerts as compared to CloudTrail
      • Understand CloudTrail for audit and governance (hint: CloudTrail can be enabled for all regions at one go and supports log file integrity validation)
      • Understand AWS Config and its use cases (hint: AWS Config rules can be used to alert for any changes and Config can be used to check the history of changes. AWS Config can also help check approved AMIs compliance)
      • Understand CloudTrail provides the WHO and Config provides the WHAT.
      • Understand Systems Manager
        • Systems Manager provides Parameter Store, which can be used to manage secrets (hint: using Systems Manager Parameter Store is cheaper than Secrets Manager for storage with limited usage)
        • Systems Manager provides agent based and agentless mode. (hint: agentless does not track process)
        • Systems Manager Patch Manager helps select and deploy operating system and software patches automatically across large groups of EC2 or on-premises instances
        • Systems Manager Run Command provides safe, secure remote management of your instances at scale without logging into the servers, replacing the need for bastion hosts, SSH, or remote PowerShell
      • Understand AWS Organizations to control what member account can do. (hint: can also control the root accounts)
      • Know AWS Trusted Advisor
    • Storage
    • Compute
      • Know EC2 access to services using IAM Role and Lambda using Execution role.
    • Integration Tools
      • Know how CloudWatch integration with SNS and Lambda can help in notification (Topics are not required to be in detail)
    • Whitepapers and articles

AWS Certified Security – Speciality (SCS-C01) Exam Resources

AWS Certified DevOps Engineer – Professional (DOP-C01) Exam Learning Path

AWS Certified DevOps Engineer – Professional (DOP-C01) Exam Learning Path

AWS Certified DevOps Engineer – Professional (DOP-C01) exam is the upgraded pattern of the DevOps Engineer – Professional exam which was released last year (2018). I recently attempted the latest pattern and AWS has done quite a good job of improving it further, as compared to the old one, to include more DevOps related questions and services.

AWS Certified DevOps Engineer – Professional (DOP-C01) exam basically validates

  • Implement and manage continuous delivery systems and methodologies on AWS
  • Implement and automate security controls, governance processes, and compliance validation
  • Define and deploy monitoring, metrics, and logging systems on AWS
  • Implement systems that are highly available, scalable, and self-healing on the AWS platform
  • Design, manage, and maintain tools to automate operational processes

Refer to AWS Certified DevOps Engineer – Professional Exam Guide

AWS Certified DevOps Engineer – Professional (DOP-C01) Exam Summary

  • AWS Certified DevOps Engineer – Professional exam was for a total of 170 minutes but it had 75 questions (I was always assuming it to be 65) and I just managed to complete the exam with 20 mins remaining. So be sure you are prepared and manage your time well. As always, mark the questions for review, move on, and come back to them after you are done with all.
  • One of the key tactics I followed when solving the DevOps Engineer questions was to read the question and use paper and pencil to draw a rough architecture and focus on the areas that need improvement. Trust me, you will be able to eliminate 2 answers for sure and then need to focus on only the other two. Read the other 2 answers to check where they differ, and that would help you reach the right answer or at least have a 50% chance of getting it right.
  • AWS Certified DevOps Engineer – Professional exam covers a lot of concepts and services related to Automation, Deployments, Disaster Recovery, HA, Monitoring, Logging and Troubleshooting. It also covers security and compliance related topics.
  • Be sure to cover the following topics
    • Monitoring & Governance tools
      • Very important to understand AWS CloudWatch vs AWS CloudTrail vs AWS Config
      • Very important to understand Trusted Advisor vs Systems Manager vs AWS Inspector
      • Know Personal Health Dashboard & Service Health Dashboard
      • CloudWatch
      • CloudTrail
        • Understand how to maintain CloudTrail logs integrity
      • Understand AWS Config and its use cases (hint : Config maintains history and can be used to revert the config)
      • Know Personal Health Dashboard (hint : it tracks events on your AWS resources)
      • Understand AWS Trusted Advisor and what it provides (hint : low utilization resources)
      • Systems Manager
        • Systems Manager is also covered heavily in the exams so be sure you know
        • Understand AWS Systems Manager and its various services like parameter store, patch manager
    • Networking & Content Delivery
      • Networking is covered very lightly. Usually the questions are targeted towards troubleshooting of access or permissions.
      • Know VPC
      • Route 53
    • Security, Identity & Compliance
    • Storage
      • Exam does not cover Storage services in deep
      • Focus on Simple Storage Service (S3)
        • Understand S3 Permissions (Hint – acl authenticated users provides access to all authenticated users. How to control access)
        • Know S3 disaster recovery across region. (hint : cross region replication)
        • Know CloudFront for caching to improve performance
      • Elastic Block Store
        • Focus mainly on EBS Backup using snapshots for HA and Disaster recovery
    • Database
    • Compute
      • Know EC2
        • Understand ENI for HA, user data, pre-baked AMIs for faster instance start times
        • Amazon Linux 2 Image (hint : it allows for replication of Amazon Linux behavior on-premises)
        • Snapshot and sharing
      • Auto Scaling
        • Auto Scaling Lifecycle events
        • Blue/green deployments with Auto Scaling – With new launch configurations, new auto scaling groups or CloudFormation update policies.
      • Understand Lambda
      • ECS
        • Know Monitoring and deployments with image update
    • Integration Tools
      • Know how CloudWatch integration with SNS and Lambda can help in notification (Topics are not required to be in detail)

AWS Certified DevOps Engineer – Professional (DOP-C01) Exam Resources

AWS Certified Advanced Networking – Speciality (ANS-C00) Exam Learning Path

I recently cleared the AWS Certified Advanced Networking – Speciality (ANS-C00), which was my first, en route on my path to the AWS Speciality certifications. Frankly, I feel the time I gave for preparation was still not enough, but I just about managed to get through. So a word of caution: this exam is in line with, or tougher than, the professional exams, especially because the networking concepts it covers are not something you can easily get your hands dirty with.

AWS Certified Advanced Networking – Speciality (ANS-C00) exam focuses on the AWS Networking concepts. It basically validates

  • Design, develop, and deploy cloud-based solutions using AWS
  • Implement core AWS services according to basic architecture best practices
  • Design and maintain network architecture for all AWS services
  • Leverage tools to automate AWS networking tasks

Refer to AWS Certified Advanced Networking – Speciality Exam Guide

AWS Certified Advanced Networking – Speciality (ANS-C00) Exam Summary

  • AWS Certified Advanced Networking – Speciality exam covers a lot of Networking concepts like VPC, VPN, Direct Connect, Route 53, ALB, NLB.
  • One of the key tactics I followed when solving any question was to read the question and use paper and pencil to draw a rough architecture and focus on the areas that need improvement. Trust me, you will be able to eliminate 2 answers for sure and then need to focus on only the other two. Read the other 2 answers to check where they differ, and that would help you reach the right answer or at least have a 50% chance of getting it right.
  • Be sure to cover the following topics
    • Networking & Content Delivery
      • You should know everything in Networking.
      • Understand VPC in depth
      • Virtual Private Network to establish connectivity between on-premises data center and AWS VPC
      • Direct Connect to establish connectivity between on-premises data center and AWS VPC and Public Services
        • Make sure you understand Direct Connect in detail, without this you cannot clear the exam
        • Understand Direct Connect connections – Dedicated and Hosted connections
        • Understand how to create a Direct Connect connection (hint: LOA-CFA provides the details for partner to connect to AWS Direct Connect location)
        • Understand virtual interfaces options – Private Virtual Interface for VPC resources and Public Virtual Interface for Public resources
        • Understand setup Private and Public VIF
        • Understand Route Propagation, propagation priority, BGP connectivity
        • Understand High Availability options based on cost and time i.e. Second Direct Connect connection OR VPN connection
        • Understand Direct Connect Gateway – it provides a way to connect to multiple VPCs from on-premises data center using the same Direct Connect connection
      • Route 53
        • Understand Route 53 and Routing Policies and their use cases. Focus on Weighted and Latency routing policies
        • Understand Route 53 Split View DNS to have the same DNS to access a site externally and internally
      • Understand CloudFront and use cases
      • Load Balancer
        • Understand ELB, ALB and NLB 
        • Understand the difference ELB, ALB and NLB esp. ALB provides Content, Host and Path based Routing while NLB provides the ability to have static IP address
        • Know how to design VPC CIDR block with NLB (Hint – minimum number of IPs required are 8)
        • Know how to pass original Client IP to the backend instances (Hint – X-Forwarded-for and Proxy Protocol)
      • Know WorkSpaces requirements and setup
    • Security
      • Know AWS GuardDuty as managed threat detection service
      • Know AWS Shield esp. the Shield Advanced option and the features it provides
      • Know WAF as Web Traffic Firewall – (Hint – WAF can be attached to your CloudFront, Application Load Balancer, API Gateway to dynamically detect and prevent attacks)

AWS Certified Advanced Networking – Speciality (ANS-C00) Exam Resources

AWS Network Connectivity Options

VPC

More details @ Virtual Private Cloud

Internet Gateway

  • An Internet Gateway provides Internet connectivity to VPC
  • Internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the internet.
  • Internet Gateway imposes no availability risks or bandwidth constraints on your network traffic.
  • An Internet gateway serves two purposes: to provide a target in the VPC route tables for internet-routable traffic, and to perform network address translation (NAT) for instances that have not been assigned public IPv4 addresses.
  • An internet gateway supports IPv4 and IPv6 traffic.

NAT Gateway

  • NAT Gateway enables instances in a private subnet to connect to the internet (for example, for software updates) or other AWS services, but prevents the internet from initiating connections with the instances.
  • A NAT gateway forwards traffic from the instances in the private subnet to the internet or other AWS services, and then sends the response back to the instances.
  • When traffic goes to the internet, the source IPv4 address is replaced with the NAT device’s address and similarly, when the response traffic goes to those instances, the NAT device translates the address back to those instances’ private IPv4 addresses.
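
A minimal boto3 sketch (the subnet and route table IDs are placeholders) of creating a NAT gateway and routing a private subnet's internet-bound traffic through it:

  import boto3

  ec2 = boto3.client("ec2")

  # Allocate an Elastic IP and create the NAT gateway in a public subnet.
  eip = ec2.allocate_address(Domain="vpc")
  nat = ec2.create_nat_gateway(
      SubnetId="subnet-0123456789abcdef0",
      AllocationId=eip["AllocationId"],
  )
  ec2.get_waiter("nat_gateway_available").wait(
      NatGatewayIds=[nat["NatGateway"]["NatGatewayId"]]
  )

  # Send the private subnet's default route through the NAT gateway.
  ec2.create_route(
      RouteTableId="rtb-0123456789abcdef0",
      DestinationCidrBlock="0.0.0.0/0",
      NatGatewayId=nat["NatGateway"]["NatGatewayId"],
  )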

Egress Only Internet Gateway

  • NAT devices are not supported for IPv6 traffic, use an Egress-only Internet gateway instead
  • Egress-only Internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows outbound communication over IPv6 from instances in the VPC to the Internet, and prevents the Internet from initiating an IPv6 connection with your instances.

VPC Endpoints

  • VPC endpoint provides a private connection from VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.
  • Instances in the VPC do not require public IP addresses to communicate with resources in the service. Traffic between the VPC and the other service does not leave the Amazon network.
  • VPC Endpoints are virtual devices and are horizontally scaled, redundant, and highly available VPC components that allow communication between instances in the VPC and services without imposing availability risks or bandwidth constraints on the network traffic.
  • VPC Endpoints are of two types
    • Interface Endpoints – is an elastic network interface with a private IP address that serves as an entry point for traffic destined to supported services.
    • Gateway Endpoints – is a gateway that is a target for a specified route in the route table, used for traffic destined to a supported AWS service. Currently only Amazon S3 and DynamoDB are supported.
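
As a rough sketch (the VPC ID, route table ID, and region are placeholders), a Gateway endpoint for S3 can be created with boto3 so traffic to S3 never leaves the Amazon network:

  import boto3

  ec2 = boto3.client("ec2")

  # Gateway endpoint for S3: adds a route for the S3 prefix list
  # to the given route table.
  ec2.create_vpc_endpoint(
      VpcEndpointType="Gateway",
      VpcId="vpc-0123456789abcdef0",
      ServiceName="com.amazonaws.us-east-1.s3",
      RouteTableIds=["rtb-0123456789abcdef0"],
  )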

More details @ VPC Endpoints

VPC Peering

  • VPC peering connection enables networking connection between two VPCs to route traffic between them using private IPv4 addresses or IPv6 addresses
  • VPC peering connections can be created between your own VPCs, or with a VPC in another AWS account
  • VPC peering connections can be created across regions, referred to as inter-region VPC peering connection
  • VPC peering uses existing underlying AWS infrastructure; it is neither a gateway nor a VPN connection, and does not rely on a separate piece of physical hardware.
  • VPC Peering does not have a single point of failure for communication or a bandwidth bottleneck.
  • VPC Peering connections have limitations
    • Cannot be created between VPCs with overlapping CIDR blocks
    • Does not provide transitive peering
    • Does not support edge-to-edge routing through a gateway or private connection
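
A hedged boto3 sketch (VPC IDs, route table ID, and CIDR are placeholders; both VPCs here are assumed to be in the same account and region) of setting up a peering connection:

  import boto3

  ec2 = boto3.client("ec2")

  # Request a peering connection to another (non-overlapping) VPC,
  # accept it, and add a route towards the peer's CIDR.
  peering = ec2.create_vpc_peering_connection(
      VpcId="vpc-0123456789abcdef0",      # requester VPC
      PeerVpcId="vpc-0fedcba9876543210",  # accepter VPC
  )
  pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

  ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

  ec2.create_route(
      RouteTableId="rtb-0123456789abcdef0",
      DestinationCidrBlock="10.1.0.0/16",  # peer VPC CIDR
      VpcPeeringConnectionId=pcx_id,
  )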

More details @ VPC Peering

VPN CloudHub

  • AWS VPN CloudHub allows you to securely communicate from one site to another using AWS Managed VPN or Direct Connect
  • AWS VPN CloudHub operates on a simple hub-and-spoke model that can be used with or without a VPC
  • AWS VPN CloudHub can be used if you have multiple branch offices and existing internet connections and would like to implement a convenient, potentially low cost hub-and-spoke model for primary or backup connectivity between these remote offices.
  • AWS VPN CloudHub leverages VPC virtual private gateway with multiple gateways, each using unique BGP autonomous system numbers (ASNs).

Transit VPC

  • A transit VPC is a common strategy for connecting multiple, geographically disperse VPCs and remote networks in order to create a global network transit center.
  • A transit VPC simplifies network management and minimizes the number of connections required to connect multiple VPCs and remote networks
  • Transit VPC can be used to support important use cases
    • Private Networking – You can build a private network that spans two or more AWS Regions.
    • Shared Connectivity – Multiple VPCs can share connections to data centers, partner networks, and other clouds.
    • Cross-Account AWS Usage – The VPCs and the AWS resources within them can reside in multiple AWS accounts.
  • Transit VPC design helps implement more complex routing rules, such as network address translation between overlapping network ranges, or to add additional network-level packet filtering or inspection

Transit Gateway (Virtual Routing and Forwarding)

  • Transit gateway enables you to attach VPCs (across accounts) and VPN connections in the same Region and route traffic between them
  • Transit gateways support dynamic and static routing between attached VPCs and VPN connections
  • Transit gateway removes the need for using full mesh VPC Peering and Transit VPC

Virtual Private Network (VPN)

VPC Managed VPN Connection
  • VPC provides the option of creating an IPsec VPN connection between remote customer networks and their VPC over the internet
  • AWS managed VPN endpoint includes automated multi–data center redundancy & failover built into the AWS side of the VPN connection
  • AWS managed VPN consists of two parts
    • Virtual Private Gateway (VPG) on AWS side
    • Customer Gateway (CGW) on the on-premises data center
  • AWS Managed VPN only provides Site-to-Site VPN connectivity. It does not provide Point-to-Site VPN connectivity, e.g. from mobile clients.
  • Virtual Private Gateway is highly available as it represents two distinct VPN endpoints, physically located in separate data centers to increase the availability of the VPN connection.
  • High Availability on the on-premises data center side must be handled by creating an additional Customer Gateway.
  • AWS Managed VPN connections are low cost and quick to set up compared to Direct Connect. However, they are not as reliable because they traverse the Internet.

More details @ Virtual Private Network

Software VPN

  • VPC offers the flexibility to fully manage both sides of the VPC connectivity by creating a VPN connection between your remote network and a software VPN appliance running in your VPC network.
  • Software VPNs help manage both ends of the VPN connection either for compliance purposes or for leveraging gateway devices that are not currently supported by Amazon VPC’s VPN solution.
  • Software VPNs allows you to handle Point-to-Site connectivity
  • A software VPN appliance, however, introduces a single point of failure into the design, which needs to be handled.

Direct Connect

  • AWS Direct Connect helps establish a dedicated connection and a private connectivity from an on-premises network to VPC
  • Direct Connect can reduce network costs, increase bandwidth throughput, and provide a more consistent network experience than internet-based or VPN connections
  • Direct Connect uses industry-standard VLANs to access EC2 instances running within a VPC using private IP addresses
  • Direct Connect lets you establish
    • Dedicated Connection: A 1G or 10G physical Ethernet connection associated with a single customer through AWS.
    • Hosted Connection: A 1G or 10G physical Ethernet connection that an AWS Direct Connect Partner provisions on behalf of a customer.
  • Direct Connect provides following Virtual Interfaces
    • Private virtual interface – to access a VPC using private IP addresses.
    • Public virtual interface – to access all AWS public services using public IP addresses.
    • Transit virtual interface – to access one or more transit gateways associated with Direct Connect gateways.
  • Direct Connect connections are not redundant as each connection consists of a single dedicated connection between ports on your router and an Amazon router
  • Direct Connect High Availability can be configured using
    • Multiple Direct Connect connections
    • Back-up IPSec VPN connection

More details @ Direct Connect

LAGs

  • Direct Connect link aggregation group (LAG) is a logical interface that uses the Link Aggregation Control Protocol (LACP) to aggregate multiple connections at a single AWS Direct Connect endpoint, allowing you to treat them as a single, managed connection.
  • LAGs needs the following
    • All connections in the LAG must use the same bandwidth.
    • You can have a maximum of four connections in a LAG. Each connection in the LAG counts towards your overall connection limit for the Region.
    • All connections in the LAG must terminate at the same AWS Direct Connect endpoint.

Direct Connect Gateway

  • Direct Connect Gateway allows you to connect an AWS Direct Connect connection to one or more VPCs in your account that are located in the same or different regions
  • Direct Connect gateway can be created in any public region and accessed from all other public regions
  • Direct Connect gateway CANNOT be used to connect to a VPC in another account.
  • Alternatively, Direct connect locations can also access the public resources in any AWS Region using a public virtual interface.

Direct Connect with VPN

  • AWS Direct Connect plus VPN provides an IPsec-encrypted private connection that also reduces network costs, increases bandwidth throughput, and provides a more consistent network experience than internet-based VPN connections.

References

AWS CloudFormation Best Practices – Certification

AWS CloudFormation Best Practices

  • AWS CloudFormation Best Practices are based on real-world experience from current AWS CloudFormation customers
  • AWS CloudFormation Best Practices help provide guidelines on
    • how to plan and organize stacks,
    • create templates that describe resources and the software applications that run on them,
    • and manage stacks and their resources

Required Mainly for Developer, SysOps Associate & DevOps Professional Exam

Planning and Organizing

Organize Your Stacks By Lifecycle and Ownership

  • Use the lifecycle and ownership of the AWS resources to help you decide what resources should go in each stack.
  • By grouping resources with common lifecycles and ownership, owners can make changes to their set of resources by using their own process and schedule without affecting other resources.
  • For e.g. consider an application using Web and Database instances. Both the Web and Database tiers have a different lifecycle and usually the ownership lies with different teams. Maintaining both in a single stack would need communication and co-ordination between different teams, introducing complexity. It would be best to have different stacks owned by the respective teams, so that they can update their resources without impacting each other's stack.

Use Cross-Stack References to Export Shared Resources

  • With multiple stacks, there is usually a need to refer values and resources across stacks.
  • Use cross-stack references to export resources from a stack so that other stacks can use them
  • Stacks can use the exported resources by calling them using the Fn::ImportValue function.
  • For e.g. Web stack would always need resources from the Network stack like VPC, Subnets etc.
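
For illustration, the export/import pair looks roughly like the following template fragments (shown as Python dicts mirroring the template JSON; the export name is a placeholder):

  # Network stack: export the VPC ID under a unique export name.
  network_outputs = {
      "Outputs": {
          "VpcId": {
              "Value": {"Ref": "VPC"},
              "Export": {"Name": "network-stack-VpcId"},
          }
      }
  }

  # Web stack: import the exported value with Fn::ImportValue.
  web_security_group = {
      "WebSecurityGroup": {
          "Type": "AWS::EC2::SecurityGroup",
          "Properties": {
              "GroupDescription": "Web tier",
              "VpcId": {"Fn::ImportValue": "network-stack-VpcId"},
          },
      }
  }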

Use IAM to Control Access

  • Use IAM to control access to
    • what AWS CloudFormation actions users can perform, such as viewing stack templates, creating stacks, or deleting stacks
    • what actions CloudFormation can perform on resources on their behalf
  • Remember, having access to CloudFormation does not provide a user with access to AWS resources. That needs to be provided separately.
  • To separate permissions between a user and the AWS CloudFormation service, use a service role. AWS CloudFormation uses the service role’s policy to make calls instead of the user’s policy.

Verify Quotas for All Resource Types

  • Ensure that stack can create all the required resources without hitting the AWS account limits.

Reuse Templates to Replicate Stacks in Multiple Environments

  • Reuse templates to replicate infrastructure in multiple environments
  • Use parameters, mappings, and conditions sections to customize and make templates reusable
  • for e.g. creating the same stack in development, staging and production environment with different instance types, instance counts etc.

Use Nested Stacks to Reuse Common Template Patterns

  • Nested stacks are stacks that create other stacks.
  • Nested stacks separate out the common patterns and components to create dedicated templates for them, preventing copy pasting across stacks.
  • for e.g. a standard load balancer configuration can be created as nested stack and just used by other stacks

Creating templates

Do Not Embed Credentials in Your Templates

  • Use input parameters to pass in sensitive information such as DB password whenever you create or update a stack.
  • Use the NoEcho property to obfuscate the parameter value.

Use AWS-Specific Parameter Types

  • For existing AWS-specific values, such as existing Virtual Private Cloud IDs or an EC2 key pair name, use AWS-specific parameter types
  • AWS CloudFormation can quickly validate values for AWS-specific parameter types before creating your stack.

Use Parameter Constraints

  • Use Parameter constraints to describe allowed input values so that CloudFormation catches any invalid values before creating a stack.
  • For e.g. constraints for database user name with min and max length
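
A minimal sketch of a template Parameters section (JSON shown as a Python dict; names and limits are placeholders) that combines the three preceding recommendations: NoEcho for the secret, an AWS-specific type for the key pair, and constraints on the database user name:

  parameters = {
      "DBPassword": {
          "Type": "String",
          "NoEcho": True,
          "MinLength": 8,
          "MaxLength": 41,
      },
      "KeyName": {
          "Type": "AWS::EC2::KeyPair::KeyName",
          "Description": "Existing EC2 key pair for SSH access",
      },
      "DBUser": {
          "Type": "String",
          "MinLength": 1,
          "MaxLength": 16,
          "AllowedPattern": "[a-zA-Z][a-zA-Z0-9]*",
      },
  }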

Use AWS::CloudFormation::Init to Deploy Software Applications on Amazon EC2 Instances

  • Use AWS::CloudFormation::Init resource and the cfn-init helper script to install and configure software applications on EC2 instances

Validate Templates Before Using Them

  • Validate templates before creating or updating a stack
  • Validating a template helps catch syntax and some semantic errors, such as circular dependencies, before AWS CloudFormation creates any resources.
  • During validation, AWS CloudFormation first checks if the template is valid JSON or a valid YAML. If both checks fail, AWS CloudFormation returns a template validation error.
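
A short boto3 sketch (the template file name is a placeholder) of validating a template before use:

  import boto3

  cfn = boto3.client("cloudformation")

  # Validate a template file before creating or updating a stack;
  # an invalid template causes the call to raise a ClientError.
  with open("template.yaml") as f:
      result = cfn.validate_template(TemplateBody=f.read())

  print(result["Parameters"])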

Managing stacks

Manage All Stack Resources Through AWS CloudFormation

  • After launching the stack, any further updates should be done through CloudFormation only.
  • Doing changes outside the stack can create a mismatch between the stack’s template and the current state of the stack resources, which can cause errors if you update or delete the stack.

Create Change Sets Before Updating Your Stacks

  • Change sets provides a preview of how the proposed changes to a stack might impact the running resources before you implement them
  • CloudFormation doesn’t make any changes to the stack until you execute the change set, allowing you to decide whether to proceed with the proposed changes or create another change set.
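
A hedged boto3 sketch (stack name, change set name, and template file are placeholders) of previewing and then executing a change set:

  import boto3

  cfn = boto3.client("cloudformation")

  # Preview the impact of an updated template on a running stack ...
  with open("updated-template.yaml") as f:
      cfn.create_change_set(
          StackName="my-stack",
          ChangeSetName="preview-update",
          TemplateBody=f.read(),
      )

  cfn.get_waiter("change_set_create_complete").wait(
      StackName="my-stack", ChangeSetName="preview-update"
  )

  # ... review the proposed changes, then execute only if acceptable.
  changes = cfn.describe_change_set(
      StackName="my-stack", ChangeSetName="preview-update"
  )
  for change in changes["Changes"]:
      print(change["ResourceChange"]["Action"],
            change["ResourceChange"]["LogicalResourceId"])

  cfn.execute_change_set(StackName="my-stack", ChangeSetName="preview-update")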

Use Stack Policies

  • Stack policies help protect critical stack resources from unintentional updates that could cause resources to be interrupted or even replaced
  • During a stack update, you must explicitly specify the protected resources that you want to update; otherwise, no changes are made to protected resources

Use AWS CloudTrail to Log AWS CloudFormation Calls

  • AWS CloudTrail tracks anyone making AWS CloudFormation API calls in the AWS account.
  • API calls are logged whenever anyone uses the AWS CloudFormation API, the AWS CloudFormation console, a back-end console, or AWS CloudFormation AWS CLI commands.
  • Enable logging and specify an Amazon S3 bucket to store the logs.

Use Code Reviews and Revision Controls to Manage Your Templates

  • Using code reviews and revision controls help track changes between different versions of your templates and changes to stack resources
  • Maintaining history can help revert the stack to a certain version of the template.

AWS Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. A company has deployed their application using CloudFormation. They want to update their stack. However, they want to understand how the changes will affect running resources before implementing the update. How can the company achieve the same?
    1. Use CloudFormation Validate Stack feature
    2. Use CloudFormation Dry Run feature
    3. Use CloudFormation Stage feature
    4. Use CloudFormation Change Sets feature
  2. You have multiple similar three-tier applications and have decided to use CloudFormation to maintain version control and achieve automation. How can you best use CloudFormation to keep everything agile and maintain multiple environments while keeping cost down?
    1. Create multiple templates in one CloudFormation stack.
    2. Combine all resources into one template for version control and automation.
    3. Use CloudFormation custom resources to handle dependencies between stacks
    4. Create separate templates based on functionality, create nested stacks with CloudFormation.
  3. You are working as an AWS DevOps admin for your company. You are in charge of building the infrastructure for the company’s development teams using CloudFormation. The template will include building the VPC and networking components, installing a LAMP stack and securing the created resources. As per the AWS best practices what is the best way to design this template?
    1. Create a single CloudFormation template to create all the resources since it would be easier from the maintenance perspective.
    2. Create multiple CloudFormation templates based on the number of VPC’s in the environment.
    3. Create multiple CloudFormation templates based on the number of development groups in the environment.
    4. Create multiple CloudFormation templates for each set of logical resources, one for networking, and the other for LAMP stack creation.

References

Google Cloud – Associate Cloud Engineer Certification learning path


Google Cloud – Associate Cloud Engineer certification exam is basically for one who works day-in day-out with the Google Cloud Services. It targets a Cloud Engineer who deploys applications, monitors operations, and manages enterprise solutions. The exam makes sure it covers the gamut of services and concepts. The exam is not that tough, though, and the available time of 2 hours is quite plenty if you are well prepared.

Quick summary of the exam

  • Wide range of Google Cloud services and what they actually do. It focuses heavily on IAM, Compute, and Storage. There is a little bit of Networking but hardly any data services.
  • Hands-on is a must. It covers Cloud SDK commands and Console operations that you would use for day-to-day work. If you have not worked on GCP before, make sure you do a lot of labs, else you would be absolutely clueless about some of the questions and commands.
  • Tests are updated for the latest enhancements. There is no reference to Google Container Engine; everything is Google Kubernetes Engine, and the exam covers Cloud Functions and Cloud Spanner.
  • Once again, be sure that NO online course or practice test is going to cover it all. I did LinuxAcademy which covered maybe 60-70%, but hands-on or practical knowledge is a MUST.

The list of topics is quite long, but the ones you need to be sure to cover are:

  • General Services
    • Billing
      • Understand how billing works – monthly vs. threshold billing and which has priority.
      • Know how to change the billing account for a project and what roles you need. Hint – Project Owner on the project and Billing Administrator on the billing account.
    • Cloud SDK
      • Understand gcloud commands, esp. when dealing with the following (a short sketch follows this list)
        • configurations i.e. gcloud config
          • activate configurations or set the project and account
        • App Engine i.e. gcloud app
        • IAM i.e. gcloud iam
          • check roles
        • deployment manager i.e. gcloud deployment-manager
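
    As an illustration, a few commands along these lines (the configuration, project and account names are hypothetical):

      # Create and activate a named configuration, then point it at a project and account
      gcloud config configurations create dev-config
      gcloud config configurations activate dev-config
      gcloud config set project my-dev-project
      gcloud config set account engineer@example.com

      # Deployment Manager uses its own command group
      gcloud deployment-manager deployments create my-deployment --config config.yaml
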
  • Network Services
    • Virtual Private Cloud
      • Create a custom Virtual Private Cloud (VPC) and subnets, and host applications within them. Hint – a VPC is global and spans regions, while subnets are regional.
      • Understand how firewall rules work and how they are configured. Hint – focus on network tags. A short sketch follows below.
      • Understand the concept of internal and external IPs and the difference between static and ephemeral IPs.
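
    A rough sketch of the kind of commands involved (the network, subnet and rule names are hypothetical):

      # Custom-mode VPC with a regional subnet
      gcloud compute networks create my-vpc --subnet-mode=custom
      gcloud compute networks subnets create web-subnet --network=my-vpc --region=us-central1 --range=10.0.1.0/24

      # Firewall rule scoped to instances carrying the "web" network tag
      gcloud compute firewall-rules create allow-http --network=my-vpc --allow=tcp:80 --target-tags=web --source-ranges=0.0.0.0/0
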
    • Load Balancer
  • Identity Services
    • Cloud IAM 
      • provides administrators the ability to manage cloud resources centrally by controlling who can take what action on specific resources.
      • Understand how IAM works and how rules apply esp. the hierarchy from Organization -> Folder -> Project -> Resources
      • Understand the difference between Primitive, Pre-defined and Custom roles and their use cases
      • Need to know and understand the roles for the following services at least
        • Cloud Storage – Admin vs Creator vs Viewer
        • Compute Engine – Admin vs Instance Admin
        • Spanner – Viewer vs Database User
        • BigQuery – User vs JobUser
      • Know how to copy roles to a different project or organization. Hint – gcloud iam roles copy (see the sample commands after this list)
      • Know how to use service accounts with applications
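
    Illustrative commands only (the project, role and service-account names are hypothetical):

      # Copy an existing role definition into a project as a custom role
      gcloud iam roles copy --source="roles/storage.objectViewer" --destination=CustomStorageViewer --dest-project=my-project

      # Create a service account for an application and grant it a role on the project
      gcloud iam service-accounts create app-sa --display-name="App service account"
      gcloud projects add-iam-policy-binding my-project --member="serviceAccount:app-sa@my-project.iam.gserviceaccount.com" --role="roles/storage.objectViewer"
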
  • Compute Services
    • Make sure you know all the compute services – Google Compute Engine, Google App Engine and Google Kubernetes Engine; they are heavily covered in the exam.
    • Google Compute Engine
      • Google Compute Engine is the best IaaS option for compute and provides fine-grained control
      • Make sure you know how to create a GCE instance and connect to it using Cloud Shell or SSH keys (a few sample commands follow this section)
      • Make sure you know the difference between backups (snapshots) and images and how to create them
      • Understand how you can recreate an instance in different zones and regions
      • Know the difference between managed and unmanaged instance groups and the auto-healing feature
      • Understand Preemptible VMs and their use cases.
      • Know how to upgrade an instance without downtime. Hint – live migration.
      • In case of any issues or errors, know how to debug them
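
    A few illustrative commands (the instance, snapshot and image names are hypothetical):

      # Create an instance and connect to it over SSH
      gcloud compute instances create web-1 --zone=us-central1-a --machine-type=n1-standard-1 --tags=web
      gcloud compute ssh web-1 --zone=us-central1-a

      # Back up the boot disk as a snapshot, or bake it into a reusable image (stop the instance first, or add --force)
      gcloud compute disks snapshot web-1 --zone=us-central1-a --snapshot-names=web-1-snap
      gcloud compute images create web-image --source-disk=web-1 --source-disk-zone=us-central1-a
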
    • Google App Engine
      • Google App Engine is mainly the best PaaS option, given the platforms supported and the features provided.
      • Deploy an application with App Engine and understand how versioning and rolling deployments can be done (see the sample commands after this section)
      • Understand auto scaling, traffic splitting and traffic migration.
      • Know that App Engine is a regional resource, and understand the steps to migrate or deploy an application to a different region or project.
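
    For example, a versioned deployment with a gradual traffic split might look like this (the version names are hypothetical):

      # Deploy a new version without routing traffic to it
      gcloud app deploy app.yaml --version=v2 --no-promote

      # Split traffic 90/10 between the old and new versions
      gcloud app services set-traffic default --splits=v1=0.9,v2=0.1 --split-by=random
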
    • Google Kubernetes Engine
      • Google Container Engine is now officially Google Kubernetes Engine, and the questions refer to it as such
      • Google Kubernetes Engine, powered by the open source container scheduler Kubernetes, enables you to run containers on Google Cloud Platform.
      • Kubernetes Engine takes care of provisioning and maintaining the underlying virtual machine cluster, scaling your application, and operational logistics such as logging, monitoring, and cluster health management.
      • Be sure you can create a Kubernetes cluster and configure it to host an application (sample commands follow this section)
      • Understand how to make the cluster auto-repairing and auto-upgrading. Hint – node auto-upgrade and auto-repair features
      • Very important to understand where to use gcloud commands (to create and resize the cluster) and kubectl commands (to manage the cluster components and workloads)
      • Very important to understand how to increase the cluster size and enable autoscaling for the cluster
      • Know how to manage secrets like database passwords
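
    Illustrative commands only (the cluster and secret names are hypothetical):

      # Create a cluster with autoscaling, auto-repair and auto-upgrade enabled
      gcloud container clusters create my-cluster --zone=us-central1-a --num-nodes=3 --enable-autoscaling --min-nodes=1 --max-nodes=5 --enable-autorepair --enable-autoupgrade

      # Resize the node pool, then fetch kubectl credentials
      gcloud container clusters resize my-cluster --zone=us-central1-a --num-nodes=5
      gcloud container clusters get-credentials my-cluster --zone=us-central1-a

      # Manage secrets (e.g. a database password) with kubectl, not gcloud
      kubectl create secret generic db-credentials --from-literal=password=change-me
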
  • Storage Services
    • Understand each storage service option and its use cases.
    • Cloud Storage
      • cost-effective object storage for unstructured data.
      • Very important to know the different storage classes and their use cases, esp. Regional and Multi-Regional (frequent access), Nearline (monthly access) and Coldline (yearly access)
      • Understand lifecycle management. Hint – rules are applied according to the object creation date (a short sketch follows this section)
      • Understand signed URLs, which grant temporary access to objects; the users do not need to be GCP users
      • Understand permissions – IAM vs. ACLs (fine-grained control)
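
    A minimal sketch with gsutil (the bucket, object and key-file names are hypothetical):

      # lifecycle.json – e.g. move objects to Coldline 365 days after creation:
      # {"rule": [{"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"}, "condition": {"age": 365}}]}
      gsutil lifecycle set lifecycle.json gs://my-bucket

      # Signed URL valid for 1 hour, usable by anyone who holds it (no GCP account needed)
      gsutil signurl -d 1h service-account-key.json gs://my-bucket/report.csv
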
    • Relational Databases
      • Know Cloud SQL and Cloud Spanner
      • Cloud SQL
        • is a fully-managed service that provides MySQL and PostgreSQL only.
        • limited to 10TB and is a regional service.
        • Know the difference between failover and read replicas
        • Know how to perform point-in-time recovery. Hint – it requires binary logging and backups (a short sketch follows this list)
      • Cloud Spanner
        • is a fully managed, mission-critical relational database service.
        • provides a scalable online transaction processing (OLTP) database with high availability and strong consistency at global scale.
        • globally distributed and can scale and handle more than 10TB.
        • not a direct (drop-in) replacement for MySQL and would need migration
      • There are no direct options for Microsoft SQL Server or Oracle yet.
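
    As a rough sketch for Cloud SQL point-in-time recovery (the instance names and timestamp are hypothetical):

      # Enable automated backups and binary logging, the prerequisites for point-in-time recovery
      gcloud sql instances patch my-instance --backup-start-time=02:00 --enable-bin-log

      # Recover by cloning the instance to a specific point in time
      gcloud sql instances clone my-instance my-instance-recovered --point-in-time="2023-01-01T03:00:00Z"
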
    • Data Warehousing
      • BigQuery
        • provides scalable, fully managed enterprise data warehouse (EDW) with SQL and fast ad-hoc queries.
        • Remember it is most suitable for historical analysis.
        • Know how to perform a preview or dry run (see the sample command below). Hint – the price is determined by bytes read, not bytes returned.
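
    For instance, a dry run with the bq CLI reports the bytes that would be processed without actually running the query (the table reference is hypothetical):

      bq query --use_legacy_sql=false --dry_run 'SELECT name FROM `my-project.my_dataset.my_table` LIMIT 10'
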
  • Data Services
    • Although there were only a couple of references to big data services in the exam, it is important to know (DO NOT DEEP DIVE) the big data stack (esp. IoT gateway, Pub/Sub, Bigtable vs. BigQuery) to understand which service fits the different layers of ingest, store, process, analyze and use
      • Cloud Storage as the medium to store data, acting as the data lake
      • Cloud Pub/Sub as the messaging service to capture real-time data, esp. IoT; it is designed to provide reliable, many-to-many, asynchronous messaging between applications
      • Cloud Dataflow to process, transform and transfer data; it is the key service to integrate storage and analytics
      • BigQuery for storage and analytics. Remember BigQuery provides a storage option that is as cost-effective as Cloud Storage
      • Cloud Dataprep to clean and prepare data. Hint – it can be used for anomaly detection.
      • Cloud Dataproc to handle existing Hadoop/Spark jobs. Hint – use it to replace existing Hadoop infra.
      • Cloud Datalab is an interactive tool for exploration, transformation, analysis and visualization of your data on Google Cloud Platform.
  • Monitoring
    • Google Stackdriver
      • provides everything from monitoring, alerting, error reporting, metrics, diagnostics, debugging and tracing.
      • Remember that audit-related questions mainly point to Stackdriver (audit logging)
  • DevOps services
    • Deployment Manager 
    • Cloud Launcher (Marketplace)
      • provides a way to launch common software packages (e.g. Jenkins or WordPress) and stacks on Google Compute Engine with just a few clicks, like a prepackaged solution.
      • It can help minimize deployment time and can be used without any prior knowledge of the product.

Resources