Google Cloud – Professional Data Engineer Certification Learning Path

I just recertified my Google Cloud Certified – Professional Data Engineer certification. My first attempt at the Data Engineer exam was already two long years ago; that version lasted 4 hours with 95 questions. Once again, similar to the other Google Cloud certification exams, the Data Engineer exam covers not only the gamut of services and concepts but also tests logical thinking and practical experience.

Google Cloud – Professional Cloud Data Engineer Certification Summary

  • Cloud Data Engineer exam had 50 questions to be answered in 2 hours
  • Covers a wide range of data services including machine learning, with other topics covering storage and security.
  • Exam does not cover any case studies
  • Although the exam covers the latest services, it has not been updated for Cloud Monitoring and Logging and still refers to Stackdriver.
  • Nothing much on Compute and Network is covered
  • Questions sometimes test your logical thinking rather than any concept regarding Google Cloud.
  • Hands-on is a MUST; if you have not worked on GCP before, make sure you do lots of labs, else you will be absolutely clueless about some of the questions and commands.
  • Be sure that NO online course or practice test is going to cover everything. I did Coursera and LinuxAcademy, which are really vast, but hands-on or practical knowledge is a MUST.

Google Cloud – Professional Cloud Data Engineer Certification Resources

Google Cloud – Professional Cloud Data Engineer Certification Topics

Data & Analytics Services

  • Obviously, there are lots and lots of data and related services
  • Google Cloud Data & Analytics Services Cheatsheet
  • Know the Big Data stack and understand which service fits the different layers of ingest, store, process, analytics
  • Cloud BigQuery
    • provides a scalable, fully managed enterprise data warehouse (EDW) with SQL and fast ad-hoc queries.
    • ideal for storage and analytics.
    • provides the same cost-effective option for storage as Cloud Storage
    • understand BigQuery Security
      • use BigQuery IAM access roles to control data and querying access
      • use Authorized Views to control access to tables, columns within tables, and query results. HINT: an authorized view needs to reside in a different dataset than the source dataset.
      • support data encryption
    • understand BigQuery Best Practices including key strategy, cost optimization, partitioning, and clustering
      • use dry run to estimate costs before running a query (see the sketch after this item)
      • use partitioning and clustering to limit the amount of data scanned
      • using external data sources might result in query performance degradation, and it's better to import the data
    • Dataset location can be set ONLY at the time of its creation.
    • supports schema auto-detection for JSON and CSV files.
    • understand how BigQuery Streaming works
    • know BigQuery limitations esp. with updates and inserts
    • supports an external data source (federated data source)
      • which is a data source that can be queried directly even though the data is not stored in BigQuery.
      • offers support for querying data directly from:
        • Cloud Bigtable
        • Cloud Storage
        • Google Drive
        • Cloud SQL
      • Use Permanent table for querying an external data source multiple times
      • Use Temporary table for querying an external data source for one-time, ad-hoc queries over external data, or for extract, transform, and load (ETL) processes.
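A minimal sketch of a BigQuery dry run to estimate the bytes a query would scan before paying for it, using the google-cloud-bigquery Python client (the project, dataset, and table names are just placeholders):

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # assumed project ID

# dry_run=True validates the query and returns the bytes it would process
# without actually running it or incurring query charges
job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)

query = """
    SELECT device_id, AVG(temperature) AS avg_temp
    FROM `my-project.iot.readings`
    WHERE DATE(event_ts) = "2023-01-01"   -- partition filter limits bytes scanned
    GROUP BY device_id
"""

job = client.query(query, job_config=job_config)
print(f"This query would process {job.total_bytes_processed / 1e9:.2f} GB")
```

On a table partitioned on event_ts and clustered on device_id, the partition filter above is what keeps total_bytes_processed (and hence cost) low.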
  • Cloud Bigtable
    • provides a wide-column NoSQL database suitable for both low-latency single-point lookups and precalculated analytics
    • understand Bigtable is not for long-term storage as it is quite expensive
    • know the differences with HBase
    • Know how to measure performance and scale
    • supports Development and Production mode. Development mode can be upgraded to production and not vice versa.
    • supports HDD and SSD storage, chosen during cluster creation. HDD can be converted to SSD by exporting the data to a new instance.
    • understand Bigtable Replication. Can be used to separate real-time and batch workloads on the same instance using application profiles.
  • Cloud Pub/Sub
    • as the messaging service to capture real-time data esp. IoT
    • is designed to provide reliable, many-to-many, asynchronous messaging between applications esp. real-time IoT data capture
    • guarantees at-least-once (but not exactly-once) message delivery and can result in data duplication if the message is not acknowledged within the acknowledgement deadline (see the sketch after this item)
    • know how it compares to Kafka (HINT: Pub/Sub provides only 7 days of message retention vs. Kafka, where retention depends on the storage provisioned)
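A minimal sketch of a Pub/Sub pull subscriber that acknowledges messages, using the google-cloud-pubsub Python client (project and subscription IDs are placeholders); if message.ack() is not called before the acknowledgement deadline, Pub/Sub redelivers the message, which is where duplicates come from:

```python
from concurrent.futures import TimeoutError
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "iot-events-sub")

def callback(message):
    # Processing must be idempotent: at-least-once delivery means the same
    # message can arrive more than once.
    print(f"Received: {message.data!r}")
    message.ack()  # ack within the deadline or the message is redelivered

streaming_pull_future = subscriber.subscribe(subscription_path, callback=callback)
try:
    streaming_pull_future.result(timeout=60)  # pull for 60 seconds, then stop
except TimeoutError:
    streaming_pull_future.cancel()
    streaming_pull_future.result()
```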
  • Cloud Dataflow
    • to process, transform, transfer data and the key service to integrate store and analytics.
    • know how to improve a Dataflow performance
    • understand Apache Beam features as well
      • understand PCollections, Transforms, ParDo and what they do
      • understand windowing, watermarks, and triggers. Hint: windowing and watermarks can be used to handle delayed (late) messages (see the sketch after this item)
    • supports the drain feature, which finishes processing the buffered in-flight data without accepting new input, usually useful for deploying incompatible or breaking changes
    • canceling a job leads to an immediate stop and in-flight data loss.
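A minimal Apache Beam (Python SDK) sketch showing PCollections, a fixed window, and a per-key combine; the element values and timestamps are made up for illustration, and on Dataflow the watermark plus allowed lateness would decide how long each window waits for delayed data:

```python
import apache_beam as beam
from apache_beam.transforms.window import FixedWindows, TimestampedValue

# (sensor_id, reading, event_time_in_seconds) -- toy data
events = [
    ("sensor-1", 10, 5.0),
    ("sensor-1", 12, 55.0),
    ("sensor-2", 7, 65.0),   # falls into the second 60-second window
]

with beam.Pipeline() as p:
    (
        p
        | "Create" >> beam.Create(events)                        # a PCollection
        | "Stamp" >> beam.Map(lambda e: TimestampedValue((e[0], e[1]), e[2]))
        | "Window" >> beam.WindowInto(FixedWindows(60))           # 1-minute fixed windows
        | "SumPerKey" >> beam.CombinePerKey(sum)                  # per key, per window
        | "Print" >> beam.Map(print)
    )
```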
  • Cloud Dataprep
    • to clean and prepare data. It can be used for anomaly detection.
    • does not need any programming language knowledge and can be done through the graphical interface
    • be sure to know or try hands-on on a dataset
  • Cloud Dataproc
    • to handle existing Hadoop/Spark jobs
    • supports connector for BigQuery, Bigtable, Cloud Storage
    • supports ephemeral clusters, and with the Cloud Storage connector the data can be stored in GCS instead of HDFS
    • you need to know how to improve the performance of the Hadoop cluster as well :). Know how to configure the cluster to use all the cores (hint – spark.executor.cores) and handle out-of-memory errors (hint – spark.executor.memory); see the sketch after this item
    • Secondary workers can be used to scale with the below limitations
      • Processing only with no data storage
      • No secondary-worker-only clusters
      • Persistent disk size is used for local caching of data and is not available through HDFS.
    • how to install other components (hint – initialization actions)
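A rough sketch of submitting a Spark job to an existing Dataproc cluster with the executor properties tuned, using the google-cloud-dataproc Python client; the project, region, cluster, jar path, and class names are all placeholders, and the property values depend on the machine types in the cluster:

```python
from google.cloud import dataproc_v1

region = "us-central1"  # assumed region
job_client = dataproc_v1.JobControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

job = {
    "placement": {"cluster_name": "analytics-cluster"},    # assumed cluster name
    "spark_job": {
        "main_class": "com.example.TrendingJob",            # hypothetical job class
        "jar_file_uris": ["gs://my-bucket/jobs/trending.jar"],
        "properties": {
            # use all available cores per executor and give enough heap
            # to avoid out-of-memory errors
            "spark.executor.cores": "4",
            "spark.executor.memory": "8g",
        },
    },
}

operation = job_client.submit_job_as_operation(
    request={"project_id": "my-project", "region": region, "job": job}
)
print(operation.result().driver_output_resource_uri)
```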
  • Cloud Datalab
    • is an interactive tool for exploration, transformation, analysis, and visualization of your data on Google Cloud Platform
    • based on Jupyter
  • Cloud Composer
    • fully managed workflow orchestration service, based on Apache Airflow, enabling workflow creation that spans across clouds and on-premises data centers.
    • pipelines are configured as directed acyclic graphs (DAGs)
    • workflow lives on-premises, in multiple clouds, or fully within GCP.
    • provides the ability to author, schedule, and monitor the workflows in a unified manner

Identity Services

  • Cloud IAM 
    • provides administrators the ability to manage cloud resources centrally by controlling who can take what action on specific resources.
    • Understand how IAM works and how rules apply esp. the hierarchy from Organization -> Folder -> Project -> Resources
    • Understand IAM Best practices

Storage Services

  • Understand each storage service option and its use cases.
  • Cloud Storage
    • cost-effective object storage for unstructured data.
    • very important to know the different classes and their use cases esp. Regional and Multi-Regional (frequent access), Nearline (monthly access), and Coldline (yearly access)
    • Understand Signed URLs, which give time-limited access to objects even to users who are not GCP users (see the sketch after this item)
    • Understand permissions – IAM vs ACLs (fine-grained control)
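A minimal sketch of generating a V4 signed URL with the google-cloud-storage Python client (bucket and object names are placeholders); the URL grants anyone holding it read access to the object for one hour, with no Google account required:

```python
from datetime import timedelta
from google.cloud import storage

client = storage.Client()
blob = client.bucket("my-reports-bucket").blob("2023/q1-report.pdf")

url = blob.generate_signed_url(
    version="v4",
    expiration=timedelta(hours=1),  # temporary access only
    method="GET",
)
print(url)  # share this link; it expires after an hour
```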
  • Cloud SQL
    • is a fully managed service that provides MySQL, PostgreSQL, and SQL Server.
    • Limited to 10TB and is a regional service.
    • No direct options for Oracle yet.
  • Cloud Spanner
    • is a fully managed, mission-critical relational database service.
    • provides a scalable online transaction processing (OLTP) database with high availability and strong consistency at a global scale.
    • globally distributed and can scale and handle more than 10TB.
    • not a direct replacement and would need migration
  • Cloud Datastore
    • provides a document database for web and mobile applications. Datastore is not for analytics
    • Understand Datastore indexes and how to update indexes for Datastore

Machine Learning

  • Google expects a Data Engineer to also know some of the data scientist's stuff
  • Understand the different algorithms
    • Supervised Learning (labeled data)
      • Classification (e.g., spam or not spam)
      • Regression (e.g., stock or house prices)
    • Unsupervised Learning (unlabeled data)
      • Clustering (e.g., categories)
    • Reinforcement Learning
  • Know Cloud ML with Tensorflow
  • Know all the Cloud AI products, which include the following (see the sketch after this list)
    • Cloud Vision
    • Cloud Natural Language
    • Cloud Speech-to-Text
    • Cloud Video Intelligence
    • Cloud Dialogflow
  • Cloud AutoML products, which can help you get started without much machine learning experience
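A minimal sketch of calling one of the pre-trained Cloud AI APIs, here Cloud Natural Language sentiment analysis via the google-cloud-language Python client (the text is just an example); the point is that these products need no model training at all, unlike building your own model with Cloud ML or AutoML:

```python
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()
document = language_v1.Document(
    content="The new release is fantastic, setup took five minutes!",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)

response = client.analyze_sentiment(request={"document": document})
sentiment = response.document_sentiment
print(f"score={sentiment.score:.2f} magnitude={sentiment.magnitude:.2f}")
```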

Monitoring

  • Cloud Monitoring and Logging
    • provides everything from monitoring, alerting, error reporting, metrics, diagnostics, debugging, and tracing
    • remember audits are mainly about checking Cloud Logging entries
    • an aggregated sink can route log entries from an organization or folder, plus (recursively) from any contained folders, billing accounts, or projects

Security Services

Other Services

  • Storage Transfer Service 
    • allows import of large amounts of online data into Google Cloud Storage, quickly and cost-effectively. Online data is the key here, as it supports AWS S3, HTTP/HTTPS, and other GCS buckets as sources. If the data is on-premises, you need to use the gsutil command
  • Transfer Appliance 
    • to transfer large amounts of data quickly and cost-effectively into Google Cloud Platform. Check the data size; it is usually compared against the Storage Transfer Service or gsutil commands.
  • BigQuery Data Transfer Service
    • to integrate with third-party services and load data into BigQuery


Google Cloud – Professional Cloud Architect Certification Learning Path

Re-certified !!!! The Google Cloud – Professional Cloud Architect certification exam is one of the toughest exams I have appeared for. Even though it was a recertification, the preparation level was the same as for the first attempt. The gamut of services and concepts it tests your knowledge on is really vast.

Google Cloud – Professional Cloud Architect Certification Summary

  • Has 50 questions to be answered in 2 hours.
  • Covers a wide range of Google Cloud services and what they actually do.
  • Includes Compute, Storage, Network, and even Data services.
  • Questions sometimes test your logical thinking rather than any concept regarding Google Cloud.
  • Hands-on is a MUST; if you have not worked on GCP before, make sure you do lots of labs, else you will be absolutely clueless about some of the questions and commands.
  • Make sure you cover the case studies beforehand. I got ~15 questions (almost 5 per case study) and they can really be a savior for you in the exam.
  • Be sure that NO online course or practice test is going to cover everything. I did LinuxAcademy (a bit old now), which is really vast, but hands-on or practical knowledge is a MUST.

Google Cloud – Professional Cloud Architect Certification Resources

Google Cloud – Professional Cloud Architect Certification Topics

General Services

  • Cloud Billing
    • understand how Cloud Billing works. Monthly vs Threshold and which has priority
    • Budgets can be set to alert for projects
    • how to change a billing account for a project and what roles you need. Hint – Project Owner and Billing Administrator for the billing account
    • Cloud Billing can be exported to BigQuery and Cloud Storage
  • Resource Manager
    • Understand the Resource Manager hierarchy: Organization -> Folders -> Projects -> Resources
    • IAM Policy inheritance is transitive and resources inherit the policies of all of their parent resources.
    • Effective policy for a resource is the union of the policy set on that resource and the policies inherited from higher up in the hierarchy.

Identity Services

  • Cloud Identity and Access Management
    • Identity and Access Management (IAM) provides administrators the ability to manage cloud resources centrally by controlling who can take what action on specific resources.
    • Understand how IAM works and how rules apply esp. the hierarchy from Organization -> Folder -> Project -> Resources
    • Understand the difference between Primitive, Pre-defined and Custom roles and their use cases
    • IAM Policy inheritance is transitive and resources inherit the policies of all of their parent resources.
    • Effective policy for a resource is the union of the policy set on that resource and the policies inherited from higher up in the hierarchy.
    • Basically, Permissions -> Roles -> (IAM Policy) -> Members (see the sketch after this item)
    • Know how to use service accounts with applications
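A minimal sketch of how an IAM policy ties members to roles on a resource, using a Cloud Storage bucket and the google-cloud-storage Python client (the bucket name and member are placeholders); the same bindings structure applies at project, folder, and organization level:

```python
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-reports-bucket")

# An IAM policy is a list of bindings: role -> set of members
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append(
    {
        "role": "roles/storage.objectViewer",        # predefined role
        "members": {"user:analyst@example.com"},      # hypothetical member
    }
)
bucket.set_iam_policy(policy)

for binding in policy.bindings:
    print(binding["role"], binding["members"])
```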
  • Cloud Identity
    • Cloud Identity provides IDaaS (Identity as a Service) with single sign-on functionality and federation with external identity providers like Active Directory.
    • Cloud Identity supports federating with Active Directory using GCDS to implement the synchronization

Compute Services

    • Make sure you know all the compute services: Google Compute Engine, Google App Engine, and Google Kubernetes Engine. Be sure to know their pros and cons and the use cases where you should use each.
    • Google Compute Engine
      • Google Compute Engine is the best IaaS option for compute and provides fine grained control
      • Know how to create a Compute Engine instance, connect to it using Cloud shell or ssh keys
      • Difference between backups and images and how to create instances from the same.
      • Understand Compute Engine Storage Options. Disk throughput and IOPS depend on the disk type and size.
      • Understand Compute Engine Snapshots
      • Instance templates with managed instance groups provide scalability and high availability
      • An instance template cannot be edited; create a new one and attach it to the instance group.
      • Difference between managed vs unmanaged instance groups and auto-healing feature
      • Managed instance groups are covered heavily in the exam, as they provide the key auto-scaling capability. Hint: you need to create an instance template and associate it with the instance group
      • Understand how migration or traffic splitting with Managed instance groups works Hint – rolling updates & deployments
      • Preemptible VMs and their use cases. HINT – can be terminated any time and supports max 24 hours.
      • Upgrade an instance without downtime using Live Migration
      • Managing access using OS Login or project and instance metadata
      • Prevent accidental deletion using deletion protection flag
      • Understand the pricing and discounts model. Hint – Sustained use (automatic, up to 30%) vs Committed use (1 to 3 yrs) discounts.
      • In case of any issues or errors, how to debug the same
    • Google App Engine
      • Google App Engine is the best option for PaaS, given the platforms supported and the features provided.
      • Deploy an application with App Engine and understand how versioning and rolling deployments can be done
      • Understand auto scaling, traffic splitting, and traffic migration.
      • Know App Engine is a regional resource and understand the steps to migrate or deploy an application to a different region or project.
      • Know the difference between App Engine Flexible vs Standard
    • Google Kubernetes Engine
      • Google Kubernetes Engine, powered by the open source container scheduler Kubernetes, enables you to run containers on Google Cloud Platform.
      • Kubernetes Engine takes care of provisioning and maintaining the underlying virtual machine cluster, scaling your application, and operational logistics such as logging, monitoring, and cluster health management.
      • A node pool is a subset of machines that all have the same configuration, including machine type (CPU and memory) and authorization scopes. Node pools represent a subset of nodes within a cluster; a container cluster can contain one or more node pools. Hint: for adding new machine types, you need to add a new node pool, as an existing one cannot be edited
      • Be sure to Create a Kubernetes Cluster and configure it to host an application
      • Understand how to make the cluster auto repairable and upgradable. Hint – Node auto-upgrades and auto-repairing feature
      • Very important to understand where to use gcloud commands (to create a cluster) and kubectl commands (manage the cluster components)
      • Very important to understand how to increase cluster size and enable autoscaling for the cluster
      • Know how to manage secrets like database passwords
    • Cloud Functions
      • is a lightweight, event-based, asynchronous compute solution that allows you to create small, single-purpose functions that respond to cloud events without the need to manage a server or a runtime environment.
      • Remember that Cloud Functions is serverless and scales from zero to peak demand and back to zero as the demand changes.

Network Services

  • Virtual Private Cloud
    • Understand Virtual Private Cloud (VPC), subnets, and how to host applications within them. Hint – a VPC is global and spans regions, while subnets are regional
    • Understand how firewall rules work and how they are configured. Hint – focus on network tags. Also, there are 2 implicit firewall rules – default ingress deny and default egress allow
    • Understand VPC Peering and Shared VPC
    • Understand the concept internal and external IPs and difference between static and ephemeral IPs
    • Primary IP range of an existing subnet can be expanded by modifying its subnet mask, setting the prefix length to a smaller number.
    • Understand Private Google Access use cases
  • On-premises connectivity
    • Cloud VPN and Interconnect are 2 components which help you connect to on-premises data center.
    • Understand the limitations of Cloud VPN, esp. the 3 Gbps per tunnel limit, and how throughput can be improved with multiple tunnels.
    • Understand what are the requirements to setup Cloud VPN.
    • Cloud Router provides dynamic routing using BGP
    • Know Interconnect as the reliable, high-speed, low-latency, dedicated bandwidth option.
  • Cloud Load Balancing (GCLB)
    • Google Cloud Load Balancing provides scaling, high availability, and traffic management for your internet-facing and private applications.
    • Understand Google Load Balancing options and their use cases esp. which is global and internal and what protocols they support.

Storage Services

  • Understand each Storage Options and use cases.
  • Persistent disks
    • attached to Compute Engine instances, they provide fast access; however, they are limited in scalability, availability, and scope.
    • Remember performance depends on the size of the disk
  • Cloud Storage
    • Cloud Storage is cost-effective object storage for unstructured data.
    • very important to know the different storage classes and their use cases esp. Regional and Multi-Regional (frequent access), Nearline (monthly access) and Coldline (yearly access)
    • Understand lifecycle management. HINT – rules are applied in accordance with the object creation date (see the sketch after this item)
    • Understand various data encryption techniques
    • Understand Signed URL to give temporary access and the users do not need to be GCP users
    • Understand access control and permissions – IAM vs ACLs (fine grained control)
    • Understand best practices esp. uploading and downloading the data. HINT using parallel composite uploads
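A minimal sketch of setting lifecycle rules on a bucket with the google-cloud-storage Python client (bucket name and thresholds are placeholders): move objects to Coldline after a year, then delete them after five years, both based on each object's age since creation:

```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-backups-bucket")

# Rules are evaluated against each object's creation date (age in days)
bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=365)
bucket.add_lifecycle_delete_rule(age=1825)
bucket.patch()  # persist the updated lifecycle configuration

for rule in bucket.lifecycle_rules:
    print(rule)
```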
  • Relational Databases
    • Know Cloud SQL and Cloud Spanner
    • Cloud SQL
      • Cloud SQL is a fully-managed service that provides MySQL, PostgreSQL and MS SQL Server
      • limited to 10TB and is a regional service.
      • Know the difference between failover and read replicas. Failover replicas provide high availability with almost zero downtime, while read replicas provide scalability. Cross-region read replicas are supported
      • Perform Point-In-Time recovery. Hint – requires binary logging and backups
      • MS SQL Server support was added recently. Previously, HA required setting up SQL Server on Compute Engine with Always On Availability Groups using Windows Server Failover Clustering, with the nodes placed in different subnets.
    • Cloud Spanner
      • is a fully managed, mission-critical relational database service.
      • provides a scalable online transaction processing (OLTP) database with high availability and strong consistency at global scale.
      • globally distributed and can scale and handle more than 10TB.
      • not a direct replacement and would need migration
    • There are no direct options for Oracle yet.
  • NoSQL
    • Know Cloud Datastore and BigTable
    • Datastore
      • provides a document database for web and mobile applications. Datastore is not for analytics
      • Understand Datastore indexes and how to update indexes for Datastore
      • Can be configured as multi-regional or regional
    • Bigtable
      • provides a wide-column NoSQL database suitable for both low-latency single-point lookups and precalculated analytics
      • understand Bigtable is not for long-term storage as it is quite expensive
  • Data Warehousing
    • BigQuery
      • provides a scalable, fully managed enterprise data warehouse (EDW) with SQL and fast ad-hoc queries.
      • Remember it is most suitable for historical analysis.
  • MemoryStore and Firebase did not feature in any of the questions

Data Services

  • Although there is a different certification for Data Engineer, the Cloud Architect does cover data services. Data services are also part of the use cases so be sure to know about them
  • Know the Big Data stack and understand which service fits the different layers of ingest, store, process, analytics, use
  • Key Services which need to be mainly covered are –
    • Cloud Storage as the medium to store data as data lake
    • Cloud Pub/Sub
      • as the messaging service to capture real time data esp. IoT
      • is designed to provide reliable, many-to-many, asynchronous messaging between applications esp. real time IoT data capture
      • Cloud Storage can generate notifications via Object Change Notification
    • Cloud Dataflow to process, transform, transfer data and the key service to integrate store and analytics.
    • Cloud BigQuery for storage and analytics. Remember BigQuery provides the same cost-effective option for storage as Cloud Storage
    • Cloud Dataprep to clean and prepare data. Hint – it can be used for anomaly detection.
    • Cloud Dataproc to handle existing Hadoop/Spark jobs. Hint – use it to replace existing Hadoop infrastructure.
    • Cloud Datalab is an interactive tool for exploration, transformation, analysis and visualization of your data on Google Cloud Platform
  • Know the standard pattern Cloud Pub/Sub -> Dataflow -> BigQuery (see the sketch below)
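A rough sketch of the standard streaming pattern with the Apache Beam Python SDK: read JSON events from Pub/Sub, transform them, and stream them into BigQuery via Dataflow. The project, topic, table, schema, and bucket names are all placeholders:

```python
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    streaming=True,
    runner="DataflowRunner",                 # run on Cloud Dataflow
    project="my-project",
    region="us-central1",
    temp_location="gs://my-bucket/tmp",
)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadPubSub" >> beam.io.ReadFromPubSub(
            topic="projects/my-project/topics/iot-events")
        | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
        | "WriteBQ" >> beam.io.WriteToBigQuery(
            "my-project:iot.events",
            schema="device_id:STRING,temperature:FLOAT,event_ts:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        )
    )
```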

Monitoring

  • Google Cloud Monitoring or Stackdriver
    • provides everything from monitoring, alert, error reporting, metrics, diagnostics, debugging, trace.
    • remember audits are mainly about checking Stackdriver Logging entries
  • Google Cloud Logging or Stackdriver logging

DevOps services

  • Deployment Manager 
    • provides Infrastructure as Code
    • provides dynamic provisioning with templates written in Jinja or Python (see the sketch below)
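A minimal sketch of a Deployment Manager Python template (a file you would reference from a deployment's YAML config); it just declares a Cloud Storage bucket named after the deployment, which is enough to show the Infrastructure-as-Code shape. The resource names here are illustrative:

```python
def generate_config(context):
    """Deployment Manager calls this to get the resources to create."""
    bucket_name = context.env["deployment"] + "-artifacts"  # derived name

    resources = [
        {
            "name": bucket_name,
            "type": "storage.v1.bucket",       # GCP type provider
            "properties": {
                "location": "US",
                "storageClass": "STANDARD",
            },
        }
    ]
    return {"resources": resources}
```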
  • Cloud Source Repositories
    • provides source code repository with Git version control to support collaborative development
  • Container Registry
    • is a private Docker image storage system on Google Cloud Platform.
    • images stored are immutable.
  • Cloud Build
    • is a service that executes your builds on Google Cloud Platform infrastructure.
  • MarketPlace (Cloud Launcher)
    • provides a way to launch common software packages e.g. Jenkins or WordPress and stacks on Google Compute Engine with just a few clicks like a prepackaged solution.
    • can help minimize deployment time and can be used without any knowledge about the product

Security Services

  • Cloud Security Scanner 
    • is a web application security scanner that enables developers to easily check for a subset of common web application vulnerabilities in websites built on App Engine and Compute Engine.
  • Data Loss Prevention API
    • to handle sensitive data esp. redaction of PII data.
  • PCI-DSS compliant
    • GCP services are PCI-DSS compliant; however, you need to make sure the applications and hosting are in line with PCI-DSS requirements
  • The same concept as PCI-DSS applies to GDPR as well

Other Services

  • Know various data transfer options
  • Storage Transfer Service
    • allows import of large amounts of online data into Google Cloud Storage, quickly and cost-effectively.
    • Online data is the key here as it supports AWS S3, HTTP/HTTPS and other GCS buckets.
    • for on-premises data you need to use gsutil command
  • Transfer Appliance 
    • to transfer large amounts of data quickly and cost-effectively into Google Cloud Platform.
    • Check the data size; it is usually compared against the Storage Transfer Service or gsutil commands.
    • Transfer Appliance Rehydrator provides data rehydration, which is the process to fully reconstitute the files so that the transferred data can be accessed and used.
  • Spinnaker
    • is an open source, multi-cloud, continuous delivery platform and does appear in answer options, so be sure to know about it.
  • Jenkins
    • for Continuous Integration and Continuous Delivery.

Case Studies

Google Cloud – Dress4win Case Study

Dress4Win is a web-based company that helps their users organize and manage their personal wardrobe using a web app and mobile application. The company also cultivates an active social network that connects their users with designers and retailers. They monetize their services through advertising, e-commerce, referrals, and a freemium app model. The application has grown from a few servers in the founder’s garage to several hundred servers and appliances in a colocated data center. However, the capacity of their infrastructure is now insufficient for the application’s rapid growth. Because of this growth and the company’s desire to innovate faster, Dress4Win is committing to a full migration to a public cloud.

The key here is that the company wants to migrate completely to the public cloud because of the current infrastructure's inability to scale.

Solution Concept

For the first phase of their migration to the cloud, Dress4Win is moving their development and test environments. They are also building a disaster recovery site, because their current infrastructure is at a single location. They are not sure which components of their architecture they can migrate as is and which components they need to change before migrating them.

The key here is that Dress4Win wants to move the development and test environments first. They also want to build a DR site for their current production site, which would continue to be hosted on-premises.

Executive Statement

Our investors are concerned about our ability to scale and contain costs with our current infrastructure. They are also concerned that a competitor could use a public cloud platform to offset their up-front investment and free them to focus on developing better features. Our traffic patterns are highest in the mornings and weekend evenings; during other times, 80% of our capacity is sitting idle.

Our capital expenditure is now exceeding our quarterly projections. Migrating to the cloud will likely cause an initial increase in spending, but we expect to fully transition before our next hardware refresh cycle. Our total cost of ownership (TCO) analysis over the next 5 years for a public cloud strategy achieves a cost reduction between 30% and 50% over our current model.

The key here is that the company wants to improve application scalability and efficiency (hardware is sitting idle most of the time), reduce capital expenditure, and improve TCO over a period of time.

Existing Technical Environment

The Dress4Win application is served out of a single data center location. All servers run Ubuntu LTS v16.04.

Databases:

  • MySQL. 1 server for user data, inventory, static data,
    • MySQL 5.8
    • 8 core CPUs
    • 128 GB of RAM
    • 2x 5 TB HDD (RAID 1)
  • Redis 3 server cluster for metadata, social graph, caching. Each server is:
    • Redis 3.2
    • 4 core CPUs
    • 32GB of RAM
  • The MySQL server can be migrated directly to Cloud SQL, which is GCP's managed relational database and supports MySQL.
  • For the Redis cluster, Memorystore can be used, which is a fully managed in-memory data store service for Redis.
  • There would be no changes required to support the same.

Compute:

  • 40 Web Application servers providing micro-services based APIs and static content.
    • Tomcat – Java
    • Nginx
    • 4 core CPUs
    • 32 GB of RAM
  • 20 Apache Hadoop/Spark servers:
    • Data analysis
    • Real-time trending calculations
    • 8 core CPUs
    • 128 GB of RAM
    • 4x 5 TB HDD (RAID 1)
  • 3 RabbitMQ servers for messaging, social notifications, and events:
    • 8 core CPUs
    • 32GB of RAM
  • Miscellaneous servers:
    • Jenkins, monitoring, bastion hosts, security scanners
    • 8 core CPUs
    • 32GB of RAM
  • Web application servers with Java and Nginx can be supported using Compute Engine, App Engine, or even Container Engine with auto scaling configured.
  • Although the core and RAM combination would need a custom machine type, the same can be configured or tuned to use an existing machine type
  • Apache Hadoop/Spark servers can be easily migrated to Cloud Dataproc
  • RabbitMQ messaging service is currently not directly supported by Google Cloud and can be supported either with
    • Cloud Pub/Sub messaging – however this would need changes to the code and would not be a seamless migration
    • Use Compute engine to host the RabbitMQ servers
  • Jenkins, Bastion hosts, Security scanners can be hosted using Google Compute Engine (GCE)
  • Monitoring can be provided using Stackdriver

Storage appliances:

  • iSCSI for VM hosts
  • Fiber channel SAN – MySQL databases
    • 1 PB total storage; 400 TB available
  • NAS – image storage, logs, backups
    • 100 TB total storage; 35 TB available
  • iSCSI for VM hosts can be supported using Cloud persistent disks as it needs a block level storage
  • SAN for MySQL databases can be supported using Cloud persistent disks as it needs a block level storage. However, a single disk cannot scale to 1PB and multiple disks need to be combined to create the storage
  • NAS for image storage, logs and backups can be supported using Cloud Storage which provides unlimited storage capacity

Business Requirements

  • Build a reliable and reproducible environment with scaled parity of production.
    • can be handled by provisioning services or using GCP managed services with the same scale as on-premises resources and with Cloud Deployment Manager for creating repeatable deployments
  • Improve security by defining and adhering to a set of security and Identity and Access Management (IAM) best practices for cloud.
    • can be handled using IAM by implementing best practices like least privilege and separating dev/test/production projects to control access
  • Improve business agility and speed of innovation through rapid provisioning of new resources.
    • can be handled using Cloud Deployment Manager for repeatable and automated provisioning of resources
    • deployments of applications and new releases can be handled efficiently using rolling updates, A/B testing
  • Analyze and optimize architecture for performance in the cloud.
    • can be handled using auto-scaling Compute Engine instances based on the demand
    • can be handled using Stackdriver for monitoring and fine tuning the specs

Technical Requirements

  • Easily create non-production environments in the cloud.
    • most of the services can be created using GCP managed services and the environment creation can be standardized and automated using templates and configurations
  • Implement an automation framework for provisioning resources in cloud.
    • can be handled using Cloud Deployment Manager, which provides an Infrastructure as Code service for provisioning resources in the cloud.
  • Implement a continuous deployment process for deploying applications to the on-premises datacenter or cloud.
    • continuous deployments can be handled using tools like Jenkins, available in both environments
  • Support failover of the production environment to cloud during an emergency.
    • can be handled by replicating all the data to the cloud environment and having the ability to provision the servers quickly.
    • can be handled by using DNS to repoint from the on-premises environment to the cloud environment
  • Encrypt data on the wire and at rest.
    • All the GCP services, by default, provide encryption in transit and at rest. Encryption can be performed using Google-provided or custom keys
  • Support multiple private connections between the production data center and cloud environment.
    • can be handled using VPN (multiple VPNs for better performance) or dedicated Interconnect connection between the production data center and the cloud environment

References

AWS Certified SysOps Administrator – Associate (SOA-C01) Exam Learning Path

AWS Certified SysOps Administrator – Associate (SOA-C01) exam is the latest AWS exam and has already replaced the old SysOps Administrator – Associate exam from 24th Sept 2018. It basically validates

  • Deploy, manage, and operate scalable, highly available, and fault tolerant systems on AWS
  • Implement and control the flow of data to and from AWS
  • Select the appropriate AWS service based on compute, data, or security requirements
  • Identify appropriate use of AWS operational best practices
  • Estimate AWS usage costs and identify operational cost control mechanisms
  • Migrate on-premises workloads to AWS

Refer AWS Certified SysOps – Associate Exam Guide Sep 18

AWS Certified SysOps Administrator - Associate Content Outline

AWS Certified SysOps Administrator – Associate (SOA-C01) Exam Summary

  • AWS Certified SysOps Administrator – Associate exam is quite different from the previous one, with more focus on error handling, deployment, and monitoring.
  • AWS Certified SysOps Administrator – Associate exam covers a lot of the latest AWS services like ALB, Lambda, AWS Config, AWS Inspector, and AWS Shield, while focusing majorly on other services like CloudWatch, metrics from various services, and CloudTrail.
  • Be sure to cover the following topics
    •  Monitoring & Management Tools
      • Understand CloudWatch monitoring to provide operational transparency
        • Know which EC2 metrics it can track (disk, network, CPU, status checks) and which would need custom metrics (memory, disk swap, disk storage, etc.); see the sketch after this topic
        • Know ELB monitoring
          • Classic Load Balancer metrics SurgeQueueLength and SpilloverCount
          • Reasons for 4XX and 5XX errors
      • Understand CloudTrail for audit and governance
      • Understand AWS Config and its use cases
      • Understand AWS Systems Manager and its various services like parameter store, patch manager
      • Understand AWS Trusted Advisor and what it provides
      • Very important to understand AWS CloudWatch vs AWS CloudTrail vs AWS Config
      • Very important to understand Trusted Advisor vs Systems Manager vs Inspector
      • Know Personal Health Dashboard & Service Health Dashboard
      • Deployment tools
        • Know AWS OpsWorks and its ability to support chef & puppet
        • Know Elastic Beanstalk and its advantages
        • Understand AWS CloudFormation
          • Know stacks, templates, nested stacks
          • Know how to wait for resources setup to be completed before proceeding esp. cfn-signal
          • Know how to retain resources (RDS, S3), prevent rollback in case of a failure
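A minimal boto3 sketch of publishing a custom CloudWatch metric (memory utilization is not collected by default); the namespace, dimension, and value are placeholders, and in practice the CloudWatch agent or a cron job on the instance would push this periodically:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_data(
    Namespace="Custom/App",                       # custom namespace
    MetricData=[
        {
            "MetricName": "MemoryUtilization",    # not an out-of-the-box EC2 metric
            "Dimensions": [
                {"Name": "InstanceId", "Value": "i-0123456789abcdef0"}
            ],
            "Unit": "Percent",
            "Value": 73.5,
        }
    ],
)
```

An alarm on this metric (via put_metric_alarm or the console) can then notify an SNS topic or trigger an Auto Scaling action.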
    • Networking & Content Delivery
      • Understand VPC in depth
        • Understand the difference between
          • Bastion host – allow access to instances in private subnet
          • NAT – route traffic from private subnets to internet
          • NAT instance vs NAT Gateway
          • Internet Gateway – Access to internet
          • Virtual Private Gateway – Connectivity between on-premises and VPC
          • Egress-Only Internet Gateway – relevant to IPv6 only to allow egress traffic from private subnet to internet, without allowing ingress traffic
        • Understand how VPC Peering works and limitations
        • Understand VPC Endpoints and supported services
        • Ability to debug networking issues like EC2 not accessible, EC2 instances not reachable, Instances in subnets not able to communicate with others or Internet.
      • Understand Route 53 and Routing Policies and their use cases
        • Focus on Weighted, Latency routing policies
      • Understand VPN and Direct Connect and their use cases
      • Understand CloudFront and use cases
      • Understand ELB, ALB and NLB and what features they provide like
        • ALB provides content and path routing
        • NLB provides the ability to assign static IPs to the load balancer.
    • Compute
      • Understand EC2 in depth
        • Understand EC2 instance types
        • Understand EC2 purchase options esp. spot instances and improved reserved instances options.
        • Understand how CPU credits work with T2 burstable performance and T2 Unlimited
        • Understand EC2 Metadata & Userdata. What's the use of each? How to look up instance data after it is launched (see the sketch after this topic)
        • Understand EC2 Security. 
          • How IAM roles work with EC2 instances
          • IAM roles can now be attached to stopped and running instances
        • Understand AMIs; remember they are regional, and know how they can be shared with others.
        • Troubleshoot issues with launching EC2 instances, esp. RequestLimitExceeded, InstanceLimitExceeded, etc.
        • Troubleshoot connectivity, lost ssh keys issues
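A minimal sketch of looking up instance metadata and user data from inside an EC2 instance using IMDSv2 (session-token) calls to the well-known metadata endpoint; this only works on the instance itself:

```python
import requests

METADATA = "http://169.254.169.254/latest"

# IMDSv2: first obtain a short-lived session token
token = requests.put(
    f"{METADATA}/api/token",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
).text

headers = {"X-aws-ec2-metadata-token": token}
instance_id = requests.get(f"{METADATA}/meta-data/instance-id", headers=headers).text
user_data = requests.get(f"{METADATA}/user-data", headers=headers).text

print(instance_id)   # e.g. i-0123456789abcdef0
print(user_data)     # the launch script/config passed at instance creation
```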
      • Understand Auto Scaling
      • Understand Lambda and its use cases
      • Understand Lambda with API Gateway
    • Storage
    • Databases
    • Security
      • Understand IAM as a whole
      • Understand KMS for key management and envelope encryption
      • Understand CloudHSM and KMS vs CloudHSM esp. support for symmetric and asymmetric keys
      • Know AWS Inspector and its use cases
      • Know AWS GuardDuty as a managed threat detection service; knowing it will help eliminate it as an option
      • Know AWS Shield esp. the Shield Advanced option and the features it provides
      • Know WAF as a web application firewall
      • Know AWS Artifact as on-demand access to compliance reports
    • Integration Tools
      • Understand SQS as message queuing service and SNS as pub/sub notification service
        • Focus on SQS as a decoupling service
        • Understand SQS FIFO, make sure you know the differences between standard and FIFO
      • Understand CloudWatch integration with SNS for notification
    • Cost management

AWS Certified SysOps Administrator – Associate (SOA-C01) Exam Resources

AWS Cloud Computing Whitepapers

AWS Certified SysOps Administrator – Associate (SOA-C01) Exam Contents

Domain 1: Monitoring and Reporting

  1. Create and maintain metrics and alarms utilizing AWS monitoring services
  2. Recognize and differentiate performance and availability metrics
  3. Perform the steps necessary to remediate based on performance and availability metrics

Domain 2: High Availability

  1. Implement scalability and elasticity based on use case
  2. Recognize and differentiate highly available and resilient environments on AWS

Domain 3: Deployment and Provisioning

  1. Identify and execute steps required to provision cloud resources
  2. Identify and remediate deployment issues

Domain 4: Storage and Data Management

  1. Create and manage data retention
  2. Identify and implement data protection, encryption, and capacity planning needs

Domain 5: Security and Compliance

  1. Implement and manage security policies on AWS
  2. Implement access controls when using AWS
  3. Differentiate between the roles and responsibility within the shared responsibility model

Domain 6: Networking

  1. Apply AWS networking features
  2. Implement connectivity services of AWS
  3. Gather and interpret relevant information for network troubleshooting

Domain 7: Automation and Optimization

  1. Use AWS services and features to manage and assess resource utilization
  2. Employ cost-optimization strategies for efficient resource utilization
  3. Automate manual or repeatable process to minimize management overhead

AWS Certified Developer – Associate DVA-C01 Exam Learning Path

AWS Certified Developer – Associate DVA-C01 exam is the latest AWS exam and would replace the old Developer – Associate exam. It basically validates

  • Demonstrate an understanding of core AWS services, uses, and basic AWS architecture best practices.
  • Demonstrate proficiency in developing, deploying, and debugging cloud-based applications using AWS.

Refer AWS Certified Developer – Associate (Released June 2018) Exam Blue Print

AWS Certified Developer - Associate June 2018 Domains

AWS Certified Developer – Associate DVA-C01 Summary

  • AWS Certified Developer – Associate DVA-C01 exam is quite different from the previous one, with more focus on hands-on development and deployment concepts rather than just the architectural concepts
  • AWS Certified Developer – Associate DVA-C01 exam covers a lot of latest AWS services like Lambda, X-Ray while focusing majorly on other services like DynamoDB, Elastic Beanstalk, S3, EC2

AWS Developer – Associate DVA-C01 Exam Resources

AWS Developer – Associate DVA-C01 Exam Topics

  • Be sure to cover the following topics
    • Compute
      • Understand what AWS services you can use to build a serverless architecture
      • Make sure you know and understand Lambda and serverless architecture, its features and use cases.
      • Know Lambda limits, e.g. execution time and the deployment package size limits (zipped and unzipped)
      • Be sure to know how to deploy, package using Lambda.
      • Understand tracing of Lambda functions using X-Ray
      • Understand integration of Lambda with CloudWatch.
      • Understand how to handle multiple releases using Alias
      • Know AWS Step Functions to manage Lambda functions flow
      • Understand Lambda with API Gateway (a minimal handler sketch follows this topic)
      • Understand API Gateway stages, ability to cater to different environments for e.g. dev, test, prod
      • Understand EC2 as a whole
      • Understand EC2 Metadata & Userdata. What's the use of each? How to look up instance data after it is launched.
      • Understand EC2 Security. How IAM Role work with EC2 instances.
      • Understand how the SDK on EC2 evaluates the order of credentials when multiple are provided. Remember the order – environment variables -> Java system properties -> default credential profiles file -> ECS container credentials -> instance profile credentials
      • Know Elastic Beanstalk at a high level, what it provides and its ability to get an application running quickly
      • Understand Elastic Beanstalk configurations and deployment types with their advantages and disadvantages
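A minimal sketch of a Lambda handler wired to API Gateway (proxy integration), showing the shape of the event and the response the integration expects; the event fields used here are standard for proxy integrations, everything else is illustrative:

```python
import json

def lambda_handler(event, context):
    """Triggered by API Gateway; event carries the HTTP request details."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")

    # API Gateway proxy integrations expect statusCode/headers/body
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }
```

Deploying this as a zipped package, publishing a version, and pointing an alias (e.g. "prod") at it is how multiple releases are handled without changing the API Gateway integration.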
    • Databases
      • Understand relational and NoSQLs data storage options which include RDS, DynamoDB and their use cases
      • Understand DynamoDB Secondary Indexes
      • Make sure you understand DynamoDB provisioned throughput for Read/Writes and its calculations
      • Make sure you understand DynamoDB Consistency Model – difference between Strongly Consistent and Eventual Consistency
      • Understand DynamoDB with its low latency performance, DAX
      • Know how to configure fine grained security for DynamoDB table, items, attributes
      • Understand DynamoDB Best Practices regarding
        • table design
        • provisioned throughput
        • Query vs Scan operations (see the sketch after this topic)
        • improving Scan operation performance
      • Understand RDS features – Read Replicas for scalability, Multi-AZ for High Availability
      • Know ElastiCache use cases, mainly for caching performance
      • Understand ElastiCache Redis vs Memcached
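A minimal boto3 sketch contrasting a DynamoDB Query (efficient, keyed read) with a Scan (full-table read); the table and attribute names are placeholders:

```python
import boto3
from boto3.dynamodb.conditions import Attr, Key

table = boto3.resource("dynamodb", region_name="us-east-1").Table("Orders")

# Query: uses the partition key (and optionally a sort key condition),
# so it only reads the matching items and consumes minimal read capacity.
orders = table.query(
    KeyConditionExpression=Key("customer_id").eq("C123")
)["Items"]

# Scan: reads every item in the table and filters afterwards --
# expensive on large tables and a common anti-pattern.
big_orders = table.scan(
    FilterExpression=Attr("total").gt(100)
)["Items"]

print(len(orders), len(big_orders))
```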
    • Storage
      • Understand S3 storage option
      • Understand S3 Best Practices to improve performance for GET/PUT requests
      • Understand S3 features like different storage classes with lifecycle policies, static website hosting, versioning, Pre-Signed URLs for both upload and download, and CORS (see the sketch after this topic)
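A minimal boto3 sketch of generating S3 pre-signed URLs for both download and upload (bucket and keys are placeholders); whoever holds the URL can perform that one operation until it expires, without having AWS credentials:

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Pre-signed GET: temporary download link (1 hour)
download_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-app-bucket", "Key": "reports/q1.pdf"},
    ExpiresIn=3600,
)

# Pre-signed PUT: temporary upload link (15 minutes)
upload_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "my-app-bucket", "Key": "uploads/photo.jpg"},
    ExpiresIn=900,
)

print(download_url)
print(upload_url)
```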
    • Security
      • Understand IAM as a whole
      • Focus on IAM role and its use case especially with EC2 instance
      • Know how to test and validate IAM policies
      • Understand IAM identity providers and federation and use cases
      • Understand how AWS Cognito works and what features it provides
      • Understand MFA and how you would implement two-factor authentication for your application
      • Understand KMS for key management and envelope encryption (see the sketch after this topic)
      • Know what services support KMS
        • Remember SQS, Kinesis now provides SSE support
      • Focus on S3 with SSE, SSE-C, SSE-KMS. How they work and differ?
      • Know how you can enforce a bucket to accept only encrypted objects
      • Know the various KMS operations: Encrypt, ReEncrypt, GenerateDataKey, etc.
      • Know how KMS impacts the performance of the services
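A minimal boto3 sketch of envelope encryption with KMS: generate a data key, encrypt locally with it, keep only the encrypted copy of the key, and call KMS again to decrypt it later. The key alias is a placeholder, and the cryptography package provides the local AES-GCM primitive:

```python
import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms", region_name="us-east-1")

# 1. Ask KMS for a data key under a customer master key (CMK)
data_key = kms.generate_data_key(KeyId="alias/my-app-key", KeySpec="AES_256")
plaintext_key = data_key["Plaintext"]        # use locally, never store
encrypted_key = data_key["CiphertextBlob"]   # store alongside the data

# 2. Encrypt the payload locally with the plaintext data key
nonce = os.urandom(12)
ciphertext = AESGCM(plaintext_key).encrypt(nonce, b"sensitive payload", None)

# 3. Later: decrypt the stored data key with KMS, then decrypt the payload
key = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]
plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)
print(plaintext)
```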
    • Management Tools
      • Understand CloudWatch monitoring to provide operational transparency
      • Know which EC2 metrics it can track.
      • Understand CloudWatch is extendable with custom metrics
      • Understand CloudTrail for Audit
    • Integration Tools
      • Understand SQS as message queuing service and SNS as pub/sub notification service
      • Understand SQS features like visibility timeout and long polling vs short polling (see the sketch after this topic)
      • Focus on SQS as a decoupling service
      • AWS has released SQS FIFO, make sure you know the differences between standard and FIFO
      • Know the different development and deployment tools like CodeCommit, CodeBuild, CodeDeploy, CodePipeline
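A minimal boto3 sketch of the SQS decoupling loop: send a message, receive with long polling, and delete it after successful processing (the queue name is a placeholder); if the delete is skipped, the message reappears once its visibility timeout expires:

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = sqs.get_queue_url(QueueName="orders-queue")["QueueUrl"]

# Producer side
sqs.send_message(QueueUrl=queue_url, MessageBody="order-42")

# Consumer side: WaitTimeSeconds > 0 enables long polling,
# reducing empty responses compared to short polling
resp = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,
)

for message in resp.get("Messages", []):
    print("processing", message["Body"])
    # delete only after successful processing, otherwise the message
    # becomes visible again after the visibility timeout
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```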
    • Networking
      • Does not cover much on networking or designing of networks, but be sure you understand VPC, Subnets, Routes, Security Groups etc.

AWS Cloud Computing Whitepapers

AWS Certified Developer – Associate DVA-C01 Exam Contents

Domain 1: Deployment

  1. Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns.
  2. Deploy applications using Elastic Beanstalk.
  3. Prepare the application deployment package to be deployed to AWS.
  4. Deploy serverless applications.

Domain 2: Security

  1. Make authenticated calls to AWS services.
  2. Implement encryption using AWS services.
  3. Implement application authentication and authorization.

Domain 3: Development with AWS Services

  1. Write code for serverless applications.
  2. Translate functional requirements into application design.
  3. Implement application design into application code.
  4. Write code that interacts with AWS services by using APIs, SDKs, and AWS CLI.

Domain 4: Refactoring

  1. Optimize application to best use AWS services and features.
  2. Migrate existing application code to run on AWS.

Domain 5: Monitoring and Troubleshooting

  1. Write code that can be monitored.
  2. Perform root cause analysis on faults found in testing or production.

AWS Certified Solutions Architect – Associate SAA-C01 Exam Learning Path (Obsolete)

SAA-C01 is Obsolete now, Please refer SAA-C03 Learning Path

AWS Solutions Architect – Associate SAA-C01 exam is the latest AWS exam and would replace the old CSA-Associate exam. It basically validates the ability to effectively demonstrate knowledge of how to architect and deploy secure and robust applications on AWS technologies

  • Define a solution using architectural design principles based on customer requirements.
  • Provide implementation guidance based on best practices to the organization throughout the life cycle of the project.

Refer AWS_Solution_Architect_-_Associate_SAA-C01_Exam_Blue_Print

AWS Certified Solutions Architect - Associate February 2018

AWS Solutions Architect – Associate SAA-C01 Exam Summary

  • AWS has updated the exam concepts from the focus being on individual services to building scalable, highly available, cost-effective, performant, resilient, and operationally effective architectures
  • Although most of the services covered by the old exam are the same, there are a few new additions like API Gateway, Lambda, ECS, and Aurora
  • The exam surely covers the architecture aspects in depth, so you must be able to visualize the architecture, even draw it out in the exam, just to understand how it would work and how different services relate.
  • Be sure to cover the following topics
    • Networking
      • Be sure to create VPC from scratch. This is mandatory.
          • Create a VPC and understand what a CIDR is.
        • Create public and private subnets, configure proper routes, security groups, NACLs.
        • Create Bastion for communication with instances
        • Create NAT Gateway or Instances for instances in private subnets to interact with internet
        • Create two tier architecture with application in public and database in private subnets
        • Create three tier architecture with web servers in public, application and database servers in private.
        • Make sure to understand how the communication happens between Internet, Public subnets, Private subnets, NAT, Bastion etc.
      • Understand VPC endpoints and what services it can help interact
      • Understand difference between NAT Gateway and NAT Instance
      • Understand how NAT high availability can be achieved
      • Understand CloudFront as CDN and the static and dynamic caching it provides, what can be its origin (it can point to on-premises sources)
      • Understand Route 53 for routing, health checks and various routing policies it provides and their use cases mainly for high availability
      • Be sure to cover ELB in depth. AWS has introduced ALB and NLB, and there are a lot of questions on ALB
      • Understand ALB features with its ability for content based and URL based routing with support for dynamic port mapping with ECS
    • Storage
      • Understand various storage options S3, EBS, Instance store, EFS, Glacier and what are the use cases and anti patterns for each
      • Would recommend referring to the Storage Options whitepaper; although a bit dated, 90% of it still holds true
      • Understand various EBS volume types and their use cases in terms of IOPS and throughput. SSD for IOPS and HDD for throughput
      • Understand Burst performance and I/O credits to handle occasional peaks
      • Understand S3 features like different storage classes with lifecycle policies, static website hosting, versioning, Pre-Signed URLs for both upload and download, CORS
      • Understand Glacier as an archival storage with various retrieval patterns
      • Glacier Expedited retrieval now allows object retrieval within mins
      • Understand Storage gateway and its different types
    • Compute
      • Understand EC2 as a whole
      • Understand Auto Scaling and ELB, how they work together to provide High Available and Scalable solution
      • Understand EC2 various purchase types – Reserved, On-demand and Spot and their use cases
      • Understand Reserved purchase types with the introduction of Scheduled and Convertible types
      • Understand Lambda and serverless architecture, its features and use cases. How do you benefit from Lambda?
      • Understand ECS with its ability to deploy containers and micro services architecture
      • Know Elastic Beanstalk at a high level, what it provides and its ability to get an application running quickly
    • Databases
      • Understand relational and NoSQLs data storage options which include RDS, DynamoDB, Aurora and their use cases
      • Aurora has been added to the exam, and most of the time the questions refer to Aurora given its abilities for multiple read replicas and replication of data across AZs
      • Understand S3 is not a storage option for databases
      • Understand RDS features – Read Replicas for scalability, Multi-AZ for High Availability, Automated Backups, underlying volume types
      • Understand DynamoDB with its low latency performance, DAX
      • Understand DynamoDB provisioned throughput for Read/Writes
      • Know ElastiCache use cases, mainly for caching performance
    • Analytics
      • Not much in depth, but understand what the services are and what they can do
      • Understand Redshift as a business intelligence tool
      • Know Kinesis for real time data capture and analytics
      • At least know what AWS Glue does, so you can eliminate the answer
    • Security
      • Understand IAM as a whole
      • Focus on IAM roles and their use case, especially with EC2 instances (see the sketch after this topic)
      • Understand IAM identity providers and federation and use cases
      • Understand MFA and how you would implement two-factor authentication for your application
      • Understand encryption services
      • Refer Disaster Recovery whitepaper, be sure you know the different recovery types with impact on RTO/RPO.
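A minimal boto3 sketch of how a role is actually consumed: call STS AssumeRole, receive temporary credentials, and use them for scoped access (the role ARN and session name are placeholders); on EC2 the instance profile does this hand-off automatically, which is why roles beat hard-coded keys:

```python
import boto3

sts = boto3.client("sts")

creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ReadOnlyAuditRole",  # hypothetical role
    RoleSessionName="audit-session",
)["Credentials"]

# Temporary, auto-expiring credentials scoped to the role's permissions
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```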
    • Management Tools
      • Understand CloudWatch monitoring to provide operational transparency
      • Know which EC2 metrics it can track. Remember, it cannot track memory and disk space/swap utilization
      • Understand CloudWatch is extendable with custom metrics
      • Understand CloudTrail for Audit
      • Have a basic understanding of CloudFormation, OpsWorks
    • Integration Tools
      • Understand SQS as message queuing service and SNS as pub/sub notification service
      • Understand SQS features like visibility, long poll vs short poll
      • Focus on SQS as a decoupling service
      • AWS has released SQS FIFO, make sure you know the differences between standard and FIFO

NOTE: I have just marked the topics inline with the AWS Exam Blue Print. So be sure to check the same, as it is updated regularly and go through Whitepapers, FAQs and Re-Invent videos.

AWS Solutions Architect – Associate SAA-C01 Exam Resources

AWS Cloud Computing Whitepapers

AWS Solutions Architect – Associate Exam Contents

Domain 1: Design Resilient Architectures

  1. Choose reliable/resilient storage.
  2. Determine how to design decoupling mechanisms using AWS services.
  3. Determine how to design a multi-tier architecture solution.
  4. Determine how to design high availability and/or fault tolerant architectures.

Domain 2: Define Performant Architectures

  1. Choose performant storage and databases.
  2. Apply caching to improve performance.
  3. Design solutions for elasticity and scalability.

Domain 3: Specify Secure Applications and Architectures

  1. Determine how to secure application tiers.
  2. Determine how to secure data.
  3. Define the networking infrastructure for a single VPC application.

Domain 4: Design Cost-Optimized Architectures

  1. Determine how to design cost-optimized storage.
  2. Determine how to design cost-optimized compute.

Domain 5: Define Operationally-Excellent Architectures

  1. Choose design features in solutions that enable operational excellence.

Amazon EMR Best Practices

Best Practices for Using Amazon EMR

Amazon has made working with Hadoop a lot easier. You can launch an EMR cluster in minutes for big data processing, machine learning, and real-time stream processing with the Apache Hadoop ecosystem. You can use the Management Console or the command line to start several nodes with ease.

The EMR pricing has now changed from pay-per-hour to pay-per-second, which results in lower costs and you no longer have to worry about the hourly boundary.

EMR makes a whole bunch of the latest versions of open source software available to you. Currently, there are 19 open source projects and new releases are made every 4 to 6 weeks, so the latest versions of the open source projects are available. This is very useful, especially for rapidly evolving open source projects such as Apache Spark where each release contains critical bug fixes and features. However, you are not forced to upgrade; a new release is made available if you choose to use it. With EMR, you can spin up a bunch of instances and you could process massive volumes of data residing on S3 at a reasonable cost.

A variety of cluster management options are supported, including YARN. You can run the following:

  • HBase
  • Presto (low latency, distributed SQL engine)
  • Spark
  • Tez
  • Ganglia
  • Zeppelin
  • Notebooks
  • SQL editors

AWS Connectors

Additionally, connectors to different AWS services are available; for example, you can use Spark to load data into Redshift (using the Redshift connector, which issues Redshift commands under the hood to get good throughput). You can access DynamoDB for analytics applications, use Sqoop to access relational data, and so on.

AWS Glue

One particularly interesting connector is AWS Glue. AWS Glue comprises three main components:


  • ETL service: This lets you drag things around to create serverless ETL pipelines.
  • AWS Glue Data Catalog: This is a fully managed, Hive metastore-compatible service. Earlier, systems ran an external Hive metastore database in RDS or Aurora; that was great because if you shut down your cluster all your metadata was persisted, so you didn’t have to recreate your tables, and it gave you extra durability and availability (in case something happened to a metastore running on MySQL on the master node). With Glue, all of that is fully managed. You get an intelligent metastore: you don’t have to write DDL to create a table; you can just have Glue crawl your data, infer the schema, and create the tables for you. It can also add partitions, which can otherwise be painful (if you are constantly updating your Hive tables, you need a process to load each new partition in); the Glue catalog can do that for you. It also supports a variety of complex data types.
  • Crawlers: The crawlers let you crawl the data to infer the schema.

AWS Glue is a managed service, so you spend less time monitoring. As a fully managed service, it is also responsible for replacing unhealthy nodes and autoscaling. Enabling security options in AWS Glue is pretty easy. It supports full customization and control, and you don’t have to waste time creating and configuring the cluster. In most cases, the default settings are good enough, but even if you wanted to change them or install custom components, you have root access over all the boxes, so you can make any changes you need.
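A hedged boto3 sketch of the crawler workflow described above: create a crawler that points at an S3 path, run it, and let it infer the schema and create tables in the Data Catalog. The database, role, and path names are assumptions:

```python
import boto3

glue = boto3.client("glue", region_name="us-east-1")  # hypothetical region

# Create a crawler that scans an S3 prefix and writes inferred tables
# into a Data Catalog database (names below are placeholders).
glue.create_crawler(
    Name="clickstream-crawler",
    Role="arn:aws:iam::123456789012:role/GlueServiceRole",
    DatabaseName="analytics_db",
    Targets={"S3Targets": [{"Path": "s3://my-bucket/clickstream/"}]},
)

# Run the crawler; it infers schemas, creates tables, and adds partitions.
glue.start_crawler(Name="clickstream-crawler")
```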

Common EMR use cases


Using HBase for random access at massive scale: many customers run HBase on HDFS, and there is now support for HBase using S3 as the object store for HFiles. There is also the ability to use a Read Replica HBase cluster in another AZ. Shifting to S3 can save you 50% or more on storage costs. Instead of sizing the cluster for HDFS, you can size it for the amount of processing power required by the HBase Region Servers. The S3 option is also good for load balancing and disaster recovery across AZs. As S3 is available across a region, you don’t have to replicate the data twice; that is, you don’t need two full HDFS clusters. You can set up a smaller cluster for the Read Replicas that points to the same HFiles and drive the read traffic through it.

For real-time and batch processing on EMR, you can use Kinesis to push data to Spark, use Spark Streaming for real-time analytics or processing data on the fly, and then dump that data into S3. If you don’t have real-time processing use cases, then Kinesis Firehose is a great alternative too. The data can be cataloged in the Glue Data Catalog and then made accessible to a variety of different analytical engines. EMR supports several analytical engines including Hive, Tez, and Spark. Once the data is in the Data Catalog on S3, you can use Athena (serverless SQL queries), Glue ETL (serverless ETL), and Redshift Spectrum.
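Once the data is cataloged, querying it with Athena is a single API call. A minimal boto3 sketch, with a hypothetical database, table, and results bucket:

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")  # hypothetical region

# Run a serverless SQL query against a table registered in the Glue Data Catalog.
# Database, table, and output bucket are placeholders.
result = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS hits FROM access_logs GROUP BY status",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://my-bucket/athena-results/"},
)

# Poll for completion (simplified; production code would back off and handle failures).
status = athena.get_query_execution(QueryExecutionId=result["QueryExecutionId"])
print(status["QueryExecution"]["Status"]["State"])
```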

Data exploration with Spark using Zeppelin or Jupyter notebook allows you to arm your data scientists with a way to explore large amounts of data (instead of using one node, you can now spread the data across the cluster). This also makes it easier to move it to production.

There is a big rise in the use of Presto for ad hoc SQL queries (in combination with Athena). They approach the same thing from two different angles. Presto gives you advanced configurations and a way to build exactly what you need for your use case but you have to deal with the cluster management versus Athena where you just go to the console and start writing SQL. Now, many BI tools support Presto as well for supporting low latency dashboards. You can also perform traditional batch processing workloads using Spark.

Deep learning with GPU instances: you can launch GPU instance types for EMR, and there is support for MXNet, so you can do end-to-end data engineering work. Support for TensorFlow is coming.

Typical ML projects implement a multi-step process, including ETL, feature engineering, model training, model evaluation, model deployment, and model scoring and updates. Such pipelines need to support batch model training and real-time ML model serving. Using Apache Spark for implementing ML pipelines is very popular as it supports each step in an ML pipeline, scales for both small and large jobs, offers good ML libraries, and has an active user base.

There are several options for deploying Spark on AWS. For example, you can run Spark directly on EC2, which supports batch and streaming workloads, integrates with tooling, and lets you spin clusters up and down and resize them. It also supports different versions of Hadoop and Spark. However, using EC2 for Spark deployment places a huge management burden on you; hence, EMR can be a simpler and better alternative. It is simple to provision, and you can use a wizard (and generate the equivalent commands for the command line from it if required). You can create tags for cost management and send logs to S3.
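A hedged boto3 sketch of provisioning an EMR cluster with tags for cost management and logs shipped to S3, as described above; the instance types, roles, release label, and bucket names are assumptions:

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")  # hypothetical region

cluster = emr.run_job_flow(
    Name="spark-etl",
    ReleaseLabel="emr-5.36.0",            # example release label
    Applications=[{"Name": "Spark"}, {"Name": "Hive"}],
    LogUri="s3://my-bucket/emr-logs/",    # ship cluster logs to S3
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,   # auto-terminate when steps finish
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
    Tags=[{"Key": "team", "Value": "data-eng"}, {"Key": "env", "Value": "dev"}],
)
print(cluster["JobFlowId"])
```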

Lowering EMR costs

If you are paying for Hadoop nodes that are not doing anything, then you are just burning money. There are ways you can batch up your workloads: take an inventory of the jobs you have, tweak them to run in batch mode, and shut down the cluster in between those runs. You can also separate workloads into their own clusters with auto-scaling instead of sizing one cluster for everything. You should shut down a cluster when you can, to stop paying for it unnecessarily. You can use the Amazon Linux AMI with preinstalled customizations for faster cluster creation and use auto-scaling to minimize costs for long-running clusters.
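A minimal boto3 sketch of the "shut it down when idle" advice: find clusters sitting in the WAITING state and terminate them. In practice you would add checks (age, tags, pending steps) before terminating; everything here is illustrative:

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")  # hypothetical region

# Clusters in WAITING have no running steps; candidates for shutdown.
waiting = emr.list_clusters(ClusterStates=["WAITING"])

idle_ids = [c["Id"] for c in waiting["Clusters"]]
if idle_ids:
    # Terminating stops the per-second billing for the cluster's instances.
    emr.terminate_job_flows(JobFlowIds=idle_ids)
    print(f"Terminated idle clusters: {idle_ids}")
```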

AWS Auto Scaling Lifecycle

Auto Scaling Lifecycle

  • Instances launched through the Auto Scaling group have a different lifecycle than that of other EC2 instances
  • Auto Scaling lifecycle starts when the Auto Scaling group launches an instance and puts it into service.
  • Auto Scaling lifecycle ends when the instance is terminated either by the user, or the Auto Scaling group takes it out of service and terminates it
  • AWS charges for the instances as soon as they are launched, including the time they are not yet in the InService state

Auto Scaling Lifecycle Transition

Auto Scaling Group Lifecycle

Auto Scaling Lifecycle Hooks

  • Auto Scaling Lifecycle hooks enable performing custom actions by pausing instances as an Auto Scaling group launches or terminates them
  • Each Auto Scaling group can have multiple lifecycle hooks. However, there is a limit on the number of hooks per Auto Scaling group
  • Auto Scaling scale out event flow
    • Instances start in the Pending state
    • If an autoscaling:EC2_INSTANCE_LAUNCHING lifecycle hook is added, the state is moved to Pending:Wait
    • After the lifecycle action is completed, instances enter the Pending:Proceed state
    • When the instances are fully configured, they are attached to the Auto Scaling group and moved to the InService state
  • Auto Scaling scale in event flow
    • Instances are detached from the Auto Scaling group and enter the Terminating state.
    • If an autoscaling:EC2_INSTANCE_TERMINATING lifecycle hook is added, the state is moved to Terminating:Wait
    • After the lifecycle action is completed, the instances enter the Terminating:Proceed state.
    • When the instances are fully terminated, they enter the Terminated state.
  • During the scale out and scale in events, instances are put into a wait state (Pending:Wait or Terminating:Wait) and are paused until either a continue action happens or the timeout period ends.
  • By default, the instance remains in a wait state for one hour, which can be extended by restarting the timeout period by recording a heartbeat.
  • If the task finishes before the timeout period ends, the lifecycle action can be marked completed and it continues the launch or termination process.
  • After the wait period, the Auto Scaling group continues the launch or terminate process (Pending:Proceed or Terminating:Proceed)
  • A custom action can be implemented using (see the boto3 sketch after this list):
    • A CloudWatch Events target to invoke a Lambda function when a lifecycle action occurs. The event contains information about the instance that is launching or terminating and a token that can be used to control the lifecycle action.
    • A notification target (CloudWatch Events, SNS, SQS) for the lifecycle hook, which receives the message from EC2 Auto Scaling. The message contains information about the instance that is launching or terminating and a token that can be used to control the lifecycle action.
    • A script that runs on the instance as the instance starts. The script can control the lifecycle action using the ID of the instance on which it runs.
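A minimal boto3 sketch of the flow above: create a launch lifecycle hook on an Auto Scaling group, then have the bootstrap script (or a Lambda function) signal completion so the instance moves from Pending:Wait to Pending:Proceed. The group name, hook name, and instance ID are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")  # hypothetical region

# Pause newly launched instances in Pending:Wait until bootstrapping signals completion.
autoscaling.put_lifecycle_hook(
    LifecycleHookName="wait-for-bootstrap",
    AutoScalingGroupName="web-asg",
    LifecycleTransition="autoscaling:EC2_INSTANCE_LAUNCHING",
    HeartbeatTimeout=1800,        # seconds the instance stays in Pending:Wait
    DefaultResult="ABANDON",      # terminate if nothing signals within the timeout
)

# Later, from the bootstrap script or a Lambda handler, signal that the
# custom action finished so the instance can move to InService.
autoscaling.complete_lifecycle_action(
    LifecycleHookName="wait-for-bootstrap",
    AutoScalingGroupName="web-asg",
    LifecycleActionResult="CONTINUE",
    InstanceId="i-0123456789abcdef0",   # placeholder instance ID
)
```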

Auto Scaling Lifecycle Hooks Considerations

  • Keeping Instances in a Wait State
    • Instances remain in a wait state for a finite period of time.
    • Default is 1 hour (3600 seconds) with the max being 48 hours or 100 times the heartbeat timeout, whichever is smaller.
    • Time can be adjusted using
      • complete-lifecycle-action (CompleteLifecycleAction) command to continue to the next state if the action finishes before the timeout period ends
      • put-lifecycle-hook command with the --heartbeat-timeout parameter to set the heartbeat timeout for the lifecycle hook during its creation
      • record-lifecycle-action-heartbeat (RecordLifecycleActionHeartbeat) command to restart the timeout period by recording a heartbeat
  • Cooldowns and Custom Actions
    • Cooldown period helps ensure that the Auto Scaling group does not launch or terminate more instances than needed
    • Cooldown period starts when the instance enters the InService state. Any suspended scaling actions resume after cooldown period expires
  • Health Check Grace Period
    • Health check grace period does not start until the lifecycle hook completes and the instance enters the InService state
  • Lifecycle Action Result
    • Result of the lifecycle hook is either ABANDON or CONTINUE
    • If the instance is launching,
      • CONTINUE indicates a successful action, and the instance can be put into service.
      • ABANDON indicates the custom actions were unsuccessful, and that the instance can be terminated.
    • If the instance is terminating,
      • ABANDON and CONTINUE allow the instance to terminate.
      • However, ABANDON stops any remaining actions from other lifecycle hooks, while CONTINUE allows them to complete
  • Spot Instances
    • Lifecycle hooks can be used with Spot Instances. However, a lifecycle hook does not prevent an instance from terminating due to a change in the Spot Price, which can happen at any time

Enter and Exit Standby

  • An instance in the InService state can be moved to the Standby state (see the boto3 sketch below).
  • Standby state enables you to remove the instance from service, troubleshoot or make changes to it, and then put it back into service.
  • Instances in a Standby state continue to be managed by the Auto Scaling group. However, they are not an active part of the application until they are put back into service.
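A minimal boto3 sketch of moving an instance into Standby for troubleshooting and back into service afterwards; the group name and instance ID are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")  # hypothetical region

# Take the instance out of service (it stays managed by the group).
autoscaling.enter_standby(
    AutoScalingGroupName="web-asg",
    InstanceIds=["i-0123456789abcdef0"],
    ShouldDecrementDesiredCapacity=True,  # don't launch a replacement while troubleshooting
)

# ... troubleshoot or patch the instance ...

# Put it back into service; it re-enters the Pending state first.
autoscaling.exit_standby(
    AutoScalingGroupName="web-asg",
    InstanceIds=["i-0123456789abcdef0"],
)
```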

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Your application is running on EC2 in an Auto Scaling group. Bootstrapping is taking 20 minutes to complete. You find out that instances are shown as InService although the bootstrapping has not completed. How can you make sure that new instances are not added until the bootstrapping has finished? Choose the correct answer:
    1. Create a CloudWatch alarm with an SNS topic to send alarms to your DevOps engineer.
    2. Create a lifecycle hook to keep the instance in pending:wait state until the bootstrapping has finished and then put the instance in pending:proceed state.
    3. Increase the number of instances in your Auto Scaling group.
    4. Create a lifecycle hook to keep the instance in standby state until the bootstrapping has finished and then put the instance in pending:proceed state.
  2. When a scale out event occurs, the Auto Scaling group launches the required number of EC2 instances using its assigned launch configuration. What instance state do these instances start in? Choose the correct answer:
    1. pending:wait
    2. InService
    3. Pending
    4. Terminating
  3. With AWS Auto Scaling, once we apply a hook and the action is complete or the default wait state timeout runs out, the state changes to what, depending on which hook we have applied and what the instance is doing? Select two. Choose the 2 correct answers:
    1. pending:proceed
    2. pending:wait
    3. terminating:wait
    4. terminating:proceed
  4. For AWS Auto Scaling, what is the first transition state an existing instance enters after leaving steady state in Standby mode?
    1. Detaching
    2. Terminating:Wait
    3. Pending (You can put any instance that is in an InService state into a Standby state. This enables you to remove the instance from service, troubleshoot or make changes to it, and then put it back into service. Instances in a Standby state continue to be managed by the Auto Scaling group. However, they are not an active part of your application until you put them back into service. Refer link)
    4. EnteringStandby
  5. For AWS Auto Scaling, what is the first transition state an instance enters after leaving steady state when scaling in due to health check failure or decreased load?
    1. Terminating (When Auto Scaling responds to a scale in event, it terminates one or more instances. These instances are detached from the Auto Scaling group and enter the Terminating state. Refer link)
    2. Detaching
    3. Terminating:Wait
    4. EnteringStandby

References

AutoScalingGroupLifecycle

 

AWS CloudWatch Logs – Certification

AWS CloudWatch Logs

  • CloudWatch Logs can be used to monitor, store, and access log files from EC2 instances, CloudTrail, Route 53, and other sources
  • CloudWatch Logs uses the log data only for monitoring, so no code changes are required
  • CloudWatch Logs requires the CloudWatch Logs agent to be installed on the EC2 instances and on-premises servers.
  • CloudWatch Logs agent makes it easy to quickly send both rotated and non-rotated log data off of a host and into the log service.
  • A VPC endpoint can be configured to keep traffic between the VPC and CloudWatch Logs from leaving the Amazon network. It doesn’t require an IGW, NAT, VPN connection, or Direct Connect connection
  • CloudWatch Logs allows exporting log data from log groups to an S3 bucket, which can then be used for custom processing and analysis, or to load onto other systems (a hedged boto3 sketch follows this list).
  • Log data is encrypted while in transit and while it is at rest
  • Log data can be encrypted using an AWS KMS customer master key (CMK).
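A hedged boto3 sketch of exporting a log group's data to S3 as mentioned above; the log group, bucket (which must allow CloudWatch Logs to write to it), and time range are assumptions:

```python
import time
import boto3

logs = boto3.client("logs", region_name="us-east-1")  # hypothetical region

now_ms = int(time.time() * 1000)
one_day_ms = 24 * 60 * 60 * 1000

# Export the last 24 hours of a log group to an S3 bucket for custom processing.
# Log group and bucket names are placeholders; the bucket policy must grant
# CloudWatch Logs permission to write objects.
task = logs.create_export_task(
    taskName="export-app-logs",
    logGroupName="/my-app/production",
    fromTime=now_ms - one_day_ms,
    to=now_ms,
    destination="my-log-archive-bucket",
    destinationPrefix="cloudwatch-exports",
)
print(task["taskId"])
```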

Required Mainly for SysOps Associate & DevOps Professional Exam

CloudWatch Logs Concepts

Log Events

  • A log event is a record of some activity recorded by the application or resource being monitored.
  • Log event record contains two properties: the timestamp of when the event occurred, and the raw event message

Log Streams

  • A log stream is a sequence of log events that share the same source for e.g. logs events from an Apache access log on a specific host.

Log Groups

  • Log groups define groups of log streams that share the same retention, monitoring, and access control settings for e.g. Apache access logs from each host grouped through log streams into a single log group
  • Each log stream has to belong to one log group
  • There is no limit on the number of log streams that can belong to one log group (a minimal boto3 sketch of groups, streams, and events follows this list).
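A minimal boto3 sketch tying the three concepts together: a log group, a log stream inside it, and a log event put into the stream. Names are placeholders (and on older API versions, subsequent put_log_events calls also need the sequence token returned by the previous call):

```python
import time
import boto3

logs = boto3.client("logs", region_name="us-east-1")  # hypothetical region

# Log group: shared retention, monitoring, and access-control settings.
logs.create_log_group(logGroupName="/my-app/production")

# Log stream: sequence of events from a single source (e.g. one host).
logs.create_log_stream(
    logGroupName="/my-app/production",
    logStreamName="web-01-apache-access",
)

# Log event: timestamp (milliseconds) plus the raw message.
logs.put_log_events(
    logGroupName="/my-app/production",
    logStreamName="web-01-apache-access",
    logEvents=[{"timestamp": int(time.time() * 1000), "message": "GET /index.html 200"}],
)
```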

Metric Filters

  • Metric filters can be used to extract metric observations from ingested events and transform them to data points in a CloudWatch metric.
  • Metric filters are assigned to log groups, and all of the filters assigned to a log group are applied to their log streams.

Retention Settings

  • Retention settings can be used to specify how long log events are kept in CloudWatch Logs (see the sketch after this list).
  • Expired log events get deleted automatically.
  • Retention settings are assigned to log groups, and the retention assigned to a log group is applied to their log streams.
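Setting retention is a single call per log group; a minimal boto3 sketch with a placeholder name (retentionInDays accepts a fixed set of values, e.g. 7, 30, 90, 365):

```python
import boto3

logs = boto3.client("logs", region_name="us-east-1")  # hypothetical region

# Keep events in this log group for 30 days; older events are deleted automatically.
logs.put_retention_policy(logGroupName="/my-app/production", retentionInDays=30)
```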

CloudWatch Logs Use cases

Monitor Logs from EC2 Instances in Real-time

  • can help monitor applications and systems using log data
  • can help track the number of errors, e.g. 404 or 500, or even specific literal terms such as “NullReferenceException”, occurring in the applications, which can then be matched against a threshold to send a notification

Monitor AWS CloudTrail Logged Events

  • can be used to monitor particular API activity as captured by CloudTrail by creating alarms in CloudWatch and receive notifications

Archive Log Data

  • can help store the log data in highly durable storage, as an alternative to S3
  • log retention setting can be modified, so that any log events older than this setting are automatically deleted.

Log Route 53 DNS Queries

  • can help log information about the DNS queries that Route 53 receives.

Real-time Processing of Log Data with Subscriptions

  • Subscriptions can help get access to a real-time feed of log events from CloudWatch Logs and have it delivered to other services such as a Kinesis stream, a Kinesis Data Firehose stream, or AWS Lambda for custom processing, analysis, or loading to other systems
  • A subscription filter defines the filter pattern to use for filtering which log events get delivered to the AWS resource, as well as information about where to send matching log events to.
  • A CloudWatch Logs log group can also be configured to stream data to an Amazon Elasticsearch Service cluster in near real-time (see the boto3 sketch after this list)
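A hedged boto3 sketch of a subscription filter that streams matching log events to a Kinesis stream; the stream ARN, IAM role (which must allow CloudWatch Logs to put records), and filter pattern are assumptions:

```python
import boto3

logs = boto3.client("logs", region_name="us-east-1")  # hypothetical region

# Deliver ERROR events from the log group to a Kinesis stream in real time.
# ARNs and names are placeholders; the role must trust logs.amazonaws.com.
logs.put_subscription_filter(
    logGroupName="/my-app/production",
    filterName="errors-to-kinesis",
    filterPattern="ERROR",
    destinationArn="arn:aws:kinesis:us-east-1:123456789012:stream/app-errors",
    roleArn="arn:aws:iam::123456789012:role/CWLtoKinesisRole",
)
```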

Searching and Filtering

  • CloudWatch Logs allows searching and filtering the log data by creating one or more metric filters.
  • Metric filters define the terms and patterns to look for in log data as it is sent to CloudWatch Logs.
  • CloudWatch Logs uses these metric filters to turn log data into numerical CloudWatch metrics that can be graphed or have alarms set on them (a minimal boto3 sketch follows this list).
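A minimal boto3 sketch of the pattern described above: a metric filter that counts error lines and a CloudWatch alarm on the resulting metric, notifying an SNS topic. All names, patterns, and thresholds are assumptions:

```python
import boto3

logs = boto3.client("logs", region_name="us-east-1")        # hypothetical region
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Turn matching log lines into a custom metric (names are placeholders).
logs.put_metric_filter(
    logGroupName="/my-app/production",
    filterName="count-errors",
    filterPattern="ERROR",
    metricTransformations=[{
        "metricName": "AppErrorCount",
        "metricNamespace": "MyApp",
        "metricValue": "1",
    }],
)

# Alarm when more than 10 errors occur in 5 minutes; notify an SNS topic.
cloudwatch.put_metric_alarm(
    AlarmName="my-app-errors",
    Namespace="MyApp",
    MetricName="AppErrorCount",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=10,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:oncall-alerts"],
)
```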

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Once we have our logs in CloudWatch, we can do a number of things such as: Choose 3. Choose the 3 correct answers: [CDOP]
    1. Send the log data to AWS Lambda for custom processing or to load into other systems
    2. Stream the log data to Amazon Kinesis
    3. Stream the log data into Amazon Elasticsearch in near real-time with CloudWatch Logs subscriptions.
    4. Record API calls for your AWS account and delivers log files containing API calls to your Amazon S3 bucket
  2. You have decided to set the threshold for errors on your application to a certain number and once that threshold is reached you need to alert the Senior DevOps engineer. What is the best way to do this? Choose 3. Choose the 3 correct answers: [CDOP]
    1. Set the threshold your application can tolerate in a CloudWatch Logs group and link a CloudWatch alarm on that threshold.
    2. Use CloudWatch Logs agent to send log data from the app to CloudWatch Logs from Amazon EC2 instances
    3. Pipe data from EC2 to the application logs using AWS Data Pipeline and CloudWatch
    4. Once a CloudWatch alarm is triggered, use SNS to notify the Senior DevOps Engineer.
  3. You are hired as the new head of operations for a SaaS company. Your CTO has asked you to make debugging any part of your entire operation simpler and as fast as possible. She complains that she has no idea what is going on in the complex, service-oriented architecture, because the developers just log to disk, and it’s very hard to find errors in logs on so many services. How can you best meet this requirement and satisfy your CTO? [CDOP]
    1. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Lambda. Use the Lambda to analyze logs as soon as they come in and flag issues. (is not fast in search and introduces delay)
    2. Begin using CloudWatch Logs on every service. Stream all Log Groups into S3 objects. Use AWS EMR cluster jobs to perform adhoc MapReduce analysis and write new queries when needed. (is not fast in search and introduces delay)
    3. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Kinesis. Use Apache Spark on AWS EMR to perform at-scale stream processing queries on the log chunks and flag issues. (is not fast in search and introduces delay)
    4. Begin using CloudWatch Logs on every service. Stream all Log Groups into an AWS Elasticsearch Service Domain running Kibana 4 and perform log analysis on a search cluster. (ELK – Elasticsearch, Kibana stack is designed specifically for real-time, ad-hoc log analysis and aggregation)
  4. You use Amazon CloudWatch as your primary monitoring system for your web application. After a recent software deployment, your users are getting Intermittent 500 Internal Server Errors when using the web application. You want to create a CloudWatch alarm, and notify an on-call engineer when these occur. How can you accomplish this using AWS services? (Choose three.) [CDOP]
    1. Deploy your web application as an AWS Elastic Beanstalk application. Use the default Elastic Beanstalk CloudWatch metrics to capture 500 Internal Server Errors. Set a CloudWatch alarm on that metric.
    2. Install a CloudWatch Logs Agent on your servers to stream web application logs to CloudWatch.
    3. Use Amazon Simple Email Service to notify an on-call engineer when a CloudWatch alarm is triggered.
    4. Create a CloudWatch Logs group and define metric filters that capture 500 Internal Server Errors. Set a CloudWatch alarm on that metric.
    5. Use Amazon Simple Notification Service to notify an on-call engineer when a CloudWatch alarm is triggered.
    6. Use AWS Data Pipeline to stream web application logs from your servers to CloudWatch.

References

AWS_CloudWatch_Logs_User_Guide

AWS OpsWorks Deployment Strategies – Certification

AWS OpsWorks Deployment Strategies

NOTE: Advanced Topic required for DevOps Professional Exam Only

All at Once Deployment

  • OpsWorks Stacks does not automatically deploy updated code to online instances; the deployment needs to be run manually
  • Deploy command (for apps) or Update Custom Cookbooks command (for cookbooks) helps deploy the update to every instance concurrently
  • The approach is simple and fast, but leads to downtime in case of errors
  • OpsWorks allows rollback to restore previously deployed app version
  • By default, AWS OpsWorks Stacks stores the five most recent deployments, which allows you to roll back up to four versions

Rolling Deployment

  • A rolling deployment updates an application on a stack’s online application server instances in multiple phases.
  • With each phase, a subset of the online instances can be updated and verified to be successful before starting the next phase.
  • In case of any issues, the instances running the old app version can continue to handle incoming traffic until the issues are resolved.
  • Steps to perform Rolling deployment
    • Deploy the app on a single application server instance.
    • The instance can be deregistered from the load balancer, to prevent it from serving traffic
    • Verify the app is working fine
    • Deploy the update to the remaining instances (a minimal boto3 sketch of triggering an OpsWorks deployment follows this list)
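A minimal boto3 sketch of the per-instance Deploy command that drives the rolling approach; the stack, app, and instance IDs are placeholders:

```python
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")  # hypothetical region

# Deploy the app to a single instance first (rolling deployment, phase 1).
# Stack, app, and instance IDs are placeholders.
opsworks.create_deployment(
    StackId="11111111-2222-3333-4444-555555555555",
    AppId="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
    InstanceIds=["instance-id-1"],
    Command={"Name": "deploy"},
)

# After verification, repeat for the remaining instances in batches.
```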

Blue Green Deployment

  • Blue Green deployment can be achieved using separate stack for each phase of the application’s lifecycle.
  • Different stacks are sometimes referred to as environments like development, staging, production etc.
    • Blue environment is the production stack, which hosts the current application.
    • Green environment is the staging stack, which hosts the updated application.
  • Development and testing can be performed on stacks, which are not publicly accessible, and when ready the traffic can be switched.
  • Steps for Blue Green deployment with OpsWorks Stacks stacks, in conjunction with Route 53 and a pool of ELB load balancers
    • Attach unused ELB from the pool to the green stack’s application server layer
    • After all of the green stack’s instances have passed the ELB health check, the weights in Route 53 can be changed to route traffic gradually from Blue to Green stack.
    • Once the green stack works fine and is ready to handle all traffic, detach the load balancer from the old blue stack’s application server layer and return it to the pool
    • The blue stack can be retained for some time so that, if any issues arise, the update can be rolled back by reversing the procedure to direct incoming traffic back to the old blue stack (a hedged Route 53 sketch follows below)

OpsWorks Blue Green Deployment
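A hedged boto3 sketch of shifting traffic from the blue to the green stack by adjusting weighted alias records in Route 53; the hosted zone ID, record name, ELB DNS names, and ELB hosted zone IDs are placeholders:

```python
import boto3

route53 = boto3.client("route53")

def set_weight(identifier, elb_dns, elb_zone_id, weight):
    """Upsert a weighted alias A record for one environment (values are placeholders)."""
    route53.change_resource_record_sets(
        HostedZoneId="Z1234567890ABC",          # placeholder hosted zone for example.com
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": identifier,
                    "Weight": weight,
                    "AliasTarget": {
                        "HostedZoneId": elb_zone_id,   # the ELB's own hosted zone ID
                        "DNSName": elb_dns,
                        "EvaluateTargetHealth": True,
                    },
                },
            }]
        },
    )

# Shift 10% of traffic to green; increase gradually as it proves healthy.
set_weight("blue", "blue-elb.us-east-1.elb.amazonaws.com", "ZELBZONEID000", 90)
set_weight("green", "green-elb.us-east-1.elb.amazonaws.com", "ZELBZONEID000", 10)
```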

AWS Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Your company runs a complex customer relations management system that consists of around 10 different software components, all backed by the same Amazon Relational Database Service (RDS) database. You adopted AWS OpsWorks to simplify management and deployment of that application and created an AWS OpsWorks stack with layers for each of the individual components. An internal security policy requires that all instances should run on the latest Amazon Linux AMI and that instances must be replaced within one month after the latest Amazon Linux AMI has been released. AMI replacements should be done without incurring application downtime or capacity problems. You decide to write a script to be run as soon as a new Amazon Linux AMI is released. Which solutions support the security policy and meet your requirements? Choose 2 answers
    1. Assign a custom recipe to each layer, which replaces the underlying AMI. Use AWS OpsWorks life-cycle events to incrementally execute this custom recipe and update the instances with the new AMI. (AMI cannot be updated using recipes)
    2. Create a new stack and layers with identical configuration, add instances with the latest Amazon Linux AMI specified as a custom AMI to the new layer, switch DNS to the new stack, and tear down the old stack. (Blue-Green Deployment)
    3. Identify all Amazon Elastic Compute Cloud (EC2) instances of your AWS OpsWorks stack, stop each instance, replace the AMI ID property with the ID of the latest Amazon Linux AMI ID, and restart the instance. To avoid downtime, make sure not more than one instance is stopped at the same time. (Instances cannot be updated by updating the AMI id and needs to be launched anew)
    4. Specify the latest Amazon Linux AMI as a custom AMI at the stack level, terminate instances of the stack and let AWS OpsWorks launch new instances with the new AMI. (Would result in downtime)
    5. Add new instances with the latest Amazon Linux AMI specified as a custom AMI to all AWS OpsWorks layers of your stack, and terminate the old ones. (Disposable Rolling deployment)
  2. A company has developed a Ruby on Rails content management platform. Currently, OpsWorks with several stacks for dev, staging, and production is being used to deploy and manage the application. Now the company wants to start using Python instead of Ruby. How should the company manage the new deployment?
    1. Update the existing stack with Python application code and deploy the application using the deploy life-cycle action to implement the application code.
    2. Create a new stack that contains a new layer with the Python code. To cut over to the new stack the company should consider using Blue/Green deployment
    3. Create a new stack that contains the Python application code and manage separate deployments of the application via the secondary stack using the deploy lifecycle action to implement the application code.
    4. Create a new stack that contains the Python application code and manages separate deployments of the application via the secondary stack

References

OpsWorks Deployment Best Practices