Google Cloud – Professional Cloud DevOps Engineer Certification learning path

[Image: Google Cloud Professional Cloud DevOps Engineer Certification]

Continuing on the Google Cloud journey, I am glad to have passed the 8th certification with the Professional Cloud DevOps Engineer certification. The Google Cloud – Professional Cloud DevOps Engineer certification exam covers almost all of the Google Cloud DevOps services, including the Cloud Developer tools, the Operations Suite, and SRE concepts.

Google Cloud – Professional Cloud DevOps Engineer Certification Summary

  • Had 50 questions to be answered in 2 hours.
  • Covers a wide range of Google Cloud services mainly focusing on DevOps toolset including Cloud Developer tools, Operations Suite with a focus on monitoring and logging, and SRE concepts.
  • The exam has been updated to use
    • Cloud Operations, Cloud Monitoring & Logging and does not refer to Stackdriver in any of the questions.
    • Artifact Registry instead of Container Registry.
  • There are no case studies for the exam.
  • As mentioned for all the exams, hands-on experience is a MUST. If you have not worked on GCP before, make sure you do lots of labs; otherwise you would be absolutely clueless about some of the questions and commands.
  • I did Coursera and A Cloud Guru, which are really vast, but hands-on or practical knowledge is a MUST.

Google Cloud – Professional Cloud DevOps Engineer Certification Resources

Google Cloud – Professional Cloud DevOps Engineer Certification Topics

Developer Tools

  • Google Cloud Build
    • Cloud Build integrates with Cloud Source Repositories, GitHub, and GitLab and can be used for Continuous Integration and Deployment.
    • Cloud Build can import source code, execute builds to your specifications, and produce artifacts such as Docker containers or Java archives.
    • Cloud Build can trigger builds on source commits in Cloud Source Repositories or other git repositories.
    • Cloud Build's build config file specifies the instructions to perform, with steps defined for each task such as test, build, and deploy.
    • A Cloud Build step specifies an action to be performed and is run in a Docker container.
    • Cloud Build also supports custom images for the steps.
    • Cloud Build integrates with Pub/Sub to publish messages on build’s state changes.
    • Cloud Build can trigger the Spinnaker pipeline through Cloud Pub/Sub notifications.
    • Cloud Build should use a Service Account with a Container Developer role to perform deployments on GKE
    • Cloud Build uses a directory named /workspace as a working directory and the assets produced by one step can be passed to the next one via the persistence of the /workspace directory.
  • Binary Authorization and Vulnerability Scanning
    • Binary Authorization provides software supply-chain security for container-based applications. It enables you to configure a policy that the service enforces when an attempt is made to deploy a container image on one of the supported container-based platforms.
    • Binary Authorization uses attestations to verify that an image was built by a specific build system or continuous integration (CI) pipeline.
    • Vulnerability scanning, provided by Container Analysis, helps scan images for vulnerabilities.
    • Hint: For security and compliance reasons, if the deployed image needs to be trusted, use Binary Authorization.
  • Google Source Repositories
    • Cloud Source Repositories are fully-featured, private Git repositories hosted on Google Cloud.
    • Cloud Source Repositories can be used for collaborative, version-controlled development of any app or service
    • Hint: If the code needs to be version controlled and needs collaboration with multiple members, choose Git-related options.
  • Google Container Registry/Artifact Registry
    • Google Artifact Registry supports all types of artifacts as compared to Container Registry which was limited to container images
    • Container Registry is not referred to in the exam
    • Artifact Registry supports both regional and multi-regional repositories
  • Google Cloud Code
    • Cloud Code helps write, debug, and deploy the cloud-based applications for IntelliJ, VS Code, or in the browser.
  • Google Cloud Client Libraries
    • Google Cloud Client Libraries provide client libraries and SDKs in various languages for calling Google Cloud APIs.
    • If the language is not supported, the Cloud REST APIs can be used.
  • Deployment Techniques
    • Recreate deployment – fully scale down the existing application version before you scale up the new application version.
    • Rolling update – update a subset of running application instances instead of simultaneously updating every application instance
    • Blue/Green deployment (also known as a red/black deployment) – you perform two identical deployments of your application and switch traffic from the old version to the new one.
    • GKE supports Rolling and Recreate deployments.
      • Rolling deployments support maxSurge (new pods would be created) and maxUnavailable (existing pods would be deleted)
    • Managed Instance Groups support Rolling deployments using the maxSurge (new instances would be created) and maxUnavailable (existing instances would be deleted) configurations.
  • Testing Strategies
    • Canary testing – partially roll out a change and then evaluate its performance against a baseline deployment
    • A/B testing – test a hypothesis by using variant implementations. A/B testing is used to make business decisions (not only predictions) based on the results derived from data.
  • Spinnaker
    • Spinnaker supports Blue/Green rollouts by dynamically enabling and disabling traffic to a particular Kubernetes resource.
    • Spinnaker recommends comparing the canary against an equivalent baseline deployed at the same time, instead of against the production deployment.
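
The Cloud Build bullets above can be sketched as a minimal build config; the repository layout, image path, and step images below are illustrative assumptions, not a canonical pipeline:

```yaml
# Hypothetical cloudbuild.yaml – tests, then builds and pushes an image.
# Sources are checked out into /workspace, which persists across steps.
steps:
  # Step 1: run unit tests (the python image and tests/ path are assumptions)
  - name: 'python:3.11'
    entrypoint: 'python'
    args: ['-m', 'pytest', 'tests/']
  # Step 2: build the container from the same /workspace contents
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'us-central1-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$SHORT_SHA', '.']
# Images listed here are pushed to the registry on success
images:
  - 'us-central1-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$SHORT_SHA'
```

A build trigger on repository commits would run this config automatically; `gcloud builds submit --config cloudbuild.yaml` runs it manually.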

Cloud Operations Suite

  • Cloud Operations Suite provides everything from monitoring, alerting, error reporting, metrics, diagnostics, debugging, and tracing.
  • Google Cloud Monitoring or Stackdriver Monitoring
    • Cloud Monitoring helps gain visibility into the performance, availability, and health of your applications and infrastructure.
    • Cloud Monitoring Agent/Ops Agent helps capture additional metrics like Memory utilization, Disk IOPS, etc.
    • Log exports (a Cloud Logging feature) allow logs to be routed via sinks to Cloud Storage, Pub/Sub, BigQuery, or an external destination like Splunk.
    • The Cloud Monitoring API supports pushing or exporting custom metrics.
    • Uptime checks help check if the resource responds. It can check the availability of any public service on VM, App Engine, URL, GKE, or AWS Load Balancer.
    • Process health checks can be used to check if any process is healthy
  • Google Cloud Logging or Stackdriver logging
    • Cloud Logging provides real-time log management and analysis
    • Cloud Logging allows ingestion of custom log data from any source
    • Logs can be exported by configuring log sinks to BigQuery, Cloud Storage, or Pub/Sub.
    • Cloud Logging Agent can be installed for logging and capturing application logs.
    • The Cloud Logging Agent uses fluentd, and fluentd filters can be applied to filter or modify logs before they are pushed to Cloud Logging.
    • VPC Flow Logs helps record network flows sent from and received by VM instances.
    • Cloud Logging Log-based metrics can be used to create alerts on logs.
    • Hint: If the logs from a VM do not appear in Cloud Logging, check that the agent is installed and running and that it has proper permissions to write the logs to Cloud Logging.
  • Cloud Error Reporting
    • counts, analyzes and aggregates the crashes in the running cloud services
  • Cloud Profiler
    • Cloud Profiler provides continuous profiling of CPU and memory (heap) usage for applications running on both GCP and on-premises resources.
  • Cloud Trace
    • is a distributed tracing system that collects latency data from the applications and displays it in the Google Cloud Console.
  • Cloud Debugger
    • is a feature of Google Cloud that lets you inspect the state of a running application in real-time, without stopping or slowing it down
    • Debug Logpoints allow logging injection into running services without restarting or interfering with the normal function of the service
    • Debug Snapshots help capture local variables and the call stack at a specific line location in your app’s source code
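
As a sketch of the fluentd filtering mentioned for the Cloud Logging Agent, a grep filter that drops health-check entries before they reach Cloud Logging; the tag and pattern are hypothetical:

```
# Hypothetical fluentd filter for the Logging Agent config:
# drop records whose message field matches /healthcheck/
<filter nginx.access>
  @type grep
  <exclude>
    key message
    pattern /healthcheck/
  </exclude>
</filter>
```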

Compute Services

  • Compute services like Google Compute Engine and Google Kubernetes Engine are covered lightly, mostly from the security aspects.
  • Google Compute Engine
    • Google Compute Engine is the best IaaS option for computing and provides fine-grained control
    • Preemptible VMs and their use cases. HINT – use for short-term, fault-tolerant needs
    • Committed Usage Discounts – CUD help provide cost benefits for long-term stable and predictable usage.
    • Managed Instance Group can help scale VMs as per the demand. It also helps provide auto-healing and high availability with health checks, in case an application fails.
  • Google Kubernetes Engine
    • GKE can be scaled using
      • Cluster AutoScaler to scale the cluster
      • Vertical Pod Autoscaler to scale the pods with increasing resource needs
      • Horizontal Pod Autoscaler helps scale Kubernetes workload by automatically increasing or decreasing the number of Pods in response to the workload’s CPU or memory consumption, or in response to custom metrics reported from within Kubernetes or external metrics from sources outside of your cluster.
    • Kubernetes Secrets can be used to store secrets (although they are just base64 encoded values)
    • Kubernetes supports rolling and recreate deployment strategies.
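
The rolling-update knobs above can be sketched in a Deployment manifest; the names, image, and values are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # hypothetical deployment name
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 extra pod created above the desired count
      maxUnavailable: 1    # at most 1 existing pod taken down at a time
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: us-central1-docker.pkg.dev/my-project/my-repo/web:1.2.0
```

Setting `strategy.type: Recreate` instead would scale the old version fully down before the new one comes up.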

Security

  • Cloud Key Management Service – KMS
    • Cloud KMS can be used to store keys to encrypt data in Cloud Storage and other integrated storage
  • Cloud Secret Manager
    • Cloud Secret Manager can be used to store secrets as well
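
To contrast with Secret Manager and KMS, the earlier note that Kubernetes Secrets are only base64-encoded (not encrypted) can be demonstrated with a minimal sketch; the secret value is made up:

```python
import base64

# A value as it would appear in a Kubernetes Secret manifest (hypothetical secret)
plaintext = "db-password-123"
encoded = base64.b64encode(plaintext.encode()).decode()

# Anyone who can read the manifest can trivially recover the plaintext
decoded = base64.b64decode(encoded).decode()
print(encoded)               # ZGItcGFzc3dvcmQtMTIz
print(decoded == plaintext)  # True
```

This is why sensitive values are better kept in Secret Manager or encrypted with Cloud KMS rather than committed as plain Kubernetes Secret manifests.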

Site Reliability Engineering – SRE

  • SRE is a DevOps implementation and focuses on increasing reliability and observability, collaboration, and reducing toil using automation.
  • SLOs help specify a target level for the reliability of your service using SLIs which provide actual measurements.
  • SLI Types
    • Availability
    • Freshness
    • Latency
    • Quality
  • SLOs – Choosing the measurement method
    • Synthetic clients to measure user experience
    • Client-side instrumentation
    • Application and Infrastructure metrics
    • Logs processing
  • SLOs help define the Error Budget and Error Budget Policy, which need to be aligned with all the stakeholders and help plan releases balancing features vs. reliability.
  • SRE focuses on Reducing Toil – Identifying repetitive tasks and automating them.
  • Production Readiness Review – PRR
    • Applications should be performance tested for volumes before being deployed to production
    • SLOs should not be modified/adjusted to facilitate production deployments. Teams should work to make the applications SLO compliant before they are deployed to production.
  • SRE Practices include
    • Incident Management and Response
      • Priority should be to mitigate the issue first, and then investigate and find the root cause. Mitigation would include
        • Rolling back the release causing the issue
        • Routing traffic to a working site to restore the user experience
      • Incident Live State Document helps track the events and decision making which can be useful for postmortem.
      • involves the following roles
        • Incident Commander/Manager
          • Setup a communication channel for all to collaborate
          • Assign and delegate roles. The IC assumes any role that is not delegated.
          • Responsible for Incident Live State Document
        • Communications Lead
          • Provide periodic updates to all the stakeholders and customers
        • Operations Lead
          • Responds to the incident and should be the only group modifying the system during an incident.
    • Postmortem
      • should contain the root cause
      • should be Blameless
      • should be shared with all the stakeholders for collaboration and feedback
      • should have proper action items to prevent recurrence with an owner and collaborators, if required.
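
The SLI/SLO/error-budget relationship above can be made concrete with a small worked example; the SLO target and request counts are illustrative:

```python
# Availability SLI: good events / total events (illustrative numbers)
good_requests = 999_500
total_requests = 1_000_000
sli = good_requests / total_requests          # 0.9995

# A 99.9% availability SLO over a 30-day window
slo = 0.999
window_minutes = 30 * 24 * 60                 # 43,200 minutes
error_budget_minutes = (1 - slo) * window_minutes  # 43.2 minutes of allowed unavailability

# Fraction of the error budget already consumed by the measured SLI
budget_spent = (1 - sli) / (1 - slo)          # 0.5 -> 50% of the budget spent
print(round(error_budget_minutes, 1), round(budget_spent, 2))
```

When `budget_spent` approaches 1, the error budget policy would shift the team's focus from features to reliability.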

All the Best !!

Google Cloud Operations

Google Cloud Operations provides integrated monitoring, logging, and trace managed services for applications and systems running on Google Cloud and beyond.

[Image: Google Cloud Operations Suite – credit Priyanka Vergadia]

Cloud Monitoring

  • Cloud Monitoring collects measurements of key aspects of the service and of the Google Cloud resources used.
  • Cloud Monitoring provides tools to visualize and monitor this data.
  • Cloud Monitoring helps gain visibility into the performance, availability, and health of the applications and infrastructure.
  • Cloud Monitoring collects metrics, events, and metadata from Google Cloud, AWS, hosted uptime probes, and application instrumentation.

Cloud Logging

  • Cloud Logging is a service for storing, viewing and interacting with logs.
  • Answers the questions “Who did what, where and when” within the GCP projects
  • Maintains non-tamperable audit logs for each project and organization
  • Log buckets are regional resources: the infrastructure that stores, indexes, and searches the logs is located in a specific geographical location.
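
As an illustrative sketch, logs stored in these buckets can also be routed elsewhere with a log sink; the sink name, project, dataset, and filter below are hypothetical:

```
# Route Compute Engine instance logs to a BigQuery dataset (hypothetical names)
gcloud logging sinks create my-instance-sink \
  bigquery.googleapis.com/projects/my-project/datasets/my_dataset \
  --log-filter='resource.type="gce_instance"'

# The command returns a writer service account; grant it BigQuery Data Editor
# on the dataset so the sink can write the exported logs.
```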

Error Reporting

  • Error Reporting aggregates and displays errors produced in the running cloud services.
  • Error Reporting provides a centralized error management interface, to help find the application’s top or new errors so that they can be fixed faster.

Cloud Profiler

  • Cloud Profiler helps with continuous profiling of CPU, heap, and other parameters to improve performance and reduce costs.
  • Cloud Profiler is a continuous profiling tool that is designed for applications running on Google Cloud:
    • It’s a statistical, or sampling, profiler that has low overhead and is suitable for production environments.
    • It supports common languages and collects multiple profile types.
  • Cloud Profiler consists of the profiling agent, which collects the data, and a console interface on Google Cloud, which lets you view and analyze the data collected by the agent.
  • Cloud Profiler is supported for Compute Engine, App Engine, GKE, and applications running on on-premises as well.

Cloud Trace

  • Cloud Trace is a distributed tracing system that collects latency data from the applications and displays it in the Google Cloud Console.
  • Cloud Trace helps understand how long it takes the application to handle incoming requests from users or applications, and how long it takes to complete operations like RPC calls performed when handling the requests.
  • Cloud Trace can track how requests propagate through the application and receive detailed near real-time performance insights.
  • Cloud Trace automatically analyzes all of the application’s traces to generate in-depth latency reports to surface performance degradations and can capture traces from all the VMs, containers, or App Engines.

Cloud Debugger

  • Cloud Debugger helps inspect the state of an application, at any code location, without stopping or slowing down the running app.
  • Cloud Debugger makes it easier to view the application state without adding logging statements.
  • Cloud Debugger adds less than 10ms to the request latency only when the application state is captured. In most cases, this is not noticeable by users.
  • Cloud Debugger can be used with or without access to your app’s source code.
  • Cloud Debugger supports Cloud Source Repositories, GitHub, Bitbucket, or GitLab as the source code repository. If the source code repository is not supported, the source files can be uploaded.
  • Cloud Debugger allows collaboration by sharing the debug session by sending the Console URL.
  • Cloud Debugger supports a range of IDEs.

Debug Snapshots

  • Debug Snapshots capture local variables and the call stack at a specific line location in the app’s source code without stopping or slowing it down.
  • Certain conditions and locations can be specified to return a snapshot of the app’s data.
  • Debug Snapshots support canarying wherein the debugger agent tests the snapshot on a subset of the instances.

Debug Logpoints

  • Debug Logpoints allow you to inject logging into running services without restarting or interfering with the normal function of the service.
  • Debug Logpoints are useful for debugging production issues without having to add log statements and redeploy.
  • Debug Logpoints remain active for 24 hours after creation, or until they are deleted or the service is redeployed.
  • If a logpoint is placed on a line that receives lots of traffic, the Debugger throttles the logpoint to reduce its impact on the application.
  • Debug Logpoints support canarying wherein the debugger agent tests the logpoints on a subset of the instances.

References

Google Cloud Operations

Google Cloud CI/CD – Continuous Integration & Continuous Deployment

Google Cloud CI/CD

Google Cloud CI/CD provides various tools for continuous integration and deployment and also integrates seamlessly with third-party solutions.

[Image: Google Cloud CI/CD – Continuous Integration & Continuous Deployment]

Google Cloud Source Repositories – CSR

  • Cloud Source Repositories are fully-featured, private Git repositories hosted on Google Cloud.
  • Cloud Source Repositories can be used for collaborative, version-controlled development of any app or service, including those that run on App Engine and Compute Engine.
  • Cloud Source Repositories can connect to an existing GitHub or Bitbucket repository. Connected repositories are synchronized with Cloud Source Repositories automatically.
  • Cloud Source Repositories automatically send logs on repository activity to Cloud Logging to help track and troubleshoot data access.
  • Cloud Source Repositories offer security key detection to block git push transactions that contain sensitive information which helps improve the security of the source code.
  • Cloud Source Repositories provide built-in integrations with other GCP tools like Cloud Build, Cloud Debugger, Cloud Operations, Cloud Logging, Cloud Functions, and others that let you automatically build, test, deploy, and debug code within minutes.
  • Cloud Source Repositories publishes messages about the repository to Pub/Sub topic.
  • Cloud Source Repositories provide a search feature to search for specific files or code snippets.
  • Cloud Source Repositories allow permissions to be controlled at the project (all projects) or at the repo level.

Cloud Build

  • Cloud Build is a fully-managed, serverless service that executes builds on Google Cloud Platform’s infrastructure.
  • Cloud Build can pull/import source code from a variety of repositories or cloud storage spaces, execute a build to produce containers or artifacts, and push them to the artifact registry.
  • Cloud Build executes the build as a series of build steps, where each build step specifies an action to be performed and is run in a Docker container.
  • Build steps can be provided by Cloud Build and the Cloud Build community or can be custom as well.
  • Build config file contains instructions for Cloud Build to perform tasks based on your specifications for e.g., the build config file can contain instructions to build, package, and push Docker images.
  • Builds can be started either manually or using build triggers.
  • Cloud Build uses build triggers to enable CI/CD automation.
  • Build triggers can listen for incoming events, such as when a new commit is pushed to a repository or when a pull request is initiated, and then automatically execute a build when new events come in.
  • Cloud Build publishes messages on a Pub/Sub topic called cloud-builds when the build’s state changes, such as when the build is created, when the build transitions to a working state, and when the build completes.
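
A subscriber to the cloud-builds topic receives the build resource as base64-encoded JSON inside a standard Pub/Sub envelope. A minimal decoding sketch, using a simplified, hypothetical payload:

```python
import base64
import json

# Simulated Pub/Sub push envelope carrying a (simplified, hypothetical) build status
build_payload = {"id": "build-123", "status": "SUCCESS"}
envelope = {
    "message": {
        "data": base64.b64encode(json.dumps(build_payload).encode()).decode(),
        "attributes": {"status": "SUCCESS"},
    }
}

# A subscriber (e.g. a Cloud Function) would decode the message like this
build = json.loads(base64.b64decode(envelope["message"]["data"]))
print(build["status"])  # SUCCESS
```

This is the pattern used to fan out build state changes to downstream automation, e.g. triggering a Spinnaker pipeline.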

Container Registry

  • Container Registry is a private container image registry that supports Docker Image Manifest V2 and OCI image formats.
  • Container Registry provides a subset of Artifact Registry features.
  • Container Registry stores its tags and layer files for container images in a Cloud Storage bucket in the same project as the registry.
  • Access to the bucket is configured using Cloud Storage’s identity and access management (IAM) settings.
  • Container Registry integrates seamlessly with Google Cloud services and works with popular continuous integration and continuous delivery systems including Cloud Build and third-party tools such as Jenkins.

Artifact Registry

  • Artifact Registry is a fully-managed service with support for both container images and non-container artifacts; it extends the capabilities of Container Registry.
  • Artifact Registry is the recommended service for container image storage and management on Google Cloud.
  • Artifact Registry comes with fine-grained access control via Cloud IAM. This enables scoping permissions as granularly as possible, for example to specific regions or environments as necessary.
  • Artifact Registry supports the creation of both regional and multi-regional repositories

Container Registry vs Artifact Registry

[Image: Google Cloud Container Registry vs Artifact Registry]

[Image: Google Cloud DevOps – credit Priyanka Vergadia]

GCP Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • GCP services are updated every day, and both the answers and questions might be outdated soon, so research accordingly.
  • GCP exam questions are not updated to keep pace with GCP updates, so even if the underlying feature has changed, the question might not be updated.
  • Open to further feedback, discussion and correction.

 

Google Cloud Container Registry Vs Artifact Registry

Container Registry vs Artifact Registry

[Image: Google Cloud – Container Registry vs Artifact Registry]

Container Registry

  • Container Registry is a private container image registry that supports Docker Image Manifest V2 and OCI image formats.
  • provides a subset of Artifact Registry features.
  • stores its tags and layer files for container images in a Cloud Storage bucket in the same project as the registry.
  • does not support fine-grained IAM access control. Access to the bucket is configured using Cloud Storage’s permissions.
  • integrates seamlessly with Google Cloud services and works with popular continuous integration and continuous delivery systems including Cloud Build and third-party tools such as Jenkins.
  • is used to store only docker images and does not support languages or os packages.
  • is only multi-regional and does not support regional repository.
  • supports a single repository within a project and automatically creates a repository in a multi-region.
  • uses gcr.io hosts.
  • uses gcloud container images commands.
  • supports CMEK (Customer-Managed Encryption Keys) to encrypt the storage buckets that contain the images.
  • supports several authentication methods for pushing and pulling images with a third-party client.
  • caches the most frequently requested Docker Hub images on mirror.gcr.io
  • supports VPC-Service Controls and can be added to a service perimeter.
  • hosts Google provided images on gcr.io
  • publishes changes to the gcr topic.
  • images can be viewed and managed from the Container registry section of Cloud Console.
  • pricing is based on Cloud Storage usage, including storage and network egress.

Artifact Registry

  • Artifact Registry is a fully-managed service with support for both container images and non-container artifacts; it extends the capabilities of Container Registry.
  • Artifact Registry is the recommended service for container image storage and management on Google Cloud. It is considered the successor of the Container Registry.
  • Artifact Registry comes with fine-grained access control via Cloud IAM using Artifact Registry permission. This enables scoping permissions as granularly as possible for e.g. to specific regions or environments as necessary
  • supports multi-regional or regional repositories.
  • uses pkg.dev hosts.
  • uses gcloud artifacts docker commands.
  • supports CMEK (Customer-Managed Encryption Keys) to encrypt individual repositories.
  • supports multiple repositories within the project and the repository should be manually created before pushing any images.
  • supports multiple artifact formats, including Container images, Java packages, and Node.js modules.
  • supports the same authentication method as Container Registry.
  • mirror.gcr.io continues to cache frequently requested images from Docker Hub.
  • supports VPC-Service Controls and can be added to a service perimeter.
  • hosts Google provided images on gcr.io
  • publishes changes to the gcr topic.
  • Artifact Registry and Container Registry repositories can be viewed from the Artifact Registry section of Cloud Console.
  • pricing is based on storage and network egress.
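
The host and repository differences show up directly in image paths and commands; the project, repository, and image names below are hypothetical:

```
# Container Registry (gcr.io host, repository implicit per project)
gcr.io/my-project/my-image:v1

# Artifact Registry (pkg.dev host, explicit region and repository)
us-central1-docker.pkg.dev/my-project/my-repo/my-image:v1

# Listing images with each service's gcloud commands
gcloud container images list --repository=gcr.io/my-project
gcloud artifacts docker images list us-central1-docker.pkg.dev/my-project/my-repo
```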


References

Artifact Registry vs Container Registry Feature Comparison

Google Cloud Certified – Cloud Digital Leader Learning Path

[Image: Google Cloud Certified – Cloud Digital Leader Certificate]

Google Cloud – Cloud Digital Leader Certification Learning Path

Continuing on the Google Cloud journey, I am glad to have passed the seventh certification with the Cloud Digital Leader certification. Google Cloud was missing an initial entry-level certification similar to the AWS Cloud Practitioner certification, which was introduced as the Cloud Digital Leader certification. Cloud Digital Leader focuses on general cloud knowledge and Google Cloud knowledge with its products and services.

Google Cloud – Cloud Digital Leader Certification Summary

  • Had 59 questions (somewhat odd !!) to be answered in 90 minutes.
  • Covers a wide range of General Cloud and Google Cloud services and products knowledge.
  • This exam does not require much hands-on experience; theoretical knowledge is good enough to clear the exam.

Google Cloud – Cloud Digital Leader Certification Resources

Google Cloud – Cloud Digital Leader Certification Topics

General cloud knowledge

  1. Define basic cloud technologies. Considerations include:
    1. Differentiate between traditional infrastructure, public cloud, and private cloud
      1. Traditional infrastructure includes on-premises data centers
      2. Public cloud includes Google Cloud, AWS, and Azure
      3. Private cloud includes services like AWS Outposts
    2. Define cloud infrastructure ownership
    3. Shared Responsibility Model
      1. Security of the Cloud is Google Cloud’s responsibility
      2. Security on the Cloud depends on the services used and is shared between Google Cloud and the Customer
    4. Essential characteristics of cloud computing
      1. On-demand computing
      2. Pay-as-you-use
      3. Scalability and Elasticity
      4. High Availability and Resiliency
      5. Security
  2. Differentiate cloud service models. Considerations include:
    1. Infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS)
      1. IaaS – everything is done by you – more flexibility more management
      2. PaaS – most of the things are done by Cloud with few things done by you – moderate flexibility and management
      3. SaaS – everything is taken care of by the Cloud, you would just use it – no flexibility and management
    2. Describe the trade-offs between level of management versus flexibility when comparing cloud services
    3. Define the trade-offs between costs versus responsibility
    4. Appropriate implementation and alignment with given budget and resources
  3. Identify common cloud procurement financial concepts. Considerations include:
    1. Operating expenses (OpEx), capital expenditures (CapEx), and total cost of operations (TCO)
      1. On-premises has more CapEx and less OpEx
      2. Cloud has little to no CapEx and more OpEx
    2. Recognize the relationship between OpEx and CapEx related to networking and compute infrastructure
    3. Summarize the key cost differentiators between cloud and on-premises environments

General Google Cloud knowledge

  1. Recognize how Google Cloud meets common compliance requirements. Considerations include:
    1. Locating current Google Cloud compliance requirements
    2. Familiarity with Compliance Reports Manager
  2. Recognize the main elements of Google Cloud resource hierarchy. Considerations include:
    1. Describe the relationship between organization, folders, projects, and resources i.e. Organization -> Folder -> Folder or Projects -> Resources
  3. Describe controlling and optimizing Google Cloud costs. Considerations include:
    1. Google Cloud billing models and applicability to different service classes
    2. Define a consumption-based use model
    3. Application of discounts (e.g., flat-rate, committed-use discounts [CUD], sustained-use discounts [SUD])
      1. Sustained-use discounts [SUD] are automatic discounts for running specific resources for a significant portion of the billing month
      2. Committed use discounts [CUD] provide deeply discounted prices for VM usage in return for committed use contracts
  4. Describe Google Cloud’s geographical segmentation strategy. Considerations include:
    1. Regions are collections of zones. Zones have high-bandwidth, low-latency network connections to other zones in the same region. Regions help design fault-tolerant and highly available solutions.
    2. Zones are deployment areas within a region and provide the lowest latency usually less than 10ms
    3. Regional resources are accessible by any resources within the same region
    4. Zonal resources are hosted in a zone and are called per-zone resources.
    5. Multiregional resources or Global resources are accessible by any resource in any zone within the same project.
  5. Define Google Cloud support options. Considerations include:
    1. Distinguish between billing support, technical support, role-based support, and enterprise support
      1. Role-Based Support provides more predictable rates and a flexible configuration. Although they are legacy, the exam does cover these.
      2. Enterprise Support provides the fastest case response times and a dedicated Technical Account Management (TAM) contact who helps you execute a Google Cloud strategy.
    2. Recognize a variety of Service Level Agreement (SLA) applications
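
The sustained-use discount mentioned above can be sketched numerically; the incremental tier rates below follow the N1 machine-type model (an assumption to verify against current pricing):

```python
# Incremental SUD tiers for N1 machine types: each quarter of the month
# is billed at a decreasing fraction of the base rate (assumed tier model).
tiers = [1.00, 0.80, 0.60, 0.40]  # 0-25%, 25-50%, 50-75%, 75-100% of the month

def effective_rate(fraction_of_month_used: float) -> float:
    """Average billed fraction of the base rate for the given usage."""
    billed = 0.0
    for i, rate in enumerate(tiers):
        tier_start = i * 0.25
        used_in_tier = min(max(fraction_of_month_used - tier_start, 0.0), 0.25)
        billed += used_in_tier * rate
    return billed / fraction_of_month_used

# A full month averages (1.0 + 0.8 + 0.6 + 0.4) / 4 = 0.70 -> a 30% discount
print(round(effective_rate(1.0), 2))   # 0.7
print(round(effective_rate(0.5), 2))   # 0.9  (half a month: only tiers 1.0 and 0.8)
```

Unlike CUDs, this discount is applied automatically with no upfront commitment.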

Google Cloud products and services

  1. Describe the benefits of Google Cloud virtual machine (VM)-based compute options. Considerations include:
    1. Compute Engine provides virtual machines (VM) hosted on Google’s infrastructure.
    2. Google Cloud VMware Engine helps easy lift and shift VMware-based applications to Google Cloud without changes to the apps, tools, or processes
    3. Bare Metal lets businesses run specialized workloads such as Oracle databases close to Google Cloud while lowering overall costs and reducing risks associated with migration
    4. Custom versus standard sizing
    5. Free, premium, and custom service options
    6. Attached storage/disk options
    7. Preemptible VMs are instances that can be created and run at a much lower price than normal instances, but Compute Engine may stop (preempt) them at any time
  2. Identify and evaluate container-based compute options. Considerations include:
    1. Define the function of a container registry
      1. Container Registry is a single place to manage Docker images, perform vulnerability analysis, and decide who can access what with fine-grained access control.
    2. Distinguish between VMs, containers, and Google Kubernetes Engine
  3. Identify and evaluate serverless compute options. Considerations include:
    1. Define the function and use of App Engine, Cloud Functions, and Cloud Run
    2. Define rationale for versioning with serverless compute options
    3. Cost and performance tradeoffs of scale to zero
      1. Scale to zero provides cost efficiency by scaling down to zero instances when there is no load, but comes with the issue of cold starts
      2. Serverless technologies like Cloud Functions, Cloud Run, and the App Engine standard environment provide these capabilities
  4. Identify and evaluate multiple data management offerings. Considerations include:
    1. Describe the differences and benefits of Google Cloud’s relational and non-relational database offerings
      1. Cloud SQL provides fully managed, relational SQL databases and offers MySQL, PostgreSQL, and SQL Server databases as a service
      2. Cloud Spanner provides fully managed, horizontally scalable, relational SQL databases with joins and secondary indexes
      3. Cloud Bigtable provides a scalable, fully managed, non-relational NoSQL wide-column analytical big data database service suitable for low-latency single-point lookups and precalculated analytics
      4. BigQuery provides fully managed, no-ops, OLAP, enterprise data warehouse (EDW) with SQL and fast ad-hoc queries.
    2. Describe Google Cloud’s database offerings and how they compare to commercial offerings
  5. Distinguish between ML/AI offerings. Considerations include:
    1. Describe the differences and benefits of Google Cloud’s hardware accelerators (e.g., Vision API, AI Platform, TPUs)
    2. Identify when to train your own model, use a Google Cloud pre-trained model, or build on an existing model
      1. Vision API provides out-of-the-box pre-trained models to extract data from images
      2. AutoML provides the ability to train models
      3. BigQuery Machine Learning provides support for limited models and SQL interface
  6. Differentiate between data movement and data pipelines. Considerations include:
    1. Describe Google Cloud’s data pipeline offerings
      1. Cloud Pub/Sub provides reliable, many-to-many, asynchronous messaging between applications. By decoupling senders and receivers, Google Cloud Pub/Sub allows developers to communicate between independently written applications.
      2. Cloud Dataflow is a fully managed service for strongly consistent, parallel data-processing pipelines
      3. Cloud Data Fusion is a fully managed, cloud-native, enterprise data integration service for quickly building & managing data pipelines
      4. BigQuery Service is a fully managed, highly scalable data analysis service that enables businesses to analyze Big Data.
      5. Looker provides an enterprise platform for business intelligence, data applications, and embedded analytics.
    2. Define data ingestion options
  7. Apply use cases to a high-level Google Cloud architecture. Considerations include:
    1. Define Google Cloud’s offerings around the Software Development Life Cycle (SDLC)
    2. Describe Google Cloud’s platform visibility and alerting offerings – covers Cloud Monitoring and Cloud Logging
  8. Describe solutions for migrating workloads to Google Cloud. Considerations include:
    1. Identify data migration options
    2. Differentiate when to use Migrate for Compute Engine versus Migrate for Anthos
      1. Migrate for Compute Engine provides fast, flexible, and safe migration to Google Cloud
      2. Migrate for Anthos and GKE makes it fast and easy to modernize traditional applications away from virtual machines and into native containers. This significantly reduces the cost and labor that would be required for a manual application modernization project.
    3. Distinguish between lift and shift versus application modernization
      1. Lift and shift involves migration with zero to minimal changes and is usually performed under time constraints
      2. Application modernization requires a redesign of infra and applications and takes time. It can include moving legacy monolithic architecture to microservices architecture, building CI/CD pipelines for automated builds and deployments, frequent releases with zero downtime, etc.
  9. Describe networking to on-premises locations. Considerations include:
    1. Define Software-Defined WAN (SD-WAN) – did not have any questions regarding the same.
    2. Determine the best connectivity option based on networking and security requirements – covers Cloud VPN, Interconnect, and Peering.
    3. Private Google Access provides access from VM instances to Google-provided services like Cloud Storage or third-party provided services
  10. Define identity and access features. Considerations include:
    1. Cloud Identity & Access Management (Cloud IAM) provides administrators the ability to manage cloud resources centrally by controlling who can take what action on specific resources.
    2. Google Cloud Directory Sync enables administrators to synchronize users, groups, and other data from an Active Directory/LDAP service to their Google Cloud domain directory.

Google Cloud Compute Options


Compute Engine

  • provides Infrastructure as a Service (IaaS) in the Google Cloud
  • provides full control/flexibility on the choice of OS, resources like CPU and memory
  • Usage patterns
    • lift and shift migrations of existing systems
    • existing VM images to move to the cloud
    • need low-level access to or fine-grained control of the operating system, network, and other operational characteristics.
    • require custom kernel or arbitrary OS
    • software that can’t be easily containerized
    • using a third party licensed software
  • Usage anti-patterns
    • containerized applications – Choose App Engine, GKE, or Cloud Run
    • stateless event-driven applications – Choose Cloud Functions

App Engine

  • helps build highly scalable web and mobile backend applications on a fully managed serverless platform
  • Usage patterns
    • Rapidly developing CRUD-heavy applications
    • HTTP/S based applications
    • Deploying complex APIs
  • Usage anti-patterns
    • Stateful applications requiring lots of in-memory states to meet the performance or functional requirements
    • Systems that require protocols other than HTTP

Google Kubernetes Engine – GKE

  • provides a managed environment for deploying, managing, and scaling containerized applications using Google infrastructure.
  • Usage patterns
    • containerized applications or those that can be easily containerized
    • Hybrid or multi-cloud environments
    • Systems leveraging stateful and stateless services
    • Strong CI/CD Pipelines
  • Usage anti-patterns
    • non-containerized applications – Choose CE or App Engine
    • applications requiring very low-level access to the underlying hardware like custom kernel, networking, etc. – Choose CE
    • stateless event-driven applications – Choose Cloud Functions

Cloud Run

  • provides a serverless managed compute platform to run stateless, isolated containers without orchestration that can be invoked via web requests or Pub/Sub events.
  • abstracts away all infrastructure management allowing users to focus on building great applications.
  • is built on Knative.
  • Usage patterns
    • Stateless services that are easily containerized
    • Event-driven applications and systems
    • Applications that require custom system and language dependencies
  • Usage anti-patterns
    • Highly stateful systems
    • Systems that require protocols other than HTTP
    • Compliance requirements that demand strict controls over the low-level environment and infrastructure (might be okay with the Knative GKE mode)
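Cloud Run's contract for the stateless containers described above is that the container listens for requests on the port supplied in the `PORT` environment variable (defaulting to 8080). A stdlib-only sketch of that startup logic; the handler body is an illustrative assumption:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

def get_port(env):
    """Cloud Run injects PORT into the environment; fall back to 8080 locally."""
    return int(env.get("PORT", "8080"))

class Handler(BaseHTTPRequestHandler):
    # Stateless handling: nothing here relies on state surviving between
    # requests, which is what lets Cloud Run scale instances to zero.
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    port = get_port(os.environ)
    # In a container this would block until Cloud Run stops the instance:
    # HTTPServer(("", port), Handler).serve_forever()
    print(port)
```

The same contract is why highly stateful systems are an anti-pattern: an instance can be stopped (or never started) at any time, so state must live outside the container.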

Cloud Functions

  • provides serverless compute for event-driven apps
  • Usage patterns
    • ephemeral and event-driven applications and functions
    • fully managed environment
    • pay only for what you use
    • quick data transformations (ETL)
  • Usage anti-patterns
    • continuously running or stateful applications – Choose CE, App Engine, or GKE
Credit @ https://thecloudgirl.dev/
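An event-driven function of the kind listed in the patterns above can be sketched as a background Cloud Function triggered by a Cloud Storage event. The `bucket` and `name` fields are the standard object attributes in the event payload; the handler name and body are hypothetical illustrations:

```python
# Sketch of a background (event-driven) Cloud Function for the Python runtime.
# In a real deployment this lives in main.py and is wired to a Cloud Storage
# trigger; `event` is a dict describing the changed object.

def on_image_upload(event, context):
    """Hypothetical handler: react to a new object landing in a bucket."""
    bucket = event["bucket"]
    name = event["name"]
    # Real code would generate a thumbnail here; we just report the object.
    return f"processing gs://{bucket}/{name}"

# Local smoke test with a fake event payload (context is unused here).
print(on_image_upload({"bucket": "uploads", "name": "cat.png"}, None))
# → processing gs://uploads/cat.png
```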

Google Cloud Compute Options Decision Tree


GCP Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • GCP services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • GCP exam questions are not updated to keep pace with GCP updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Your organization is developing a new application. This application responds to events created by already running applications. The business goal for the new application is to scale to handle spikes in the flow of incoming events while minimizing administrative work for the team. Which Google Cloud product or feature should you choose?
    1. Cloud Run
    2. Cloud Run for Anthos
    3. App Engine standard environment
    4. Compute Engine
  2. A company wants to build an application that stores images in a Cloud Storage bucket and wants to generate thumbnails as well as resize the images. They want to use managed service which will help them scale automatically from zero to scale and back to zero. Which GCP service satisfies the requirement?
    1. Google Compute Engine
    2. Google Kubernetes Engine
    3. Google App Engine
    4. Cloud Functions

Google Cloud Composer

Cloud Composer

  • Cloud Composer is a fully managed workflow orchestration service, built on Apache Airflow, enabling workflow creation that spans across clouds and on-premises data centers.
  • Cloud Composer requires no installation and has no management overhead.
  • Cloud Composer integrates with Cloud Logging and Cloud Monitoring to provide a central place to view all Airflow service and workflow logs.

Cloud Composer Components

  • Cloud Composer helps define a series of tasks as Workflow executed within an Environment
  • Workflows are created using DAGs, or Directed Acyclic Graphs
  • DAG is a collection of tasks that are scheduled and executed, organized in a way that reflects their relationships and dependencies.
  • DAGs are stored in Cloud Storage
  • Each Task can represent anything from ingestion, transform, filtering, monitoring, preparing, etc.
  • Environments are self-contained Airflow deployments based on Google Kubernetes Engine, and they work with other Google Cloud services using connectors built into Airflow.
  • Cloud Composer environment is a wrapper around Apache Airflow with components like GKE Cluster, Web Server, Database, Cloud Storage.
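The DAG concept above — tasks organized by their dependencies and executed in a valid order — can be modeled with nothing but the standard library. This is an illustration of dependency ordering only, not Airflow code (real DAGs are Python files written against Airflow's API and stored in Cloud Storage); the task names are made up:

```python
# Toy model of a workflow DAG: each task maps to the set of upstream tasks
# it depends on, and a topological sort yields a valid execution order.
from graphlib import TopologicalSorter

dag = {
    "transform": {"ingest"},   # transform runs after ingest
    "load": {"transform"},     # load runs after transform
    "notify": {"load"},        # notify runs last
}

order = list(TopologicalSorter(dag).static_order())
print(order)  # → ['ingest', 'transform', 'load', 'notify']
```

The "acyclic" part matters: if `ingest` also depended on `notify`, no valid order would exist and the sort would raise a cycle error, which is why Airflow rejects cyclic task graphs.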

GCP Certification Exam Practice Questions

  1. Your company has a hybrid cloud initiative. You have a complex data pipeline that moves data between cloud provider services and leverages services from each of the cloud providers. Which cloud-native service should you use to orchestrate the entire pipeline?
    1. Cloud Dataflow
    2. Cloud Composer
    3. Cloud Dataprep
    4. Cloud Dataproc
  2. Your company is working on a multi-cloud initiative. The data processing pipelines require creating workflows that connect data, transfer data, processing, and using services across clouds. What cloud-native tool should be used for orchestration?
    1. Cloud Scheduler
    2. Cloud Dataflow
    3. Cloud Composer
    4. Cloud Dataproc

Google Cloud Dataflow vs Dataproc


Cloud Dataproc

  • Cloud Dataproc is a managed Spark and Hadoop service that lets you take advantage of open-source data tools for batch processing, querying, streaming, and machine learning.
  • Cloud Dataproc provides a Hadoop cluster on GCP and access to Hadoop-ecosystem tools (e.g. Apache Pig, Hive, and Spark); this has strong appeal if you are already familiar with Hadoop tools and have existing Hadoop jobs
  • Ideal for Lift and Shift migration of existing Hadoop environment
  • Requires manual provisioning of clusters
  • Consider Dataproc
    • If you have a substantial investment in Apache Spark or Hadoop on-premise and considering moving to the cloud
    • If you are looking at a Hybrid cloud and need portability across a private/multi-cloud environment
    • If in the current environment Spark is the primary machine learning tool and platform
    • In case the code depends on any custom packages along with distributed computing need

Cloud Dataflow

  • Google Cloud Dataflow is a fully managed, serverless service for unified stream and batch data processing requirements
  • Consider Dataflow when using it as a pre-processing pipeline for ML models that will be trained in GCP AI Platform Training (earlier called Cloud ML Engine)
  • Consider Dataflow if none of the above considerations for Cloud Dataproc apply
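Dataflow's "unified stream and batch" model means the same chain of transforms serves both a finite batch and an unbounded stream of elements. A stdlib-only stand-in for that idea (Apache Beam is not used here; the word-count transforms are illustrative):

```python
# Beam-style idea in plain Python: the transform chain is written once and
# applied uniformly, whether `lines` is a finite list (batch) or a generator
# yielding elements over time (stream-like).
from collections import Counter

def count_words(lines):
    words = (w.lower() for line in lines for w in line.split())
    return Counter(words)

batch = ["To be", "or not to be"]
print(count_words(batch)["to"])  # → 2

# The same function accepts a generator, standing in for a streaming source.
stream = (line for line in ["to be", "or not"])
print(count_words(stream)["to"])  # → 1
```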

Cloud Dataflow vs Dataproc Decision Tree

Dataflow vs Dataproc

Dataflow vs Dataproc Table

GCP Certification Exam Practice Questions

  1. Your company is forecasting a sharp increase in the number and size of Apache Spark and Hadoop jobs being run on your local data center. You want to utilize the cloud to help you scale this upcoming demand with the least amount of operations work and code change. Which product should you use?
    1. Google Cloud Dataflow
    2. Google Cloud Dataproc
    3. Google Compute Engine
    4. Google Container Engine
  2. A startup plans to use a data processing platform, which supports both batch and streaming applications. They would prefer to have a hands-off/serverless data processing platform to start with. Which GCP service is suited for them?
    1. Dataproc
    2. Dataprep
    3. Dataflow
    4. BigQuery

Google Cloud BigQuery Data Transfer Service

Cloud BigQuery Data Transfer Service

  • BigQuery Data Transfer Service automates data movement into BigQuery on a scheduled, managed basis
  • After a data transfer is configured, the BigQuery Data Transfer Service automatically loads data into BigQuery on a regular basis.
  • BigQuery Data Transfer Service can also initiate data backfills to recover from any outages or gaps.
  • BigQuery Data Transfer Service can only sink data to BigQuery and cannot be used to transfer data out of BigQuery.

BigQuery Data Transfer Service Sources

  • BigQuery Data Transfer Service supports loading data from the following data sources:
    • Google Software as a Service (SaaS) apps
    • Campaign Manager
    • Cloud Storage
    • Google Ad Manager
    • Google Ads
    • Google Merchant Center (beta)
    • Google Play
    • Search Ads 360 (beta)
    • YouTube Channel reports
    • YouTube Content Owner reports
    • External cloud storage providers
      • Amazon S3
    • Data warehouses
      • Teradata
      • Amazon Redshift

GCP Certification Exam Practice Questions

  1. Your company uses Google Analytics for tracking. You need to export the session and hit data from a Google Analytics 360 reporting view on a scheduled basis into BigQuery for analysis. How can the data be exported?
    1. Configure a scheduler in Google Analytics to convert the Google Analytics data to JSON format, then import directly into BigQuery using bq command line.
    2. Use gsutil to export the Google Analytics data to Cloud Storage, then import into BigQuery and schedule it using Cron.
    3. Import data to BigQuery directly from Google Analytics using Cron
    4. Use BigQuery Data Transfer Service to import the data from Google Analytics
