Google Cloud – Professional Data Engineer Certification learning path

I just recertified my Google Cloud Certified – Professional Data Engineer certification. It has already been two long years since my first attempt at the Data Engineer exam, which lasted 4 hours with 95 questions. Once again, as with the other Google Cloud certification exams, the Data Engineer exam covers not only the full gamut of services and concepts but also tests logical thinking and practical experience.

Google Cloud – Professional Cloud Data Engineer Certification Summary

  • The Cloud Data Engineer exam had 50 questions to be answered in 2 hours
  • Covers a wide range of data services, including machine learning, along with other topics such as storage and security.
  • Exam does not cover any case studies
  • Although the exam covers the latest services, it has not been updated for Cloud Monitoring and Logging and still refers to Stackdriver.
  • Nothing much on Compute and Network is covered
  • Questions sometimes test your logical thinking rather than any concept regarding Google Cloud.
  • Hands-on is a MUST. If you have not worked on GCP before, make sure you do lots of labs, else you will be absolutely clueless about some of the questions and commands.
  • Be sure that NO online courses or practice tests are going to cover everything. I did Coursera and Linux Academy, which are really vast, but hands-on, practical knowledge is a MUST.

Google Cloud – Professional Cloud Data Engineer Certification Resources

Google Cloud – Professional Cloud Data Engineer Certification Topics

Data & Analytics Services

  • Obviously, there are lots and lots of data and related services
  • Google Cloud Data & Analytics Services Cheatsheet
  • Know the Big Data stack and understand which service fits the different layers of ingest, store, process, analytics
  • Cloud BigQuery
    • provides scalable, fully managed enterprise data warehouse (EDW) with SQL and fast ad-hoc queries.
    • ideal for storage and analytics.
    • provides the same cost-effective option for storage as Cloud Storage
    • understand BigQuery Security
      • use BigQuery IAM access roles to control data and querying access
      • use Authorized views to access control tables, columns within tables, and query results. HINT: Authorized views need to reside in a different dataset as compared to the source dataset.
      • support data encryption
    • understand BigQuery Best Practices including key strategy, cost optimization, partitioning, and clustering
      • use dry run to estimate costs
      • use partitioning and clustering to limit the amount of data scanned
      • using external data sources might result in query performance degradation, and it's better to import the data
    • Dataset location can be set ONLY at the time of its creation.
    • supports schema auto-detection for JSON and CSV files.
    • understand how BigQuery Streaming works
    • know BigQuery limitations esp. with updates and inserts
    • supports an external data source (federated data source)
      • which is a data source that can be queried directly even though the data is not stored in BigQuery.
      • offers support for querying data directly from:
        • Cloud Bigtable
        • Cloud Storage
        • Google Drive
        • Cloud SQL
      • Use Permanent table for querying an external data source multiple times
      • Use Temporary table for querying an external data source for one-time, ad-hoc queries over external data, or for extract, transform, and load (ETL) processes.
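Partition pruning is easiest to see with numbers. Below is a minimal pure-Python sketch, with a made-up table and byte counts for illustration, of why filtering on the partitioning column limits the bytes scanned, and hence the cost, of a BigQuery query:

```python
from datetime import date

# Hypothetical daily-partitioned table: bytes stored per partition
# (illustrative numbers, not real BigQuery figures).
partitions = {
    date(2021, 1, 1): 50_000_000,
    date(2021, 1, 2): 60_000_000,
    date(2021, 1, 3): 55_000_000,
    date(2021, 1, 4): 40_000_000,
}

def bytes_scanned(start, end):
    """Only partitions matching the filter are scanned, as with a
    WHERE clause on the partitioning column in BigQuery."""
    return sum(b for d, b in partitions.items() if start <= d <= end)

full_scan = sum(partitions.values())
pruned = bytes_scanned(date(2021, 1, 3), date(2021, 1, 4))
print(full_scan, pruned)  # 205000000 95000000 - pruning scans a fraction
```

A dry run against the real service reports the same "bytes processed" estimate before you pay for the query.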
  • Cloud Bigtable
    • provides column database suitable for both low-latency single-point lookups and precalculated analytics
    • understand that Bigtable is not for long-term storage, as it is quite expensive
    • know the differences with HBase
    • Know how to measure performance and scale
    • supports Development and Production modes. Development mode can be upgraded to Production, but not vice versa.
    • supports HDD and SSD storage, chosen during cluster creation. HDD cannot be switched to SSD in place; the data must be exported to a new SSD instance.
    • understand Bigtable Replication. Can be used to separate real-time and batch workloads on the same instance using application profiles.
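Row-key design drives Bigtable performance: rows are stored in lexicographic key order, so a key that starts with a timestamp funnels all current writes to one node. A small sketch (device names are hypothetical) of the field-promotion pattern:

```python
# Hotspotting sketch: Bigtable sorts rows lexicographically by key, so a
# timestamp-prefixed key sends all current writes to one key range.
# Prefixing with a high-cardinality field (e.g. a device id) spreads the load.
def bad_key(ts, device):       # all current writes land on adjacent keys
    return f"{ts}#{device}"

def good_key(ts, device):      # field promotion: device id first
    return f"{device}#{ts}"

rows = [(1000, "dev-a"), (1001, "dev-b"), (1002, "dev-a")]
# keys for the same device stay contiguous (fast scans per device),
# while different devices spread across the key space
print(sorted(good_key(ts, d) for ts, d in rows))
```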
  • Cloud Pub/Sub
    • as the messaging service to capture real-time streaming data, esp. IoT
    • is designed to provide reliable, many-to-many, asynchronous messaging between applications
    • guarantees at-least-once (but not exactly-once) message delivery, and can result in duplicate data if a message is not acknowledged within the configured deadline.
    • how it compares to Kafka (HINT: Pub/Sub provides only 7 days of message retention, vs Kafka, where retention depends on the storage provisioned)
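Because delivery is at-least-once, subscribers should be idempotent. A minimal sketch (plain Python, not the Pub/Sub client API) of deduplicating redeliveries by message ID:

```python
# At-least-once delivery sketch: the same message may arrive twice if the
# ack deadline expires before the first ack lands. An idempotent subscriber
# dedupes on the message id before doing any work.
seen_ids = set()
processed = []

def handle(message_id, payload):
    if message_id in seen_ids:   # redelivery: skip, already processed
        return
    seen_ids.add(message_id)
    processed.append(payload)

# "msg-1" is redelivered because its first ack was lost (simulated)
for mid, data in [("msg-1", "a"), ("msg-2", "b"), ("msg-1", "a")]:
    handle(mid, data)

print(processed)  # ['a', 'b'] - the duplicate was dropped
```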
  • Cloud Dataflow
    • to process, transform, transfer data and the key service to integrate store and analytics.
    • know how to improve a Dataflow performance
    • understand Apache Beam features as well
      • understand PCollections, Transforms, ParDo and what they do
      • understand windowing, watermarks, triggers Hint: windowing and watermarks can be used to handle delayed messages
    • supports drain feature to finish existing jobs but stop processing new ones, usually useful for deploying incompatible breaking changes
    • canceling a job will lead to an immediate stop and in-flight data loss.
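The windowing/watermark interplay can be sketched in plain Python. Window size and lateness values here are arbitrary assumptions; a real pipeline would use Beam's `FixedWindows` with an `allowed_lateness` setting:

```python
# Fixed-window sketch (Beam-style concepts in plain Python): events are
# grouped into 60s windows; a watermark says "events up to time T have
# arrived", and events behind the watermark by more than the allowed
# lateness are dropped.
WINDOW = 60
ALLOWED_LATENESS = 30

def window_start(event_ts):
    return (event_ts // WINDOW) * WINDOW

windows = {}

def process(event_ts, watermark):
    if event_ts < watermark - ALLOWED_LATENESS:
        return "dropped"          # too late even with allowed lateness
    windows.setdefault(window_start(event_ts), []).append(event_ts)
    return "accepted"

print(process(100, watermark=110))  # on time -> accepted into window [60,120)
print(process(95, watermark=120))   # late, but within lateness -> accepted
print(process(10, watermark=120))   # beyond allowed lateness -> dropped
```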
  • Cloud Dataprep
    • to clean and prepare data. It can be used for anomaly detection.
    • does not need any programming language knowledge and can be done through the graphical interface
    • be sure to know or try hands-on on a dataset
  • Cloud Dataproc
    • to handle existing Hadoop/Spark jobs
    • supports connector for BigQuery, Bigtable, Cloud Storage
    • supports ephemeral clusters, and with the Cloud Storage connector, the data can be stored in GCS instead of HDFS
    • you need to know how to improve the performance of the Hadoop cluster as well :). Know how to configure the cluster to use all the cores (hint – spark executor cores) and handle out-of-memory errors (hint – executor memory)
    • Secondary workers can be used to scale with the below limitations
      • Processing only with no data storage
      • No secondary-worker-only clusters
      • Persistent disk size is used for local caching of data and is not available through HDFS.
    • how to install other components (hint – initialization actions)
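The executor-sizing arithmetic behind those hints can be sketched with illustrative numbers (these are assumptions, not Dataproc defaults): divide each worker's cores among executors, then give each executor its share of the worker's memory.

```python
# Illustrative Spark executor sizing (hypothetical worker shape):
# spark.executor.cores controls cores per executor, and the worker's
# memory is split across the executors via spark.executor.memory.
worker_cores = 16
worker_mem_gb = 64
cores_per_executor = 4                      # spark.executor.cores

executors_per_worker = worker_cores // cores_per_executor
mem_per_executor_gb = worker_mem_gb // executors_per_worker  # spark.executor.memory

print(executors_per_worker, mem_per_executor_gb)  # 4 executors x 16 GB each
```

Too few cores per executor leaves cores idle; too much memory per executor starves the other executors and triggers out-of-memory errors.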
  • Cloud Datalab
    • is an interactive tool for exploration, transformation, analysis, and visualization of your data on Google Cloud Platform
    • based on Jupyter
  • Cloud Composer
    • fully managed workflow orchestration service, based on Apache Airflow, enabling workflow creation that spans across clouds and on-premises data centers.
    • pipelines are configured as directed acyclic graphs (DAGs)
    • workflows can live on-premises, in multiple clouds, or fully within GCP.
    • provides the ability to author, schedule, and monitor the workflows in a unified manner
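The DAG idea behind Composer/Airflow can be sketched without Airflow itself: each task runs only after all of its upstream tasks finish (the pipeline and task names below are hypothetical):

```python
# Minimal DAG scheduling sketch: task -> list of upstream dependencies.
deps = {
    "extract": [],
    "transform": ["extract"],
    "load": ["transform"],
    "notify": ["load", "transform"],
}

def run_order(deps):
    """Return one valid execution order: a task runs only once all of
    its upstream tasks are done (topological order of the DAG)."""
    done, order = set(), []
    while len(done) < len(deps):
        for task, ups in deps.items():
            if task not in done and all(u in done for u in ups):
                done.add(task)
                order.append(task)
    return order

print(run_order(deps))  # ['extract', 'transform', 'load', 'notify']
```

In Airflow the same structure is declared with operators and the `>>` dependency syntax, and the scheduler handles ordering, retries, and monitoring.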

Identity Services

  • Cloud IAM 
    • provides administrators the ability to manage cloud resources centrally by controlling who can take what action on specific resources.
    • Understand how IAM works and how rules apply esp. the hierarchy from Organization -> Folder -> Project -> Resources
    • Understand IAM Best practices

Storage Services

  • Understand each storage service option and its use cases.
  • Cloud Storage
    • cost-effective object storage for unstructured data.
    • very important to know the different classes and their use cases esp. Regional and Multi-Regional (frequent access), Nearline (monthly access), and Coldline (yearly access)
    • Understand Signed URL to give temporary access and the users do not need to be GCP users
    • Understand permissions – IAM vs ACLs (fine-grained control)
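The mechanics behind a Signed URL can be sketched with a plain HMAC. This is a simplified illustration, not the actual GCS V4 signing format (in practice you would call the client library's `generate_signed_url`); the host and key below are made up:

```python
import base64
import hashlib
import hmac

# Simplified signed-URL sketch: the signer HMACs the verb, object path and
# expiry with a secret key. Anyone holding the URL can use it until expiry,
# without needing to be a GCP user, and can't alter it without breaking
# the signature.
SECRET = b"hypothetical-service-account-key"

def sign_url(verb, path, expires_epoch):
    to_sign = f"{verb}\n{path}\n{expires_epoch}".encode()
    sig = base64.urlsafe_b64encode(
        hmac.new(SECRET, to_sign, hashlib.sha256).digest()
    ).decode()
    return f"https://storage.example.com{path}?Expires={expires_epoch}&Signature={sig}"

url = sign_url("GET", "/my-bucket/report.csv", 1700000000)
print(url)
```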
  • Cloud SQL
    • is a fully-managed service that provides MySQL and PostgreSQL only.
    • Limited to 10TB and is a regional service.
    • No direct options for Oracle yet.
  • Cloud Spanner
    • is a fully managed, mission-critical relational database service.
    • provides a scalable online transaction processing (OLTP) database with high availability and strong consistency at a global scale.
    • globally distributed and can scale and handle more than 10TB.
    • not a direct replacement and would need migration
  • Cloud Datastore
    • provides document database for web and mobile applications. Datastore is not for analytics
    • Understand Datastore indexes and how to update indexes for Datastore

Machine Learning

  • Google expects a Data Engineer to also know some of the Data Scientist's stuff
  • Understand the different algorithms
    • Supervised Learning (labeled data)
      • Classification (for e.g. Spam or Not)
      • Regression (for e.g. Stock or House prices)
    • Unsupervised Learning (unlabeled data)
      • Clustering (for e.g. categories)
    • Reinforcement Learning
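A toy illustration of the supervised/unsupervised split (the data points and labels below are invented): classification needs labeled examples, while clustering simply groups unlabeled points by similarity.

```python
# Supervised: a 1-nearest-neighbour classifier uses the labels.
labeled = [(1.0, "spam"), (1.2, "spam"), (8.0, "ham"), (8.5, "ham")]

def classify(x):
    """Predict the label of the closest labeled example."""
    return min(labeled, key=lambda p: abs(p[0] - x))[1]

# Unsupervised: no labels, just grouping points by a similarity criterion.
unlabeled = [1.1, 8.2, 0.9, 8.7]
clusters = {
    "low": [v for v in unlabeled if v < 5],
    "high": [v for v in unlabeled if v >= 5],
}

print(classify(1.5), clusters)
```

Regression follows the same supervised pattern but predicts a continuous value (a price) instead of a class.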
  • Know Cloud ML with Tensorflow
  • Know all the Cloud AI products which include
    • Cloud Vision
    • Cloud Natural Language
    • Cloud Speech-to-Text
    • Cloud Video Intelligence
    • Cloud Dialogflow
  • Cloud AutoML products, which can help you get started without much machine learning experience

Monitoring

  • Cloud Monitoring and Logging
    • provides everything from monitoring, alerting, error reporting, metrics, diagnostics, debugging, and tracing.
    • remember audits are mainly checking Cloud Logging entries
    • Aggregated sink can then route log entries from the organization or folder, plus (recursively) from any contained folders, billing accounts, or projects

Security Services

Other Services

  • Storage Transfer Service 
    • allows import of large amounts of online data into Google Cloud Storage, quickly and cost-effectively. "Online" is the key here, as the supported sources are AWS S3, HTTP/HTTPS, and other GCS buckets. If the data is on-premises, you need to use the gsutil command instead.
  • Transfer Appliance 
    • to transfer large amounts of data quickly and cost-effectively into Google Cloud Platform. Check the data size; the appliance is always compared against the Storage Transfer Service or gsutil commands.
  • BigQuery Data Transfer Service
    • to integrate with third-party services and load data into BigQuery

10 thoughts on “Google Cloud – Professional Data Engineer Certification learning path”

  1. Hi Jayendra
    Thanks for your detailed explanation.

    Would it make a difference if I appear for Associate Cloud Engineer prior to Data Engineer one? I have no experience on GCP platform but I come from data background.

    Appreciate your feedback.

    1. It would not make much difference, as the services covered by each exam are different, with very little overlap, maybe 20% at the concept level.
      Data Engineer is quite a tough exam to crack, so make sure you prepare well.
      Data Engineer is quite a tough exam to crack, so make sure you prepare well.

      1. Thank you, Jayendra for your feedback and directions. I will give-in my 100% and follow your guidance as per this post.

  2. Hi dude, seems there is a change in the pattern starting Mar 29. Do you have any info on that? Can you please provide some info if you have any?

    1. The pattern listed is based on the pilot exam, which has replaced the old pattern. The new exam doesn’t have any case studies.

    1. Hi Saurabh, I had appeared for the pilot exam for Data Engineering and it should already be in line with the latest exam.

  3. Hi Jayendra,

    Did you get any programming questions to solve using any of the services? I have little knowledge on programming. Wanted to know if I have to go through some basics of Python or other language to solve anything in the exam?

    Thanks
    Surya

Comments are closed.