Google Cloud Hybrid Connectivity

Google Cloud provides a range of network connectivity options to meet hybrid connectivity needs, using public networks, peering, or interconnect technologies

Google Cloud Hybrid Connectivity Options

Public Network Connectivity

A standard internet connection can be used to connect Google Cloud with the on-premises environment if it meets the bandwidth needs.

Cloud VPN

  • provides secure, private connectivity using IPSec
  • connects an on-premises network to a VPC, or two VPCs in GCP
  • traffic flows via the VPN tunnel but is still routed over the public internet
  • traffic is encrypted by one gateway and decrypted by the other
  • allows users to access private RFC1918 addresses on resources in the VPC from on-prem computers also using private RFC1918 addresses.
  • can be used with Private Google Access for on-premises hosts
  • provides a guaranteed 99.99% availability SLA when using HA VPN
  • supports only site-to-site VPN
  • supports up to 3 Gbps per tunnel with a maximum of 8 tunnels
  • supports static as well as dynamic routing using Cloud Router
  • supports IKEv1 or IKEv2 using a shared secret
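
The Cloud VPN setup above can be sketched with gcloud; the resource names, region, peer IP, ASN, and shared secret below are illustrative placeholders, not values from this document.

```shell
# Sketch of an HA VPN setup (names, region, IP, ASN, and secret are placeholders).
# Create the HA VPN gateway in the VPC.
gcloud compute vpn-gateways create ha-vpn-gw \
    --network=my-vpc --region=us-central1

# Cloud Router handles dynamic (BGP) route exchange over the tunnels.
gcloud compute routers create vpn-router \
    --network=my-vpc --region=us-central1 --asn=65010

# Describe the on-premises peer VPN gateway by its public IP.
gcloud compute external-vpn-gateways create on-prem-gw \
    --interfaces=0=203.0.113.10

# Create an IPsec tunnel (IKEv2 with a shared secret).
gcloud compute vpn-tunnels create tunnel-0 \
    --vpn-gateway=ha-vpn-gw --peer-external-gateway=on-prem-gw \
    --peer-external-gateway-interface=0 --interface=0 \
    --router=vpn-router --region=us-central1 \
    --ike-version=2 --shared-secret=MY_SHARED_SECRET
```

Note that the 99.99% SLA requires tunnels on both HA VPN gateway interfaces; the sketch above configures only one.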

Peering

  • Peering provides better connectivity to Google Cloud as compared to the public connection. However, the connectivity is still not RFC1918-to-RFC1918 private address connectivity.
  • Peering gets your network as close as possible to Google Cloud public IP addresses.

Direct Peering

  • requires you to lease co-lo space and install and support routing equipment in a Google Point Of Presence (PoP).
  • supports BGP over a link to exchange network routes.
  • All traffic destined to Google rides over this new link, while traffic to other sites on the internet rides your regular internet connection.

Carrier Peering

  • preferred if installing equipment isn’t an option, or if you would prefer to work with a service provider partner as an intermediary to peer with Google
  • connection to Google is via a new link connection installed to a partner carrier that is already connected to the Google network itself.
  • supports BGP or static routing over that link.
  • All traffic destined to Google rides over this new link.
  • Traffic to other sites on the internet rides your regular internet connection.

Interconnect

  • Interconnects are similar to peering in that the connections get your network as close as possible to the Google network.
  • Interconnects differ from peering as they provide connectivity using private address space into the Google VPC.
  • For RFC1918-to-RFC1918 private address connectivity, either a dedicated or partner interconnect is required

Dedicated Interconnect

  • provides private, high-performance connectivity to Google Cloud
  • requires you to lease co-lo space and install and support routing equipment in a Google Point Of Presence (PoP).
  • requires installing a link directly to Google by choosing a 10 Gbps or 100 Gbps pipe and provisioning a VLAN attachment over the physical link
  • gives the RFC1918-to-RFC1918 private address connectivity.
  • All traffic destined to the Google Cloud VPC rides over this new link.
  • Traffic to other sites on the internet rides the regular internet connection.
  • A single Interconnect connection does not offer HA; GCP recommends redundancy using 2 (99.9%) or 4 (99.99%) interconnect connections so that if one connection fails, the others can continue to serve traffic

Partner Interconnect

  • provides private, high-performance connectivity to Google Cloud
  • preferred if bandwidth requirements are below 10 Gbps, installing equipment isn’t an option, or you would prefer to work with a service provider partner as an intermediary
  • similar to carrier peering in that you connect to a partner service provider that is directly connected to Google.
  • supports BGP or static routing over that link.
  • requires provisioning a VLAN attachment over the physical link
  • gives the RFC1918-to-RFC1918 private address connectivity.
  • All traffic destined to your Google VPC rides over this new link.
  • Traffic to other sites on the internet rides your regular internet connection.

Google Cloud Hybrid Connectivity Decision Tree


Google Cloud Interconnect

  • Google Cloud Interconnect provides two options for extending the on-premises network to the VPC networks in Google Cloud.
    • Dedicated Interconnect (Dedicated connection) provides a direct physical connection between the on-premises network and Google’s network
    • Partner Interconnect (Use a service provider) provides connectivity between the on-premises and VPC networks through a supported service provider.
  • Cloud Interconnect provides access to all Google Cloud products and services from the on-premises network except Google Workspace.
  • Cloud Interconnect also allows access to supported APIs and services by using Private Google Access from on-premises hosts.

Dedicated Interconnect

  • Dedicated Interconnect provides direct physical connections between the on-premises network and Google’s network.
  • Dedicated Interconnect enables the transfer of large amounts of data between networks, which can be more cost-effective than purchasing additional bandwidth over the public internet.
  • Dedicated Interconnect requires your network to physically meet Google’s network in a colocation facility with your own routing equipment
  • Dedicated Interconnect supports only dynamic routing
  • Dedicated Interconnect supports bandwidth from 10 Gbps minimum to 200 Gbps maximum.
  • VLAN attachment should be associated with a Cloud Router.
  • Cloud Router creates a BGP session for the VLAN attachment and its corresponding on-premises peer router.
  • Cloud Router receives the routes that the on-premises router advertises. These routes are added as custom dynamic routes in the VPC network.
  • Cloud Router also advertises routes for Google Cloud resources to the on-premises peer router.

Google Cloud Dedicated Interconnect

Dedicated Interconnect Provisioning

  • Find a colocation facility with a GCP Point of Presence (PoP) that offers Dedicated Interconnect connections
  • Order an Interconnect connection so that Google can allocate the necessary resources and send a Letter of Authorization and Connecting Facility Assignment (LOA-CFA).
  • LOA-CFA is sent via email to the NOC (technical contact) or can be downloaded from the Google Cloud console.
  • Submit the LOA-CFA to the vendor so that they can provision the Interconnect connections between Google’s network and your network.
  • Configure and test the connections with Google before you can use them.
  • Create VLAN attachments to allocate a VLAN on the connection.
  • Configure the on-premises router to establish a BGP session with the Cloud Router
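
The ordering and VLAN-attachment steps above can be sketched with gcloud; the interconnect name, location, network, region, ASN, and customer name below are illustrative placeholders.

```shell
# Sketch of Dedicated Interconnect provisioning (all names/locations are placeholders).
# Order the Interconnect connection; Google then issues the LOA-CFA.
gcloud compute interconnects create my-interconnect \
    --interconnect-type=DEDICATED \
    --link-type=LINK_TYPE_ETHERNET_10G_LR \
    --requested-link-count=1 \
    --location=iad-zone1-1 \
    --customer-name="Example Corp"

# Cloud Router that will run BGP over the VLAN attachment.
gcloud compute routers create ic-router \
    --network=my-vpc --region=us-east4 --asn=65020

# Allocate a VLAN on the connection by creating a VLAN attachment.
gcloud compute interconnects attachments dedicated create my-attachment \
    --interconnect=my-interconnect \
    --router=ic-router --region=us-east4
```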

Dedicated Interconnect Redundancy

  • Single Dedicated Interconnect connection does not offer redundancy or high availability
  • Google recommends redundancy using 2 (99.9%) or 4 (99.99%) interconnect connections so that if one connection fails, the other connection can continue to serve traffic
  • Redundant Interconnect connection with 2 connections must be created in the same metropolitan area (city) as the existing one, but in a different edge availability domain (metro availability zone).
  • Redundant Interconnect connection with 4 connections must be created with 2 connections in two different metropolitan areas (city), and each connection in a different edge availability domain (metro availability zone)
  • Dynamic routing mode for the VPC network must be global so that Cloud Router can advertise all subnets and propagate learned routes to all subnets regardless of the subnet’s region.

Google Cloud Dedicated Interconnect Redundancy

Partner Interconnect

  • Partner Interconnect provides connectivity between the on-premises network and the VPC network through a supported service provider
  • A Partner Interconnect connection is useful if the data center is in a physical location that can’t reach a Dedicated Interconnect colocation facility, or the data needs don’t warrant an entire 10-Gbps connection.
  • Partner Interconnect supports bandwidth from 50 Mbps minimum to 10 Gbps maximum.
  • Service providers have existing physical connections to Google’s network that they make available for their customers to use.
  • After the connectivity with a service provider is established, a Partner Interconnect connection from the service provider can be requested.
  • After the service provider provisions the connection, you can start passing traffic between your networks by using the service provider’s network.
  • Partner Interconnect provides Layer 2 and Layer 3 connectivity
    • For Layer 2 connections
      • you must configure and establish a BGP session between the Cloud Routers and on-premises routers for each created VLAN attachment
      • BGP configuration information is provided by the VLAN attachment after your service provider has configured it.
    • For Layer 3 connections
      • The service provider establishes a BGP session between the Cloud Routers and their edge routers for each VLAN attachment.
      • You don’t need to configure BGP on the on-premises router. Google and the service provider automatically set the correct configuration

Google Cloud Partner Interconnect

Partner Interconnect Provisioning

  • Connect the on-premises network to a supported service provider.
  • Create a VLAN attachment for a Partner Interconnect connection in the Google Cloud project, which generates a unique pairing key that must be used to request a connection from the service provider.
  • Activate the connection
  • Depending on the connection, either you or your service provider then establishes a Border Gateway Protocol (BGP) session.
  • Partner Interconnect provisioning does not require LOA-CFA
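
The provisioning steps above can be sketched with gcloud; the attachment and router names, region, and availability domain below are illustrative placeholders.

```shell
# Sketch of Partner Interconnect provisioning (names are placeholders).
# Create the VLAN attachment; its pairing key is handed to the service provider.
gcloud compute interconnects attachments partner create my-partner-attachment \
    --region=us-central1 --router=ic-router \
    --edge-availability-domain=availability-domain-1

# Read the generated pairingKey to request the connection from the provider.
gcloud compute interconnects attachments describe my-partner-attachment \
    --region=us-central1 --format="value(pairingKey)"

# Activate the attachment once the provider has configured it.
gcloud compute interconnects attachments partner update my-partner-attachment \
    --region=us-central1 --admin-enabled
```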

Partner Interconnect Redundancy

  • Single Partner Interconnect connection does not offer redundancy or high availability
  • 99.9% availability requires
    • At least two VLAN attachments in a single Google Cloud region, in separate edge availability domains (metro availability zones).
    • At least one Cloud Router, connected to both VLAN attachments.
  • 99.99% availability requires
    • At least four VLAN attachments across two metros, one in each edge availability domain (metro availability zone)
    • Two Cloud Routers (one in each Google Cloud region of a VPC network).
    • Associate one Cloud Router with each pair of VLAN attachments.
  • Dynamic routing mode for the VPC network must be global so that Cloud Router can advertise all subnets and propagate learned routes to all subnets regardless of the subnet’s region.

Google Cloud Partner Interconnect Redundancy - Layer 2

Cloud Interconnect Security

  • Cloud Interconnect does not encrypt the connection between your network and Google’s network.
  • Currently, Cloud VPN can’t be used with Dedicated Interconnect.
  • For additional security, use application-level encryption or your own VPN.

Dedicated Interconnect vs Partner Interconnect

  • When choosing between Dedicated Interconnect and Partner Interconnect, consider the connection requirements, such as the connection location and capacity.
    • If you can’t physically meet Google’s network in a colocation facility to reach your VPC networks, you can use Partner Interconnect to connect to service providers that connect directly to Google.
    • If you have high bandwidth needs, Dedicated Interconnect can be a cost-effective solution.
    • If you require a lower bandwidth solution, Partner Interconnect provides capacity options starting at 50 Mbps.

GCP Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • GCP services are updated every day, so both the questions and answers might become outdated soon; research accordingly.
  • GCP exam questions are not always updated to keep pace with GCP updates, so even if the underlying feature has changed, the question might not be.
  • Open to further feedback, discussion and correction.
  1. Your company has decided to build a backup replica of their on-premises user authentication PostgreSQL database on Google
    Cloud Platform. The database is 4 TB, and large updates are frequent. Replication requires private address space communication.
    Which networking approach should you use?

    1. Google Cloud Dedicated Interconnect
    2. Google Cloud VPN connected to the data center network
    3. A NAT and TLS translation gateway installed on-premises
    4. A Google Compute Engine instance with a VPN server installed connected to the data center network
  2. A company wants to connect cloud applications to an Oracle database in its data center. Requirements are a maximum of 20 Gbps
    of data and a Service Level Agreement (SLA) of 99%. Which option best suits the requirements?

    1. Implement a high-throughput Cloud VPN connection
    2. Cloud Router with VPN
    3. Dedicated Interconnect
    4. Partner Interconnect
  3. A company wants to connect cloud applications to an Oracle database in its data center. Requirements are a maximum of 9 Gbps
    of data and a Service Level Agreement (SLA) of 99%. Which option best suits the requirements?

    1. Implement a high-throughput Cloud VPN connection
    2. Cloud Router with VPN
    3. Dedicated Interconnect
    4. Partner Interconnect

Google Cloud Data Transfer Services

Google Cloud Data Transfer services provide various network options and transfer tools to help move data from on-premises to Google Cloud

Network Services

Cloud VPN

  • Provides network connectivity between the on-premises network and Google Cloud, or from Google Cloud to another cloud provider.
  • Cloud VPN still routes the traffic through the Internet.
  • Cloud VPN is quick to set up (as compared to Interconnect)
  • Each Cloud VPN tunnel can support up to 3 Gbps total for ingress and egress, but available bandwidth depends on the connectivity
  • Choose Cloud VPN to encrypt traffic to Google Cloud, for a lower throughput solution, or when experimenting with migrating workloads to Google Cloud

Cloud Interconnect

  • Cloud Interconnect offers a direct connection to Google Cloud through Google or one of the Cloud Interconnect service providers.
  • Cloud Interconnect service prevents data from going on the public internet and can provide a more consistent throughput for large data transfers
  • For an enterprise-grade connection to Google Cloud with higher throughput requirements, choose Dedicated Interconnect (10 Gbps to 200 Gbps) or Partner Interconnect (50 Mbps to 10 Gbps)
  • Cloud Interconnect provides access to all Google Cloud products and services from your on-premises network except Google Workspace.
  • Cloud Interconnect also allows access to supported APIs and services by using Private Google Access from on-premises hosts.

Direct Peering

  • Direct Peering provides access to the Google network with fewer network hops than with a public internet connection
  • By using Direct Peering, internet traffic is exchanged between the customer network and Google’s Edge Points of Presence (PoPs), which means the data does not use the public internet.

Google Cloud Networking Services Decision Tree

Transfer Services

gsutil

  • gsutil tool is the standard tool for small- to medium-sized transfers (less than 1 TB) over a typical enterprise-scale network, from a private data center to Google Cloud.
  • gsutil provides all the basic features needed to manage the Cloud Storage instances, including copying the data to and from the local file system and Cloud Storage.
  • gsutil can also move, rename and remove objects and perform real-time incremental syncs, like rsync, to a Cloud Storage bucket.
  • gsutil is especially useful in the following scenarios:
    • performing as-needed transfers, or transfers driven by command-line sessions of your users.
    • transferring only a few files or very large files, or both.
    • consuming the output of a program (streaming output to Cloud Storage)
    • watching a directory with a moderate number of files and syncing any updates with very low latency.
  • gsutil provides the following features
    • Parallel multi-threaded transfers with gsutil -m, increasing transfer speeds.
    • Composite transfers that break a single large file into smaller chunks to increase transfer speed. Chunks are transferred and validated in parallel, sending all data to Google. Once the chunks arrive at Google, they are combined (referred to as compositing) to form a single object.
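
The gsutil features above look like this in practice; the local paths and bucket name are illustrative placeholders.

```shell
# Parallel multi-threaded copy of a directory tree (-m) to a bucket.
gsutil -m cp -r ./data gs://my-bucket/data

# Incremental, rsync-style sync of a local directory with a bucket.
gsutil -m rsync -r ./data gs://my-bucket/data

# Parallel composite upload: files above the threshold are split into
# chunks, uploaded in parallel, and composited into a single object.
gsutil -o "GSUtil:parallel_composite_upload_threshold=150M" \
    cp large-file.iso gs://my-bucket/
```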
Storage Transfer Service

  • Storage Transfer Service is a fully managed, highly scalable service to automate transfers from other public clouds into Cloud Storage.
  • Storage Transfer Service for Cloud-to-Cloud transfers
    • supports transfers into Cloud Storage from S3 and HTTP.
    • supports daily copies of any modified objects.
    • doesn’t currently support data transfers to S3.
  • Storage Transfer Service also supports data transfers for on-premises data transfers from network file system (NFS) storage to Cloud Storage.
  • Storage Transfer Service for on-premises data
    • is designed for large-scale transfers (up to petabytes of data, billions of files).
    • supports full copies or incremental copies
    • can be setup by installing on-premises software (known as agents) onto computers in the data center.
    • has a simple, managed graphical user interface; even non-technically savvy users (after setup) can use it to move data.
    • provides robust error-reporting and a record of all files and objects that are moved.
    • supports executing recurring transfers on a schedule.
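
A scheduled cloud-to-cloud transfer can be sketched with the gcloud transfer command group; the bucket names and job name are illustrative placeholders, and flag names may vary by gcloud version.

```shell
# Sketch: a daily S3-to-Cloud-Storage transfer job (bucket names are placeholders).
gcloud transfer jobs create s3://source-bucket gs://destination-bucket \
    --name=daily-s3-sync \
    --schedule-repeats-every=1d
```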

Transfer Appliance

  • Transfer Appliance is an excellent option for performing large-scale transfers, especially when a fast network connection is unavailable, it’s too costly to acquire more bandwidth, or it’s a one-time transfer
  • Expected turnaround time for a network appliance to be shipped, loaded with the data, shipped back, and rehydrated on Google Cloud is 50 days.
  • Consider Transfer Appliance, if the online transfer timeframe is calculated to be substantially more than this timeframe.
  • Transfer Appliance requires the ability to receive and ship back the Google-owned hardware.
  • Transfer Appliance is available only in certain countries.

BigQuery Data Transfer Service

  • BigQuery Data Transfer Service automates data movement into BigQuery on a scheduled, managed basis
  • After a data transfer is configured, the BigQuery Data Transfer Service automatically loads data into BigQuery on a regular basis.
  • BigQuery Data Transfer Service can also initiate data backfills to recover from any outages or gaps.
  • BigQuery Data Transfer Service can only sink data to BigQuery and cannot be used to transfer data out of BigQuery.
  • BigQuery Data Transfer Service supports loading data from the following data sources:
    • Google Software as a Service (SaaS) apps
    • Campaign Manager
    • Cloud Storage
    • Google Ad Manager
    • Google Ads
    • Google Merchant Center (beta)
    • Google Play
    • Search Ads 360 (beta)
    • YouTube Channel reports
    • YouTube Content Owner reports
    • External cloud storage providers
      • Amazon S3
    • Data warehouses
      • Teradata
      • Amazon Redshift
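
A transfer configuration for one of the sources above can be sketched with the bq tool; the dataset, table, bucket, and credential values are illustrative placeholders.

```shell
# Sketch: schedule recurring loads from Amazon S3 into BigQuery
# (dataset, table, bucket, and AWS credentials are placeholders).
bq mk --transfer_config \
    --data_source=amazon_s3 \
    --target_dataset=my_dataset \
    --display_name="Daily S3 load" \
    --params='{"data_path":"s3://my-bucket/*","destination_table_name_template":"my_table","file_format":"CSV","access_key_id":"AWS_ACCESS_KEY","secret_access_key":"AWS_SECRET_KEY"}'
```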

Transfer Data vs Speed Comparison

Data Migration Speeds

GCP Certification Exam Practice Questions

  • Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
  • GCP services are updated every day, so both the questions and answers might become outdated soon; research accordingly.
  • GCP exam questions are not always updated to keep pace with GCP updates, so even if the underlying feature has changed, the question might not be.
  • Open to further feedback, discussion and correction.
  1. A company wants to connect cloud applications to an Oracle database in its data center. Requirements are a maximum of 9 Gbps of data and a Service Level Agreement (SLA) of 99%. Which option best suits the requirements?
    1. Implement a high-throughput Cloud VPN connection
    2. Cloud Router with VPN
    3. Dedicated Interconnect
    4. Partner Interconnect
  2. An organization wishes to automate data movement from Software as a Service (SaaS) applications such as Google Ads and Google Ad Manager on a scheduled, managed basis. This data is further needed for analytics and generating reports. How can the process be automated?
    1. Use Storage Transfer Service to move the data to Cloud Storage
    2. Use Storage Transfer Service to move the data to BigQuery
    3. Use BigQuery Data Transfer Service to move the data to BigQuery
    4. Use Transfer Appliance to move the data to Cloud Storage
  3. Your company’s migration team needs to transfer 1PB of data to Google Cloud. The network speed between the on-premises data center and Google Cloud is 100Mbps.
    The migration activity has a timeframe of 6 months. What is the efficient way to transfer the data?

    1. Use BigQuery Data Transfer Service to transfer the data to Cloud Storage
    2. Expose the data as a public URL and Storage Transfer Service to transfer it
    3. Use Transfer appliance to transfer the data to Cloud Storage
    4. Use gsutil command to transfer the data to Cloud Storage
  4. Your company uses Google Analytics for tracking. You need to export the session and hit data from a Google Analytics 360 reporting view on a scheduled basis into BigQuery for analysis. How can the data be exported?
    1. Configure a scheduler in Google Analytics to convert the Google Analytics data to JSON format, then import directly into BigQuery using bq command line.
    2. Use gsutil to export the Google Analytics data to Cloud Storage, then import into BigQuery and schedule it using Cron.
    3. Import data to BigQuery directly from Google Analytics using Cron
    4. Use BigQuery Data Transfer Service to import the data from Google Analytics


Google Cloud Networking Services Cheat Sheet

Virtual Private Cloud

  • Virtual Private Cloud (VPC) provides networking functionality for the cloud-based resources and services that is global, scalable, and flexible.
  • VPC networks are global resources, including the associated routes and firewall rules, and are not associated with any particular region or zone.
  • Subnets are regional resources and each subnet defines a range of IP addresses
  • Network firewall rules
    • control the traffic to and from instances.
    • Rules are implemented on the VMs themselves, so traffic can only be controlled and logged as it leaves or arrives at a VM.
    • Firewall rules are defined to allow or deny traffic and are evaluated in order of a defined priority
    • The highest priority (lowest integer) rule applicable to a target for a given type of traffic takes precedence
  • Resources within a VPC network can communicate with one another by using internal IPv4 addresses, subject to applicable network firewall rules.
  • Private access options for services allow instances with internal IP addresses to communicate with Google APIs and services.
  • Shared VPC allows keeping a VPC network in a common host project, shared with service projects. Authorized IAM members from other projects in the same organization can create resources that use subnets of the Shared VPC network
  • VPC Network Peering allows VPC networks to be connected with other VPC networks in different projects or organizations.
  • VPC networks can be securely connected in hybrid environments by using Cloud VPN or Cloud Interconnect.
  • Primary and Secondary IP address cannot overlap with the on-premises CIDR
  • VPC networks only support IPv4 unicast traffic. They do not support broadcast, multicast, or IPv6 traffic within the network; VMs in the VPC network can only send to IPv4 destinations and only receive traffic from IPv4 sources.
  • VPC Flow Logs records a sample of network flows sent from and received by VM instances, including instances used as GKE nodes.
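
The VPC, subnet, and firewall concepts above can be sketched with gcloud; the network name, region, and IP range are illustrative placeholders.

```shell
# Sketch: custom-mode VPC with a regional subnet and an internal-traffic
# firewall rule (names and ranges are placeholders).
gcloud compute networks create my-vpc --subnet-mode=custom

# Subnets are regional resources defining an IP range.
gcloud compute networks subnets create my-subnet \
    --network=my-vpc --region=us-central1 --range=10.10.0.0/24

# Lower integer = higher priority; this rule allows internal traffic.
gcloud compute firewall-rules create allow-internal \
    --network=my-vpc --priority=1000 \
    --allow=tcp,udp,icmp --source-ranges=10.10.0.0/24
```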

Cloud Load Balancing

  • Cloud Load Balancing is a fully distributed, software-defined managed load balancing service
  • distributes user traffic across multiple instances of the applications and reduces the risk of performance issues by spreading the load
  • provides health checking mechanisms that determine if backends, such as instance groups and zonal network endpoint groups (NEGs), are healthy and properly respond to traffic.
  • supports IPv6 clients with HTTP(S) Load Balancing, SSL Proxy Load Balancing, and TCP Proxy Load Balancing.
  • supports multiple Cloud Load Balancing types
    • Internal HTTP(S) Load Balancing
      • is a proxy-based, regional Layer 7 load balancer that enables running and scaling services behind an internal IP address.
      • supports a regional backend service, which distributes HTTP and HTTPS requests to healthy backends (either instance groups containing Compute Engine VMs or NEGs containing GKE containers).
      • supports path-based routing
      • preserves the Host header of the original client request and also appends two IP addresses (client and LB) to the X-Forwarded-For header
      • supports a regional health check that periodically monitors the readiness of the backends.
      • has native support for the WebSocket protocol when using HTTP or HTTPS as the protocol to the backend
    • External HTTP(S) Load Balancing
      • is a global, proxy-based Layer 7 load balancer that enables running and scaling the services worldwide behind a single external IP address
      • distributes HTTP and HTTPS traffic to backends hosted on Compute Engine and GKE
      • offers global (cross-regional) and regional load balancing
      • supports content-based load balancing using URL maps
      • preserves the Host header of the original client request and also appends two IP addresses (Client and LB) to the X-Forwarded-For header
      • supports connection draining on backend services
      • has native support for the WebSocket protocol when using HTTP or HTTPS as the protocol to the backend
      • does not support client certificate-based authentication, also known as mutual TLS authentication.
    • Internal TCP/UDP Load Balancing
      • is a managed, internal, pass-through, regional Layer 4 load balancer that enables running and scaling services behind an internal IP address
      • distributes traffic among VM instances in the same region in a Virtual Private Cloud (VPC) network by using an internal IP address.
      • provides high-performance, pass-through Layer 4 load balancer for TCP or UDP traffic.
      • routes original connections directly from clients to the healthy backends, without any interruption.
      • does not terminate SSL traffic and SSL traffic can be terminated by the backends instead of by the load balancer
      • provides access through VPC Network Peering, Cloud VPN or Cloud Interconnect
      • supports health check that periodically monitors the readiness of the backends.
    • External TCP/UDP Network Load Balancing
      • is a managed, external, pass-through, regional Layer 4 load balancer that distributes TCP or UDP traffic originating from the internet among VM instances in the same region
      • Load-balanced packets are received by backend VMs with their source IP unchanged.
      • Load-balanced connections are terminated by the backend VMs. Responses from the backend VMs go directly to the clients, not back through the load balancer.
      • The scope of a network load balancer is regional, not global. A network load balancer cannot span multiple regions. Within a single region, the load balancer services all zones.
      • supports connection tracking table and a configurable consistent hashing algorithm to determine how traffic is distributed to backend VMs.
      • does not support Network endpoint groups (NEGs) as backends
    • External SSL Proxy Load Balancing
      • is a reverse proxy load balancer that distributes SSL traffic coming from the internet to VM instances in the VPC network.
      • with SSL traffic, user SSL (TLS) connections are terminated at the load balancing layer, and then proxied to the closest available backend instances by using either SSL (recommended) or TCP.
      • supports global load balancing service with the Premium Tier
      • supports regional load balancing service with the Standard Tier
      • is intended for non-HTTP(S) traffic. For HTTP(S) traffic, GCP recommends using HTTP(S) Load Balancing.
      • supports proxy protocol header to preserve the original source IP addresses of incoming connections to the load balancer
      • does not support client certificate-based authentication, also known as mutual TLS authentication.
    • External TCP Proxy Load Balancing
      • is a reverse proxy load balancer that distributes TCP traffic coming from the internet to VM instances in the VPC network
      • terminates traffic coming over a TCP connection at the load balancing layer, and then forwards to the closest available backend using TCP or SSL
      • uses a single IP address for all users worldwide and automatically routes traffic to the backends that are closest to the user
      • supports global load balancing service with the Premium Tier
      • supports regional load balancing service with the Standard Tier
      • supports proxy protocol header to preserve the original source IP addresses of incoming connections to the load balancer
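
A minimal external HTTP(S) load balancer wiring the pieces above together can be sketched with gcloud; all resource names, and the assumed existing instance group "web-ig", are illustrative placeholders.

```shell
# Sketch: minimal external HTTP(S) Load Balancing setup in front of an
# existing instance group "web-ig" (all names are placeholders).
gcloud compute health-checks create http basic-hc --port=80

gcloud compute backend-services create web-backend \
    --protocol=HTTP --health-checks=basic-hc --global

gcloud compute backend-services add-backend web-backend \
    --instance-group=web-ig --instance-group-zone=us-central1-a --global

# URL maps enable content-based (path) routing; here only a default service.
gcloud compute url-maps create web-map --default-service=web-backend

gcloud compute target-http-proxies create web-proxy --url-map=web-map

gcloud compute forwarding-rules create web-fr \
    --global --target-http-proxy=web-proxy --ports=80
```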

Cloud CDN

  • caches website and application content closer to the user
  • uses Google’s global edge network to serve content closer to users, which accelerates the websites and applications.
  • works with external HTTP(S) Load Balancing to deliver content to the users
  • Cloud CDN content can be sourced from various types of backends
    • Instance groups
    • Zonal network endpoint groups (NEGs)
    • Serverless NEGs: One or more App Engine, Cloud Run, or Cloud Functions services
    • Internet NEGs, for endpoints that are outside of Google Cloud (also known as custom origins)
    • Buckets in Cloud Storage
  • Cloud CDN with Google Cloud Armor enforces security policies only for requests for dynamic content, cache misses, or other requests that are destined for the origin server. Cache hits are served even if the downstream Google Cloud Armor security policy would prevent that request from reaching the origin server.
  • recommends
    • using versioning instead of cache invalidation
    • using custom cache keys to improve the cache hit ratio
    • caching static content
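
Enabling Cloud CDN on an existing load balancer backend is a one-flag change; the backend service and URL map names below are illustrative placeholders.

```shell
# Enable Cloud CDN on an existing global backend service (name is a placeholder).
gcloud compute backend-services update web-backend --enable-cdn --global

# Invalidation is possible, but versioned URLs are the recommended pattern.
gcloud compute url-maps invalidate-cdn-cache web-map --path="/images/*"
```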

Cloud VPN

  • securely connects the peer network to the VPC network or two VPCs in GCP through an IPsec VPN connection.
  • encrypts the data as it travels over the internet.
  • only supports site-to-site IPsec VPN connectivity and not client-to-gateway scenarios
  • allows users to access private RFC1918 addresses on resources in the VPC from on-prem computers also using private RFC1918 addresses.
  • can be used with Private Google Access for on-premises hosts
  • Cloud VPN HA
    • provides a highly available and secure connection between the on-premises network and the VPC network through an IPsec VPN connection in a single region
    • provides an SLA of 99.99% service availability, when configured with two interfaces and two external IP addresses.
  • supports up to 3 Gbps per tunnel with a maximum of 8 tunnels
  • supports static as well as dynamic routing using Cloud Router
  • supports IKEv1 or IKEv2 using a shared secret

Cloud Interconnect

  • Cloud Interconnect provides two options for extending the on-premises network to the VPC networks in Google Cloud.
  • Dedicated Interconnect (Dedicated connection)
    • provides a direct physical connection between the on-premises network and Google’s network
    • requires your network to physically meet Google’s network in a colocation facility with your own routing equipment
    • supports only dynamic routing
    • supports bandwidth from 10 Gbps minimum to 200 Gbps maximum.
  • Partner Interconnect (Use a service provider)
    • provides connectivity between the on-premises and VPC networks through a supported service provider.
    • supports bandwidth from 50 Mbps minimum to 10 Gbps maximum.
    • provides Layer 2 and Layer 3 connectivity
      • For Layer 2 connections, you must configure and establish a BGP session between the Cloud Routers and on-premises routers for each created VLAN attachment
      • For Layer 3 connections, the service provider establishes a BGP session between the Cloud Routers and their edge routers for each VLAN attachment.
  • A single Interconnect connection does not offer redundancy or high availability, and it’s recommended to
    • use 2 connections in the same metropolitan area (city) as the existing one, but in a different edge availability domain (metro availability zone).
    • use 4 connections, with 2 connections in two different metropolitan areas (city), and each connection in a different edge availability domain (metro availability zone)
    • provision one Cloud Router in each Google Cloud region
  • Cloud Interconnect does not encrypt the connection between your network and Google’s network. For additional security, use application-level encryption or your own VPN.
  • Currently, Cloud VPN can’t be used with Dedicated Interconnect.

Cloud Router

  • is a fully distributed, managed service that provides dynamic routing and scales with the network traffic.
  • works with both legacy networks and VPC networks.
  • isn’t supported for Direct Peering or Carrier Peering connections.
  • helps dynamically exchange routes between the Google Cloud networks and the on-premises network.
  • peers with the on-premises VPN gateway or router to provide dynamic routing and exchanges topology information through BGP.
  • Google Cloud recommends creating two Cloud Routers in each region for a Cloud Interconnect for 99.99% availability.
  • supports the following dynamic routing modes
    • Regional routing mode – provides visibility to resources only in the defined region.
    • Global routing mode – provides visibility to resources in all regions
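
The Cloud Router and BGP peering described above can be sketched with gcloud; the names, ASNs, interface, and link-local peer IP below are illustrative placeholders.

```shell
# Sketch: Cloud Router with a BGP peer, plus global dynamic routing on the
# VPC (names, ASNs, interface, and peer IP are placeholders).
gcloud compute routers create my-router \
    --network=my-vpc --region=us-central1 --asn=65001

# Peer with the on-premises router to exchange routes via BGP.
gcloud compute routers add-bgp-peer my-router \
    --region=us-central1 --interface=if-tunnel-0 \
    --peer-name=on-prem-peer --peer-asn=65002 \
    --peer-ip-address=169.254.0.2

# Global routing mode lets the router advertise subnets from all regions.
gcloud compute networks update my-vpc --bgp-routing-mode=global
```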

Cloud DNS

  • is a high-performance, resilient, reliable, low-latency, global DNS service that publishes the domain names to the global DNS in a cost-effective way.
  • With Shared VPC, Cloud DNS managed private zone, Cloud DNS peering zone, or Cloud DNS forwarding zone must be created in the host project
  • provides Private Zone which supports DNS services for a GCP project. VPCs in the same project can use the same name servers
  • supports DNS Forwarding for Private Zones, which overrides normal DNS resolution for the specified zones. Queries for the specified zones are forwarded to the listed forwarding targets.
  • supports DNS Peering, which allows sending requests for records that come from one zone’s namespace to another VPC network within GCP
  • supports DNS Outbound Policy, which forwards all DNS requests for a VPC network to the specified server targets. It disables internal DNS for the selected networks.
  • Cloud DNS VPC Name Resolution Order
    • DNS Outbound Server Policy
    • DNS Forwarding Zone
    • DNS Peering
    • Compute Engine internal DNS
    • Public Zones
  • supports DNSSEC, a feature of DNS, that authenticates responses to domain name lookups and protects the domains from spoofing and cache poisoning attacks
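
The private zone, forwarding zone, and DNSSEC features above can be sketched with gcloud; the zone names, domains, network, and resolver IP below are illustrative placeholders.

```shell
# Sketch: private zone, forwarding zone, and DNSSEC (names/IPs are placeholders).
gcloud dns managed-zones create private-zone \
    --dns-name="internal.example.com." \
    --description="Private zone for my-vpc" \
    --visibility=private --networks=my-vpc

# Forwarding zone: queries for this namespace go to the on-prem resolver.
gcloud dns managed-zones create corp-forwarding-zone \
    --dns-name="corp.example.com." \
    --description="Forward to on-prem DNS" \
    --visibility=private --networks=my-vpc \
    --forwarding-targets=192.168.1.1

# Enable DNSSEC on a public zone.
gcloud dns managed-zones update my-public-zone --dnssec-state=on
```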