Google Cloud Networking Services Cheat Sheet

Virtual Private Cloud

  • Virtual Private Cloud (VPC) provides global, scalable, and flexible networking for cloud-based resources and services.
  • VPC networks are global resources, including the associated routes and firewall rules, and are not associated with any particular region or zone.
  • Subnets are regional resources; each subnet defines a range of IP addresses.
  • Network firewall rules
    • control traffic to and from instances.
    • Rules are implemented on the VMs themselves, so traffic can only be controlled and logged as it leaves or arrives at a VM.
    • Firewall rules allow or deny traffic and are evaluated in order of a defined priority (see the sketch after this list)
    • The highest-priority rule (lowest integer value) applicable to a target for a given type of traffic takes precedence
  • Resources within a VPC network can communicate with one another by using internal IPv4 addresses, subject to applicable network firewall rules.
  • Private access options for services allow instances with internal IP addresses to communicate with Google APIs and services.
  • Shared VPC keeps a VPC network in a common host project that is shared with service projects. Authorized IAM members from other projects in the same organization can create resources that use subnets of the Shared VPC network
  • VPC Network Peering allows VPC networks to be connected with other VPC networks in different projects or organizations.
  • VPC networks can be securely connected in hybrid environments by using Cloud VPN or Cloud Interconnect.
  • Primary and secondary IP ranges cannot overlap with the on-premises CIDR
  • VPC networks only support IPv4 unicast traffic. They do not support broadcast, multicast, or IPv6 traffic within the network; VMs in the VPC network can only send to IPv4 destinations and only receive traffic from IPv4 sources.
  • VPC Flow Logs records a sample of network flows sent from and received by VM instances, including instances used as GKE nodes.
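
The firewall priority behaviour above can be pictured with a small, purely conceptual sketch; the rule names, fields, and the implied-deny fallback for unmatched ingress traffic are illustrative, not the VPC API schema:

```python
# Conceptual sketch of VPC firewall rule evaluation: among the rules that
# match a packet, the rule with the lowest numeric priority value wins and
# its action (allow/deny) is applied. Rule fields here are illustrative.

RULES = [
    {"name": "allow-ssh",    "priority": 1000, "action": "allow", "port": 22},
    {"name": "deny-all-ssh", "priority": 2000, "action": "deny",  "port": 22},
    {"name": "allow-https",  "priority": 1000, "action": "allow", "port": 443},
]

def evaluate(port):
    """Return the action of the highest-priority (lowest integer) matching rule."""
    matching = [r for r in RULES if r["port"] == port]
    if not matching:
        return "deny"  # treat unmatched ingress as denied in this toy model
    winner = min(matching, key=lambda r: r["priority"])
    return winner["action"]

print(evaluate(22))    # allow (priority 1000 beats 2000)
print(evaluate(3306))  # deny  (no matching rule)
```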

Cloud Load Balancing

  • Cloud Load Balancing is a fully distributed, software-defined managed load balancing service
  • distributes user traffic across multiple instances of the applications and reduces the risk of performance issues for the applications by spreading the load
  • provides health checking mechanisms that determine if backends, such as instance groups and zonal network endpoint groups (NEGs), are healthy and properly respond to traffic.
  • supports IPv6 clients with HTTP(S) Load Balancing, SSL Proxy Load Balancing, and TCP Proxy Load Balancing.
  • supports multiple Cloud Load Balancing types
    • Internal HTTP(S) Load Balancing
      • is a proxy-based, regional Layer 7 load balancer that enables running and scaling services behind an internal IP address.
      • supports a regional backend service, which distributes HTTP and HTTPS requests to healthy backends (either instance groups containing Compute Engine VMs or NEGs containing GKE containers).
      • supports path based routing
      • preserves the Host header of the original client request and also appends two IP addresses (client and load balancer) to the X-Forwarded-For header (see the X-Forwarded-For sketch after this list)
      • supports a regional health check that periodically monitors the readiness of the backends.
      • has native support for the WebSocket protocol when using HTTP or HTTPS as the protocol to the backend
    • External HTTP(S) Load Balancing
      • is a global, proxy-based Layer 7 load balancer that enables running and scaling the services worldwide behind a single external IP address
      • distributes HTTP and HTTPS traffic to backends hosted on Compute Engine and GKE
      • offers global (cross-regional) and regional load balancing
      • supports content-based load balancing using URL maps
      • preserves the Host header of the original client request and also appends two IP addresses (Client and LB) to the X-Forwarded-For header
      • supports connection draining on backend services
      • has native support for the WebSocket protocol when using HTTP or HTTPS as the protocol to the backend
      • does not support client certificate-based authentication, also known as mutual TLS authentication.
    • Internal TCP/UDP Load Balancing
      • is a managed, internal, pass-through, regional Layer 4 load balancer that enables running and scaling services behind an internal IP address
      • distributes traffic among VM instances in the same region in a Virtual Private Cloud (VPC) network by using an internal IP address.
      • provides high-performance, pass-through Layer 4 load balancer for TCP or UDP traffic.
      • routes original connections directly from clients to the healthy backends, without any interruption.
      • does not terminate SSL traffic; SSL traffic can be terminated by the backends instead of by the load balancer
      • provides access through VPC Network Peering, Cloud VPN or Cloud Interconnect
      • supports health check that periodically monitors the readiness of the backends.
    • External TCP/UDP Network Load Balancing
      • is a managed, external, pass-through, regional Layer 4 load balancer that distributes TCP or UDP traffic originating from the internet among VM instances in the same region
      • Load-balanced packets are received by backend VMs with their source IP unchanged.
      • Load-balanced connections are terminated by the backend VMs. Responses from the backend VMs go directly to the clients, not back through the load balancer.
      • scope of a network load balancer is regional, not global. A network load balancer cannot span multiple regions. Within a single region, the load balancer services all zones.
      • supports a connection tracking table and a configurable consistent hashing algorithm to determine how traffic is distributed to backend VMs.
      • does not support Network endpoint groups (NEGs) as backends
    • External SSL Proxy Load Balancing
      • is a reverse proxy load balancer that distributes SSL traffic coming from the internet to VM instances in the VPC network.
      • with SSL traffic, user SSL (TLS) connections are terminated at the load balancing layer, and then proxied to the closest available backend instances by using either SSL (recommended) or TCP.
      • supports global load balancing service with the Premium Tier
      • supports regional load balancing service with the Standard Tier
      • is intended for non-HTTP(S) traffic. For HTTP(S) traffic, GCP recommends using HTTP(S) Load Balancing.
      • supports proxy protocol header to preserve the original source IP addresses of incoming connections to the load balancer
      • does not support client certificate-based authentication, also known as mutual TLS authentication.
    • External TCP Proxy Load Balancing
      • is a reverse proxy load balancer that distributes TCP traffic coming from the internet to VM instances in the VPC network
      • terminates traffic coming over a TCP connection at the load balancing layer, and then forwards to the closest available backend using TCP or SSL
      • uses a single IP address for all users worldwide and automatically routes traffic to the backends that are closest to the user
      • supports global load balancing service with the Premium Tier
      • supports regional load balancing service with the Standard Tier
      • supports proxy protocol header to preserve the original source IP addresses of incoming connections to the load balancer
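
As a rough illustration of the X-Forwarded-For behaviour mentioned for the proxy-based HTTP(S) load balancers above, a backend could recover the client address seen by the load balancer as sketched below. The layout assumed is the documented "client, load balancer" pair appended to whatever the client already sent; the helper name is illustrative:

```python
# Sketch: recovering the client IP from X-Forwarded-For behind a proxy-based
# Google Cloud HTTP(S) load balancer. The load balancer appends
# "<client-ip>, <lb-ip>" to whatever the client supplied, so the
# second-to-last entry is the connecting client as seen by the LB.

def client_ip_from_xff(xff_header):
    parts = [p.strip() for p in xff_header.split(",") if p.strip()]
    if len(parts) < 2:
        raise ValueError("unexpected X-Forwarded-For format")
    return parts[-2]  # last entry is the LB address, second-to-last the client

# Example: a client-supplied entry, then the client and LB addresses appended.
print(client_ip_from_xff("198.51.100.7, 203.0.113.9, 130.211.3.4"))  # 203.0.113.9
```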

Cloud CDN

  • caches website and application content closer to the user
  • uses Google’s global edge network to serve content closer to users, which accelerates the websites and applications.
  • works with external HTTP(S) Load Balancing to deliver content to the users
  • Cloud CDN content can be sourced from various types of backends
    • Instance groups
    • Zonal network endpoint groups (NEGs)
    • Serverless NEGs: One or more App Engine, Cloud Run, or Cloud Functions services
    • Internet NEGs, for endpoints that are outside of Google Cloud (also known as custom origins)
    • Buckets in Cloud Storage
  • Cloud CDN with Google Cloud Armor enforces security policies only for requests for dynamic content, cache misses, or other requests that are destined for the origin server. Cache hits are served even if the downstream Google Cloud Armor security policy would prevent that request from reaching the origin server.
  • recommends (see the sketch after this list)
    • using versioning instead of cache invalidation
    • using custom cache keys to improve the cache hit ratio
    • caching static content
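
A minimal sketch of the "versioning instead of invalidation" recommendation: embedding a content hash in the asset URL makes each release a new cache key, so stale copies simply age out instead of needing invalidation. The naming scheme below is an assumption, not a Cloud CDN API:

```python
# Sketch: content-hash versioning for static assets served through a CDN.
# A new file hash produces a new URL (and therefore a new cache key),
# so there is no need to invalidate the previously cached copy.

import hashlib

def versioned_url(path, content):
    digest = hashlib.sha256(content).hexdigest()[:8]  # short content fingerprint
    name, dot, ext = path.rpartition(".")
    return f"{name}.{digest}.{ext}" if dot else f"{path}.{digest}"

print(versioned_url("static/app.js", b"console.log('v1');"))
print(versioned_url("static/app.js", b"console.log('v2');"))  # different URL
```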

Cloud VPN

  • securely connects a peer network to the VPC network, or two VPC networks in GCP, through an IPsec VPN connection.
  • encrypts the data as it travels over the internet.
  • only supports site-to-site IPsec VPN connectivity and not client-to-gateway scenarios
  • allows users to access private RFC1918 addresses on resources in the VPC from on-prem computers that also use private RFC1918 addresses (the ranges on each side must not overlap; see the sketch after this list).
  • can be used with Private Google Access for on-premises hosts
  • Cloud VPN HA
    • provides a highly available and secure connection between the on-premises network and the VPC network through an IPsec VPN connection in a single region
    • provides an SLA of 99.99% service availability, when configured with two interfaces and two external IP addresses.
  • supports up to 3 Gbps per tunnel with a maximum of 8 tunnels
  • supports static as well as dynamic routing using Cloud Router
  • supports IKEv1 or IKEv2 using a shared secret
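
Because the VPC ranges and the on-premises ranges reached over the tunnel must not overlap (as noted in the VPC section), a quick pre-flight check with Python's standard ipaddress module can catch conflicts before configuring the VPN; the example ranges below are made up:

```python
# Sketch: verify that planned VPC subnet ranges do not overlap with the
# on-premises CIDR blocks that will be reachable over Cloud VPN.

from ipaddress import ip_network

vpc_ranges = [ip_network("10.10.0.0/20"), ip_network("10.20.0.0/16")]
on_prem_ranges = [ip_network("10.20.4.0/24"), ip_network("192.168.0.0/16")]

for vpc in vpc_ranges:
    for onprem in on_prem_ranges:
        if vpc.overlaps(onprem):
            print(f"conflict: VPC {vpc} overlaps on-premises {onprem}")
# -> conflict: VPC 10.20.0.0/16 overlaps on-premises 10.20.4.0/24
```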

Cloud Interconnect

  • Cloud Interconnect provides two options for extending the on-premises network to the VPC networks in Google Cloud.
  • Dedicated Interconnect (Dedicated connection)
    • provides a direct physical connection between the on-premises network and Google’s network
    • requires your network to physically meet Google’s network in a colocation facility with your own routing equipment
    • supports only dynamic routing
    • supports bandwidth from 10 Gbps minimum to 200 Gbps maximum.
  • Partner Interconnect (Use a service provider)
    • provides connectivity between the on-premises and VPC networks through a supported service provider.
    • supports bandwidth from 50 Mbps minimum to 10 Gbps maximum.
    • provides Layer 2 and Layer 3 connectivity
      • For Layer 2 connections, you must configure and establish a BGP session between the Cloud Routers and on-premises routers for each created VLAN attachment
      • For Layer 3 connections, the service provider establishes a BGP session between the Cloud Routers and their edge routers for each VLAN attachment.
  • A single Interconnect connection does not offer redundancy or high availability, so it is recommended to
    • use 2 connections, in the same metropolitan area (city) as the existing one but in a different edge availability domain (metro availability zone).
    • use 4 connections, with 2 connections in each of two different metropolitan areas (cities) and each connection in a different edge availability domain (metro availability zone)
    • Cloud Routers are required, one in each Google Cloud region used
  • Cloud Interconnect does not encrypt the connection between your network and Google’s network. For additional security, use application-level encryption or your own VPN.
  • Currently, Cloud VPN can’t be used with Dedicated Interconnect.

Cloud Router

  • is a fully distributed, managed service that provides dynamic routing and scales with the network traffic.
  • works with both legacy networks and VPC networks.
  • isn’t supported for Direct Peering or Carrier Peering connections.
  • helps dynamically exchange routes between the Google Cloud networks and the on-premises network.
  • peers with the on-premises VPN gateway or router to provide dynamic routing and exchanges topology information through BGP.
  • Google Cloud recommends creating two Cloud Routers in each region for a Cloud Interconnect for 99.99% availability.
  • supports the following dynamic routing modes
    • Regional routing mode – provides visibility to resources only in the defined region.
    • Global routing mode – provides visibility to resources in all regions

Cloud DNS

  • is a high-performance, resilient, reliable, low-latency, global DNS service that publishes the domain names to the global DNS in a cost-effective way.
  • With Shared VPC, Cloud DNS managed private zone, Cloud DNS peering zone, or Cloud DNS forwarding zone must be created in the host project
  • provides Private Zone which supports DNS services for a GCP project. VPCs in the same project can use the same name servers
  • supports DNS Forwarding for Private Zones, which overrides normal DNS resolution for the specified zones. Queries for the specified zones are forwarded to the listed forwarding targets.
  • supports DNS Peering, which allows sending requests for records that come from one zone’s namespace to another VPC network within GCP
  • supports DNS Outbound Policy, which forwards all DNS requests for a VPC network to the specified server targets. It disables internal DNS for the selected networks.
  • Cloud DNS VPC Name Resolution Order (illustrated in the sketch after this list)
    • DNS Outbound Server Policy
    • DNS Forwarding Zone
    • DNS Peering
    • Compute Engine internal DNS
    • Public Zones
  • supports DNSSEC, a feature of DNS, that authenticates responses to domain name lookups and protects the domains from spoofing and cache poisoning attacks
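
The name resolution order above can be pictured as a simple fall-through, sketched below with made-up zone data; this is a conceptual model of the ordering listed in this cheat sheet, not the Cloud DNS API or its exact matching rules:

```python
# Sketch: conceptual VPC name-resolution order. Each step is tried in turn;
# the first one that claims the name answers the query. Zone data is made up.

OUTBOUND_POLICY_TARGETS = []                              # if set, all queries go here
FORWARDING_ZONES = {"corp.example.": "10.1.0.53"}         # zone suffix -> on-prem resolver
PEERING_ZONES = {"shared.internal.": "producer-vpc"}      # zone suffix -> peered VPC
INTERNAL_DNS = {"vm-1.c.my-project.internal.": "10.128.0.2"}

def resolve(name):
    if OUTBOUND_POLICY_TARGETS:
        return f"forwarded to {OUTBOUND_POLICY_TARGETS[0]} (outbound server policy)"
    for suffix, target in FORWARDING_ZONES.items():
        if name.endswith(suffix):
            return f"forwarded to {target} (forwarding zone)"
    for suffix, network in PEERING_ZONES.items():
        if name.endswith(suffix):
            return f"resolved in {network} (DNS peering)"
    if name in INTERNAL_DNS:
        return f"{INTERNAL_DNS[name]} (Compute Engine internal DNS)"
    return "resolved via public zones / public DNS"

print(resolve("db.corp.example."))
print(resolve("vm-1.c.my-project.internal."))
print(resolve("www.google.com."))
```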

Google Cloud Load Balancing

  • Cloud Load Balancing distributes user traffic across multiple instances of the applications and reduces the risk of performance issues for the applications by spreading the load
  • Cloud Load Balancing helps serve content as close as possible to the users on a system that can respond to over one million queries per second.
  • Cloud Load Balancing is a fully distributed, software-defined managed service. It isn’t hardware-based and there is no need to manage a physical load balancing infrastructure.

Cloud Load Balancing Features

  • External load balancing
    • for internet based applications
    • requires Premium Tier of Network Service Tiers
    • Types
      • External HTTP/S Load Balancing
      • SSL Proxy Load Balancing
      • TCP Proxy Load Balancing
      • External TCP/UDP Network Load Balancing
  • Internal load balancing
    • for internal clients inside of Google Cloud
    • can use Standard Tier
    • Types
      • Internal HTTP/S Load Balancing
      • Internal TCP/UDP Network Load Balancing
  • Regional load balancing
    • for single region applications.
    • supports only IPv4 termination.
    • Types
      • Internal HTTP/S Load Balancing
      • External TCP/UDP Network Load Balancing
      • Internal TCP/UDP Network Load Balancing
      • External HTTP/S Load Balancing (Standard Tier)
      • SSL Proxy Load Balancing (Standard Tier)
      • TCP Proxy Load Balancing (Standard Tier)
  • Global load balancing
    • for globally distributed applications
    • provides access by using a single anycast IP address
    • supports IPv4 and IPv6 termination.
    • Types
      • External HTTP/S Load Balancing (Premium Tier)
      • SSL Proxy Load Balancing (Premium Tier)
      • TCP Proxy Load Balancing (Premium Tier)

Pass-through vs Proxy-based load balancing

  • Proxy-based load balancing
    • acts as a proxy performing address and port translation and terminating the request before forwarding to the backend service
    • clients and backends interact with the load balancer
    • the original client IP address is forwarded to the backends using the X-Forwarded-For header
    • all proxy-based external load balancers automatically inherit DDoS protection from Google Front Ends (GFEs)
    • Google Cloud Armor can be configured for external HTTP(S) load balancers
    • Types
      • Internal HTTP/S Load Balancing
      • External HTTP/S Load Balancing
      • SSL Proxy Load Balancing
      • TCP Proxy Load Balancing
  • Pass-through load balancing
    • does not modify the request or headers and passes it unchanged to the underlying backend
    • Types
      • External TCP/UDP Network Load Balancing
      • Internal TCP/UDP Network Load Balancing

Layer 4 vs Layer 7

  • Layer 4-based load balancing
    • directs traffic based on data from network and transport layer protocols, such as IP address and TCP or UDP port
  • Layer 7-based load balancing
    • adds content-based routing decisions based on attributes, such as the HTTP header and the URI
  • Supports various traffic types including HTTP(S), TCP, UDP
  • For HTTP and HTTPS traffic, use:
    • External HTTP(S) Load Balancing
    • Internal HTTP(S) Load Balancing
  • For TCP traffic, use:
    • TCP Proxy Load Balancing
    • Network Load Balancing
    • Internal TCP/UDP Load Balancing
  • For UDP traffic, use:
    • Network Load Balancing
    • Internal TCP/UDP Load Balancing

Google Cloud Load Balancing Types

Refer blog post @ Google Cloud Load Balancing Types

Load Balancing Components

Backend services

  • A backend is a group of endpoints that receive traffic from a Google Cloud load balancer, a Traffic Director-configured Envoy proxy, or a proxyless gRPC client.
  • Google Cloud supports several types of backends:
    • Instance group containing virtual machine (VM) instances.
    • Zonal NEG
    • Serverless NEG
    • Internet NEG
    • Cloud Storage bucket
  • A backend service is either global or regional in scope.

Forwarding Rules

  • A forwarding rule and its corresponding IP address represent the frontend configuration of a Google Cloud load balancer.

Health Checks

  • Google Cloud provides health checking mechanisms that determine if backends, such as instance groups and zonal network endpoint groups (NEGs), are healthy and properly respond to traffic.
  • Google Cloud provides global and regional health check systems that connect to backends on a configurable, periodic basis.
  • Each connection attempt is called a probe, and each health check system is called a prober. Google Cloud records the success or failure of each probe
  • Google Cloud computes an overall health state for each backend in the load balancer or Traffic Director based on a configurable number of sequential successful or failed probes.
    • Backends that respond successfully for the configured number of times are considered healthy.
    • Backends that fail to respond successfully for a separately configured number of times are considered unhealthy (see the sketch after this list).
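
A rough sketch of the healthy/unhealthy threshold logic described above; the threshold values and class are illustrative (the real system is distributed across probers, this only models the counting):

```python
# Sketch: how sequential probe results flip a backend between healthy and
# unhealthy, given a healthy threshold and an unhealthy threshold.

class BackendHealth:
    def __init__(self, healthy_threshold=2, unhealthy_threshold=3):
        self.healthy_threshold = healthy_threshold
        self.unhealthy_threshold = unhealthy_threshold
        self.healthy = True
        self.successes = 0
        self.failures = 0

    def record_probe(self, ok):
        if ok:
            self.successes += 1
            self.failures = 0
            if self.successes >= self.healthy_threshold:
                self.healthy = True
        else:
            self.failures += 1
            self.successes = 0
            if self.failures >= self.unhealthy_threshold:
                self.healthy = False

backend = BackendHealth()
for result in [True, False, False, False, True, True]:
    backend.record_probe(result)
    print(result, "->", "HEALTHY" if backend.healthy else "UNHEALTHY")
```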

IPv6 termination

  • Google Cloud supports IPv6 clients with HTTP(S) Load Balancing, SSL Proxy Load Balancing, and TCP Proxy Load Balancing.
  • Load balancer accepts IPv6 connections from the users, and then proxies those connections to the backends.

SSL Certificates

  • Google Cloud uses SSL certificates to provide privacy and security from a client to a load balancer. To achieve this, the load balancer must have an SSL certificate and the certificate’s corresponding private key.
  • Communication between the client and the load balancer remains private and is illegible to any third party that doesn’t have this private key.
  • allows multiple SSL certificates when serving multiple domains from the same load balancer IP address and port, and a different SSL certificate for each domain is needed

SSL Policies

  • SSL policies provide the ability to control the features of SSL that the SSL proxy load balancer or external HTTP(S) load balancer negotiates with clients
  • HTTP(S) Load Balancing and SSL Proxy Load Balancing uses a set of SSL features that provides good security and wide compatibility.
  • SSL policies help control the features of SSL like SSL versions and ciphers that the load balancer negotiates with clients.

URL Maps

  • A URL map helps direct requests to a destination based on defined rules (see the sketch after this list)
  • When a request arrives at the load balancer, the load balancer routes the request to a particular backend service or backend bucket based on configurations in the URL map.
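
A toy model of URL-map routing: path rules pick a backend service or bucket, with a default used when nothing matches. The rule data and longest-prefix matching below are illustrative, not the URL map resource schema:

```python
# Sketch: content-based routing with a URL-map-like structure.
# The most specific matching path prefix wins; unmatched requests go to the default.

URL_MAP = {
    "default": "web-backend",
    "path_rules": {
        "/video/": "video-backend",
        "/static/": "cdn-backend-bucket",
        "/api/": "api-backend",
    },
}

def route(path):
    matches = [p for p in URL_MAP["path_rules"] if path.startswith(p)]
    if not matches:
        return URL_MAP["default"]
    return URL_MAP["path_rules"][max(matches, key=len)]  # most specific prefix

print(route("/video/hd/clip.mp4"))  # video-backend
print(route("/checkout"))           # web-backend (default)
```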


Google Cloud Load Balancing Types

Google Cloud Load Balancer Comparison

Internal HTTP(S) Load Balancing

  • is a proxy-based, regional Layer 7 load balancer that enables running and scaling services behind an internal IP address.
  • distributes HTTP and HTTPS traffic to backends hosted on Compute Engine and GKE
  • is accessible only in the chosen region of the Virtual Private Cloud (VPC) network on an internal IP address.
  • enables rich traffic control capabilities based on HTTP(S) parameters.
  • is a managed service based on the open source Envoy proxy.
  • needs one proxy-only subnet in each region of a VPC network where internal HTTP(S) load balancers is used. All the internal HTTP(S) load balancers in a region and VPC network share the same proxy-only subnet because all internal HTTP(S) load balancers in the region and VPC network share a pool of Envoy proxies.
  • supports path based routing
  • preserves the Host header of the original client request and also appends two IP addresses (client and load balancer) to the X-Forwarded-For header
  • supports a regional backend service, which distributes requests to healthy backends (either instance groups containing Compute Engine VMs or NEGs containing GKE containers).
  • supports a regional health check that periodically monitors the readiness of the backends. This reduces the risk that requests might be sent to backends that can’t service the request.
  • if a backend becomes unhealthy, traffic is automatically redirected to healthy backends within the same region.
  • has native support for the WebSocket protocol when using HTTP or HTTPS as the protocol to the backend
  • accepts only TLS 1.0, 1.1, 1.2, and 1.3 when terminating client SSL requests.
  • isn’t compatible with the following features:
    • Cloud CDN
    • Google Cloud Armor
    • Cloud Storage buckets
    • Google-managed SSL certificates
    • SSL policies

External HTTP(S) Load Balancing

  • is a global, proxy-based Layer 7 load balancer that enables running and scaling the services worldwide behind a single external IP address.
  • distributes HTTP and HTTPS traffic to backends hosted on Compute Engine and GKE
  • is implemented on Google Front Ends (GFEs). GFEs are distributed globally and operate together using Google’s global network and control plane.
    • In the Premium Tier, GFEs offer global load balancing
    • With Standard Tier, the load balancing is handled regionally.
  • provides cross-regional or location-based load balancing, directing traffic to the closest healthy backend that has the capacity and terminating HTTP(S) traffic as close as possible to your users.
  • supports content-based load balancing using URL maps to select a backend service based on the requested host name, request path, or both.
  • supports the following backend types:
    • Instance groups
    • Zonal network endpoint groups (NEGs)
    • Serverless NEGs: One or more App Engine, Cloud Run, or Cloud Functions services
    • Internet NEGs, for endpoints that are outside of Google Cloud (also known as custom origins)
    • Buckets in Cloud Storage
  • preserves the Host header of the original client request and also appends two IP addresses (client and load balancer) to the X-Forwarded-For header
  • supports Cloud Load Balancing Autoscaler, which allows users to perform autoscaling on the instance groups in a backend service.
  • supports connection draining on backend services to ensure minimal interruption to the users when an instance that is serving traffic is terminated, removed manually, or removed by an autoscaler.
  • supports Session affinity as a best-effort attempt to send requests from a particular client to the same backend for as long as the backend is healthy and has the capacity, according to the configured balancing mode (see the generated-cookie sketch after this list). It offers three types of session affinity:
    • NONE. Session affinity is not set for the load balancer.
    • Client IP affinity sends requests from the same client IP address to the same backend.
    • Generated cookie affinity sets a client cookie when the first request is made, and then sends requests with that cookie to the same backend.
  • if a backend becomes unhealthy, traffic is automatically redirected to healthy backends within the same region.
  • has native support for the WebSocket protocol when using HTTP or HTTPS as the protocol to the backend
  • accepts only TLS 1.0, 1.1, 1.2, and 1.3 when terminating client SSL requests.
  • does not support client certificate-based authentication, also known as mutual TLS authentication.
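
The generated-cookie affinity described above can be sketched as: on the first request the proxy picks a backend and sets a cookie identifying it; later requests carrying that cookie return to the same backend while it stays healthy. The cookie name and backend names below are illustrative, not the load balancer's actual cookie:

```python
# Sketch: generated-cookie session affinity. The cookie value identifies the
# backend chosen on the first request; subsequent requests carrying the cookie
# stick to that backend while it remains healthy.

import random

HEALTHY = {"be-1", "be-2", "be-3"}  # currently healthy backends

def pick_backend(cookies):
    chosen = cookies.get("affinity")
    if chosen not in HEALTHY:                      # no cookie yet, or backend gone
        chosen = random.choice(sorted(HEALTHY))    # fall back to normal balancing
        cookies = {**cookies, "affinity": chosen}  # proxy sets the cookie
    return chosen, cookies

backend, cookies = pick_backend({})  # first request: cookie gets set
print(backend, cookies)
print(pick_backend(cookies)[0])      # same backend on the next request
```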

Internal TCP/UDP Load Balancing

  • is a managed, internal, pass-through, regional Layer 4 load balancer that enables running and scaling services behind an internal IP address.
  • distributes traffic among VM instances in the same region in a VPC network by using an internal IP address.
  • provides a high-performance, pass-through Layer 4 load balancer for TCP or UDP traffic.
  • routes original connections directly from clients to the healthy backends, without any interruption.
  • Responses from the healthy backend VMs go directly to the clients, not back through the load balancer. TCP responses use direct server return.
  • does not terminate SSL traffic; SSL traffic can be terminated by the backends instead of by the load balancer
  • Unlike proxy load balancer, it doesn’t terminate connections from clients and then opens new connections to backends.
  • provides access through VPC Network Peering, Cloud VPN or Cloud Interconnect
  • supports Session affinity as a best-effort attempt for TCP traffic to send requests from a particular client to the same backend for as long as the backend is healthy and has the capacity, according to the configured balancing mode (see the hashing sketch after this list). It offers the following types of session affinity:
    • None : default setting, effectively same as Client IP, protocol, and port.
    • Client IP : Directs a particular client’s requests to the same backend VM based on a hash created from the client’s IP address and the destination IP address.
    • Client IP and protocol : Directs a particular client’s requests to the same backend VM based on a hash created from three pieces of information: the client’s IP address, the destination IP address, and the load balancer’s protocol (TCP or UDP).
    • Client IP, protocol, and port : Directs a particular client’s requests to the same backend VM based on a hash created from these five pieces of information:
      • Source IP address of the client sending the request
      • Source port of the client sending the request
      • Destination IP address
      • Destination port
      • Protocol (TCP or UDP)
  • Since the UDP protocol doesn’t support sessions, session affinity doesn’t affect UDP traffic.
  • supports health check that periodically monitors the readiness of the backends. If a backend becomes unhealthy, traffic is automatically redirected to healthy backends within the same region.
  • supports HTTP(S), HTTP2, TCP, and SSL as health check protocols; the protocol of the health check does not have to match the protocol of the load balancer.
  • does not offer a health check that uses the UDP protocol, but can be done using TCP-based health checks
  • does not support Network endpoint groups (NEGs) as backends
  • support some backends to be configured as failover backends. These backends are only used when the number of healthy VMs in the primary backend instance groups has fallen below a configurable threshold.
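
The hash-based affinity options listed above differ only in which fields of the connection feed the hash; a minimal sketch follows (the hash function and backend names are illustrative, not the load balancer's actual algorithm):

```python
# Sketch: 2-tuple / 3-tuple / 5-tuple session affinity. The selected fields of
# a connection are hashed to choose a backend; hashing fewer fields keeps more
# of a client's connections on the same VM.

import hashlib

BACKENDS = ["vm-a", "vm-b", "vm-c"]

FIELDS = {
    "CLIENT_IP": ("src_ip", "dst_ip"),                                              # 2-tuple
    "CLIENT_IP_PROTO": ("src_ip", "dst_ip", "proto"),                               # 3-tuple
    "CLIENT_IP_PORT_PROTO": ("src_ip", "src_port", "dst_ip", "dst_port", "proto"),  # 5-tuple
}

def pick_backend(conn, affinity):
    key = "|".join(str(conn[f]) for f in FIELDS[affinity])
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return BACKENDS[digest % len(BACKENDS)]

conn1 = dict(src_ip="203.0.113.9", src_port=40001, dst_ip="10.0.0.10", dst_port=80, proto="TCP")
conn2 = dict(conn1, src_port=40002)  # same client, new source port
print(pick_backend(conn1, "CLIENT_IP"), pick_backend(conn2, "CLIENT_IP"))                        # same VM
print(pick_backend(conn1, "CLIENT_IP_PORT_PROTO"), pick_backend(conn2, "CLIENT_IP_PORT_PROTO"))  # may differ
```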

External TCP/UDP Network Load Balancing

  • is a managed, external, pass-through, regional Layer 4 load balancer that distributes TCP or UDP traffic originating from the internet among virtual machine (VM) instances in the same region
  • are not proxies, but pass-through
    • Load-balanced packets are received by backend VMs with their source IP unchanged.
    • Load-balanced connections are terminated by the backend VMs.
    • Responses from the backend VMs go directly to the clients, not back through the load balancer.
    • TCP responses use direct server return.
  • scope of a network load balancer is regional, not global. A network load balancer cannot span multiple regions. Within a single region, the load balancer services all zones.
  • distributes connections among backend VMs contained within managed or unmanaged instance groups.
  • supports regional health check that periodically monitors the readiness of the backends. If a backend becomes unhealthy, traffic is automatically redirected to healthy backends within the same region.
  • supports HTTP(S), HTTP2, TCP, and SSL as health check protocols; the protocol of the health check does not have to match the protocol of the load balancer.
  • does not offer a health check that uses the UDP protocol, but can be done using TCP-based health checks
  • supports a connection tracking table and a configurable consistent hashing algorithm to determine how traffic is distributed to backend VMs.
  • supports Session affinity as a best-effort attempt for TCP traffic to send requests from a particular client to the same backend for as long as the backend is healthy and has the capacity, according to the configured balancing mode. It offers the following types of session affinity:
    • None : default setting, effectively same as Client IP, protocol, and port.
    • Client IP : Directs a particular client’s requests to the same backend VM based on a hash created from the client’s IP address and the destination IP address.
    • Client IP and protocol : Directs a particular client’s requests to the same backend VM based on a hash created from three pieces of information: the client’s IP address, the destination IP address, and the load balancer’s protocol (TCP or UDP).
    • Client IP, protocol, and port : Directs a particular client’s requests to the same backend VM based on a hash created from these five pieces of information:
      • Source IP address of the client sending the request
      • Source port of the client sending the request
      • Destination IP address
      • Destination port
      • Protocol (TCP or UDP)
  • Since the UDP protocol doesn’t support sessions, session affinity doesn’t affect UDP traffic.
  • supports connection draining which allows established TCP connections to persist until the VM no longer exists. If connection draining is disabled, established TCP connections are terminated as quickly as possible.
  • supports only self-managed SSL certificates
  • does not support Network endpoint groups (NEGs) as backends

External SSL Proxy Load Balancing

  • is a reverse proxy, Layer 4, external load balancer that distributes SSL traffic coming from the internet to VM instances in the VPC network.
  • with SSL traffic, supports SSL offload where user SSL (TLS) connections are terminated at the load balancing layer, and then proxied to the closest available backend instances by using either SSL (recommended) or TCP.
  • supports global load balancing service with the Premium Tier
  • supports regional load balancing service with the Standard Tier
  • is intended for non-HTTP(S) traffic. For HTTP(S) traffic, GCP recommends using HTTP(S) Load Balancing.
  • supports proxy protocol header to preserve the original source IP addresses of incoming connections to the load balancer
  • performs traffic distribution based on the balancing mode and the hashing method selected to choose a backend (session affinity).
  • supports two types of balancing mode :
    • CONNECTION : the load is spread based on how many concurrent connections the backend can handle.
    • UTILIZATION: the load is spread based on the utilization of instances in an instance group.
  • supports Session Affinity and offers client IP affinity, which forwards all requests from the same client IP address to the same backend.
  • supports a single backend service resource. Changes to the backend service are not instantaneous and would take several minutes for changes to propagate to Google Front Ends (GFEs).
  • does not support client certificate-based authentication, also known as mutual TLS authentication.

External TCP Proxy Load Balancing

  • is a reverse proxy, external, Layer 4 load balancer that distributes TCP traffic coming from the internet to VM instances in the VPC network
  • terminates traffic coming over a TCP connection at the load balancing layer, and then forwards to the closest available backend using TCP or SSL
  • uses a single IP address for all users worldwide and automatically routes traffic to the backends that are closest to the user
  • supports global load balancing service with the Premium Tier
  • supports regional load balancing service with the Standard Tier
  • performs traffic distribution based on the balancing mode and the hashing method selected to choose a backend (session affinity).
  • supports proxy protocol header to preserve the original source IP addresses of incoming connections to the load balancer
  • supports two types of balancing mode
    • CONNECTION : the load is spread based on how many concurrent connections the backend can handle.
    • UTILIZATION: the load is spread based on the utilization of instances in an instance group.
  • supports Session Affinity and offers client IP affinity, which forwards all requests from the same client IP address to the same backend.

GCP Cloud Load Balancing Decision Tree

Google Cloud Load Balancer Decision Tree
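
As a rough companion to the decision tree referenced above, the main questions (internal vs external, traffic type, global vs regional) can be sketched as a small chooser; this is a simplification of the documented tree, not an official tool:

```python
# Sketch: simplified load balancer chooser based on the questions the decision
# tree asks: internal vs external, traffic type, and global vs regional reach.

def choose_load_balancer(internal, protocol, global_lb=False):
    protocol = protocol.upper()
    if internal:
        if protocol in ("HTTP", "HTTPS"):
            return "Internal HTTP(S) Load Balancing"
        return "Internal TCP/UDP Load Balancing"
    if protocol in ("HTTP", "HTTPS"):
        return "External HTTP(S) Load Balancing"
    if protocol == "SSL" and global_lb:
        return "SSL Proxy Load Balancing"
    if protocol == "TCP" and global_lb:
        return "TCP Proxy Load Balancing"
    return "External TCP/UDP Network Load Balancing"

print(choose_load_balancer(internal=False, protocol="HTTPS"))
print(choose_load_balancer(internal=False, protocol="TCP", global_lb=True))
print(choose_load_balancer(internal=True, protocol="UDP"))
```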

GCP Certification Exam Practice Questions

  • Questions are collected from Internet and the answers are marked as per my knowledge and understanding (which might differ with yours).
  • GCP services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
  • GCP exam questions are not updated to keep pace with GCP updates, so even if the underlying feature has changed the question might not be updated
  • Open to further feedback, discussion and correction.
  1. Your development team has asked you to set up an external TCP load balancer with SSL offload. Which load balancer should you use?
    1. SSL proxy
    2. HTTP load balancer
    3. TCP proxy
    4. HTTPS load balancer
  2. You have an instance group that you want to load balance. You want the load balancer to terminate the client SSL session. The instance group is used to serve a public web application over HTTPS. You want to follow Google-recommended practices. What should you do?
    1. Configure a HTTP(S) load balancer.
    2. Configure an internal TCP load balancer.
    3. Configure an external SSL proxy load balancer.
    4. Configure an external TCP proxy load balancer.
  3. Your development team has asked you to set up load balancer with SSL termination. The website would be using HTTPS protocol. Which load balancer should you use?
    1. SSL proxy
    2. HTTP load balancer
    3. TCP proxy
    4. HTTPS load balancer
  4. You have an application that receives SSL-encrypted TCP traffic on port 443. Clients for this application are located all over the world. You want to minimize latency for the clients. Which load balancing option should you use?
    1. HTTPS Load Balancer
    2. Network Load Balancer
    3. SSL Proxy Load Balancer
    4. Internal TCP/UDP Load Balancer. Add a firewall rule allowing ingress traffic from 0.0.0.0/0 on the target instances.