
Professional Cloud Network Engineer - Google Cloud Certified Exam Questions
Question 11 Multiple Choice
Your task involves configuring your organization's Google Cloud environment to connect to your on-premises network, which lacks support for Border Gateway Protocol (BGP). The on-premises network comprises 30 CIDR ranges that must be accessible from Google Cloud. Additionally, your VPN gateway generates a unique child security association (SA) per CIDR. It's imperative to ensure the reachability of all 30 CIDR ranges from Google Cloud, following Google's recommended practices.
Which two methods can achieve this goal? (Select two options.)
Explanation

The two approaches that line up with Google’s guidance are I and III.
Why option I (single route-based tunnel) works
Google recommends a broad, single-CIDR traffic selector (0.0.0.0/0 on each side) when your peer device creates a separate child-SA for every individual subnet. Creating a route-based Classic VPN tunnel automatically uses those broad selectors. You then add 30 static routes in the VPC that point to the tunnel so every on-prem CIDR is reachable. This avoids the incompatibility caused by a peer that insists on a unique child-SA per CIDR (cloud.google.com).
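For illustration, here is a minimal gcloud sketch of option I with assumed placeholder names and addresses (Classic VPN gateway on-prem-gw, VPC my-vpc, region us-central1, peer 203.0.113.10); the 0.0.0.0/0 selectors make the tunnel route-based, and the static route is repeated for each of the 30 on-prem CIDRs:
# Route-based Classic VPN tunnel: broad traffic selectors on both sides
gcloud compute vpn-tunnels create tunnel-to-onprem \
    --region=us-central1 \
    --target-vpn-gateway=on-prem-gw \
    --peer-address=203.0.113.10 \
    --ike-version=2 \
    --shared-secret=SECRET \
    --local-traffic-selector=0.0.0.0/0 \
    --remote-traffic-selector=0.0.0.0/0
# One static route per on-premises CIDR (repeat for each of the 30 ranges)
gcloud compute routes create route-onprem-range-1 \
    --network=my-vpc \
    --destination-range=10.10.1.0/24 \
    --next-hop-vpn-tunnel=tunnel-to-onprem \
    --next-hop-vpn-tunnel-region=us-central1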
Why option III (many policy-based tunnels, one CIDR each, distinct peer IPs) works
If your on-prem VPN can’t accept the broad 0.0.0.0/0 selectors, Google’s fallback is to create multiple policy-based tunnels, each with exactly one local CIDR and one remote CIDR. Crucially, Cloud VPN requires that each of those tunnels terminate on a different peer public IP address when they share the same Cloud VPN gateway—exactly what option III states (cloud.google.com).
Why the other options fail
II – A single policy-based tunnel carrying 30 remote CIDRs won’t work because your peer creates one child-SA per CIDR. Google explicitly calls such devices incompatible with a multi-CIDR traffic selector cloud.google.com.
IV – Splitting the 30 CIDRs across three tunnels (10 each) still leaves multiple CIDRs per tunnel, so the same incompatibility arises.
V – One CIDR per tunnel is fine, but using the same peer IP for every tunnel breaks the requirement that each tunnel on the same Cloud VPN gateway must target a unique peer IP address cloud.google.com.
In short:
Option I – one route-based tunnel with broad selectors + 30 static routes.
Option III – 30 one-to-one policy-based tunnels, each pointing at a different peer IP.
Either design satisfies Google’s best-practice guidance for a peer that can’t run BGP and insists on a separate child-SA for every subnet.
Question 12 Single Choice
You are architecting a new application where the backend services are internally accessible on port 800. This application must be externally available over both IPv4 and IPv6, using TCP on port 700. The solution must be designed with high availability in mind.
Which configuration should you choose?
Explanation

Choose D. Use a TCP Proxy Load Balancer whose backend service points to an instance group with (at least) two VM instances.
Why this is the only option that satisfies all requirements
Port translation (700 → 800). A proxy Network Load Balancer terminates the client’s TCP session and opens a new one to the backend, so the backend port can differ from the frontend port (cloud.google.com).
Dual-stack (IPv4 + IPv6) exposure. Global and regional external TCP proxy load balancers natively accept connections over both IPv4 and IPv6 (cloud.google.com).
High availability. A backend service that points to an instance group with two or more VMs (ideally a regional MIG spanning zones) lets the load balancer fail over automatically if one VM or zone becomes unavailable (cloud.google.com).
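As a rough sketch only, with hypothetical names, zone, and addresses (app-ig, app-bes, us-central1-a, and so on), the option D wiring in gcloud might look like this:
# Health check and named port for the backend port 800
gcloud compute health-checks create tcp app-hc --port=800
gcloud compute instance-groups set-named-ports app-ig \
    --named-ports=tcp800:800 --zone=us-central1-a
# Global backend service (port 800) behind a TCP proxy
gcloud compute backend-services create app-bes \
    --global --protocol=TCP --port-name=tcp800 --health-checks=app-hc
gcloud compute backend-services add-backend app-bes \
    --global --instance-group=app-ig --instance-group-zone=us-central1-a
gcloud compute target-tcp-proxies create app-proxy --backend-service=app-bes
# Frontend on port 700, exposed over both IPv4 and IPv6
gcloud compute addresses create app-ipv4 --global --ip-version=IPV4
gcloud compute addresses create app-ipv6 --global --ip-version=IPV6
gcloud compute forwarding-rules create app-fr-v4 \
    --global --target-tcp-proxy=app-proxy --address=app-ipv4 --ports=700
gcloud compute forwarding-rules create app-fr-v6 \
    --global --target-tcp-proxy=app-proxy --address=app-ipv6 --ports=700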
Why the other choices don’t work
A – Network Load Balancer (backend service). Passthrough network LBs forward packets unchanged; the backend must listen on the same port the client used. They can’t map port 700 to 800 (cloud.google.com).
B – Network Load Balancer (target pool). Has the same port-translation limitation as A, and target-pool-based NLBs don’t yet support IPv6 front-ends.
C – TCP Proxy LB with a single-instance zonal NEG. Meets the port-translation and IPv6 needs, but a single backend instance in one zone can’t provide high availability.
Therefore, option D is the only configuration that delivers port translation, IPv4 + IPv6 exposure, and high availability.
Question 13 Single Choice
You've recently deployed Compute Engine instances in the us-west1 and us-east1 regions within a Virtual Private Cloud (VPC) utilizing default routing configurations. However, your company's security policy strictly prohibits virtual machines (VMs) from having public IP addresses. Your objective is to enable these instances to fetch updates from the internet while safeguarding against external access. What is the most appropriate course of action?
Explanation

Best choice:
I. Establish a Cloud NAT gateway and Cloud Router in both the us‑west1 and us‑east1 regions.
Why I is correct
No external IPs on VMs. Cloud NAT lets instances without any external IPs initiate outbound connections to the internet by translating their private IPs to a pool of NAT IP addresses you control. Your VMs retain no public IPs, satisfying the security policy (Google Cloud).
Regional scope of Cloud NAT. Each Cloud NAT gateway is scoped to a single VPC network and region (and attaches to one Cloud Router). To cover both us‑west1 and us‑east1, you must deploy a separate NAT gateway (and corresponding Cloud Router) in each region where your instances run (Google Cloud).
Protected egress. With regional Cloud NAT in place, all outbound traffic from your instances in that region flows over the NAT gateway—no direct internet egress from the VMs themselves, and no firewall changes are needed beyond permitting egress.
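A minimal sketch with placeholder names (my-vpc, nat-router-west, nat-gw-west); the same router-plus-NAT pair is repeated in us-east1:
# Cloud Router and Cloud NAT gateway in us-west1 (repeat for us-east1)
gcloud compute routers create nat-router-west --network=my-vpc --region=us-west1
gcloud compute routers nats create nat-gw-west \
    --router=nat-router-west --region=us-west1 \
    --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges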
Why the other options aren’t sufficient
II. Single global Cloud NAT gateway and global Cloud Router. Cloud NAT and Cloud Router are regional resources. You cannot create one “global” NAT gateway that covers multiple regions; attempts to do so will fail validation (Google Cloud).
III. Change VMs to have ephemeral external IPs. Assigning public IPs directly to VMs violates the security policy (“no VM may have a public IP”). It also exposes them to inbound attacks unless additional firewall rules are applied, complicating the design.
IV. Create a firewall rule permitting egress to 0.0.0.0/0. Allowing egress alone doesn’t give VMs a path to the internet—without NAT or external IPs, packets will have private source addresses and will be dropped by upstream internet routers. Firewall rules cannot substitute for NAT.
Question 14 Single Choice
You need to set up a static route to an on-premises resource behind a Cloud VPN gateway that employs policy-based routing via the gcloud command. Which next hop should you choose?
Explanation

The correct next hop for a static route pointing into a policy‑based Cloud VPN tunnel is the VPN tunnel object itself, specified by its name and region. In gcloud syntax you’d use:
gcloud compute routes create ROUTE_NAME \
    --network YOUR_VPC \
    --destination-range 10.0.0.0/24 \
    --next-hop-vpn-tunnel YOUR_TUNNEL_NAME \
    --next-hop-vpn-tunnel-region YOUR_TUNNEL_REGION
This corresponds to Option III.
Why III is correct
Policy‑based VPN uses static routes whose next hop is the tunnel. When you create a Classic (policy‑based) VPN tunnel via the Console or CLI and supply “Remote network IP ranges,” Google Cloud auto‑generates static routes for those CIDRs with the next hop set to the VPN tunnel object. Google Cloud | fig.io
gcloud’s --next-hop-vpn-tunnel flag explicitly binds the route to the named tunnel and its region, ensuring that matching packets are forwarded into the IPsec tunnel rather than to the internet gateway or another appliance (fig.io).
Why the other options don’t fit
I. Default internet gateway. That would send traffic to the public internet, not into your VPN tunnel.
II. IP address of the Cloud VPN gateway. The gateway’s external IP isn’t a valid next hop for VPC static routes; you must reference the tunnel abstraction.
IV. IP address of the remote instance. You can’t point a VPC route at an arbitrary on‑prem host IP. The route must target a GCP next‑hop resource (gateway, VPN tunnel, instance, or load balancer), and for VPN you choose the tunnel object.
References
Static routing with Classic VPN: “For each range in Remote network IP ranges, Google Cloud creates a custom static route whose destination (prefix) is the range’s CIDR, and whose next hop is the tunnel.” Google Cloud
gcloud next-hop-vpn-tunnel usage: “--next-hop-vpn-tunnel <NEXT_HOP_VPN_TUNNEL>, The target VPN tunnel that will receive forwarded traffic; --next-hop-vpn-tunnel-region <NEXT_HOP_VPN_TUNNEL_REGION>” (fig.io)
Question 15 Single Choice
To comply with your organization's security policy, which mandates that all internet-bound traffic returns to your on-premises data center via HA VPN tunnels before accessing the internet, and that virtual machines (VMs) can utilize private Google APIs using private virtual IP addresses 199.36.153.4/30, how should you configure the routes to facilitate these traffic patterns?
Explanation

The only option that sends all Internet‑bound traffic back over your HA‑VPN and yet lets your VMs hit the private‑VIP for Google APIs directly is C:
C.
• Have your on‑premises router advertise a BGP route for 0.0.0.0/0 into your Cloud Router (so that dynamic routing sends all “normal” Internet traffic back to on‑prem).
• Create a custom static route for 199.36.153.4/30 with a next‑hop of default‑internet‑gateway and a priority that’s higher (i.e. lower numerical value) than any catch‑all routing (so that lookups for the Google API VIP go out Google’s backbone, not your VPN).
Why this works
Dynamic 0.0.0.0/0 via BGP → HA‑VPN. By advertising the default route from your on‑prem router into Cloud Router, you ensure that every packet whose destination doesn’t match a more‑specific route (i.e. “Internet” traffic) will flow back to your data center via the VPN tunnel (Google Cloud Community).
Static 199.36.153.4/30 → internet gateway. VMs need to reach Google’s restricted.googleapis.com VIP directly on Google’s network. A static route for 199.36.153.4/30 with next‑hop default-internet-gateway and a higher priority than your default route makes sure those packets never detour through on‑prem (chou.se).
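As an illustration with a placeholder network name and priority, the static route from option C could be created like this (the 0.0.0.0/0 default arrives dynamically over BGP, so only the API route needs a static entry):
# Send the restricted Google APIs VIP out Google's edge instead of the VPN;
# priority 100 beats any lower-priority (higher-numbered) catch-all route
gcloud compute routes create restricted-apis \
    --network=my-vpc \
    --destination-range=199.36.153.4/30 \
    --next-hop-gateway=default-internet-gateway \
    --priority=100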
Why the others fail
A & B both send Internet traffic directly to the internet gateway (instead of via VPN) and mis‑route the Google API VIP back to on‑prem.
D also routes the API VIP over your VPN, defeating Private Google Access.
References
Controlling route order for BGP‑advertised defaults: “You just need to control the order of routes (priority) so that the routes is not suppressed.” Google Cloud Community
Private Google Access hybrid routing: “If the default internet‑gateway route has been overridden, create a custom route for the 199.36.153.4/30 range with next‑hop default-internet-gateway and a priority higher than the custom default route.” (chou.se)
Question 16 Single Choice
When deploying a global external TCP load balancing solution and aiming to retain the original layer 3 payload's source IP address, which type of load balancer should you opt for?
Explanation

The only Google Cloud load balancer that forwards packets at Layer 4 without proxying them—and thus preserves the original IP in the L3 packet—is the Network (passthrough) Load Balancer.
Correct Choice: II. Network load balancer
Why II is correct
Passthrough at Layer 4. A Network Load Balancer (also called a passthrough Network LB) operates at the transport layer and doesn’t terminate or proxy connections. It forwards the client’s packets unmodified to your backends, so the source IP in the IP header remains intact Google Cloud.
Preserves original source address. Google’s documentation explicitly calls out passthrough Network LBs when you need to forward “original client packets to the backends un‑proxied—for example, if you need the client source IP address to be preserved” Google Cloud.
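For reference, a hedged sketch of a backend-service-based external passthrough Network Load Balancer, using hypothetical names and a single region; backends behind it see the client’s original source IP:
# Regional health check, backend service, and passthrough forwarding rule
gcloud compute health-checks create tcp nlb-hc --region=us-central1 --port=80
gcloud compute backend-services create nlb-bes \
    --region=us-central1 --load-balancing-scheme=EXTERNAL --protocol=TCP \
    --health-checks=nlb-hc --health-checks-region=us-central1
gcloud compute backend-services add-backend nlb-bes \
    --region=us-central1 --instance-group=web-ig --instance-group-zone=us-central1-a
gcloud compute forwarding-rules create nlb-fr \
    --region=us-central1 --load-balancing-scheme=EXTERNAL \
    --ports=80 --backend-service=nlb-bes --backend-service-region=us-central1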
Why the other options don’t fit
I. HTTP(S) load balancer. Terminates traffic at Layer 7 and re‑issues requests to backends; only headers like X-Forwarded-For carry the client IP, and the IP header is rewritten.
III. Internal load balancer. Is regional and serves VPC‑internal traffic only; it’s not an external, global service.
IV. TCP/SSL proxy load balancer. Operates globally, but terminates the client connection at the proxy. Although you can re‑inject the client IP via the PROXY protocol, the load balancer still rewrites the IP header by default, requiring special backend support.
References
Passthrough Network Load Balancer overview: “You need to forward original client packets to the backends un‑proxied—for example, if you need the client source IP address to be preserved.” Google Cloud
Choosing a load balancer: “Passthrough Network Load Balancers preserve client source IP addresses. Passthrough Network Load Balancers also support additional protocols like UDP, ESP, and ICMP.” Google Cloud
Additional References:
https://cloud.google.com/load-balancing/docs/tcp/setting-up-tcp#proxy-protocol
https://cloud.google.com/load-balancing/docs/choosing-load-balancer#flow_chart
https://cloud.google.com/load-balancing/docs/tcp#target-proxies
Question 17 Multiple Choice
Your organization is collaborating with a partner to deliver a solution for a client. Both entities utilize Google Cloud Platform (GCP). Within the partner's network, there are applications requiring access to certain resources within your company's Virtual Private Cloud (VPC). Notably, there is no CIDR overlap between the VPCs. How can you achieve this connectivity requirement while maintaining security? Select two options.
Explanation

Best choices → I and III
I. Establish VPC Network Peering. VPC Peering lets your partner’s VPC and your company’s VPC communicate directly over Google’s private backbone using internal IPs. Peering works across different projects and organizations—as long as there’s no CIDR overlap—and avoids internet egress charges for peer-to-peer traffic (cloud.google.com).
III. Set up Cloud VPN. A site-to-site IPsec VPN provides an encrypted tunnel between the two VPCs. Use Cloud VPN when you need an extra layer of security or if you want independent control over routing (via BGP or static routes). It works seamlessly across organizations (cloud.google.com).
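For the peering half, a minimal sketch with hypothetical project and network names; the partner must create the mirror-image peering in their project before the connection becomes ACTIVE:
# Your side of the peering; the partner runs the matching command against their VPC
gcloud compute networks peerings create to-partner \
    --network=company-vpc \
    --peer-project=partner-project \
    --peer-network=partner-vpc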
Why the other options don’t fit
II. Shared VPC. Shared VPC only shares subnets within the same organization by design. You cannot host a Shared VPC across two separate GCP organizations.
IV. Dedicated Interconnect. Interconnects link on-premises networks to Google Cloud, not cloud-to-cloud VPCs. It’s also more complex and costly than Cloud VPN or peering.
V. Cloud NAT. Cloud NAT provides outbound internet access for private-IP VMs. It doesn’t create connectivity between two VPC networks.
By combining VPC Peering for direct internal-IP communication with Cloud VPN for encrypted, policy-driven tunnels, you satisfy both connectivity and security requirements with minimal complexity.
Question 18 Single Choice
Which option should you select to create a direct connection to Google for accessing Cloud SQL through a public IP address without relying on a third-party service provider?
Explanation

Correct Answer: II. Direct Peering
Why Direct Peering is the right choice
Direct path to Google public services. Direct Peering lets you establish a BGP session between your on‑premises edge router and Google’s edge network, giving you a high‑throughput, low‑latency path directly into Google’s public IP space. You can then reach Cloud SQL instances configured with public IP addresses over that private peering link—without ever traversing the open internet or involving a third‑party carrier. Google Cloud
No third‑party involvement. Unlike Carrier Peering, Direct Peering is arranged directly between your network and Google. There’s no intermediary service provider to manage or bill you.
Zero setup or maintenance cost. Google doesn’t charge for Direct Peering itself. You only need to handle your side of the peering session.
Why the other options aren’t suitable
I. Carrier Peering. Requires a third‑party service provider (carrier) to carry your traffic to Google’s edge. The question explicitly calls for no third‑party involvement.
III. Dedicated Interconnect. Provides a private link into your VPC’s internal IP space (RFC 1918 addresses), not directly into Google’s public IP space. It also incurs significant monthly commitments and setup complexity.
IV. Partner Interconnect. Similar to Dedicated Interconnect in terms of complexity and cost, and still relies on a partner to provision the VLAN attachment. It’s designed for private (RFC 1918) VPC connectivity, not public‑IP access to Cloud SQL.
Question 19 Single Choice
You've implemented a new internal application offering HTTP and TFTP services to on-premises hosts. To distribute traffic across several Compute Engine instances while ensuring clients remain connected to a specific instance across both services, what session affinity option should you select?
Explanation

Correct session-affinity choice → II. Client IP-based (CLIENT_IP)
With Client IP affinity, the load balancer hashes on the client’s source IP address (ignoring protocol and port), so all traffic from the same on-prem host lands on the same backend VM.
That meets the requirement that an on-prem client using both HTTP (TCP) and TFTP (UDP) stay pinned to one instance.
Google Cloud documentation lists CLIENT_IP (a hash of the client’s source IP and the load balancer’s destination IP, with no protocol or port) as an internal passthrough Network Load Balancer option, recommending it when you need cross-protocol stickiness (cloud.google.com).
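Assuming an existing internal passthrough Network Load Balancer backend service (hypothetical name app-bes in us-central1), the affinity is set on the backend service:
# Pin each client IP to one backend across TCP (HTTP) and UDP (TFTP) flows
gcloud compute backend-services update app-bes \
    --region=us-central1 \
    --session-affinity=CLIENT_IP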
Why the other options don’t satisfy the goal
I. No session affinity (NONE) – The balancer would hash each new connection independently, so an on-prem host could hit different backends for HTTP and TFTP.
III. Client IP and protocol (CLIENT_IP_PROTO) – The hash includes the protocol field, so the TCP (HTTP) flow and the UDP (TFTP) flow can land on different VMs.
IV. Client IP, port, and protocol (CLIENT_IP_PORT_PROTO) – Even stricter: each combination of client IP, protocol, and source port is treated separately, so the two services won’t reliably share a backend.
By choosing Client IP-based affinity, you provide consistent, lowest-latency stickiness across both services without over-constraining the hash.
Question 20 Single Choice
You have two routers in your on-premises data center, both connected to Google Cloud Platform (GCP) via VPNs. However, despite having two connections, all traffic is routed through a single VPN instead of being evenly distributed across both connections. Your troubleshooting reveals the following:
- Both on-premises routers have the same Autonomous System Number (ASN).
- The routing tables and priorities are identical on both on-premises routers.
- Each on-premises router has a VPN configured, both connecting to a single Cloud Router.
- VPN logs indicate "no-proposal-chosen" lines during connection attempts.
- One of the on-premises routers fails to establish a BGP session with the Cloud Router.
What is the most probable cause of this issue?
Explanation

Correct option: A — One of the VPN tunnels is mis‑configured.
The tell‑tale sign is the no‑proposal‑chosen messages in your VPN logs. Google Cloud explains that this error means the two peers “were unable to agree on a set of ciphers,” which happens only when the IKE or IPsec parameters don’t match. A tunnel stuck in this state never reaches ESTABLISHED, so the Cloud Router on GCP can’t bring up the BGP session tied to it. Because only one tunnel is healthy, all traffic naturally flows through that single path; ECMP load balancing requires both tunnels to be up and exchanging routes (Google Cloud).
Why the other choices are not the root cause
B. A firewall is blocking the second VPN. A packet‑filter block would usually appear in logs as time‑outs or unreachable ports, not the specific IKE negotiation error no‑proposal‑chosen. The error explicitly points to a mismatch in the VPN proposal, not dropped traffic.
C. No load balancer is present. Google Cloud VPN gateways automatically distribute flows across multiple healthy tunnels; you don’t deploy a separate load balancer for this. Load balancing fails only because one tunnel never becomes healthy in the first place (Google Cloud).
D. BGP sessions can’t be established to the Cloud Router. This is a visible symptom, not the underlying cause. BGP remains down because the IPsec tunnel is mis‑configured and never reaches the ESTABLISHED state required for the BGP handshake. Fixing the IKE/IPsec parameters resolves the BGP issue automatically (Google Cloud).
In short: align the IKE and IPsec settings (encryption suite, DH group, lifetime, etc.) on the second on‑premises router with the settings configured for the Cloud VPN tunnel. Once the tunnel comes up, BGP will establish and traffic will balance across both connections as intended.
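To confirm where the second tunnel is stuck, you could inspect its status and the Cloud Router’s BGP state (tunnel and router names here are placeholders):
# Tunnel phase/status; a proposal mismatch keeps it from reaching ESTABLISHED
gcloud compute vpn-tunnels describe tunnel-2 --region=us-central1 \
    --format="value(status,detailedStatus)"
# BGP session state for the Cloud Router attached to the tunnels
gcloud compute routers get-status cloud-router-1 --region=us-central1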



