

Professional Cloud Architect - Google Cloud Certified Exam Questions
Question 11 Single Choice
A small percentage of API requests in your microservices-based application are experiencing high latency. Each request may pass through multiple services, and you need to identify which service is causing the delay. What should you do?
Explanation

Click "Show Answer" to see the explanation here
Correct Answer:
D. Instrument your application with Stackdriver Trace to break down request latencies across microservices.
Explanation:
In a microservices-based application, where each request may pass through multiple services, latency issues can originate from different components.
✅ Google Cloud’s Stackdriver Trace (Cloud Trace) is the best solution because:
It tracks request flows across microservices, helping pinpoint the exact service causing the delay.
It provides detailed latency breakdowns for each API request.
It helps identify performance bottlenecks in dependencies like databases, caches, or external services.
Automatically integrates with GKE, Compute Engine, App Engine, and Cloud Run.
How It Works:
Instrument your services with Stackdriver Trace SDKs (or use built-in integration for Google Cloud services).
Each request generates a trace ID that follows the request across all microservices.
Use Stackdriver Trace UI to analyze end-to-end request latency and identify slow services or operations.
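For a quick sanity check that traces are being collected, a minimal shell sketch (assuming the gcloud CLI is authenticated and PROJECT_ID is set; the REST call below targets the Cloud Trace v1 API):
# Enable the Cloud Trace API (no effect if it is already enabled)
gcloud services enable cloudtrace.googleapis.com
# Spot-check collected traces through the Cloud Trace v1 REST API
curl -s -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://cloudtrace.googleapis.com/v1/projects/${PROJECT_ID}/traces"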
✅ Supporting Documentation:
Google Cloud Trace Overview:
https://cloud.google.com/trace/docs/
Distributed Tracing for Microservices:
https://cloud.google.com/blog/topics/developers-practitioners/distributed-tracing-microservices-google-cloud
Why Other Options Are Incorrect:
A. Configure timeouts in your application to fail slow requests more quickly. ❌
Timeouts help mitigate latency issues but do not diagnose the root cause of high-latency services.
Without identifying which service is slow, adding timeouts can lead to unnecessary failures and degraded user experience.
✅ Reference:
https://cloud.google.com/architecture/microservices-timeouts-retries
B. Send custom metrics for each request to Stackdriver Monitoring. ❌
Metrics provide aggregated latency data, but they do not show per-request breakdowns across microservices.
Stackdriver Monitoring does not provide trace-level visibility like Stackdriver Trace.
✅ Reference:
https://cloud.google.com/monitoring/docs/
C. Use Stackdriver Monitoring to analyze insights when API latency is high. ❌
Stackdriver Monitoring detects latency spikes but does not trace requests across microservices.
Stackdriver Trace is required for end-to-end request latency analysis.
✅ Reference:
https://cloud.google.com/monitoring/docs/overview
Conclusion:
The best approach is D. Instrument your application with Stackdriver Trace to analyze end-to-end request latency across microservices.
Pinpoints slow microservices.
Provides real-time tracing insights.
Helps optimize API performance.
📌 Final Reference:
https://cloud.google.com/trace/docs/
Question 12 Single Choice
You have deployed several preemptible Linux virtual machine instances on Google Compute Engine. To ensure that your application properly shuts down before the virtual machines are preempted, what should you do?
Explanation

Click "Show Answer" to see the explanation here
Correct Answer:
C. Create a shutdown script and set it as the value for a new metadata entry with the key shutdown-script in the Cloud Console when creating the instance.
Explanation:
Preemptible Google Compute Engine (GCE) instances can be terminated at any time by Google Cloud. To ensure that an application properly shuts down before preemption:
✅ Best Approach: Use the shutdown-script Metadata Key
Google Cloud allows configuring shutdown scripts via instance metadata.
These scripts execute automatically when a VM is preempted, ensuring graceful shutdown.
The script should be stored under the custom metadata key (shutdown-script) and can include commands to flush logs, close database connections, or gracefully terminate services.
Example Implementation:
Create a shutdown script (e.g., shutdown.sh):
#!/bin/bash
echo "Shutting down gracefully..." >> /var/log/shutdown.log
systemctl stop my-app.service
sync
Set the script in the instance metadata (shutdown-script key):
gcloud compute instances add-metadata my-instance \
  --metadata shutdown-script="$(cat shutdown.sh)"
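Beyond the shutdown script itself, a workload can also poll the documented preempted metadata key to detect preemption early; a minimal sketch run from inside the VM:
# Returns "TRUE" once the instance has been preempted, "FALSE" otherwise
curl -s "http://metadata.google.internal/computeMetadata/v1/instance/preempted" \
  -H "Metadata-Flavor: Google"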
✅ Supporting Documentation:
Shutdown Scripts in Compute Engine:
https://cloud.google.com/compute/docs/shutdownscript
Why Other Options Are Incorrect:
A. Create a shutdown script named k99.shutdown in the /etc/rc.6.d/ directory. ❌
Incorrect because modern Linux systems use systemd, not SysV init, meaning scripts in /etc/rc.6.d/ may not run on all distributions.
Google Cloud metadata-based shutdown scripts are a more reliable approach.
✅ Reference:
https://cloud.google.com/compute/docs/instances/startup-scripts/linux
B. Register a shutdown script as an xinetd service in Linux and configure a Stackdriver endpoint check to call the service. ❌
xinetd is a legacy daemon for managing network services—it is not needed for handling shutdown events.
Stackdriver Monitoring (now Cloud Monitoring) is used for metrics/logging, not triggering shutdown scripts.
✅ Reference:
https://cloud.google.com/monitoring/docs
D. Register a shutdown script as an xinetd service in Linux and use gcloud compute instances add-metadata to specify the service URL as the value for shutdown-script-url. ❌
Similar to Option B, xinetd is unrelated to shutdown script execution.
While shutdown-script-url is also a valid metadata key, it must point to a script file stored in Cloud Storage, not to the URL of a network service.
✅ Reference:
https://cloud.google.com/compute/docs/shutdownscript
Conclusion:
The best solution is C. Use the shutdown-script metadata key, as it:
Executes automatically before preemption.
Allows clean shutdown of applications and services.
Does not rely on outdated or complex mechanisms.
📌 Final Reference:
https://cloud.google.com/compute/docs/shutdownscript
Question 13 Single Choice
You are deploying an application on App Engine that needs to integrate with an on-premises database. For security reasons, the on-premises database must not be accessible via the public internet.
What should you do?
Explanation

Click "Show Answer" to see the explanation here
Correct Answer:
✅ D. Deploy the application on App Engine flexible environment and use Cloud VPN to establish a private connection to the on-premises database.
Explanation:
Since the on-premises database must not be accessible via the public internet, you need to:
Use a private connection between Google Cloud and the on-premises network.
Deploy the application on App Engine flexible environment, as App Engine standard does not support direct network access to Cloud VPN.
Establish a Cloud VPN connection to securely connect App Engine to the on-premises database over a private network.
Why This Works?
✅ App Engine Flexible Environment
Supports network configuration that allows direct communication with a private on-premises network via Cloud VPN.
Uses Compute Engine VMs, which have configurable networking capabilities.
Allows private IP connectivity to non-publicly accessible resources.
✅ Cloud VPN
Enables a secure, private connection between Google Cloud and on-premises networks.
Traffic never traverses the public internet, ensuring security compliance.
Implementation Steps:
Set up Cloud VPN to establish a private connection to the on-premises database.
Configure firewall rules to allow App Engine flexible instances to communicate with the on-prem database.
Ensure the App Engine service account has the necessary IAM roles to access the database.
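A condensed sketch of step 1 using HA VPN (all resource names, the region, the ASN, the peer IP 203.0.113.10, and the shared secret are placeholders; the on-premises device needs a matching configuration):
# HA VPN gateway plus a Cloud Router for BGP route exchange
gcloud compute vpn-gateways create app-vpn-gw --network=default --region=us-central1
gcloud compute routers create app-vpn-router --network=default --region=us-central1 --asn=65001
# Describe the on-premises VPN device and bring up one tunnel to it
gcloud compute external-vpn-gateways create onprem-gw --interfaces=0=203.0.113.10
gcloud compute vpn-tunnels create app-tunnel0 \
  --vpn-gateway=app-vpn-gw --peer-external-gateway=onprem-gw \
  --peer-external-gateway-interface=0 --interface=0 \
  --region=us-central1 --ike-version=2 --shared-secret=SECRET \
  --router=app-vpn-router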
✅ Supporting Documentation:
App Engine Flexible Networking:
https://cloud.google.com/appengine/docs/flexible
Using Cloud VPN for Hybrid Connectivity:
https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview
Why Other Options Are Incorrect?
A. Deploy the application on App Engine standard environment and use App Engine firewall rules to restrict access to the open on-premises database. ❌
App Engine standard does not support direct network access to Cloud VPN or private IPs.
Firewall rules do not provide a private connection to the on-premises database.
This option would still expose the database over the public internet.
✅ Reference:
https://cloud.google.com/appengine/docs/standard
B. Deploy the application on App Engine standard environment and use Cloud VPN to establish a private connection to the on-premises database. ❌
App Engine standard cannot connect directly to Cloud VPN or private networks.
Standard environment apps are sandboxed and do not allow custom networking configurations.
✅ Reference:
https://cloud.google.com/appengine/docs/standard
C. Deploy the application on App Engine flexible environment and use App Engine firewall rules to restrict access to the on-premises database. ❌
Firewall rules only restrict access but do not establish a private connection.
A VPN is still required to ensure the on-premises database is accessible privately.
✅ Reference:
https://cloud.google.com/appengine/docs/flexible
Conclusion:
The best approach is D. Deploy the application on App Engine flexible environment and use Cloud VPN to establish a private connection to the on-premises database, because:
App Engine flexible supports private networking.
Cloud VPN ensures secure, private connectivity.
It aligns with Google Cloud best practices for hybrid networking.
📌 Final Reference:
https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview
Question 14 Single Choice
Your organization has a 3-tier web application deployed within the same network on Google Cloud Platform. Each tier—web, API, and database—scales independently. The network traffic should flow from the web tier to the API tier, and then to the database tier, ensuring that traffic does not flow directly between the web and database tiers.
How should you configure the network to enforce this traffic flow?
Explanation

Click "Show Answer" to see the explanation here
Correct Answer:
D. Use tags for each tier and configure firewall rules to enforce the desired traffic flow.
Explanation:
To enforce controlled traffic flow between the web, API, and database tiers, firewall rules should be used. In Google Cloud, firewall rules can be applied based on network tags, ensuring:
✅ Why Use Firewall Rules with Tags?
Network tags allow easy grouping of instances based on function (e.g., web-tier, api-tier, db-tier).
Firewall rules restrict traffic between tiers, ensuring that:
Web tier can only communicate with API tier.
API tier can communicate with both web and database tiers.
Database tier only accepts connections from the API tier.
More scalable and manageable than manual instance-based firewall rules.
Implementation Steps:
Assign network tags to instances (a gcloud sketch follows below):
Web instances → web-tier
API instances → api-tier
Database instances → db-tier
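A minimal tagging sketch (instance names and zone are placeholders):
# Attach the tier tag to each instance
gcloud compute instances add-tags web-1 --tags=web-tier --zone=us-central1-a
gcloud compute instances add-tags api-1 --tags=api-tier --zone=us-central1-a
gcloud compute instances add-tags db-1 --tags=db-tier --zone=us-central1-a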
Create firewall rules:
Allow traffic from web-tier → api-tier:
gcloud compute firewall-rules create allow-web-to-api \
  --direction=INGRESS --action=ALLOW --priority=1000 \
  --source-tags=web-tier --target-tags=api-tier \
  --rules=tcp:443,tcp:80
Allow traffic from api-tier → db-tier:
gcloud compute firewall-rules create allow-api-to-db \
  --direction=INGRESS --action=ALLOW --priority=1000 \
  --source-tags=api-tier --target-tags=db-tier \
  --rules=tcp:5432
Deny direct traffic from web-tier → db-tier:
gcloud compute firewall-rules create deny-web-to-db \
  --direction=INGRESS --action=DENY --priority=900 \
  --source-tags=web-tier --target-tags=db-tier \
  --rules=tcp:5432
✅ Supporting Documentation:
Firewall Rules in Google Cloud:
https://cloud.google.com/vpc/docs/firewalls
Using Tags in Firewall Rules:
https://cloud.google.com/vpc/docs/firewalls#network-tags
Why Other Options Are Incorrect:
A. Assign each tier to a different subnetwork. ❌
Subnetworks define IP ranges but do not enforce security policies.
Without firewall rules, instances in separate subnets can still communicate freely.
Subnet segmentation alone does not restrict traffic flow.
✅ Reference:
https://cloud.google.com/vpc/docs/subnets
B. Configure software-based firewalls on individual virtual machines (VMs). ❌
Managing firewalls on each VM is complex and not scalable for large deployments.
Google Cloud firewall rules apply at the VPC level, making VM-based firewalls unnecessary.
More error-prone and difficult to maintain than centrally managed firewall rules.
✅ Reference:
https://cloud.google.com/vpc/docs/firewalls
C. Use tags for each tier and configure routes to enforce the desired traffic flow. ❌
Routes control how traffic is forwarded, but they do not enforce security restrictions.
Firewall rules, not routes, define which traffic is allowed or denied.
A route cannot block or enforce directional traffic flows.
✅ Reference:
https://cloud.google.com/vpc/docs/routes
Conclusion:
The best solution is D. Use tags for each tier and configure firewall rules to enforce the desired traffic flow, because:
Firewall rules enforce traffic control between tiers.
Tags provide a scalable and flexible way to manage security policies.
This ensures that the web tier only communicates with the API tier, and the database remains isolated.
📌 Final Reference:
https://cloud.google.com/vpc/docs/firewalls#network-tags
Question 15 Single Choice
You have discovered an error in your App Engine application caused by missing Cloud Datastore indexes. To resolve this, you have created a YAML configuration file specifying the required indexes.
How should you deploy these new indexes to Cloud Datastore?
Explanation

Click "Show Answer" to see the explanation here
Correct Answer:
✅ A. Use the command gcloud datastore indexes create and specify your configuration file.
Explanation:
To deploy new indexes to Cloud Datastore, you need to use the Google Cloud CLI (gcloud) to apply the YAML configuration file that defines the indexes.
Correct Procedure to Deploy Indexes:
Create or modify an index.yaml file that defines the required indexes.
Deploy the indexes using the following command:
gcloud datastore indexes create index.yaml
Verify that the indexes are being built by checking the status in the Google Cloud Console → Datastore → Indexes.
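For illustration, a minimal index.yaml created and deployed from a shell; the Task kind and its properties are made up for the example:
# Write a minimal index definition
cat > index.yaml <<'EOF'
indexes:
- kind: Task
  properties:
  - name: done
  - name: priority
    direction: desc
EOF
# Submit the index configuration to Cloud Datastore
gcloud datastore indexes create index.yaml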
✅ Why This Works:
gcloud datastore indexes create is the recommended way to deploy index configurations.
Indexes must be explicitly created in Cloud Datastore before queries can use them.
Once submitted, indexes are automatically built by Cloud Datastore and available for querying.
✅ Supporting Documentation:
Managing Datastore Indexes:
https://cloud.google.com/datastore/docs/tools/indexconfig
Deploying Indexes Using gcloud CLI:
https://cloud.google.com/datastore/docs/creating-indexes
Why Other Options Are Incorrect?
B. Upload the YAML configuration file to App Engine’s default Cloud Storage bucket, allowing App Engine to detect the new indexes. ❌
App Engine does not automatically detect new index configurations from Cloud Storage.
Cloud Datastore requires an explicit deployment using the gcloud CLI.
Storing the YAML file in Cloud Storage has no effect on Datastore indexing.
✅ Reference:
https://cloud.google.com/datastore/docs/tools/indexconfig
C. In the Google Cloud Console, navigate to Datastore Admin, delete the current indexes, and upload the new configuration file. ❌
There is no option to manually upload an index.yaml file in the Google Cloud Console.
Deleting indexes can break existing queries—instead, new indexes should be added incrementally.
Indexes are created and modified using the gcloud CLI or the Datastore Admin API.
✅ Reference:
https://cloud.google.com/datastore/docs/tools/indexconfig
D. Send an HTTP request to the built-in Python module to upload the index configuration file to your application. ❌
There is no built-in Python module in App Engine for deploying Datastore indexes.
Index creation must be explicitly managed via gcloud datastore indexes create.
Datastore does not support uploading index configurations via HTTP requests.
✅ Reference:
https://cloud.google.com/datastore/docs/creating-indexes
Conclusion:
The best approach is A. Use the command gcloud datastore indexes create and specify your configuration file, because:
This is the official method to deploy Cloud Datastore indexes.
It ensures the indexes are properly registered and built.
It follows Google Cloud’s best practices for managing index configurations.
📌 Final Reference:
https://cloud.google.com/datastore/docs/tools/indexconfig
Question 16 Single Choice
You are working in a highly secured environment where public internet access from Compute Engine VMs is not allowed. There is no VPN connection to access an on-premises file server yet. You need to install specific software on a Compute Engine instance.
How should you install the software?
Explanation

Click "Show Answer" to see the explanation here
Correct Answer:
✅ A. Upload the installation files to Cloud Storage, configure the VM on a subnet with Private Google Access, assign only an internal IP to the VM, and download the files using gsutil.
Explanation:
Since the Compute Engine VM is in a highly secured environment with no public internet access, and there is no VPN connection to an on-prem file server, you need a private and secure way to download installation files.
Why This Works?
Cloud Storage is the best place to store installation files because it is a fully managed, secure, and scalable storage service.
Private Google Access (PGA) allows VMs with only internal IPs to access Google APIs and services (such as Cloud Storage) without requiring public internet access.
Using gsutil over Private Google Access ensures secure and restricted access to the installation files.
Implementation Steps:
Upload the installation files to a Cloud Storage bucket:
gsutil cp software_package.tar.gz gs://my-secure-bucket/
Ensure the VM is in a VPC subnet with Private Google Access enabled:
In Google Cloud Console, navigate to VPC network → Subnets → Edit subnet.
Enable Private Google Access for the subnet.
Assign only an internal IP to the VM (no public IP).
Download the files to the VM using gsutil:
gsutil cp gs://my-secure-bucket/software_package.tar.gz .
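Step 2 can also be done from the CLI; a sketch assuming a hypothetical subnet name and region:
# Enable Private Google Access on the subnet
gcloud compute networks subnets update my-subnet \
  --region=us-central1 \
  --enable-private-ip-google-access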
✅ Supporting Documentation:
Private Google Access Overview:
https://cloud.google.com/vpc/docs/configure-private-google-access
Using gsutil with Private Google Access:
https://cloud.google.com/storage/docs/gsutil
Why Other Options Are Incorrect?
B. Upload the installation files to Cloud Storage, configure firewall rules to block all traffic except the IP address range for Cloud Storage, and download the files using gsutil. ❌
Firewall rules do not control access to Google services like Cloud Storage—Private Google Access does.
Google Cloud services like Cloud Storage do not have fixed IP address ranges that can be explicitly allowed in firewall rules.
PGA is the recommended approach for accessing Cloud Storage without public internet.
✅ Reference:
https://cloud.google.com/vpc/docs/configure-private-google-access
C. Upload the installation files to Cloud Source Repositories, configure the VM on a subnet with Private Google Access, assign only an internal IP to the VM, and download the files using gcloud. ❌
Cloud Source Repositories is designed for source code, not for large installation files.
Cloud Storage is the recommended service for storing and distributing files.
Downloading binary files using gcloud source repos is not the correct approach.
✅ Reference:
https://cloud.google.com/source-repositories/docs/
D. Upload the installation files to Cloud Source Repositories, configure firewall rules to block all traffic except the IP address range for Cloud Source Repositories, and download the files using gsutil. ❌
Cloud Source Repositories is not meant for binary file distribution.
gsutil is used to download from Cloud Storage, not Cloud Source Repositories.
Firewall rules cannot selectively allow Cloud Source Repositories while blocking all other internet access.
✅ Reference:
https://cloud.google.com/source-repositories/docs/access-control
Conclusion:
The best approach is A. Upload the installation files to Cloud Storage, configure the VM on a subnet with Private Google Access, assign only an internal IP to the VM, and download the files using gsutil, because:
Cloud Storage is the recommended service for securely storing installation files.
Private Google Access allows secure access to Cloud Storage without public internet.
The VM remains private and meets the security requirements.
📌 Final Reference:
https://cloud.google.com/vpc/docs/configure-private-google-access
Question 17 Single Choice
You have built a deployment pipeline that pushes source code changes to your infrastructure in instance groups with self-healing capabilities. A recent change has negatively impacted a key performance indicator (KPI), and the root cause is unknown. Investigating the issue could take up to a week.
What should you do?
Explanation

Click "Show Answer" to see the explanation here
Correct Answer:
✅ B. Revert the source code change and rerun the deployment pipeline.
Explanation:
Since the recent deployment negatively impacted a key performance indicator (KPI) and the root cause is unknown, the best approach is to:
Revert the source code to the previous stable version.
Rerun the deployment pipeline to automatically push the stable version back to production.
This ensures:
Minimal downtime while the issue is investigated.
Automated rollback using the deployment pipeline, which is safer and prevents manual errors.
Consistent deployment practices, avoiding ad-hoc fixes on individual servers.
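In a Git-driven pipeline the revert is typically a single commit; a sketch assuming the bad change is the latest commit on main and that a push to main triggers the pipeline:
# Create a commit that undoes the latest change, then let the pipeline redeploy
git revert --no-edit HEAD
git push origin main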
✅ Supporting Documentation:
Best Practices for Deployment Pipelines:
https://cloud.google.com/architecture/devops/devops-tech-continuous-deployment
Why Other Options Are Incorrect?
A. Log in to a server and manually iterate on the fix locally. ❌
Manually debugging in production is risky—changes could introduce more issues.
Instance groups have self-healing capabilities, meaning manual fixes may be lost when instances are recreated.
The deployment pipeline should be used for controlled and tested updates.
✅ Reference:
https://cloud.google.com/compute/docs/instance-groups
C. Log into the affected servers and manually replace the bad code with the previous version. ❌
Manual changes to individual servers will be overridden when the instance group recreates instances.
Not scalable, as instance groups automatically replace unhealthy VMs.
Deployment pipelines should manage rollbacks, not manual updates on individual machines.
✅ Reference:
https://cloud.google.com/deployment-manager/docs
D. Change the instance group template to the previous version and delete all instances. ❌
Overly disruptive—deleting instances can cause downtime.
Changing the template is useful, but the pipeline should handle rollbacks automatically.
Reverting the source code (Option B) is faster and follows DevOps best practices.
✅ Reference:
https://cloud.google.com/compute/docs/instance-templates
Conclusion:
The best approach is B. Revert the source code change and rerun the deployment pipeline, because:
It restores a stable version quickly while allowing investigation.
It prevents manual intervention in production.
It aligns with best DevOps practices.
📌 Final Reference:
https://cloud.google.com/architecture/devops/devops-tech-continuous-deployment
Question 18 Single Choice
Your customer support tool logs all email and chat conversations to Cloud Bigtable for retention and analysis. To comply with privacy and security requirements, what is the recommended approach for removing personally identifiable information (PII) and payment card information (PCI) before storing the data?
Explanation

Click "Show Answer" to see the explanation here
Correct Answer:
✅ C. De-identify the data using the Cloud Data Loss Prevention (DLP) API.
Explanation:
To comply with privacy and security requirements when storing personally identifiable information (PII) and payment card information (PCI) in Cloud Bigtable, the best approach is to use Google Cloud Data Loss Prevention (DLP) API for de-identification.
✅ Why Cloud DLP API?
Automatically detects and removes PII & PCI (e.g., names, emails, credit card numbers).
Supports tokenization, redaction, and pseudonymization, ensuring compliance with GDPR, HIPAA, and PCI DSS.
More reliable than regex-based solutions, which may miss complex data patterns.
Can apply irreversible transformations to ensure sensitive data is no longer identifiable.
Implementation Example:
Use Cloud DLP API to scan incoming data.
Apply de-identification techniques:
Redaction: Remove sensitive data.
Tokenization: Replace data with a token (Cloud DLP supports both reversible and non-reversible tokenization).
Format-preserving encryption (FPE): Protect structured data.
Store the de-identified data in Bigtable.
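A minimal sketch of steps 1-2 against the DLP v2 REST API, replacing detected credit card numbers with the infoType name; the sample text is invented and PROJECT_ID is assumed to be set:
# De-identify a sample string by replacing CREDIT_CARD_NUMBER matches
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://dlp.googleapis.com/v2/projects/${PROJECT_ID}/content:deidentify" \
  -d '{
    "item": {"value": "Customer card: 4111-1111-1111-1111"},
    "inspectConfig": {"infoTypes": [{"name": "CREDIT_CARD_NUMBER"}]},
    "deidentifyConfig": {
      "infoTypeTransformations": {
        "transformations": [
          {"primitiveTransformation": {"replaceWithInfoTypeConfig": {}}}
        ]
      }
    }
  }'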
✅ Supporting Documentation:
Cloud DLP Overview:
https://cloud.google.com/dlp/docs/deidentify-sensitive-data
DLP Tokenization Guide:
https://cloud.google.com/dlp/docs/pseudonymization
Why Other Options Are Incorrect?
A. Hash all data using SHA-256. ❌
Hashing is irreversible, making it unsuitable for structured data like emails or credit card numbers that may need retrieval.
Does not prevent data exposure if attackers can use dictionary attacks or brute-force lookup tables.
DLP tokenization or redaction is a better approach.
✅ Reference:
https://cloud.google.com/security/encryption-in-transit
B. Encrypt all data using elliptic curve cryptography (ECC). ❌
Encryption protects data in transit and at rest but does not remove PII or PCI before storage.
Decryption keys must be managed, creating additional security concerns.
DLP API offers de-identification, which is more suitable for compliance.
✅ Reference:
https://cloud.google.com/security/encryption-at-rest
D. Use regular expressions to identify and redact phone numbers, email addresses, and credit card numbers. ❌
Regex-based solutions are error-prone and can miss complex patterns in unstructured text.
Cloud DLP API is more reliable as it uses machine learning to detect sensitive data.
Regex requires constant updates, while Cloud DLP automatically adapts to new data formats.
✅ Reference:
https://cloud.google.com/dlp/docs/concepts-infotypes
Conclusion:
The best approach is C. De-identify the data using the Cloud Data Loss Prevention (DLP) API, because it:
Automatically detects and removes PII/PCI.
Supports compliance with security regulations (GDPR, HIPAA, PCI DSS).
Is more robust and scalable than regex or encryption alone.
📌 Final Reference:
https://cloud.google.com/dlp/docs/deidentify-sensitive-data
Question 19 Multiple Choice
The database administration team needs your assistance in improving the performance of their new MySQL database server running on Google Compute Engine. The database is used for importing and normalizing performance statistics and is hosted on a Debian Linux virtual machine with an n1-standard-8 configuration and 80 GB of SSD persistent disk. What should they modify to enhance performance?
Explanation

Click "Show Answer" to see the explanation here
Correct Answer:
C. Dynamically resize the SSD persistent disk to 500 GB.
E. Optimize batch jobs to use bulk inserts into the database.
Explanation:
To improve MySQL performance on Google Compute Engine, two key optimizations are required:
Resize the SSD persistent disk to 500 GB (Option C)
Google SSD persistent disks offer higher IOPS with increased disk size.
SSD persistent disk IOPS scale with provisioned size (roughly 30 IOPS per GB), so an 80 GB disk tops out around 2,400 IOPS while a 500 GB disk can reach about 15,000 IOPS, significantly improving read/write performance.
MySQL benefits from higher disk throughput, especially for large-scale data imports and query processing.
Google allows dynamic disk resizing without downtime, meaning this change can be applied immediately.
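A sketch of the resize, assuming a hypothetical disk name and zone and an ext4 data volume at /dev/sdb (resizing the disk does not grow the filesystem automatically):
# Grow the persistent disk
gcloud compute disks resize mysql-data-disk --size=500GB --zone=us-central1-a
# On the VM: expand the ext4 filesystem into the new capacity
sudo resize2fs /dev/sdb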
✅ Supporting Documentation:
Google Cloud SSD Persistent Disk Performance
https://cloud.google.com/compute/docs/disks/performance
Resizing persistent disks in Google Cloud:
https://cloud.google.com/compute/docs/disks/resize-persistent-disk
Optimize batch jobs to use bulk inserts into the database (Option E)
Bulk inserts reduce the number of disk writes and minimize transaction overhead, leading to faster import and normalization of performance data.
MySQL performs better when data is inserted in large batches instead of multiple individual insert statements.
Using INSERT ... VALUES (...), (...) or LOAD DATA INFILE significantly improves insert speeds.
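A bulk-load sketch from the shell; the database, table, and CSV layout are hypothetical:
# Load a CSV in one bulk operation instead of row-by-row INSERTs
mysql --local-infile=1 -u app_user -p perf_db \
  -e "LOAD DATA LOCAL INFILE 'stats.csv' INTO TABLE perf_stats FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';"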
✅ Supporting Documentation:
Optimizing MySQL Performance for Batch Inserts
https://dev.mysql.com/doc/refman/8.0/en/insert-optimization.html
MySQL Bulk Insert Best Practices:
https://cloud.google.com/sql/docs/mysql/import-export/importing
Why Other Options Are Incorrect:
A. Increase the virtual machine’s memory to 64 GB. ❌
While more memory can improve query caching, this is not the primary bottleneck in this case.
The current n1-standard-8 VM already provides 30 GB RAM, which is sufficient for MySQL unless monitoring indicates high memory usage.
Storage and query optimization are more impactful for performance improvements in this scenario.
✅ Reference:
https://cloud.google.com/compute/docs/machine-types
B. Create a new virtual machine running PostgreSQL instead of MySQL. ❌
Migrating to PostgreSQL does not inherently improve performance—both databases have similar capabilities for large-scale data ingestion.
Migration requires significant rework, which is not a practical solution for immediate performance improvement.
✅ Reference:
https://cloud.google.com/sql/docs/postgres
D. Migrate the performance metrics warehouse to BigQuery. ❌
BigQuery is optimized for analytical queries, not for transactional workloads like MySQL.
If the database is used for normalizing transactional data, BigQuery is not a direct replacement—instead, MySQL tuning is required.
✅ Reference:
https://cloud.google.com/bigquery/docs/introduction
Conclusion:
To improve MySQL performance on Google Compute Engine:
Increase the SSD persistent disk size to 500 GB (Option C) for better IOPS and disk performance.
Optimize batch jobs to use bulk inserts (Option E) to minimize transaction overhead and speed up data imports.
📌 Final References:
https://cloud.google.com/compute/docs/disks/performance
https://dev.mysql.com/doc/refman/8.0/en/insert-optimization.html
Question 20 Multiple Choice
A lead engineer has developed a custom tool for deploying virtual machines in the legacy data center and now wants to migrate it to the cloud. You want to advocate for using Google Cloud Deployment Manager instead. What are two business risks associated with migrating to Cloud Deployment Manager? (Choose two.)
Explanation

Click "Show Answer" to see the explanation here
Correct Answers:
B. Cloud Deployment Manager APIs could be deprecated in the future.
C. Cloud Deployment Manager is unfamiliar to the company's engineers.
Explanation:
When advocating for Google Cloud Deployment Manager, it's important to consider business risks that could affect adoption and long-term sustainability.
✅ Why These Are Business Risks?
1️⃣ B. Cloud Deployment Manager APIs could be deprecated in the future.
Google occasionally deprecates services when better alternatives emerge.
Deployment Manager is not actively updated and Google recommends Terraform for infrastructure as code (IaC).
If Deployment Manager is deprecated, the company may need to migrate to Terraform or another tool, increasing costs and effort.
✅ Supporting Documentation:
https://cloud.google.com/deployment-manager/docs/moving-to-terraform
2️⃣ C. Cloud Deployment Manager is unfamiliar to the company's engineers.
Requires retraining and ramp-up time, impacting productivity.
The lead engineer already has a custom deployment tool, meaning resistance to change is possible.
Terraform is more widely adopted and supports multi-cloud deployments, making it a more future-proof choice.
✅ Supporting Documentation:
https://cloud.google.com/docs/terraform
Why Other Options Are Incorrect?
A. Cloud Deployment Manager relies on Python. ❌
Incorrect because Deployment Manager uses YAML and Jinja2 templates, with optional Python support.
Python is not mandatory—most configurations can be done in YAML alone.
✅ Reference:
https://cloud.google.com/deployment-manager/docs/configuration/supported-python-runtime
D. Cloud Deployment Manager requires a Google APIs service account to operate. ❌
Not a significant business risk because service accounts are a standard practice for infrastructure automation tools.
Other IaC tools like Terraform also require service accounts for authentication.
✅ Reference:
https://cloud.google.com/iam/docs/service-accounts
E. Cloud Deployment Manager can permanently delete cloud resources. ❌
Not a risk specific to Deployment Manager—all IaC tools (Terraform, Ansible, etc.) have the ability to delete resources.
Safe deletion mechanisms (e.g., preview mode) exist to prevent accidental resource loss.
✅ Reference:
https://cloud.google.com/deployment-manager/docs/deployments/updating-deployments
F. Cloud Deployment Manager only automates Google Cloud resources. ❌
Not necessarily a business risk if the company is fully using Google Cloud.
However, if multi-cloud compatibility is required, Terraform is a better choice.
✅ Reference:
https://cloud.google.com/docs/terraform
Conclusion:
The two key business risks of using Cloud Deployment Manager are:
Potential deprecation (Option B)
Lack of familiarity among engineers (Option C)
📌 Final Reference:
https://cloud.google.com/deployment-manager/docs/moving-to-terraform



