

Professional Cloud Architect - Google Cloud Certified Exam Questions
Question 1 Single Choice
A customer is migrating an existing corporate application from an on-premises data center to Google Cloud Platform. The business owners want to minimize user disruption during the transition. Additionally, the security team has strict requirements for storing passwords. Which authentication strategy should be implemented?
Correct Answer:
B. Federate authentication via SAML 2.0 with the existing Identity Provider.
Explanation:
Federating authentication via SAML 2.0 with the existing Identity Provider (IdP) is the best strategy because:
Minimizes user disruption: Users can continue logging in with their corporate credentials without changing passwords or adopting new authentication methods.
Enhances security: Passwords remain stored and managed within the corporate IdP, meeting the security team’s strict requirements for password storage.
Supports Single Sign-On (SSO): Users can authenticate once and access Google Cloud resources without re-entering credentials.
Google Cloud supports SAML 2.0 integration with enterprise identity providers such as Active Directory Federation Services (AD FS), Okta, and Ping Identity.
✅ Supporting Documentation:
Setting up SAML for Google Cloud authentication:
https://cloud.google.com/identity/docs/setup/saml
Google Identity Platform Federation Options:
https://cloud.google.com/identity/docs/federation
Why Other Options Are Incorrect:
A. Use G Suite Password Sync to replicate passwords into Google. ❌
Incorrect because G Suite Password Sync copies password hashes from Active Directory into Google, so passwords end up stored outside the corporate identity provider.
It synchronizes Active Directory (AD) passwords to Google Workspace, but it does not provide federated authentication or SSO.
The security team has strict password storage policies, and replicating passwords into Google may not meet compliance requirements.
✅ Reference:
https://support.google.com/a/answer/2611859?hl=en
C. Provision users in Google using the Google Cloud Directory Sync tool. ❌
Incorrect because Google Cloud Directory Sync (GCDS) only synchronizes user accounts and groups but does not synchronize passwords.
GCDS does not provide authentication—it only mirrors identities from Active Directory or LDAP to Google Cloud.
This approach does not enable SSO, meaning users would have separate login credentials for Google Cloud, disrupting workflows.
✅ Reference:
https://cloud.google.com/architecture/identity/federating-gcp-with-active-directory-introduction
D. Require users to set their Google password to match their corporate password. ❌
Incorrect because:
Users would have to manage two separate passwords, increasing complexity and risk.
No enforcement mechanism ensures that users maintain password synchronization between corporate systems and Google.
Less secure than SAML federation, where authentication happens via a centralized IdP.
✅ Reference:
https://cloud.google.com/identity/docs/federation
Conclusion:
The best authentication strategy is B. Federate authentication via SAML 2.0 with the existing Identity Provider, because it:
Ensures seamless authentication without user disruption.
Meets security requirements by keeping passwords within the corporate system.
Supports Single Sign-On (SSO) for a better user experience.
📌 Final Reference:
https://cloud.google.com/identity/docs/setup/saml
Question 2 Single Choice
You are designing an architecture for an application running on Compute Engine, ensuring it meets the disaster recovery plan requirement of failing over to another region in case of a regional outage.
What should you do?
Correct Answer:
✅ C. Deploy the application on two Compute Engine instance groups, each in the same project but in different regions. Use the first instance group to serve traffic and configure HTTP Load Balancing to fail over to the standby instance group in case of a disaster.
Explanation:
To meet the disaster recovery plan requirement of failing over to another region in case of a regional outage, the best practice is to:
Deploy the application in multiple regions to ensure high availability.
Use Compute Engine instance groups instead of single instances for better scalability and fault tolerance.
Leverage HTTP Load Balancing with cross-region failover to direct traffic to a standby instance group when the primary region becomes unavailable.
✅ Why This Works:
Regional instance groups ensure that workloads can be distributed across multiple zones within a region.
Global HTTP(S) Load Balancing can automatically detect failures and route traffic to a healthy region.
Compute Engine Instance Groups provide auto-healing, load balancing, and automatic scaling.
Google Cloud's cross-region failover ensures traffic is rerouted seamlessly if the primary region fails.
Implementation Steps:
Deploy the application using two Compute Engine instance groups in different regions.
Configure HTTP(S) Load Balancing to route traffic to the primary region and fail over to the standby region if needed.
Enable health checks to detect when the primary instance group is down.
Point DNS at the load balancer's single global anycast frontend IP so clients are routed seamlessly (a gcloud sketch of this setup follows below).
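A minimal gcloud sketch of these steps, using placeholder names (web-template, web-us, web-eu) and example regions; it assumes the instance template already exists and that both groups expose a named port http:80:
# Two regional managed instance groups built from the same (pre-existing) template.
gcloud compute instance-groups managed create web-us \
    --region=us-central1 --template=web-template --size=2
gcloud compute instance-groups managed create web-eu \
    --region=europe-west1 --template=web-template --size=2
# One health check and one global backend service containing both groups.
gcloud compute health-checks create http web-hc --port=80
gcloud compute backend-services create web-backend \
    --global --protocol=HTTP --port-name=http --health-checks=web-hc
gcloud compute backend-services add-backend web-backend --global \
    --instance-group=web-us --instance-group-region=us-central1
gcloud compute backend-services add-backend web-backend --global \
    --instance-group=web-eu --instance-group-region=europe-west1
# URL map, proxy, and a single global (anycast) frontend.
gcloud compute url-maps create web-map --default-service=web-backend
gcloud compute target-http-proxies create web-proxy --url-map=web-map
gcloud compute forwarding-rules create web-fr \
    --global --target-http-proxy=web-proxy --ports=80
When health checks mark the primary region's backends unhealthy, the global load balancer shifts traffic to the remaining healthy region without any DNS change.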
✅ Supporting Documentation:
Google Cloud Disaster Recovery Planning:
https://cloud.google.com/architecture/dr-scenarios
Global HTTP Load Balancing for Failover:
https://cloud.google.com/load-balancing/docs/https/global-ext-backend-failover
Why Other Options Are Incorrect?
A. Deploy the application on two Compute Engine instances in the same project but different regions. Use the first instance to serve traffic and configure HTTP Load Balancing to fail over to the standby instance in case of a disaster. ❌
Incorrect because individual Compute Engine instances do not provide auto-healing or scalability.
Instance Groups provide automatic instance replacement, which a single instance does not.
✅ Reference:
https://cloud.google.com/compute/docs/instance-groups
B. Deploy the application on a single Compute Engine instance to serve traffic and configure HTTP Load Balancing to fail over to an instance running on-premises in case of a disaster. ❌
Not recommended because it introduces dependencies on an on-premises system, increasing complexity and latency.
Google Cloud best practices recommend failing over to another cloud region, not an on-premises instance.
✅ Reference:
https://cloud.google.com/architecture/dr-scenarios
D. Deploy the application on two Compute Engine instance groups, each in a separate project and different region. Use the first instance group to serve traffic and configure HTTP Load Balancing to fail over to the standby instance group in case of a disaster. ❌
Using multiple projects adds unnecessary complexity—IAM, networking, and load balancing become harder to manage.
Best practice is to keep both instance groups within the same project for better policy and resource management.
✅ Reference:
https://cloud.google.com/load-balancing/docs/https/global-ext-backend-failover
Conclusion:
The best approach is C. Deploy the application on two Compute Engine instance groups, each in the same project but in different regions. Use the first instance group to serve traffic and configure HTTP Load Balancing to fail over to the standby instance group in case of a disaster.
✔ Ensures regional failover with minimal downtime
✔ Uses instance groups for scalability and auto-healing
✔ Leverages HTTP Load Balancing for intelligent traffic routing
📌 Final Reference:
https://cloud.google.com/load-balancing/docs/https/global-ext-backend-failover
Question 3 Single Choice
Your application is required to process credit card transactions. To minimize the scope of Payment Card Industry (PCI) compliance while still allowing analysis of transactional data and payment trends, how should you design your architecture?
Correct Answer:
A. Implement a tokenization service and store only tokenized data.
Explanation:
To minimize the scope of PCI compliance while still allowing transactional analysis, the best practice is to implement tokenization.
Tokenization replaces sensitive credit card data (PANs) with non-sensitive tokens.
The actual credit card data is never stored in databases or logs, significantly reducing PCI DSS scope.
Transactional data and payment trends can be analyzed using tokenized values while maintaining security.
Google Cloud provides Cloud Data Loss Prevention (DLP) and third-party tokenization solutions that help achieve this.
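As a hedged illustration (one of several ways to tokenize), the Cloud DLP / Sensitive Data Protection content:deidentify API can replace detected card numbers with deterministic crypto tokens. The project ID, KMS key name, and wrapped key below are placeholders, and the exact request shape should be checked against the current DLP API reference:
# Replace CREDIT_CARD_NUMBER matches with a reversible surrogate token (CARD_TOKEN).
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://dlp.googleapis.com/v2/projects/PROJECT_ID/content:deidentify" \
  -d '{
    "item": { "value": "charge card 4111111111111111 for $42" },
    "inspectConfig": { "infoTypes": [ { "name": "CREDIT_CARD_NUMBER" } ] },
    "deidentifyConfig": {
      "infoTypeTransformations": {
        "transformations": [{
          "infoTypes": [ { "name": "CREDIT_CARD_NUMBER" } ],
          "primitiveTransformation": {
            "cryptoDeterministicConfig": {
              "cryptoKey": {
                "kmsWrapped": {
                  "wrappedKey": "BASE64_WRAPPED_KEY",
                  "cryptoKeyName": "projects/PROJECT_ID/locations/global/keyRings/pci/cryptoKeys/token-key"
                }
              },
              "surrogateInfoType": { "name": "CARD_TOKEN" }
            }
          }
        }]
      }
    }
  }'
Because only the surrogate tokens are persisted downstream, payment-trend analytics can run without raw PANs ever entering BigQuery or logs.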
✅ Supporting Documentation:
Google Cloud PCI Compliance Guide:
https://cloud.google.com/security/compliance/pci-dss
Tokenization and Encryption for PCI DSS Compliance:
https://cloud.google.com/security/articles/tokenization-and-encryption
Why Other Options Are Incorrect:
B. Use separate projects exclusively for processing credit card data. ❌
While separate projects improve isolation, they do not reduce PCI compliance scope.
If credit card data is stored or processed, all components must comply with PCI DSS, regardless of project separation.
Tokenization is a better approach because it eliminates the need to store credit card numbers at all.
✅ Reference:
https://cloud.google.com/security/compliance/pci-dss
C. Configure separate subnetworks to isolate components handling credit card data. ❌
Network segmentation helps limit PCI scope, but it does not eliminate the storage of sensitive data.
PCI DSS still applies to all in-scope network segments, requiring extensive auditing and security controls.
Tokenization is more effective in reducing compliance scope than just network isolation.
✅ Reference:
https://cloud.google.com/architecture/framework/security/design-secure-networks
D. Simplify the audit discovery process by labeling all virtual machines (VMs) that handle PCI data. ❌
Labeling VMs is helpful for tracking, but it does not minimize PCI scope.
If a VM processes actual credit card data, it remains in PCI scope regardless of labels.
Tokenization removes the need to store PCI-sensitive data, reducing compliance burden.
✅ Reference:
https://cloud.google.com/compute/docs/labeling-resources
E. Export logs to Google BigQuery and use ACLs and views to control data access for auditors. ❌
BigQuery ACLs and views help with access control, but they do not reduce PCI compliance scope.
If credit card numbers (PANs) are stored in logs or BigQuery, PCI DSS still applies.
Tokenization prevents sensitive data from being stored in logs or analytics tools.
✅ Reference:
https://cloud.google.com/bigquery/docs/reference/standard-sql/security
Conclusion:
The best approach is A. Implement a tokenization service and store only tokenized data, because:
PCI-sensitive data is never stored, minimizing compliance scope.
Transaction analysis remains possible using tokenized values.
Security and audit complexity are significantly reduced.
📌 Final Reference:
https://cloud.google.com/security/articles/tokenization-and-encryption
Question 4 Single Choice
You have an application deployed on Google Kubernetes Engine (GKE) using a Deployment named echo-deployment. This deployment is exposed through a Service named echo-service. You need to update the application while ensuring minimal downtime.
What should you do?
Correct Answer:
✅ A. Run kubectl set image deployment/echo-deployment <new-image> to update the deployment with the new container image.
Explanation:
To update the application with minimal downtime, you should use Kubernetes rolling updates, which gradually replace old pods with new ones while ensuring availability.
✅ Why Use kubectl set image?
Triggers a rolling update, ensuring minimal downtime.
Pods are updated gradually, avoiding service interruptions.
If an issue occurs, Kubernetes allows rollback to the previous version.
No need to delete and recreate resources, which could cause unnecessary downtime.
Command to Perform Rolling Update:
kubectl set image deployment/echo-deployment echo-container=<new-image>
echo-deployment → the name of the Kubernetes Deployment.
echo-container=<new-image> → the container to update with the new image.
Kubernetes replaces pods gradually, ensuring seamless traffic flow.
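A short sketch of managing the rollout after the image update; the deployment name follows the question, and the commands are standard kubectl rollout subcommands:
# Watch the rolling update until all replicas run the new image.
kubectl rollout status deployment/echo-deployment
# Inspect revision history, and roll back if the new version misbehaves.
kubectl rollout history deployment/echo-deployment
kubectl rollout undo deployment/echo-deployment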
✅ Supporting Documentation:
Kubernetes Rolling Updates:
https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/
kubectl set image Reference:
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#set-image
Why Other Options Are Incorrect?
B. Use the rolling update functionality of the Instance Group that supports the Kubernetes cluster. ❌
Incorrect because Kubernetes manages rolling updates at the Deployment level, not at the Instance Group level.
Instance Groups handle VM lifecycle management, not pod-level deployments.
✅ Reference:
https://cloud.google.com/kubernetes-engine/docs/concepts/managed-instance-groups
C. Update the Deployment YAML file with the new container image, then run kubectl delete deployment/echo-deployment followed by kubectl create -f <yaml-file>. ❌
Deleting the Deployment removes all running pods, causing downtime.
Best practice is to update the deployment using kubectl apply or kubectl set image, which triggers a rolling update.
✅ Reference:
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment
D. Update the Service YAML file with the new container image, then run kubectl delete service/echo-service followed by kubectl create -f <yaml-file>. ❌
Services do not manage container images—they manage network access to Pods.
Updating a Service does not update the Deployment or Pods.
✅ Reference:
https://kubernetes.io/docs/concepts/services-networking/service/
Conclusion:
The best approach is A. Run kubectl set image deployment/echo-deployment <new-image> to update the deployment with the new container image, because:
✔ It ensures a rolling update with minimal downtime.
✔ Kubernetes gradually replaces old pods with new ones.
✔ Allows easy rollback if needed.
📌 Final Reference:
https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/
Question 5 Single Choice
Your company is using BigQuery as its enterprise data warehouse, with data distributed across multiple Google Cloud projects. All BigQuery queries must be billed to a single project, and no query costs should be incurred on the projects containing the data. Users should be able to query the datasets but not modify them.
How should you configure users' access roles?
Correct Answer:
✅ C. Add all users to a group, then grant the group BigQuery Job User on the billing project and BigQuery Data Viewer on the projects containing the data.
Explanation:
The requirement is that:
All BigQuery queries must be billed to a single project (the billing project).
No query costs should be incurred on the projects containing the data.
Users must be able to query datasets but not modify them.
Role Breakdown for Proper Access Control:
✅ 1. BigQuery Job User (roles/bigquery.jobUser) on the billing project:
Required for running queries on behalf of the billing project.
Ensures that all query costs are billed to this project.
Users must have this role on the billing project to execute queries.
✅ 2. BigQuery Data Viewer (roles/bigquery.dataViewer) on the projects containing the data:
Allows users to read/query datasets in these projects.
Prevents users from modifying or deleting data.
Ensures that users can access the data without incurring billing in those projects.
Implementation Steps:
Create a Google Cloud IAM group (e.g., bigquery-users-group).
Assign the group the following roles:
roles/bigquery.jobUser on the billing project.
roles/bigquery.dataViewer on the projects containing the data.
Add users to the group.
Users now run queries using the billing project but access data from other projects.
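A hedged gcloud/bq sketch of the two grants and a cross-project query; the group address, project IDs, and table name are placeholders:
# Query/job permission on the billing project only.
gcloud projects add-iam-policy-binding billing-proj \
    --member="group:bigquery-users@example.com" \
    --role="roles/bigquery.jobUser"
# Read-only data access on each project that holds the datasets.
gcloud projects add-iam-policy-binding data-proj \
    --member="group:bigquery-users@example.com" \
    --role="roles/bigquery.dataViewer"
# A group member runs the job in the billing project while reading data-proj's tables.
bq query --project_id=billing-proj --use_legacy_sql=false \
    'SELECT COUNT(*) FROM `data-proj.sales.transactions`'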
✅ Supporting Documentation:
BigQuery Job User Role (roles/bigquery.jobUser):
https://cloud.google.com/bigquery/docs/access-control
BigQuery Data Viewer Role (roles/bigquery.dataViewer):
https://cloud.google.com/bigquery/docs/reference/roles
Why Other Options Are Incorrect?
A. Add all users to a group, then grant the group BigQuery User on the billing project and BigQuery Data Viewer on the projects containing the data. ❌
BigQuery User (roles/bigquery.user) allows users to create datasets and tables in the billing project, which is not required.
Users only need BigQuery Job User to run queries using the billing project.
✅ Reference:
https://cloud.google.com/bigquery/docs/reference/roles
B. Add all users to a group, then grant the group BigQuery Data Viewer on the billing project and BigQuery User on the projects containing the data. ❌
BigQuery Data Viewer on the billing project does not allow users to run queries—it only allows viewing existing datasets.
BigQuery User on the data projects allows users to create datasets/tables, which is unnecessary.
✅ Reference:
https://cloud.google.com/bigquery/docs/reference/roles
D. Add all users to a group, then grant the group BigQuery Data Viewer on the billing project and BigQuery Job User on the projects containing the data. ❌
Incorrect because BigQuery Job User must be granted on the billing project, not on the data projects.
Users would not be able to run queries using the billing project, leading to errors.
✅ Reference:
https://cloud.google.com/bigquery/docs/reference/roles
Conclusion:
The best approach is C. Add all users to a group, then grant the group BigQuery Job User on the billing project and BigQuery Data Viewer on the projects containing the data, because:
✔ Ensures all queries are billed to the correct project.
✔ Allows users to read/query data but not modify it.
✔ Follows Google Cloud's recommended best practices for cross-project BigQuery access.
📌 Final Reference:
https://cloud.google.com/bigquery/docs/access-control
Question 6 Single Choice
You are building a continuous deployment pipeline for a project stored in a Git repository. You need to ensure that code changes are verified before being deployed to production.
What should you do?
Correct Answer:
✅ D. Use Jenkins to monitor repository tags, deploy staging tags to a staging environment for testing, and after successful testing, tag the repository for production deployment.
Explanation:
To ensure that code changes are verified before being deployed to production, a controlled deployment pipeline with a staging environment is required.
✅ Why This Works:
Jenkins automates the build and deployment process, reducing human error.
Using repository tags for staged deployments ensures only tested and verified code is deployed to production.
Deploying to a staging environment first allows for testing before production rollout.
Production deployments are triggered only when a repository is tagged for release, ensuring consistency.
Implementation Steps:
Monitor Git repository for new tags using Jenkins.
Deploy staging tags to a staging environment for integration and functional testing.
Run automated tests and manual verification in staging.
If staging tests pass, tag the repository with a production tag.
Jenkins deploys production tags to production.
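A minimal sketch of the tag flow from the developer side, using hypothetical tag names and assuming a Jenkins job already configured to trigger on staging-* and prod-* tags:
# Tag a commit for staging; Jenkins deploys it to the staging environment.
git tag -a staging-1.4.0 -m "candidate for staging verification"
git push origin staging-1.4.0
# After staging tests pass, tag the same commit for production.
git tag -a prod-1.4.0 -m "verified in staging"
git push origin prod-1.4.0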
✅ Supporting Documentation:
Jenkins Git Integration for Tag-Based Deployments:
https://www.jenkins.io/doc/book/pipeline/scm
CI/CD Best Practices for Jenkins:
https://cloud.google.com/architecture/continuous-deployment-with-jenkins
Why Other Options Are Incorrect?
A. Use Spinnaker to deploy builds to production using the red/black deployment strategy, allowing easy rollbacks if needed. ❌
Red/black (blue/green) deployment helps with rollback but does not verify code before deployment.
The requirement is to verify before production, not just to handle rollbacks.
✅ Reference:
https://spinnaker.io/docs/concepts/deployment-strategies
B. Use Spinnaker to deploy builds directly to production and run tests on the production deployment. ❌
Testing should be done in a staging environment first, not in production.
Running tests directly in production risks deploying unverified code to users.
✅ Reference:
https://spinnaker.io/docs/setup/ci/jenkins
C. Use Jenkins to build both the staging branches and the master branch, then deploy changes to 10% of users before performing a full rollout. ❌
Deploying changes to 10% of users first (canary deployment) is useful, but does not ensure that code is properly verified before any production deployment.
The question requires full verification before deployment, which staging and tagging provide.
✅ Reference:
https://cloud.google.com/architecture/canary-deployments-on-gke
Conclusion:
The best approach is D. Use Jenkins to monitor repository tags, deploy staging tags to a staging environment for testing, and after successful testing, tag the repository for production deployment, because:
It ensures code is tested before production deployment.
It provides a structured, controlled release process.
It prevents unverified changes from reaching production.
📌 Final Reference:
https://cloud.google.com/architecture/continuous-deployment-with-jenkins
Question 7 Single Choice
During a high-traffic period, one of your relational databases crashes, but the replica is never promoted to master. You want to prevent this issue from happening again in the future. What should you do?
Correct Answer:
D. Implement routinely scheduled failovers for your databases.
Explanation:
To ensure high availability and automatic failover for relational databases, routinely scheduled failover testing is essential. This ensures that replica promotion works as expected during high-traffic periods or failures.
✅ Why This Works:
Regular failover tests validate automatic promotion of replicas, ensuring the process works during actual failures.
Helps identify misconfigurations in failover mechanisms, such as missing IAM permissions or misconfigured replication settings.
Prevents downtime during real failures by ensuring read replicas can be promoted to primary.
Google Cloud SQL, Cloud Spanner, and other managed relational databases support automated failover testing.
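If the databases run on Cloud SQL, a scheduled drill can simply invoke a failover (for HA instances) or rehearse replica promotion; the instance names below are placeholders:
# Fail an HA instance over to its standby and confirm the application reconnects.
gcloud sql instances failover prod-db-primary
# For a read-replica topology, rehearse promotion (one-way: run this against a test replica).
gcloud sql instances promote-replica test-db-replica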
✅ Supporting Documentation:
High Availability for Cloud SQL:
https://cloud.google.com/sql/docs/mysql/high-availability
Testing Cloud SQL Failover:
https://cloud.google.com/sql/docs/mysql/failover
Why Other Options Are Incorrect:
A. Switch to a different database. ❌
Changing the database is not necessary—Google Cloud SQL, Cloud Spanner, and self-managed databases already support failover.
The issue is likely a misconfiguration or untested failover process, not the database itself.
Fixing failover procedures is a better solution than migrating databases.
✅ Reference:
https://cloud.google.com/sql/docs/mysql/replication
B. Use larger instance sizes for the database. ❌
Scaling the database may improve performance but does not guarantee failover works correctly.
Failover is unrelated to instance size—a misconfigured failover will still fail, even with larger instances.
High availability and replication settings are more critical than instance size.
✅ Reference:
https://cloud.google.com/sql/docs/mysql/instance-settings
C. Increase the frequency of database snapshots. ❌
Snapshots help with backups but do not prevent failover issues.
Snapshots do not provide real-time failover—they are used for recovery, which can lead to downtime during restoration.
Failover should be handled via automated replica promotion, not snapshots.
✅ Reference:
https://cloud.google.com/sql/docs/mysql/backup-recovery/backups
Conclusion:
The best solution is D. Implement routinely scheduled failovers, as it:
Ensures automatic failover works as expected.
Prevents downtime during real failures.
Identifies and resolves failover misconfigurations in advance.
📌 Final Reference:
https://cloud.google.com/sql/docs/mysql/failover
Question 8 Single Choice
You have written a Python script to connect to Google BigQuery from a Google Compute Engine virtual machine. However, the script is displaying errors indicating that it cannot connect to BigQuery. What should you do to resolve this issue?
Correct Answer:
C. Create a new service account with BigQuery access and execute the script using that account.
Explanation:
If your Python script cannot connect to BigQuery from a Google Compute Engine (GCE) virtual machine, the most common reason is insufficient IAM permissions or missing authentication credentials.
✅ Creating and using a service account with BigQuery access ensures that the script has the necessary authentication and authorization to interact with BigQuery.
Steps to Fix the Issue:
Create a Service Account in Google Cloud IAM:
Assign the BigQuery User or BigQuery Data Viewer/Editor role, depending on required permissions.
Generate a JSON Key File for the service account and securely store it on the GCE instance.
Set the Authentication Environment Variable before running the script:
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account-key.json"
Ensure the Python script uses the service account credentials via google.auth:
from google.cloud import bigquery
client = bigquery.Client()
If the GOOGLE_APPLICATION_CREDENTIALS variable is set correctly, BigQuery will authenticate using the service account.
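A hedged gcloud sketch of the steps above; the project ID, service-account name, and key path are placeholders:
# Create the service account and grant it a BigQuery role.
gcloud iam service-accounts create bq-script-sa \
    --project=PROJECT_ID --display-name="BigQuery script"
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:bq-script-sa@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/bigquery.user"
# Create a JSON key, point the environment variable at it, and rerun the script.
gcloud iam service-accounts keys create /path/to/service-account-key.json \
    --iam-account="bq-script-sa@PROJECT_ID.iam.gserviceaccount.com"
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account-key.json"
python3 your_script.py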
✅ Supporting Documentation:
Authenticating to BigQuery:
https://cloud.google.com/bigquery/docs/authentication/service-account-file
Service Account IAM Roles:
https://cloud.google.com/iam/docs/understanding-roles
Why Other Options Are Incorrect:
A. Install the latest BigQuery API client library for Python. ❌
Installing the google-cloud-bigquery library is necessary but not sufficient to fix authentication issues.
The error suggests a connection issue, which is usually related to authentication, not missing libraries.
✅ Reference:
https://cloud.google.com/python/docs/reference/bigquery/latest
B. Run the script on a new virtual machine with the BigQuery access scope enabled. ❌
While enabling BigQuery API access scopes on a VM can help, the default Compute Engine service account may still lack sufficient IAM permissions.
The best practice is to use a dedicated service account with explicit BigQuery permissions rather than relying on access scopes alone.
✅ Reference:
https://cloud.google.com/compute/docs/access/service-accounts
D. Install the BigQuery (bq) component for gcloud using the command gcloud components install bq. ❌
The bq command-line tool is useful for manually running queries, but it does not affect a Python script's ability to authenticate and connect to BigQuery.
The Python script requires authentication using the Google Cloud SDK or a service account rather than just the bq CLI tool.
✅ Reference:
https://cloud.google.com/sdk/gcloud/reference/components/install
Conclusion:
The best way to resolve the authentication issue in a Python script connecting to BigQuery from a GCE instance is:
Create a new service account with BigQuery access.
Generate a JSON key file for authentication.
Set the GOOGLE_APPLICATION_CREDENTIALS environment variable.
📌 Final Reference:
https://cloud.google.com/bigquery/docs/authentication/service-account-file
Question 9 Multiple Choice
Your company wants to explore cloud adoption with minimal risk. As a first step, they plan to archive approximately 100 TB of log data in the cloud for testing analytics capabilities while also retaining the data as a long-term disaster recovery backup.
Which two actions should you take? (Choose two.)
Correct Answers:
✅ A. Load the logs into Google BigQuery.
✅ E. Upload the log files into Google Cloud Storage.
Explanation:
Your company wants to archive 100 TB of log data in Google Cloud for:
Testing analytics capabilities.
Long-term disaster recovery backup.
The best approach is to use Google Cloud Storage (for cost-effective archiving) and Google BigQuery (for analytics).
Why These Two Options Are Correct?
1️⃣ E. Upload the log files into Google Cloud Storage. ✅
Cloud Storage is the best choice for long-term log storage and disaster recovery.
Low-cost storage options:
Coldline or Archive storage is optimized for infrequent access but retains data securely.
Durable, highly available, and scalable storage solution for 100 TB of logs.
✅ Supporting Documentation:
Cloud Storage Classes:
https://cloud.google.com/storage/docs/storage-classes
2️⃣ A. Load the logs into Google BigQuery. ✅
BigQuery is an ideal tool for analytics on large-scale log data.
Supports real-time and batch queries without requiring pre-indexing.
Cost-efficient for on-demand queries while keeping raw logs in Cloud Storage.
✅ Supporting Documentation:
BigQuery for Log Analytics:
https://cloud.google.com/bigquery/docs/reference/auditlogs
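A hedged sketch of both halves, with placeholder bucket, dataset, and file names; moving a full 100 TB would normally go through Storage Transfer Service or a Transfer Appliance rather than a single copy command:
# Archive-class bucket for the long-term DR copy.
gcloud storage buckets create gs://example-log-archive \
    --location=US --default-storage-class=ARCHIVE
gcloud storage cp /var/logs/*.json.gz gs://example-log-archive/logs/
# Load the archived logs into BigQuery for analytics.
bq mk --dataset example-project:log_analytics
bq load --source_format=NEWLINE_DELIMITED_JSON --autodetect \
    example-project:log_analytics.app_logs \
    "gs://example-log-archive/logs/*.json.gz"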
Why Other Options Are Incorrect?
B. Load the logs into Google Cloud SQL. ❌
Not suitable for large-scale log storage (Cloud SQL per-instance storage tops out in the tens of terabytes, well below 100 TB).
Relational databases are inefficient for log analytics compared to BigQuery.
✅ Reference:
https://cloud.google.com/sql/docs/introduction
C. Import the logs into Google Stackdriver. ❌
Stackdriver (Cloud Logging) is for monitoring, not long-term archiving.
Expensive for storing 100 TB of logs (designed for real-time log management, not historical analytics).
Best for operational monitoring, not disaster recovery.
✅ Reference:
https://cloud.google.com/logging/docs/export
D. Insert the logs into Google Cloud Bigtable. ❌
Bigtable is optimized for time-series and real-time operational data, not long-term log storage.
More complex to query compared to BigQuery.
Does not provide built-in analytics for large-scale log analysis.
✅ Reference:
https://cloud.google.com/bigtable/docs/overview
Conclusion:
To archive and analyze 100 TB of logs, the best approach is:
✔ Store logs in Google Cloud Storage for low-cost, durable backup.
✔ Load logs into Google BigQuery for powerful, scalable analytics.
📌 Final References:
Cloud Storage Classes
BigQuery for Log Analytics
Question 10 Single Choice
You have built an application using Cloud ML Engine that can recognize famous paintings from uploaded images. You need to test the application and allow specific users to upload images for the next 24 hours. Some users do not have a Google Account.
How should you enable users to upload images?
Correct Answer:
✅ B. Have users upload images to Cloud Storage using a signed URL that expires after 24 hours.
Explanation:
Since some users do not have a Google Account, you need a temporary, secure, and anonymous method for them to upload images. The best approach is to use Cloud Storage signed URLs, which allow users to upload files without requiring authentication while enforcing time-limited access.
Why Signed URLs?
Allows users without a Google Account to upload files.
Supports expiration time (e.g., 24 hours) for controlled access.
Enforces security via cryptographic signing (V4 signatures generated with a service account key or an HMAC key).
Users do not need IAM roles or Cloud Identity accounts.
Implementation Steps:
Generate a signed URL for each user using gsutil or gcloud, for example:
gcloud storage sign-url gs://your-bucket-name --duration=24h
Share the signed URL with users.
Users upload images via the signed URL using curl or a simple web form (a fuller sketch follows below).
After 24 hours, the URL expires, preventing further uploads.
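A hedged sketch of generating an upload (PUT) signed URL and using it without any Google Account; the bucket, object, and key-file names are placeholders, and signing assumes a service-account private key with write access to the bucket:
# Generate a 24-hour signed URL that permits uploading one object.
gcloud storage sign-url gs://painting-uploads/user123.jpg \
    --private-key-file=sa-key.json \
    --http-verb=PUT \
    --duration=24h
# The user uploads with the returned URL (no authentication required).
curl -X PUT --upload-file painting.jpg "SIGNED_URL_FROM_ABOVE"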
✅ Supporting Documentation:
Using Signed URLs in Cloud Storage:
https://cloud.google.com/storage/docs/access-control/signed-urls
Why Other Options Are Incorrect?
A. Have users upload images to Cloud Storage and protect the bucket with a password that expires after 24 hours. ❌
Cloud Storage does not support password protection—access is managed via IAM roles or signed URLs.
Requires users to have a Google Account, which some do not.
✅ Reference:
https://cloud.google.com/storage/docs/access-control
C. Create an App Engine web application for users to upload images and configure App Engine to disable the application after 24 hours. Authenticate users via Cloud Identity. ❌
Cloud Identity requires users to have a Google Account, which some do not.
Disabling an App Engine app does not automatically revoke access to uploaded files.
✅ Reference:
https://cloud.google.com/identity/docs/access-control
D. Create an App Engine web application where users can upload images for the next 24 hours. Authenticate users via Cloud Identity. ❌
Requires users to have a Google Account, which some do not.
More complex to implement than a signed URL solution.
✅ Reference:
https://cloud.google.com/identity/docs
Conclusion:
The best approach is B. Have users upload images to Cloud Storage using a signed URL that expires after 24 hours, because:
✔ Allows access for users without a Google Account.
✔ Ensures secure, time-limited file uploads.
✔ Simpler and more efficient than setting up an App Engine application.
📌 Final Reference:
https://cloud.google.com/storage/docs/access-control/signed-urls



