

Professional Cloud Security Engineer - Google Cloud Certified Exam Questions
Question 11 Single Choice
Your team employs a service account to authenticate data transfers from a specific Compute Engine virtual machine instance to a designated Cloud Storage bucket. An engineer mistakenly deletes the service account, resulting in a disruption of application functionality. You aim to restore the application promptly without compromising security. What should you do?
Explanation

Click "Show Answer" to see the explanation here
Choose II – use gcloud iam service-accounts undelete to restore the deleted service account.
A service account that was removed fewer than 30 days ago can be brought back intact with the undelete command.
When it is undeleted, it keeps its original numeric ID, so all previously granted IAM role bindings and any keys that still exist begin working again immediately, letting the VM resume authenticated transfers without widening access (Google Cloud documentation).
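For reference, a minimal sketch of the recovery steps, assuming a hypothetical project ID and numeric service-account ID (the real ID appears in the Admin Activity audit log entry for the deletion):
# Find the deleted account's unique numeric ID in the audit logs (hypothetical project)
gcloud logging read 'protoPayload.methodName="google.iam.admin.v1.DeleteServiceAccount"' --project=my-project --limit=5
# Restore the account by its numeric ID (hypothetical value); existing bindings and surviving keys work again
gcloud iam service-accounts undelete 123456789012345678901 --project=my-project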
Why the other suggestions are not acceptable
I. Temporarily disable authentication on the Cloud Storage bucket
Making the bucket publicly accessible (or otherwise bypassing authentication) fixes the outage but exposes data. Google’s Storage best-practice guidance warns against enabling public readability or writability except when you explicitly want public data; once data is exposed you cannot “take it back” (Google Cloud documentation).
III. Create a new service account with the same name
Even if the display name is identical, the new account has a different immutable numeric ID and is treated as a separate identity, so it does not inherit any of the roles that were granted to the original account. You would still have to re-grant roles and roll out new credentials (Google Cloud documentation).
IV. Update permissions of another existing service account and hand out those credentials
Re-using a different account just to get the app running breaks the principle of least privilege and complicates auditing. Google’s service-account best-practice guide advises against granting or sharing more-privileged service accounts when it isn’t strictly required (Google Cloud documentation).
Question 12 Single Choice
You are troubleshooting access denied errors between Compute Engine instances connected to a Shared VPC and BigQuery datasets. The datasets reside in a project protected by a VPC Service Controls perimeter. What should you do?
Explanation

Click "Show Answer" to see the explanation here
Correct option
II. Incorporate the service project that hosts the Compute Engine instances into the same VPC Service Controls perimeter as the BigQuery-dataset project.
VPC Service Controls evaluates every request at the project that owns the calling resource. Because the VM instances belong to the service project, the request to BigQuery originates from a project that is outside the perimeter, so the call is denied.
Adding that service project to the perimeter (or to a new perimeter that is then bridged) makes the request come from an in-perimeter project, allowing access while the perimeter still protects the data.
Official guidance: “Add the project that is making the request to the service perimeter, or create a perimeter bridge that includes it.”
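As an illustration only, adding the service project to an existing perimeter can be done through Access Context Manager; the perimeter name, access policy ID, and project number below are hypothetical:
# Add the service project (by project number) to the existing perimeter
gcloud access-context-manager perimeters update my-perimeter \
  --policy=1234567890 \
  --add-resources=projects/987654321098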
Why the other suggestions are not sufficient
I. Add only the host project that owns the Shared VPC network –
VPC Service Controls enforcement is based on the calling project, not the network-host project. Leaving the service project outside the perimeter still causes BigQuery calls to be blocked.
III. Create a “perimeter between” the host and service projects –
A perimeter is a set of projects protecting Google Cloud services; you don’t create a perimeter “between” two projects. This option doesn’t map to how VPC Service Controls works.
IV. Configure a perimeter bridge between the service project and the dataset’s perimeter –
A bridge works only when both projects are already members of (different) perimeters. Because the service project is outside any perimeter, you would first have to place it in a perimeter—effectively the same action as option II, but with extra complexity and no additional benefit here.
Adding the service project to the existing perimeter is the simplest, recommended fix to restore access while maintaining the intended data-exfiltration protections.
Question 13 Single Choice
You need to connect your organization's on-premises network with an existing Google Cloud environment that includes one Shared VPC with two subnets named Production and Non-Production. You are required to:
Use a private transport link.
Configure access to Google Cloud APIs through private API endpoints originating from on-premises environments.
Ensure that Google Cloud APIs are only consumed via VPC Service Controls.
What should you do?
Explanation

Click "Show Answer" to see the explanation here
Correct choice
IV. Implement a Dedicated Interconnect link and point on-prem DNS to restricted.googleapis.com
Private transport link: Dedicated Interconnect is a physical, private circuit between your data-center router and Google’s network; no traffic crosses the public Internet (Google Cloud documentation).
Private Google API access from on-prem: Private Google Access for on-premises hosts lets you route on-prem traffic for Google APIs through the Interconnect by mapping the special VIP range (199.36.153.4/30) to the DNS name restricted.googleapis.com (Google Cloud documentation).
VPC Service Controls compliance: Using restricted.googleapis.com ensures that only the Google Cloud services protected by VPC Service Controls are reachable, blocking calls to any non-compliant APIs and preventing data-exfiltration paths (Google Cloud documentation).
This single design satisfies every stated requirement: private connectivity, private API endpoints, and enforcement by VPC Service Controls.
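On-prem resolvers would carry the equivalent records themselves; as a hedged sketch, the same DNS mapping expressed with Cloud DNS (zone, network, and record names are hypothetical) looks roughly like this:
# Private zone overriding googleapis.com resolution for VPC-attached workloads (hypothetical names)
gcloud dns managed-zones create restricted-apis --dns-name="googleapis.com." \
  --visibility=private --networks=shared-vpc --description="Route Google APIs to the restricted VIP"
# A records for the restricted VIP range 199.36.153.4/30
gcloud dns record-sets create restricted.googleapis.com. --zone=restricted-apis \
  --type=A --ttl=300 --rrdatas=199.36.153.4,199.36.153.5,199.36.153.6,199.36.153.7
# CNAME every other googleapis.com name onto the restricted endpoint
gcloud dns record-sets create "*.googleapis.com." --zone=restricted-apis \
  --type=CNAME --ttl=300 --rrdatas=restricted.googleapis.com.
On-premises queries for these names must be answered with the same records (or forwarded to Cloud DNS) so that API traffic reaches the restricted VIP over the Interconnect rather than the Internet.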
Why the other options don’t meet all requirements
I. Cloud VPN + restricted.googleapis.com
Cloud VPN tunnels ride over the public Internet; they are encrypted but not private transport links, so they violate the first requirement (Google Cloud documentation).
II. Partner Interconnect + private.googleapis.com
Although Partner Interconnect provides private transport, the private.googleapis.com VIP does not restrict access to only VPC Service Controls-compliant services, leaving an exfiltration gap.
III. Direct Peering + private access for the subnets
Direct Peering connects to Google’s edge network outside Google Cloud and is not the recommended method for accessing Google Cloud resources; it also cannot enforce VPC Service Controls without the restricted VIP, so it fails both the guidance and the security goals (Google Cloud documentation).
Only Dedicated Interconnect with the restricted.googleapis.com VIP cleanly delivers a private, compliant path for on-prem workloads to call Google Cloud APIs while remaining inside the VPC Service Controls perimeter.
Question 14 Single Choice
As a security engineer at a finance company, your organization intends to store data on Google Cloud. However, your leadership team is apprehensive about the security of their highly sensitive data, particularly regarding internal Google employees' access to your company's data on Google Cloud. What solution should you propose?
Explanation

Click "Show Answer" to see the explanation here
Correct solution – IV. Activate Access Transparency logs together with Access Approval
Access Approval lets you approve or deny every request by Google personnel before they can reach your content.
Access Transparency then records a tamper-evident log of any Google-staff access that you approve (or that is performed for legally required reasons), giving you a permanent audit trail.
Google calls this combination its “strongest customer-visible control” for limiting and monitoring privileged access by Google employees (Google Cloud documentation).
Enabling both features directly addresses leadership’s concern: Google staff can’t look at, or act on, the company’s sensitive data unless you explicitly approve, and every approved action is logged for compliance review.
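As a rough sketch of how enrollment might look from the CLI (project ID and email are hypothetical, and the exact flag spelling can vary by gcloud version, so check the current access-approval reference):
# Enroll the project in Access Approval for all supported services (hypothetical values)
gcloud access-approval settings update --project=my-project \
  --enrolled_services=all --notification_emails="security-team@example.com"
# Access Transparency itself is typically enabled separately at the organization level in the console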
Why the other options are not sufficient
I. Customer-managed encryption keys (CMEK)
CMEK puts you in charge of the keys that encrypt data at rest, but Google Cloud services still operate the key-usage pipeline. A valid internal support request could use the key and decrypt data without your prior knowledge; CMEK alone doesn’t provide real-time approval or visibility into personnel access (Google Cloud documentation).
II. IAM policies
Cloud IAM controls what your identities can do. It does not govern or surface the privileged, break-glass access paths used by Google support engineers, so it can’t satisfy the requirement to see or block those accesses (Google Cloud documentation).
III. Admin Activity logs
Cloud Audit Logs capture API activity made within your project by your principals or by automated Google systems, but they don’t show the fine-grained actions of Google employees working under elevated support workflows. Only Access Transparency logs reveal that information (Google Cloud documentation).
Therefore, enabling Access Transparency plus Access Approval is the only Google-supported method that both prevents unauthorized Google-internal access and documents any approved access—meeting the security expectations for highly sensitive financial data.
Question 15 Single Choice
To meet compliance reporting requirements, the internal audit department requests a semiannual list of virtual machines (VMs) with available critical operating system (OS) security updates that have not been installed. What action should you take to fulfill this request efficiently?
Explanation

Click "Show Answer" to see the explanation here
Correct choice: IV – “Ensure deployment of the OS Config agent on all VMs and extract the patch-status dashboard every six months.”
Google Cloud’s VM Manager Patch management (part of the OS Config service) keeps an up-to-date compliance database for every Windows and Linux VM.
Once the OS Config agent is enabled, you can open Compute Engine → VM Manager → Patch and export or script the Patch compliance report, which flags each VM that still has critical security updates available. This gives auditors the exact semiannual list they asked for without having to scan or parse logs yourself (Google Cloud documentation).
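A minimal sketch of turning this on fleet-wide and spot-checking which VMs report data, assuming hypothetical project and zone names:
# Enable the OS Config agent for all VMs in the project via project metadata (hypothetical project)
gcloud compute project-info add-metadata --project=my-project --metadata=enable-osconfig=TRUE
# The OS Config API must also be enabled
gcloud services enable osconfig.googleapis.com --project=my-project
# Spot-check which instances in a zone are reporting OS inventory data (hypothetical zone)
gcloud compute instances os-inventory list-instances --zone=us-central1-a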
Why the other actions don’t meet the requirement
I. Run a Security Command Center (SCC) scan every six months
SCC’s vulnerability findings can surface OS issues, but they rely on VM Manager underneath and require the Premium tier.
SCC produces findings, not an easy-to-read per-VM “missing patches” list, and is overkill if you only need patch compliance (Google Cloud documentation).
II. Use the gcloud CLI to pull OS versions
gcloud compute instances os-inventory list (or similar) tells you the installed OS version, not which critical patches are available but missing. You would have to maintain external CVE/patch catalogs and comparison logic, which is inefficient and error-prone.
III. Install the Cloud Logging agent and scrape update logs
Log scraping is brittle: log formats differ by OS, rotations purge history, and you still need logic to match “last update date” against vendor patch releases. VM Manager already does this analysis for you; duplicating it adds complexity without extra value.
Because option IV leverages the native Patch compliance dashboard—purpose-built for exactly this reporting need—it is the only efficient and fully supported solution.
Question 16 Single Choice
To maintain compliance with CIS Google Cloud Computing Foundations Benchmark v1.3.0 (CIS Google Cloud Foundation 1.3), your organization seeks to continuously evaluate its security posture. However, some controls are irrelevant and should be excluded from evaluation. How can you establish an automated system or process to ensure that only the relevant controls are assessed?
Explanation

Click "Show Answer" to see the explanation here
Correct option – II
Enabling Security Command Center (Premium) and creating a mute rule (also called a mute-config) is the native, automated way to omit findings that aren’t relevant to your CIS Google Cloud Foundation 1.3 scope:
Security Command Center already maps its Security Health Analytics detectors to every control in CIS v1.3, so it will keep evaluating all of the applicable controls by default (Google Cloud documentation).
A mute rule is a filter you define once (by detector name, finding category, resource type, label, and so on) that automatically hides current and future findings matching those conditions. Muted findings remain stored for audit purposes but are excluded from dashboards, metrics, and compliance counts, ensuring your posture score tracks only the controls that matter (Google Cloud documentation).
Because the rule runs continuously in SCC, no additional manual steps or scripts are needed; as new projects and resources appear, irrelevant controls stay silenced while all others are still enforced and monitored.
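A hedged example of such a rule from the CLI; the organization ID, rule name, and the detector category chosen as "out of scope" are all hypothetical:
# Create an organization-level mute rule for a finding category deemed irrelevant
gcloud scc muteconfigs create mute-irrelevant-cis-controls \
  --organization=123456789012 \
  --description="Exclude controls out of scope for our CIS 1.3 assessment" \
  --filter='category="FLOW_LOGS_DISABLED"'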
Why the other actions don’t meet the requirement
I. Tag findings and mute manually
SCC findings don’t support custom user-defined tags; muting must be done with rule filters. Manual, ad-hoc muting also doesn’t scale to continuous compliance.
III. Export to CSV and filter outside SCC
A one-time CSV export is static—you’d have to regenerate and re-filter every time the environment changes, which is not “continuous evaluation.”
IV. Rely on external auditors
An external report is point-in-time and manual; it cannot provide automated, ongoing assurance inside Google Cloud.
Only option II leverages the built-in, policy-driven mute-rule mechanism that continuously excludes non-applicable CIS controls while leaving all relevant controls under active, automated assessment.
Question 17 Single Choice
To safeguard against person-in-the-middle attacks, your company seeks to prevent unauthorized access to the Google Cloud environment via counterfeit login pages. Which security measure should be employed?
Explanation

Click "Show Answer" to see the explanation here
Correct Option:
I. Security key
Explanation of the Correct Choice
A hardware security key (for example, a FIDO2-compliant Titan Security Key) is explicitly designed to be phishing-resistant. When you register a security key with Google, it only responds to the exact web origin against which it was registered. Any counterfeit or look-alike page, even one that perfectly mimics Google’s login UI but is hosted on a different domain, cannot complete the cryptographic challenge-response. This ensures that an attacker cannot trick the key into signing you in from a phishing site or man-in-the-middle page (Google Cloud documentation).
Why the Other Options Are Not Suitable
II. Google prompt
The Google prompt (“Try signing in?” push notification) is a convenient 2SV method but is not bound to the web origin the way a hardware security key is. An attacker who controls a phishing site can relay or proxy your push approval to the real Google endpoint and thereby complete the login (Google Support documentation).
III. Text message or phone call code
SMS-based codes can be intercepted (via SIM-swap attacks, SS7 exploits, or malicious apps) and are vulnerable to phishing relay attacks. They do not cryptographically verify the site you’re signing into, so a fake login page can capture and relay the SMS code in real time (Google Support documentation).
IV. Google Authenticator application
Time-based one-time passwords (TOTP) generated by apps like Google Authenticator likewise lack binding to a specific web origin. A phishing site can prompt you for the TOTP code and relay it to Google’s legitimate servers, successfully completing authentication (Google Support documentation).
Question 18 Multiple Choice
Your company aims to transition most of its IT infrastructure to Google Cloud. They seek to integrate their existing on-premises Active Directory with Google Cloud for identity management. Which two steps should you take to achieve this integration and configure access management? (Choose two.)
Explanation

Click "Show Answer" to see the explanation here
Correct steps
III. Set up Google Cloud Directory Sync (GCDS) and connect it to Active Directory and Cloud Identity.
GCDS automatically provisions (and de-provisions) the users and groups that already exist in your on-premises AD into Cloud Identity / Google Workspace, giving Google Cloud the same identities you manage on-prem (Google Cloud documentation).
V. Create Google (IAM) access groups that mirror each AD group and grant IAM roles to those groups.
Google’s IAM best-practice guidance recommends granting roles to Google groups instead of individual users; group membership, kept in sync by GCDS, then drives access to Google Cloud resources (Google Cloud documentation).
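Once GCDS keeps a group such as gcp-network-admins@example.com (a hypothetical name) in sync from AD, the role-grant side of step V might look like this sketch:
# Grant a role to the synced Google group rather than to individual users (hypothetical project, group, role)
gcloud projects add-iam-policy-binding my-project \
  --member="group:gcp-network-admins@example.com" \
  --role="roles/compute.networkAdmin"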
Why the other options are not suitable
I. Identity Platform – Identity Platform is for customer-facing (B2C) identities, not for synchronising an enterprise’s workforce accounts with Google Cloud.
II. Cloud Identity SAML integration – SAML provides single-sign-on, but it does not provision users and groups; you would still need GCDS (or SCIM) for directory sync.
IV. Create IAM roles per AD group – Custom roles define sets of permissions; they do not map AD groups to Google Cloud. You still need Google groups (or another principal) to receive those roles, so this adds complexity without solving the identity-sync problem.
Using GCDS for directory synchronisation plus Google (IAM) groups for role assignments gives you a clean, automated bridge from Active Directory to Google Cloud with the least operational overhead and aligns with Google’s recommended enterprise architecture.
Question 19 Single Choice
At AegisSoft Ltd, an application hosted on a Compute Engine VM needs to read data from a Cloud Storage bucket. For security compliance, the IT team prohibits public access to storage buckets and follows the principle of least privilege.
Which solution best satisfies the security requirements?
Explanation

Click "Show Answer" to see the explanation here
✅ Correct Answer: C
Assign a service account with read-only access to the Cloud Storage bucket and allow the application to use credentials provided via instance metadata.
✅ Why Option C is Correct:
This option follows Google Cloud's best practices for secure access:
Assigning IAM roles to a Compute Engine VM's service account enforces least privilege.
Applications running on the VM can access Cloud Storage without embedding credentials, by retrieving Application Default Credentials (ADC) via instance metadata.
This is secure, avoids managing static credentials, and honors the organization's policy of prohibiting public access.
The metadata server automatically supplies the necessary access tokens when the service account has the correct IAM role.
🔒 Security Compliance:
This model keeps buckets private, uses IAM roles, and respects the principle of least privilege.
The application does not store any credentials explicitly, reducing the risk of leakage.
Official Documentation References:
“Applications running on the VM can use the Compute Engine service account and Application Default Credentials to authenticate API requests.”
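A minimal sketch under hypothetical names (project, bucket, service account, VM): grant the read-only role on the bucket and attach the account to the VM so Application Default Credentials are served from the metadata server:
# Dedicated least-privilege service account (hypothetical names throughout)
gcloud iam service-accounts create storage-reader --project=my-project
# Read-only access scoped to the single bucket, not the whole project
gcloud storage buckets add-iam-policy-binding gs://my-app-bucket \
  --member="serviceAccount:storage-reader@my-project.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"
# Attach the account at VM creation; the app then authenticates via ADC and the metadata server
gcloud compute instances create app-vm --zone=us-central1-a \
  --service-account=storage-reader@my-project.iam.gserviceaccount.com \
  --scopes=https://www.googleapis.com/auth/devstorage.read_only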
❌ Option A: Configure a Cloud Storage ACL to allow read-only access from the VM's external IP address
Incorrect for multiple reasons:
IP-based ACLs are insecure, brittle, and not scalable.
GCS does not natively support access control based solely on IP addresses.
Public access is prohibited by the organization’s security policy.
❌ Option B: Assign a service account with read-only permissions to the bucket and store its credentials inside the application configuration on the VM
This violates security best practices:
Storing service account keys on the VM increases the risk of key leakage.
Google recommends avoiding service account keys in favor of attached service accounts whose credentials are obtained from the metadata server.
Documentation Reference:
“Avoid using service account keys when possible.”
❌ Option D: Use Cloud KMS to encrypt the data in the bucket and configure the application to decrypt the content using the KMS key
While KMS can enhance data security, this option:
Does not grant access to the bucket—it only adds encryption control.
Doesn’t satisfy the core requirement: secure access with least privilege.
Also, KMS is not a substitute for authentication and authorization controls via IAM.
✅ Final Answer:
C. Assign a service account with read-only access to the Cloud Storage bucket and allow the application to use credentials provided via instance metadata.
This option is secure, compliant, and aligned with GCP best practices for Compute Engine access to Cloud Storage.
Reference:
https://cloud.google.com/docs/authentication/application-default-credentials#attached-sa
Question 20 Single Choice
You are exporting application logs to Cloud Storage. You encounter an error message stating that the log sinks don't support uniform bucket-level access policies. How should you resolve this error?
Explanation

Click "Show Answer" to see the explanation here
Correct option — I. Modify the access-control model for the bucket (switch it from Uniform bucket-level access to Fine-grained access).
Cloud Logging has to add an object-level ACL entry for the sink’s writer service account each time it writes a log file to Cloud Storage.
If the destination bucket was created with Uniform bucket-level access enabled, all ACLs are disabled, so Logging can’t grant itself the required permission and the sink creation or write fails with the error “log sinks don’t support uniform bucket-level access policies.”
The Logging troubleshooting guide states that, to fix this specific error, you must change the bucket’s access-control model (possible within 90 days of enabling uniform bucket-level access) or recreate the bucket with Fine-grained access (Google Cloud documentation).
The Uniform bucket-level access page confirms that enabling the feature “disables ACLs for all Cloud Storage resources in the bucket” (Google Cloud documentation), which is exactly what prevents the sink from adding its ACL.
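As a hedged sketch (hypothetical bucket name; the switch only works within the 90-day window noted above):
# Turn off uniform bucket-level access so the sink's writer identity can receive an object ACL
gcloud storage buckets update gs://my-log-export-bucket --no-uniform-bucket-level-access
# gsutil equivalent: gsutil uniformbucketlevelaccess set off gs://my-log-export-bucket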
Why the other actions don’t solve the problem
II. Adjust your sink with the appropriate bucket destination
The destination itself isn’t wrong; the bucket’s access-control mode is. Pointing the sink back at the same bucket, or at any other bucket that still has Uniform access enabled, re-creates the error. Nothing in the sink definition overrides the bucket’s ACL settings.
III. Grant roles/logging.logWriter to the bucket
roles/logging.logWriter is an IAM role for writing log entries into Cloud Logging, not for writing objects to Cloud Storage buckets. It doesn’t grant the Storage permissions the sink needs, nor does it re-enable ACLs, so the sink remains blocked (Google Cloud documentation).
IV. Grant roles/logging.bucketWriter to the bucket
roles/logging.bucketWriter lets a service account write to a Cloud Logging log bucket (an internal Logging resource). It has no effect on Cloud Storage buckets, and it likewise can’t bypass Uniform bucket-level access (Google Cloud documentation).
In short, only changing the bucket’s access-control model to Fine-grained (or recreating the bucket that way) removes the uniform-access restriction and lets Cloud Logging sink objects be written successfully—resolving the error without loosening any other permissions or roles.



