

Professional Cloud Security Engineer - Google Cloud Certified Exam Questions
Question 1 Single Choice
A customer’s company is divided into multiple business units, each with its own independent engineering team. Your team needs visibility into all projects created within the company and wants to categorize Google Cloud Platform (GCP) projects by business unit. Additionally, each business unit requires distinct IAM permissions.
What approach should you take to meet these requirements?
Explanation

✅ Correct Answer: A. Set up an organization node and create separate folders for each business unit.
✅ Explanation:
To meet the following requirements:
Visibility into all projects created within the company,
Categorization of projects by business unit, and
Separate IAM permissions per business unit,
The most scalable and compliant approach is to:
Set up a Google Cloud Organization Node and use folders to structure resources based on business units.
Key benefits:
Folders allow grouping projects logically by business unit.
IAM policies can be set at the folder level, so each business unit can have distinct access controls.
You get centralized visibility and control under the Organization Node.
Projects inherit policies from folders and the organization, enabling scalable governance.
Official Documentation:
Creating and Managing Folders
Best practices for enterprise organizations
“Use folders to group projects by business unit, department, or team, and apply IAM policies at the folder level for access control.”
❌ Why Other Options Are Incorrect:
B. Create independent projects for each business unit, using gmail.com accounts
This lacks centralized control and visibility.
Projects tied to gmail.com accounts are not part of a Cloud Organization, so IAM governance is limited.
Not suitable for enterprise or regulated environments.
C. Allocate GCP resources within a project and use labels to denote the owning business unit
Labels are useful for cost tracking and resource filtering, but they do not support IAM or hierarchical structuring.
You cannot enforce policies or access control using labels alone.
D. Assign GCP resources within a VPC for each business unit to ensure separate network access
VPC segmentation can help isolate network traffic, but it does not provide project-level visibility or IAM separation.
Network boundaries ≠ administrative boundaries.
Also, multiple business units in the same project still share IAM settings.
✅ Final Answer:
A. Set up an organization node and create separate folders for each business unit.
This is the recommended enterprise approach for managing GCP at scale, with clear visibility, IAM control, and project organization.
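As an illustration of this structure, here is a minimal sketch using the google-cloud-resource-manager Python client: it creates a per-business-unit folder under the organization and grants that unit’s admin group a role at the folder level. The organization ID, folder name, group address, and role are placeholders, not values taken from the question.

```python
# pip install google-cloud-resource-manager
# Minimal sketch (hypothetical IDs): one folder per business unit, with
# IAM granted at the folder level so each unit gets distinct permissions.
from google.cloud import resourcemanager_v3
from google.iam.v1 import iam_policy_pb2, policy_pb2

ORG = "organizations/123456789012"                 # placeholder organization ID
ADMIN_GROUP = "group:finance-admins@example.com"   # placeholder admin group

folders = resourcemanager_v3.FoldersClient()

# Folder creation is a long-running operation.
operation = folders.create_folder(
    request={"folder": {"parent": ORG, "display_name": "bu-finance"}}
)
folder = operation.result()

# Grant the business unit's admins a role on this folder only; projects
# created under the folder inherit the binding.
policy = folders.get_iam_policy(request={"resource": folder.name})
policy.bindings.append(
    policy_pb2.Binding(
        role="roles/resourcemanager.folderAdmin", members=[ADMIN_GROUP]
    )
)
folders.set_iam_policy(
    request=iam_policy_pb2.SetIamPolicyRequest(resource=folder.name, policy=policy)
)
```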
Question 2 Single Choice
You are part of your company's development team and have observed that your web application, hosted in staging on GKE, dynamically incorporates user data into web pages without validating the input first. This flaw could allow attackers to execute arbitrary commands and display malicious content in a user's browser in the production environment.
What should you do to prevent and resolve this vulnerability?
Explanation

Correct option
D. Employ Web Security Scanner in staging to simulate an XSS injection attack, and then adopt a templating system that supports contextual auto-escaping.
Web Security Scanner is designed to crawl your application in a test (staging) environment and automatically attempt common attacks such as reflected, stored, and DOM-based cross-site scripting (XSS). If the scanner finds an XSS issue, the official remediation guidance is to “escape all output and use a templating system that supports contextual auto-escaping.”
Running the scan before promotion to production lets you verify the fix and keep vulnerable code from ever reaching users, satisfying both prevent and resolve requirements.
Why the other options are not suitable:
A. Implement Cloud IAP based on IP address or device attributes
Cloud Identity-Aware Proxy restricts who can reach your app; it does not sanitize or validate user-supplied input, so any authenticated user could still exploit an XSS flaw.
B. Set up an HTTPS load balancer and use Cloud Armor in production
Cloud Armor’s WAF rules can help mitigate some XSS payloads at the edge, but Google positions Cloud Armor as a defence-in-depth layer, not a substitute for fixing vulnerable code. The underlying template/output problem would remain and could still be abused if the WAF rules are bypassed or disabled.
C. Use Web Security Scanner only to check for outdated libraries and replace them
Out-of-date libraries are a different class of finding; simply upgrading libraries won’t correct the core vulnerability (unsanitised dynamic HTML).
Only option D directly detects the XSS vulnerability in a safe environment and applies the Google-recommended remediation (contextual auto-escaping), eliminating the flaw instead of merely restricting access or attempting to block exploits at the perimeter.
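To make the remediation concrete, here is a minimal Python sketch of rendering user input through a templating engine with auto-escaping enabled instead of concatenating it into HTML. Jinja2 is used purely as an illustrative engine; fully contextual auto-escaping systems (for example Go’s html/template or Closure Templates) go further by escaping per output context (HTML, JavaScript, URL).

```python
# pip install jinja2
# Minimal sketch: never concatenate user data into HTML; render it through a
# template environment with auto-escaping turned on.
from jinja2 import Environment, select_autoescape

env = Environment(autoescape=select_autoescape(["html", "xml"]))
template = env.from_string("<p>Hello, {{ username }}!</p>")

# A malicious value is escaped instead of being executed in the browser.
payload = '<script>alert("xss")</script>'
print(template.render(username=payload))
# -> <p>Hello, &lt;script&gt;alert(&#34;xss&#34;)&lt;/script&gt;!</p>
```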
Question 3 Single Choice
You work for an organization in a regulated industry with stringent data protection requirements. The organization backs up their data in the cloud. To comply with data privacy regulations, this data can only be stored for a specific length of time and must be deleted after this period. You aim to automate compliance with this regulation while minimizing storage costs. What should you do?
Explanation

Correct option — IV. Store the data in a Cloud Storage bucket and configure Object Lifecycle Management (OLM).
Automated retention & deletion: OLM lets you add a Delete action that automatically removes objects after they reach a specified age or date, so the bucket enforces the exact retention window without manual intervention.
Cost optimisation: In the same lifecycle rule you can also add SetStorageClass actions to move objects through Nearline, Coldline or Archive classes before deletion, ensuring the lowest possible storage cost over the data’s lifetime while still satisfying compliance.
Fit for backup data: Cloud Storage is designed for durable, long-term backups; you pay only for the space used, and you avoid the fixed node or capacity charges of database-style services.
Why the other options are not suitable
I. Persistent Disk + manual delete – Persistent disks are meant for live VM workloads; they cost far more per GB than object storage and provide no built-in lifecycle automation. You would need external scheduling to delete the disk, increasing operational risk and expense.
II. Cloud Bigtable TTL – Bigtable’s garbage-collection policies can delete cells older than a TTL, but the service is optimised for low-latency reads and requires a minimum node footprint, making it costly for passive backups.
III. BigQuery table expiration – You can set a table or dataset expiration to auto-delete data, but BigQuery is priced for interactive analytics, not bulk backup storage, so cost per TB is considerably higher than Coldline or Archive object storage.
Only Cloud Storage with Object Lifecycle Management simultaneously automates time-based deletion and minimises storage spend, fully meeting the regulatory and financial objectives.
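For reference, a minimal sketch of such a lifecycle configuration with the google-cloud-storage Python client is shown below. The bucket name and day thresholds are placeholders; substitute the retention period mandated by the regulation.

```python
# pip install google-cloud-storage
# Minimal sketch (hypothetical bucket and thresholds): age the backups into
# cheaper storage classes, then delete them once the retention window ends.
from google.cloud import storage

RETENTION_DAYS = 400                      # placeholder for the mandated period

client = storage.Client()
bucket = client.get_bucket("example-backup-bucket")    # hypothetical bucket

# Transition to colder classes before deletion to minimise storage cost.
bucket.add_lifecycle_set_storage_class_rule("NEARLINE", age=30)
bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=90)
bucket.add_lifecycle_set_storage_class_rule("ARCHIVE", age=180)

# Delete objects automatically once they exceed the retention window.
bucket.add_lifecycle_delete_rule(age=RETENTION_DAYS)

bucket.patch()   # persist the lifecycle configuration on the bucket
```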
Question 4 Single Choice
At NetCore Labs, you are setting up a new Service Account that needs the ability to list all Compute Engine VM instances within a Google Cloud project. You also want to follow Google's best practices for access control.
What is the appropriate approach?
Explanation

✅ Correct Answer: B. Define a custom IAM role that includes the compute.instances.list permission and assign that role to the Service Account.
✅ Explanation:
At NetCore Labs, the goal is to:
Allow a Service Account to list all Compute Engine VM instances.
Follow Google's best practices for principle of least privilege—granting only the permissions necessary, and nothing more.
The most appropriate way to meet these requirements is to:
Create a custom IAM role that includes only the compute.instances.list permission and assign that role to the Service Account.
This ensures:
Minimal access: Only listing VMs, no ability to read, start, stop, or modify.
Auditability and control over what the service account can do.
Official Documentation:
Custom roles
compute.instances.list permission
❌ Why Other Options Are Incorrect:
A. Create an Instance Template and assign the Service Account Read Only access through Compute Engine access scopes
Access scopes are only applicable to VM instances, not to service accounts used outside of VMs.
Also, Google recommends relying on IAM roles rather than access scopes, which are a legacy access-control mechanism.
Instance Templates are not relevant for granting service account permissions at the project level.
C. Grant the Service Account the predefined Compute Viewer role and assign it to the instances
roles/compute.viewer includes more permissions than just listing instances, such as viewing disks, snapshots, etc. Assigning it to instances is also incorrect—you grant IAM roles at the project or resource level, not directly to VM instances.
This violates least privilege principles.
D. Assign the Project Viewer role to the Service Account and use it across all instances
roles/viewer is a broad role, granting read access to all resources in the project (not just Compute Engine). This is overly permissive, violating least privilege.
Not suitable for narrowly scoped access.
✅ Final Answer:
B. Define a custom IAM role that includes the compute.instances.list permission and assign that role to the Service Account.
This aligns with Google's IAM best practices by providing minimal, role-specific access.
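A minimal sketch of this setup with the Google API Python client is shown below: it creates a custom role containing only compute.instances.list and binds it to the service account at the project level. The project ID, role ID, and service-account name are hypothetical.

```python
# pip install google-api-python-client
# Minimal sketch (hypothetical project and service account): least-privilege
# custom role with a single permission, bound at the project level.
from googleapiclient import discovery

PROJECT_ID = "netcore-labs-prod"   # hypothetical project
SA_MEMBER = f"serviceAccount:vm-lister@{PROJECT_ID}.iam.gserviceaccount.com"

# 1. Create the custom role with only compute.instances.list.
iam = discovery.build("iam", "v1")
iam.projects().roles().create(
    parent=f"projects/{PROJECT_ID}",
    body={
        "roleId": "instanceLister",
        "role": {
            "title": "Instance Lister",
            "description": "Can list Compute Engine instances only",
            "includedPermissions": ["compute.instances.list"],
            "stage": "GA",
        },
    },
).execute()

# 2. Bind the custom role to the service account on the project.
crm = discovery.build("cloudresourcemanager", "v1")
policy = crm.projects().getIamPolicy(resource=PROJECT_ID, body={}).execute()
policy.setdefault("bindings", []).append(
    {"role": f"projects/{PROJECT_ID}/roles/instanceLister", "members": [SA_MEMBER]}
)
crm.projects().setIamPolicy(resource=PROJECT_ID, body={"policy": policy}).execute()
```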
Question 5 Single Choice
Your organization's record data is stored in Cloud Storage, and it must be retained for a minimum of seven years, with this policy being permanent. How should you proceed to ensure compliance with this requirement?
Explanation

Correct action – IV
Identify the buckets containing record data.
Apply a retention policy to each bucket with a 7-year period.
Enable Bucket Lock on those buckets to make the retention policy permanent and unchangeable (it can only be increased, never removed or shortened).
Locking the retention policy guarantees that no one—neither users nor even Google—can delete objects or alter the retention settings until each object has met its 7-year requirement.
Why the other options don’t suffice
I. Monitoring and alerts alone do not prevent someone from shortening or removing the retention policy. Without Bucket Lock, a malicious or mistaken change could violate your 7-year mandate despite alerts.
II. Removing IAM roles to block policy updates is fragile and error-prone—you risk accidentally stripping away needed permissions, and it still doesn’t stop an Organization Owner or Administrator from re-granting or bypassing those roles.
III. “Implement a bucket policy” is vague—Cloud Storage does not offer a separate “bucket policy” construct for retention. Only the built-in retention policy plus Bucket Lock provides an enforceable, irreversible WORM guarantee.
Option IV is the only method that both enforces the 7-year retention and makes it permanently immutable.
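A minimal sketch of applying and locking such a policy with the google-cloud-storage Python client follows; the bucket name is hypothetical. Note that locking is irreversible—afterwards the retention period can only be increased.

```python
# pip install google-cloud-storage
# Minimal sketch (hypothetical bucket): set a 7-year retention policy, then
# lock it with Bucket Lock so it can never be removed or shortened.
from google.cloud import storage

SEVEN_YEARS_SECONDS = 7 * 365 * 24 * 60 * 60

client = storage.Client()
bucket = client.get_bucket("example-records-bucket")   # hypothetical bucket

bucket.retention_period = SEVEN_YEARS_SECONDS
bucket.patch()

# Irreversible: objects become WORM-protected for the retention period.
bucket.lock_retention_policy()
```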
Question 6 Single Choice
You need to establish a Cloud Interconnect connection between your company’s on-premises data center and VPC host network. Your goal is to ensure that on-premises applications can only access Google APIs over the Cloud Interconnect and not through the public internet. You are required to use only APIs supported by VPC Service Controls to mitigate against exfiltration risk to non-supported APIs. How should you configure the network?
Explanation

Correct choice: IV
Using restricted.googleapis.com and advertising the 199.36.153.4/30 VIP (restricted Google APIs) over your Cloud Interconnect satisfies both security goals:
Traffic path – the /30 VIP is only reachable inside Google’s network, so packets from on-prem never leave the private Cloud Interconnect path.
Service restriction – restricted.googleapis.com exposes only the APIs that are on the VPC Service Controls allow-list, blocking all others and removing the exfiltration risk.
Once the VIP is advertised by Cloud Router, create private DNS records so that any *.googleapis.com lookup returns the restricted VIP (usually by CNAMEing *.googleapis.com → restricted.googleapis.com, with A records pointing to 199.36.153.4–199.36.153.7).
Why the other statements don’t work
I. Enable Private Google Access on subnets & use global dynamic routing
Private Google Access on subnets affects Google Cloud VMs, not on-prem hosts.
It ordinarily resolves to private.googleapis.com, which permits all Google APIs, including those not protected by VPC Service Controls. As a result, on-prem apps could still call non-supported APIs.
II. CNAME *.googleapis.com to restricted.googleapis.com with A records in 199.36.153.8/30
The 199.36.153.8/30 block belongs to private.googleapis.com, not the restricted VIP. Using the wrong address range breaks connectivity and, even if it were reachable, would again allow access to non-restricted APIs.
III. Use private.googleapis.com
As documented, this VIP exposes most Google APIs and specifically does not limit traffic to the VPC Service Controls list, so it fails the exfiltration requirement.
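To illustrate the DNS side of this setup, the sketch below creates a private Cloud DNS zone that answers restricted.googleapis.com with the 199.36.153.4/30 addresses and CNAMEs *.googleapis.com to it, using the Google API Python client. The project and network names are hypothetical, and the same /30 must still be advertised to on-premises via Cloud Router.

```python
# pip install google-api-python-client
# Minimal sketch (hypothetical project/network): private zone forcing all
# *.googleapis.com lookups to the restricted VIP 199.36.153.4/30.
from googleapiclient import discovery

PROJECT = "interconnect-host-project"   # hypothetical host project
NETWORK = (
    "https://www.googleapis.com/compute/v1/projects/"
    f"{PROJECT}/global/networks/shared-vpc"        # hypothetical VPC network
)

dns = discovery.build("dns", "v1")

dns.managedZones().create(
    project=PROJECT,
    body={
        "name": "googleapis-restricted",
        "dnsName": "googleapis.com.",
        "description": "Send Google API traffic to the restricted VIP",
        "visibility": "private",
        "privateVisibilityConfig": {"networks": [{"networkUrl": NETWORK}]},
    },
).execute()

dns.changes().create(
    project=PROJECT,
    managedZone="googleapis-restricted",
    body={
        "additions": [
            {
                "name": "restricted.googleapis.com.",
                "type": "A",
                "ttl": 300,
                "rrdatas": ["199.36.153.4", "199.36.153.5",
                            "199.36.153.6", "199.36.153.7"],
            },
            {
                "name": "*.googleapis.com.",
                "type": "CNAME",
                "ttl": 300,
                "rrdatas": ["restricted.googleapis.com."],
            },
        ]
    },
).execute()
```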
Question 7 Multiple Choice
Applications frequently need to use `secrets`—small bits of sensitive information—during either the build or run phases. An administrator handling these secrets on GCP aims to monitor `who performed what action, in which location, and at what time` within their GCP projects.
Which two logging streams would give the administrator the desired information? (Select two.)
Explanation

✅ Correct Answers: A. Admin Activity logs and C. Data Access logs
✅ Explanation:
To answer “who performed what action, in which location, and at what time”—particularly for sensitive operations like accessing secrets—the administrator needs visibility into both administrative actions and data access activities.
Google Cloud provides the following types of Cloud Audit Logs:
Admin Activity logs – Track who did what administrative action (e.g., creating/deleting a secret).
Data Access logs – Track who accessed the contents of a resource, such as reading a secret value.
✅ A. Admin Activity logs
What it captures: All admin-level operations performed via the GCP Console, SDK, or API (e.g., creating, deleting, updating secrets or IAM policies).
These logs are enabled by default and cannot be disabled.
Relevant for tracking configuration changes to Secret Manager or IAM bindings.
Official Doc:
Cloud Audit Logs – Admin Activity
✅ C. Data Access logs
What it captures: Logs access to the contents of resources, like reading the actual secret data from Secret Manager.
This is not enabled by default (except for BigQuery), and needs to be explicitly enabled.
It shows who accessed sensitive information, what they accessed, and when.
Official Doc:
Cloud Audit Logs – Data Access
❌ Why Other Options Are Incorrect:
B. System Event logs
These record Google system actions (e.g., actions performed by Google services on your behalf, like VM restarts).
They do not include user-initiated access or secret interactions.
Not relevant for auditing who accessed or modified secrets.
Reference:
System Event logs
D. VPC Flow logs
VPC Flow Logs record network-level data (e.g., IP traffic flow between VMs).
They provide no information on user identity or resource access, so they are irrelevant for auditing secrets.
Reference:
VPC Flow Logs
✅ Final Answer:
A. Admin Activity logs
C. Data Access logs
These two log types provide the necessary visibility into who accessed or modified secrets, when, and where, in accordance with best practices for auditing sensitive operations on Google Cloud.
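As a practical illustration, the sketch below pulls recent Admin Activity and Data Access audit entries for Secret Manager with the google-cloud-logging Python client and prints who did what, and when. The project ID is hypothetical, and Data Access logs must already be enabled for Secret Manager for entries to appear.

```python
# pip install google-cloud-logging
# Minimal sketch (hypothetical project): list recent Secret Manager audit
# entries from the Admin Activity and Data Access logs.
from google.cloud import logging as gcp_logging

PROJECT = "secrets-project"   # hypothetical project

client = gcp_logging.Client(project=PROJECT)

log_filter = (
    'protoPayload.serviceName="secretmanager.googleapis.com" AND ('
    f'logName="projects/{PROJECT}/logs/cloudaudit.googleapis.com%2Factivity" OR '
    f'logName="projects/{PROJECT}/logs/cloudaudit.googleapis.com%2Fdata_access")'
)

for i, entry in enumerate(
    client.list_entries(filter_=log_filter, order_by=gcp_logging.DESCENDING)
):
    if i >= 20:   # only inspect the 20 most recent entries
        break
    payload = entry.payload if isinstance(entry.payload, dict) else {}
    print(
        entry.timestamp,                                               # when
        payload.get("authenticationInfo", {}).get("principalEmail"),   # who
        payload.get("methodName"),                                     # what
        payload.get("resourceName"),                                   # on which resource
    )
```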
Question 8 Single Choice
How can you ensure the trustworthiness and alignment with security requirements of operating system images used across projects during the migration of virtual machines (VMs) to Google Cloud?
Explanation

Correct choice – I. Enforce an organization-policy constraint that lets VMs boot only from images in your approved “trusted-image” project
Set the list constraint constraints/compute.trustedImageProjects at the organization (or folder/project) level. When the constraint is enforced, any attempt to create a boot disk from an image outside the allowed project(s) fails, so every VM across every project inherits the same hardened, security-reviewed image catalog.
Google’s documentation calls this the Trusted image policy: “Use the Trusted image feature to define an organization policy that allows principals to create persistent disks only from images in specific projects.”
You can manage the policy in the console under Organization policies → Define trusted image projects or with the Cloud CLI, making it straightforward to prove compliance during the migration as well as after.
Why the other approaches don’t meet the requirement
II. Require Shielded VMs everywhere
The boolean constraint constraints/compute.requireShieldedVm does harden the boot process (Secure Boot, vTPM, integrity monitoring) but does not restrict which image project a VM can use. A user could still select an unapproved public or custom image, defeating the “trusted image catalog” requirement.
III. Trigger a Cloud Function when a VM is created and check whether its image is current/not deprecated
This is reactive rather than preventive—you discover non-compliant images after the VM exists.
Even for approved images, keeping a function up-to-date with deprecation states (see the “Deprecate a custom image” workflow) adds operational overhead with no guarantee that every edge case is caught.
IV. Run a vulnerability scanner against the image repository
Artifact Analysis and other scanners are excellent for container images, but they don’t stop someone from instantiating a VM from a different, unscanned project. Scanning alone offers no enforcement mechanism; coupling it with the trusted-image org policy is a best practice, but by itself it doesn’t satisfy the “use only approved images” control.
Because option I enforces the policy before any VM is created and is purpose-built for exactly this use case, it’s the only action that efficiently guarantees all VMs migrate with approved, security-compliant operating-system images.
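A minimal sketch of setting this constraint programmatically with the google-cloud-org-policy Python client is shown below; the organization ID and the trusted image project are placeholders. (If a policy for the constraint already exists, update_policy would be used instead of create_policy.)

```python
# pip install google-cloud-org-policy
# Minimal sketch (hypothetical IDs): allow boot-disk images only from the
# approved image project, organization-wide.
from google.cloud import orgpolicy_v2

ORG_ID = "123456789012"                              # placeholder organization
TRUSTED_IMAGE_PROJECT = "projects/trusted-images"    # placeholder image project

client = orgpolicy_v2.OrgPolicyClient()

policy = {
    "name": f"organizations/{ORG_ID}/policies/compute.trustedImageProjects",
    "spec": {
        "rules": [
            {"values": {"allowed_values": [TRUSTED_IMAGE_PROJECT]}}
        ]
    },
}

client.create_policy(
    request={"parent": f"organizations/{ORG_ID}", "policy": policy}
)
```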
Question 9 Single Choice
The security operations team at CyberNexa Corp wants to centralize log collection from all development projects for analysis in their SIEM system. These development projects are grouped under the NONPROD folder alongside test and pre-production environments. All of these projects are billed under the XYZ-BILLING account.
What is the most effective logging export approach to provide a unified log view in the SIEM?
Explanation

✅ Correct Answer: A
Create a logging sink at the folders/NONPROD level, set includeChildren to True, and export logs to a Cloud Pub/Sub topic in a centralized SIEM project.
Configure the SIEM system to subscribe to that Pub/Sub topic.
✅ Why Option A is Correct:
A folder-level sink with includeChildren: true captures logs from all current and future projects under the NONPROD folder automatically. This is the most centralized, scalable, and maintainable approach.
Exporting logs to Cloud Pub/Sub allows the SIEM to ingest logs in near real-time, which is essential for security monitoring.
Centralizing the export configuration at the folder level avoids the overhead of managing individual sinks in each project.
Google Documentation Reference:
Exporting Logs Using Log Sinks
Folder-level sinks with includeChildren
“You can create a sink at the organization or folder level and set includeChildren to true to capture logs from all contained projects and resources.”
❌ Why Other Options Are Incorrect:
❌ Option B: Set up a Cloud Storage sink at the billingAccounts/XYZ-BILLING level
A sink parented on the billing account captures only the limited logs associated with the billing account itself — billingAccounts/ is a valid sink parent for usage-style logs, not for full exports of every project it bills.
Additionally, setting includeChildren: false would not aggregate logs across projects.
Storage-based sinks are also not real-time, and less suited for SIEM use cases.
Documentation:
Supported resource types for sinks
“You can create sinks at the project, folder, billing account (usage logs only), and organization levels.”
❌ Option C: Create individual log sinks in each development project that export logs to a centralized Pub/Sub topic
While technically valid, this requires manual setup and maintenance in every individual project.
It’s less scalable and harder to manage, especially if projects are frequently added or removed under the NONPROD folder. Folder-level sinks with includeChildren=true (Option A) achieve the same goal with better automation and manageability.
❌ Option D: Use public buckets and read from them in SIEM
Publicly accessible buckets are a security risk and violate GCP security best practices.
Logs may contain sensitive data, and public exposure is not acceptable for security logging.
Also, this approach is not real-time and is operationally cumbersome.
Documentation Reference:
Best practices for securing Cloud Storage
“Avoid making buckets or objects public unless strictly necessary.”
✅ Final Answer:
A. Create a logging sink at the folders/NONPROD level, set includeChildren to True, and export logs to a Cloud Pub/Sub topic in a centralized SIEM project. Configure the SIEM system to subscribe to that Pub/Sub topic.
This approach ensures centralized, real-time, and secure log export while being scalable and easy to manage.
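The sketch below shows what creating such an aggregated sink could look like with the Google API Python client; the folder ID, SIEM project, and topic name are hypothetical. The sink’s writer identity must then be granted roles/pubsub.publisher on the topic, and the SIEM subscribes to it.

```python
# pip install google-api-python-client
# Minimal sketch (hypothetical IDs): aggregated folder-level sink with
# includeChildren=True, routing logs to a Pub/Sub topic in the SIEM project.
from googleapiclient import discovery

FOLDER = "folders/111111111111"   # hypothetical NONPROD folder ID
TOPIC = "pubsub.googleapis.com/projects/siem-project/topics/nonprod-logs"

logging_api = discovery.build("logging", "v2")

sink = logging_api.folders().sinks().create(
    parent=FOLDER,
    uniqueWriterIdentity=True,
    body={
        "name": "nonprod-to-siem",
        "destination": TOPIC,
        "includeChildren": True,
    },
).execute()

# Grant this service account roles/pubsub.publisher on the topic so the
# sink can publish; the SIEM then consumes from a subscription on the topic.
print("Sink writer identity:", sink["writerIdentity"])
```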
Question 10 Single Choice
You are part of the security team at AlphaTrust Corp. Currently, your team operates a single GCP project that hosts both credit card payment systems and non-sensitive workloads such as web applications and data processing services. To meet compliance goals, you aim to reduce the PCI DSS audit scope as much as possible.
What action should you take?
Explanation

✅ Correct Answer: C. Relocate the cardholder data environment (CDE) into a separate Google Cloud project.
✅ Explanation:
To minimize PCI DSS audit scope, the most effective strategy is to isolate the Cardholder Data Environment (CDE) from other systems. In Google Cloud, this is best achieved by placing the CDE in its own GCP project.
By doing this:
You achieve clear segmentation between sensitive (PCI-scoped) and non-sensitive workloads.
Access controls, audit logs, VPCs, IAM roles, and security policies can be independently managed.
It restricts the scope of the PCI DSS audit to only the project containing the CDE, reducing compliance overhead.
Official Documentation Reference:
Google Cloud PCI DSS Overview
PCI DSS Scoping Guidance
“Systems that store, process, or transmit cardholder data are in scope for PCI DSS. Proper segmentation can reduce the number of systems in scope.”
❌ Justifications for Incorrect Options:
A. Enforce multi-factor authentication (MFA) for administrative access to the web application
While MFA is a PCI DSS requirement (8.3) for administrative access, it does not reduce audit scope.
MFA enhances security, but both CDE and non-CDE systems would still be within scope if hosted together.
B. Ensure that only PA-DSS certified applications are used for payment handling
Using PA-DSS certified applications is a best practice for software compliance, but it does not reduce infrastructure scope.
The project hosting these applications still requires full PCI compliance.
D. Use a VPN to secure all traffic between the corporate network and the cloud environment
Encrypting traffic (e.g., via VPN or TLS) is a PCI requirement for securing data in transit.
However, like MFA, this does not isolate the CDE, and therefore does not reduce audit scope.
✅ Final Answer:
C. Relocate the cardholder data environment (CDE) into a separate Google Cloud project.
This approach effectively segments the CDE, minimizing PCI DSS scope and allowing clearer security boundaries between sensitive and non-sensitive workloads.
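As a small illustration of the segmentation step, the sketch below creates a dedicated project for the cardholder data environment with the google-cloud-resource-manager Python client; the project ID and parent folder are hypothetical. IAM, networking, and audit configuration are then managed on this project independently of the non-sensitive workloads.

```python
# pip install google-cloud-resource-manager
# Minimal sketch (hypothetical IDs): dedicated project for the CDE so the
# PCI DSS scope is limited to this project.
from google.cloud import resourcemanager_v3

projects = resourcemanager_v3.ProjectsClient()

operation = projects.create_project(
    request={
        "project": {
            "project_id": "alphatrust-cde-prod",    # hypothetical project ID
            "display_name": "PCI Cardholder Data Environment",
            "parent": "folders/222222222222",       # hypothetical PCI folder
        }
    }
)
cde_project = operation.result()
print("Created", cde_project.name)
```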
References:
https://cloud.google.com/solutions/pci-dss-compliance-in-gcp



