
Professional Cloud DevOps Engineer - Google Cloud Certified Exam Questions
Question 11 Single Choice
You are building and running client applications in Cloud Run and Cloud Functions. Your client requires that all logs be available for one year so that they can import the logs into their logging service. You must minimize the required code changes.
What should you do?
Explanation

Click "Show Answer" to see the explanation here
Incorrect Answers:
A. Update all images in Cloud Run and all functions in Cloud Functions to send logs to both Cloud Logging and the client's logging service. Ensure that all the ports required to send logs are open in the VPC firewall.
This approach requires significant code changes to implement dual logging capabilities in each application and function. Managing open ports in the VPC firewall to enable log sending adds complexity and potential security concerns. While it could meet the retention requirement, it involves considerable operational overhead and deviates from the goal of minimizing code changes.
B. Create a Pub/Sub topic, subscription, and logging sink. Configure the logging sink to send all logs to the topic. Give your client access to the topic to retrieve the logs.
Setting up a Pub/Sub topic and subscription with a logging sink captures logs without needing application code changes.
While Pub/Sub is efficient for real-time log streaming, it is not a storage service: message retention is limited (at most 31 days), so it cannot satisfy a one-year retention requirement on its own.
The client would need to actively pull logs from Pub/Sub and manage their own storage and retention, which could introduce complexity on their end.
C. Create a storage bucket and appropriate VPC firewall rules. Update all images in Cloud Run and all functions in Cloud Functions to send logs to a file within the storage bucket.
This involves updating all images and functions to send logs to a Cloud Storage bucket, which requires significant code changes.
Managing firewall rules for log transmission can introduce security risks and additional management overhead.
This approach diverts from the objective of minimizing code changes and adds complexity in log management.
Correct answer:
D. Create a logs bucket and a logging sink. Set the retention on the logs bucket to 365 days. Configure the logging sink to send logs to the bucket. Give your client access to the bucket to retrieve the logs.
This option leverages Cloud Logging, which already captures Cloud Run and Cloud Functions logs automatically, and a logging sink that routes them to a dedicated Cloud Logging log bucket. Setting the bucket's retention to 365 days ensures that logs are kept for the required duration. It minimizes code changes, because log capture and routing are handled at the infrastructure level rather than in application code. Granting the client read access to the bucket for log retrieval is straightforward and secure, keeping operational overhead low.
In summary, while Options A and C could achieve the goal, they require significant code changes and add operational complexity.
Option B provides a real-time log streaming solution but lacks inherent long-term storage and retention management.
Option D stands out as the most efficient and straightforward approach, aligning with best practices for cloud-native log management, minimizing code changes, and ensuring the required log retention.
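A minimal gcloud sketch of this setup (the project ID, bucket and sink names, and client identity are placeholders):

# Create a user-defined Cloud Logging bucket with 365-day retention
gcloud logging buckets create client-logs --location=global --retention-days=365

# Route all Cloud Run and Cloud Functions logs into that bucket
gcloud logging sinks create client-logs-sink \
  logging.googleapis.com/projects/PROJECT_ID/locations/global/buckets/client-logs \
  --log-filter='resource.type="cloud_run_revision" OR resource.type="cloud_function"'

# Allow the client to read the logs (roles/logging.viewAccessor can be scoped
# to the bucket's log view with an IAM condition if project-wide access is too broad)
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member='user:client@example.com' --role='roles/logging.viewAccessor'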
Links:
https://cloud.google.com/logging/docs/routing/overview
Question 12 Single Choice
Your team of Infrastructure DevOps Engineers is expanding, and you have initiated the use of Terraform for infrastructure management. You require a method for implementing code versioning and sharing code with other team members.
What steps should you take?
Explanation

Click "Show Answer" to see the explanation here
Incorrect Answers:
B. Store the Terraform code in a network shared folder with child folders for each version release. Ensure that everyone works on different files.
Storing Terraform code in a network shared folder with versioned child folders is inefficient. It lacks proper version control and collaboration features, and it can lead to confusion or conflicts if multiple people work on the same files.
C. Store the Terraform code in a Cloud Storage bucket using object versioning. Give access to the bucket to every team member so they can download the files.
Using a Cloud Storage bucket with object versioning can maintain different versions of files, but it's not optimal for collaboration. It lacks features like branching, pull requests, and code review processes that are essential in a team environment.
D. Store the Terraform code in a shared Google Drive folder so it syncs automatically to every team member's computer. Organize files with a naming convention that identifies each new version.
Storing code in a shared Google Drive folder offers automatic syncing but is not suitable for code management. It lacks proper version control, diff viewing, and other collaborative features necessary for development teams.
Correct answer:
A. Store the Terraform code in a version-control system. Establish procedures for pushing new versions and merging with the master.
This is the most effective approach. Version-control systems like Git provide robust tools for tracking changes, managing versions, and collaborating. Establishing procedures for pushing new versions and merging with the master branch ensures a structured and controlled process.
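For example, a typical Git-based flow looks like this (the repository URL and branch names are illustrative):

git clone https://github.com/example-org/terraform-infra.git
cd terraform-infra
git checkout -b feature/add-gke-node-pool
# edit the .tf files, then commit and push the branch for review
git add .
git commit -m "Add GKE node pool module"
git push origin feature/add-gke-node-pool
# open a pull/merge request; after review and approval, merge into the main branch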
Links:
https://www.terraform.io/docs/cloud/guides/recommended-practices/part3.3.html
Question 13 Single Choice
You have deployed a stateless application within a sizable Google Kubernetes Engine (GKE) cluster using the Standard configuration. This application runs multiple pods concurrently and experiences varying levels of incoming traffic. Your primary objectives are to maintain a consistent user experience regardless of fluctuations in traffic and to optimize resource utilization within the cluster.
What should you do?
Explanation

Click "Show Answer" to see the explanation here
Incorrect Answers:
A. Configure a cron job to scale the deployment on a schedule
Scheduling scaling operations can be useful for predictable load patterns but may not adapt well to unexpected traffic spikes or drops. This approach lacks the dynamic responsiveness needed for fluctuating traffic levels.
C. Configure a Vertical Pod Autoscaler
VPA automatically adjusts the CPU and memory allocations of pods, but not their count.
This option is less suited for handling fluctuating traffic since it doesn't scale the number of pods.
D. Configure cluster autoscaling on the node pool.
This adjusts the number of nodes in the cluster based on the demands of the pods. While it helps optimize resource utilization at the node level, it doesn’t directly scale the number of pods like HPA.
Correct answer:
B. Configure a Horizontal Pod Autoscaler.
HPA dynamically adjusts the number of pods in a deployment based on real-time metrics like CPU utilization.
Ideal for applications with varying levels of traffic, ensuring resources are efficiently used and the user experience remains consistent.
For this scenario, option B (HPA) is the most suitable as it directly addresses the need to adapt to varying traffic levels by scaling the number of pods.
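As a sketch, a CPU-based HPA can be created imperatively (the deployment name, target utilization, and replica bounds are illustrative):

kubectl autoscale deployment web-frontend --cpu-percent=60 --min=3 --max=20
kubectl get hpa web-frontend    # shows current vs. target utilization and replica count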
Links:
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
Question 14 Single Choice
You are facing a significant volume of outages within the production systems you are responsible for. These outages trigger alerts, mainly because of unhealthy systems that are automatically restarted within a minute.
To address this situation and adhere to Site Reliability Engineering (SRE) practices, what should you do?
Explanation

Click "Show Answer" to see the explanation here
Incorrect Answers:
B. Redefine the related SLO so that the error budget is not exhausted
Adjusting the Service Level Objective (SLO) could make it more realistic or aligned with the current capabilities of the system. However, this doesn't solve the underlying issues causing the outages. It's more about managing expectations rather than improving system stability.
C. Distribute the alerts to engineers in different time zones
This approach can ensure 24/7 coverage and immediate responses to alerts. However, it doesn't address the root cause of the alerts and may lead to distributed teams being overwhelmed if the volume of alerts is high.
D. Create an incident report for each of the alerts
Documenting incidents is a good practice for post-mortem analysis and learning. However, if the system is generating a significant volume of alerts, this might be impractical and could overwhelm the team with documentation tasks rather than resolving the issues.
Correct answer:
A. Eliminate alerts that are not actionable
This step ensures that the team focuses on alerts that require immediate attention or indicate real issues. Non-actionable alerts can lead to alert fatigue and distract from addressing actual system problems. Option A, eliminating non-actionable alerts, is the most effective immediate approach, as it focuses on reducing alert fatigue and ensuring that attention is given to genuine and critical issues.
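As a sketch, noisy alerting policies can be reviewed and removed with gcloud (this assumes the alpha Cloud Monitoring command group is available; the policy ID is a placeholder):

gcloud alpha monitoring policies list --format="table(name,displayName,enabled)"
gcloud alpha monitoring policies delete projects/PROJECT_ID/alertPolicies/POLICY_ID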
Question 15 Single Choice
You are responsible for a high-traffic web application and are keen to guarantee that the home page loads quickly. To initiate this process, you intend to establish a Service Level Indicator (SLI) that measures the latency of home page requests, with an acceptable page load time target of 100 ms.
What is the Google-recommended approach for calculating this SLI?
Explanation

Click "Show Answer" to see the explanation here
Incorrect Answers:
A. Bucketize the request latencies into ranges, and then compute the percentile at 100 ms.
While bucketizing request latencies is useful for detailed analysis, focusing solely on a specific percentile (100 ms) may not provide a comprehensive view of overall performance.
B. Bucketize the request latencies into ranges, and then compute the median and 90th percentiles.
Offers a broader view of performance but may not directly align with the specific goal of ensuring home page loads under 100 ms.
D. Count the number of home page requests that load in under 100 ms, and then divide by the total number of all web application requests.
This dilutes the specific focus on home page performance by including all web application requests in the calculation.
Correct answer:
C. Count the number of home page requests that load in under 100 ms, and then divide by the total number of home page requests.
Directly measures how many requests meet the specific target, providing a clear and focused assessment of performance against the set goal.
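In other words, over the chosen measurement window:

SLI = (number of home page requests served in under 100 ms) / (total number of home page requests)

For example, if 9,870 of 10,000 home page requests load in under 100 ms, the SLI is 0.987 (98.7%).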
Question 16 Single Choice
You are in the process of deploying an application to Cloud Run, and this application necessitates a password for startup. Your organization mandates the rotation of all passwords every 24 hours, and your application must always use the most recent password.
To achieve this without causing any downtime, what step should you take?
Explanation

Click "Show Answer" to see the explanation here
Incorrect Answers:
A. Store the password in Secret Manager and send the secret to the application by using environment variables.
This approach is not recommended primarily because environment variables are resolved at instance startup. If the password is rotated every 24 hours as required, the environment-variable method would not pick up the new value in a running instance without a restart. In contrast, mounting the secret as a volume (Option B) allows the application to continuously read the most recent version of the secret, which is crucial for complying with the daily rotation policy.
C. Use Cloud Build to add your password into the application container at build time. Ensure that Artifact Registry is secured from public access.
Integrating the password at build time would necessitate frequent rebuilds and redeployments, which is less efficient for daily password rotations.
D. Store the password directly in the code. Use Cloud Build to rebuild and deploy the application each time the password changes.
Storing sensitive information directly in the code is a security risk and goes against best practices for secret management. It also requires frequent redeployments for updates.
Correct answer:
B. Store the password in Secret Manager and mount the secret as a volume within the application.
This approach ensures that the application always has access to the latest version of the secret directly from Secret Manager. It's particularly effective for frequently rotated secrets, as the application can read the most current secret without restarting.
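A minimal deployment sketch (the service name, image path, secret name, and mount path are placeholders):

gcloud run deploy trading-app \
  --image=us-docker.pkg.dev/PROJECT_ID/repo/trading-app:latest \
  --set-secrets=/secrets/app/password=app-password:latest
# The application reads the password from the file /secrets/app/password.
# Because the reference points at the "latest" version, rotated values become
# available to the running service without a redeploy, per the explanation above.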
Links:
https://cloud.google.com/run/docs/configuring/services/secrets
Question 17 Single Choice
You are responsible for a trading application written in Python and hosted on the App Engine flexible environment. Your objective is to customize the error information that is sent to Cloud Error Reporting (formerly known as Stackdriver Error Reporting).
What steps should you take?
Explanation

Click "Show Answer" to see the explanation here
Incorrect Answers:
A. Install the Cloud Error Reporting library for Python and execute your code on a Compute Engine VM.
While this could work, it involves moving your application from App Engine to Compute Engine, which may not align with your current infrastructure and could introduce unnecessary complexity.
B. Install the Cloud Error Reporting library for Python and run your code on Google Kubernetes Engine.
Similar to Option A, this involves shifting your deployment platform to GKE, which might not be needed if your application is already optimized for App Engine.
D. Utilize the Cloud Error Reporting API to direct errors from your application to ReportedErrorEvent and then generate log entries with appropriately formatted error messages in Cloud Logging (formerly known as Stackdriver Logging).
Utilizing the API directly for custom error reporting offers more control over error logging. However, it requires a deeper integration and understanding of the API, potentially increasing the complexity of error handling in your application.
Correct answer:
C. Install the Cloud Error Reporting library for Python and run your code on the App Engine flexible environment.
This step allows for direct integration of Cloud Error Reporting within your application environment, enabling the customization of error information before it is sent to Cloud Error Reporting. It aligns well with the App Engine flexible environment and the Python language of your application.
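A sketch of the setup steps (the package name is the library's PyPI name; the reporting calls mentioned in the comments come from the library's client):

# Add the library to the App Engine flexible app's dependencies
echo "google-cloud-error-reporting" >> requirements.txt
# In the application code, report customized errors through the client,
# e.g. error_reporting.Client().report("...") or report_exception() in an except block
gcloud app deploy app.yaml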
Links:
https://cloud.google.com/error-reporting/docs/setup/python#app-engine
https://cloud.google.com/error-reporting/docs/setup/app-engine-flexible-environment
Question 18 Single Choice
You are configuring connectivity between Google Kubernetes Engine (GKE) clusters in different VPCs. You've observed that nodes in Cluster A cannot access nodes in Cluster B. You suspect that this issue is related to network configuration and need to troubleshoot it. However, you don't have execution access to workloads and nodes. Your objective is to identify the specific network layer at which connectivity is broken.
What should you do?
Explanation

Click "Show Answer" to see the explanation here
Incorrect Answers:
A. Install a toolbox container on the node in Cluster A. Confirm that the routes to Cluster B are configured appropriately.
This option involves deploying a toolbox container directly on the nodes to inspect the routes. However, since you mentioned that you don't have execution access to workloads and nodes, this method isn't feasible in your situation. It requires a level of access that you do not have.
C. Use a debug container to run the traceroute command from Cluster A to Cluster B and from Cluster B to Cluster A. Identify the common failure point.
Similar to option A, this approach also requires deploying a container and executing commands within it. Running traceroute would indeed help in identifying the network path and where the connectivity breaks down. However, again, because you lack execution access to workloads and nodes, this option is not practical in your case.
D. Enable VPC Flow Logs in both VPCs, and monitor packet drops.
Enabling VPC Flow Logs and monitoring for packet drops is a valid method for diagnosing network issues. However, it provides a more general view of the network traffic and might not be as directly useful for troubleshooting connectivity between specific GKE clusters. This approach is more suitable for broader network traffic analysis and might require extensive filtering and analysis to pinpoint the specific issue between the two clusters.
Correct answer:
B. Use Network Connectivity Center to perform a Connectivity Test from Cluster A to Cluster B.
The Connectivity Tests feature (documented under Network Intelligence Center; see the link below) lets you test and diagnose the network path and configuration between two endpoints across different VPCs, in this case between nodes in Cluster A and Cluster B. The analysis evaluates the effective firewall rules, routes, and peering configuration along the path and reports the step at which traffic is dropped, providing insight into where connectivity breaks.
Performing a Connectivity Test remains the most suitable choice for this scenario. It provides a direct, feasible way to diagnose the connectivity issue between the two GKE clusters without requiring execution access to the nodes or workloads.
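As a sketch (project, network, and IP values are placeholders; port 10250 is just an example):

gcloud network-management connectivity-tests create gke-a-to-gke-b \
  --source-ip-address=NODE_A_INTERNAL_IP \
  --source-network=projects/PROJECT_A/global/networks/vpc-a \
  --destination-ip-address=NODE_B_INTERNAL_IP \
  --destination-network=projects/PROJECT_B/global/networks/vpc-b \
  --protocol=TCP --destination-port=10250

gcloud network-management connectivity-tests describe gke-a-to-gke-b \
  --format="yaml(reachabilityDetails)"
# reachabilityDetails lists each analyzed hop and the step where traffic is dropped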
Links:
https://cloud.google.com/network-intelligence-center/docs/connectivity-tests/concepts/overview
Question 19 Single Choice
Your application services are hosted on Google Kubernetes Engine (GKE). You aim to ensure that only images from your centrally-managed Google Container Registry (GCR) image repository in the 'altostrat-images' project can be utilized for deployment to the cluster, all while minimizing development time.
What should you do?
Explanation

Click "Show Answer" to see the explanation here
Incorrect Answers:
A. Create a custom builder for Cloud Build that will only push images to gcr.io/altostrat-images.
While a custom builder in Cloud Build can enforce that images are only pushed to a specific GCR repository, it does not prevent deployment of images from other repositories to the GKE cluster. This option does not fulfill the requirement of restricting deployments to images only from the specified repository.
C. Add logic to the deployment pipeline to check that all manifests contain only images from gcr.io/altostrat-images.
Adding custom logic to the deployment pipeline would ensure that only images from the specified repository are used. However, this requires additional development and maintenance efforts and is not as streamlined or centralized as using Binary Authorization. This approach is less efficient and more prone to human error.
D. Add a tag to each image in gcr.io/altostrat-images and check that this tag is present when the image is deployed.
Tagging images in the GCR repository and verifying these tags during deployment can be a way to ensure that only approved images are deployed. However, this method is less secure and more cumbersome than using Binary Authorization. It requires manual checks and does not provide a centralized enforcement mechanism like Binary Authorization.
Correct answer:
B. Use a Binary Authorization policy that includes the whitelist name pattern gcr.io/altostrat-images/.
Binary Authorization is a GCP service that integrates with GKE to enforce deploy-time security controls, ensuring that only trusted container images are deployed. By creating a Binary Authorization policy that whitelists the specific GCR repository (i.e., gcr.io/altostrat-images/), you can ensure that only images from this repository are allowed to be deployed. This approach directly aligns with the goal of ensuring secure and trusted deployments with minimal additional development effort.
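A sketch of the policy change (the wildcard pattern follows the example-policies page linked below):

gcloud container binauthz policy export > /tmp/policy.yaml
# Edit /tmp/policy.yaml to whitelist only the central repository, e.g.:
#   admissionWhitelistPatterns:
#   - namePattern: gcr.io/altostrat-images/*
gcloud container binauthz policy import /tmp/policy.yaml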
Links:
https://cloud.google.com/binary-authorization/docs/example-policies
https://cloud.google.com/binary-authorization/docs/cloud-build
Question 20 Single Choice
You are responsible for overseeing an application that provides an HTTP endpoint without utilizing a load balancer. The latency of the HTTP responses plays a crucial role in ensuring a positive user experience. You aim to gain insights into the HTTP latencies experienced by all of your users and utilize Cloud Monitoring for this purpose.
What steps should you take?
Explanation

Click "Show Answer" to see the explanation here
Incorrect Answers:
Option A: While a stacked bar graph could visualize certain metrics, latency is more effectively represented using distributions, as it provides insights into the range and variance of response times.
Option B: Cumulative metrics would result in an ever-increasing graph, which is not suitable for tracking individual latency instances. It wouldn't accurately reflect the behavior of HTTP response times.
Option D: Using METRIC_KIND_UNSPECIFIED is not appropriate for specific measurements like latency. It lacks the precision required for monitoring and analyzing latency metrics effectively.
Correct answer:
Option C: Latency is commonly measured as a distribution, which allows for analyzing various percentiles, such as identifying the number of requests slower than the historical 99th percentile. This approach provides a comprehensive view of latency performance.
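A sketch of defining such a metric through the Cloud Monitoring API (the metric type, unit, and description are illustrative):

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://monitoring.googleapis.com/v3/projects/PROJECT_ID/metricDescriptors" \
  -d '{
        "type": "custom.googleapis.com/http/homepage_latencies",
        "metricKind": "DELTA",
        "valueType": "DISTRIBUTION",
        "unit": "ms",
        "description": "Distribution of home page request latencies"
      }'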
Links:
https://cloud.google.com/monitoring/api/v3/kinds-and-types



