

Professional Cloud Developer - Google Cloud Certified Exam Questions
Question 11 Multiple Choice
Which storage solution would best fulfill the requirements of transferring log data to a Cloud Storage bucket daily, allowing authenticated users to review logs from the prior two weeks for critical events and enabling annual reviews by an external auditor while minimizing costs? Choose two options from the following:
Explanation

The two storage solutions that best fulfill these requirements (daily log transfer to a Cloud Storage bucket, a two-week review window for authenticated users, annual reviews by an external auditor, and minimal cost) are:
I. Implement the Bucket Lock feature to enforce a retention policy on the data.
IV. Establish a lifecycle management policy to transition the storage class to Coldline for objects older than 14 days.
Explanation:
Option I suggests implementing the Bucket Lock feature to enforce a retention policy on the data. This ensures that the log data cannot be modified or deleted for a specified period, which aligns with the requirement for allowing authenticated users to review logs from the prior two weeks and enabling annual reviews by an external auditor.
Option IV proposes establishing a lifecycle management policy to transition the storage class to Coldline for objects older than 14 days. This allows for cost-effective storage of older log data while ensuring accessibility for the required duration and minimizing costs by moving data to a lower-cost storage class after the two-week review period.
Option II suggests transitioning the storage class to Coldline for objects older than 14 days, which aligns with the requirement for minimizing costs but does not address the requirement for authenticated users to review logs or enable annual reviews by an external auditor.
Option III mentions generating a JSON Web Token (JWT) for users necessitating access to the Coldline storage buckets, which addresses access control but does not directly fulfill the requirements related to data storage and management.
Option V proposes transitioning the storage class to Nearline for objects older than 14 days, which might reduce costs but does not ensure that the data remains accessible for the required duration or address the need for annual reviews by an external auditor.
Therefore, Options I and IV are the most suitable choices for fulfilling the specified requirements.
References:
https://cloud.google.com/storage/docs/bucket-lock
https://cloud.google.com/storage/docs/lifecycle
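As a rough illustration, both pieces can be configured with gsutil. The bucket name and the 365-day retention period below are placeholders; choose a retention period that matches your audit obligations before locking, because a locked policy cannot be shortened or removed.

```bash
# Hypothetical bucket; adjust the name and retention period to your requirements.
BUCKET=gs://example-log-archive

# Retention policy: objects cannot be deleted or overwritten before they age out.
gsutil retention set 365d "$BUCKET"
# Bucket Lock makes the policy permanent (irreversible), supporting the audit requirement.
gsutil retention lock "$BUCKET"

# Lifecycle rule: transition objects to Coldline once they are older than 14 days.
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
      "condition": {"age": 14}
    }
  ]
}
EOF
gsutil lifecycle set lifecycle.json "$BUCKET"
```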
Question 12 Single Choice
How should you securely manage session and shopping cart data for your ecommerce platform while adhering to Google's recommended standards?
Explanation

The most suitable approach for securely managing session and shopping cart data for your ecommerce platform while adhering to Google's recommended standards is:
IV. Implement a comprehensive approach by utilizing Memorystore for Redis to securely store session data and Firestore for the shopping cart information, ensuring data integrity and scalability for your ecommerce platform.
Explanation:
Option IV suggests a comprehensive approach by leveraging Memorystore for Redis to securely store session data and Firestore for shopping cart information. This approach aligns with Google's recommended standards as it provides a secure, scalable, and managed solution for both session and shopping cart data.
Option I mentions storing session and cart data in local memory and enabling cookie-based session affinity within a global external HTTP(S) load balancer. While this might offer performance benefits, it may not provide the same level of security and scalability as managed services like Memorystore for Redis and Firestore.
Option II proposes using Cloud Storage for the shopping cart details. Object storage is not designed for low-latency, transactional reads and writes or real-time querying, so it is a poor fit for frequently updated session and shopping cart data.
Option III suggests using BigQuery for both session and shopping cart information. While BigQuery is powerful for analytical purposes, it may not be the most optimal solution for real-time data retrieval and management of session and shopping cart data.
Therefore, Option IV provides a balanced and recommended approach for securely managing session and shopping cart data while ensuring scalability and data integrity for your ecommerce platform.
References:
https://cloud.google.com/memorystore
https://cloud.google.com/datastore/docs
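A minimal sketch of provisioning this architecture from the command line is shown below; the instance name, region, tier, and Firestore location are placeholders. The application would then connect to the Redis endpoint for session data and use the Firestore client libraries for cart documents.

```bash
# Hypothetical instance name, region, and tier for the session store.
gcloud redis instances create session-store \
    --size=1 --region=us-central1 --tier=basic

# Create the project's Firestore database (one per project; the location is permanent).
# Older gcloud versions use --region instead of --location.
gcloud firestore databases create --location=nam5
```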
Question 13 Single Choice
How should you set up the unit testing process for Cloud Function code stored in a Cloud Source Repositories repository, ensuring deployment only occurs after passing the tests?
Explanation

The correct option is
IV: Utilize Cloud Build to perform unit tests, employing the specific service account for building; upon test success, Cloud Build automatically deploys the Cloud Function.
Here's why:
Automation & Efficiency: This option streamlines testing and deployment into a single integrated process, saving time and effort.
Enforced Testing: Guarantees code quality because deployment happens only after tests pass. This is critical for preventing broken code from reaching production.
Cloud Build: This Google Cloud service is ideal for building, testing, and deploying cloud-native applications, including Cloud Functions.
Service Account: Using a specific service account with deployment permissions ensures controlled access and security.
Here's why the other options aren't as suitable:
I. Set up Cloud Build to handle Cloud Function deployment, with deployment approval triggered after passing tests. While correct in ensuring that tests run, this option introduces a manual approval step. This can slow down deployment and might be unnecessary in many scenarios.
II. Employ Cloud Build for deploying the Cloud Function, utilizing the specific service account for the build process, and executing unit tests post-deployment. This approach is problematic because it deploys potentially untested code. Executing tests after deployment could reveal issues, necessitating rollbacks and adding complexity.
III. Implement Cloud Build to execute unit tests; upon successful testing, developers can proceed with Cloud Function deployment. This option places the deployment burden on developers and is less efficient than automated deployment through Cloud Build. It risks inconsistent deployment procedures as different developers handle this step.
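A minimal sketch of this pipeline, assuming a Python Cloud Function with pytest-based unit tests, is the cloudbuild.yaml below. Cloud Build runs steps sequentially and stops on the first failure, so the deploy step only runs if the test step succeeds; the function name, runtime, and region are placeholders.

```bash
cat > cloudbuild.yaml <<'EOF'
steps:
# Step 1: run the unit tests; a non-zero exit code aborts the build.
- name: 'python:3.11'
  entrypoint: 'bash'
  args: ['-c', 'pip install -r requirements.txt && pytest']
# Step 2: deploy the Cloud Function only after the tests pass.
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  args: ['gcloud', 'functions', 'deploy', 'my-function',
         '--runtime=python311', '--trigger-http',
         '--region=us-central1', '--source=.']
EOF

# Run the build; the dedicated service account is configured on the build trigger
# (or passed explicitly on newer gcloud versions).
gcloud builds submit --config=cloudbuild.yaml .
```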
Question 14 Single Choice
You want to assess the security vulnerabilities of an application running on App Engine. Which service should you use for this purpose?
Explanation

III. Cloud Security Scanner
Here's why it's the right choice:
Vulnerability Scanning: Cloud Security Scanner is designed specifically to scan web applications for common vulnerabilities like cross-site scripting (XSS), SQL injection, and outdated libraries.
App Engine Integration: It seamlessly integrates with App Engine, making it easy to scan your deployed application.
Actionable Reports: Cloud Security Scanner provides detailed reports on discovered vulnerabilities, along with recommendations for remediation.
Why Other Options Are Less Ideal:
I. Cloud Armor: Cloud Armor is a web application firewall (WAF) primarily focused on mitigating DDoS attacks and enforcing IP-based allow/deny lists. While it enhances security, it doesn't directly scan for application-level vulnerabilities.
II. Stackdriver Debugger: Debugger helps inspect an application's state in production for troubleshooting errors, but doesn't have a security focus.
IV. Stackdriver Error Reporting: Error Reporting aids in error analysis, but may indirectly reveal the consequences of an exploit rather than identifying the vulnerability itself.
References:
https://cloud.google.com/appengine/docs/standard/python/application-security
https://cloud.google.com/security-command-center/docs/concepts-web-security-scanner-overview
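Scans are usually configured in the Cloud Console (Security Command Center > Web Security Scanner), though an alpha gcloud surface also exists. The command below is only a rough sketch with placeholder values; the exact command group and flags may differ by gcloud version, so treat it as an assumption and check the current reference.

```bash
# Hypothetical scan configuration for an App Engine app (alpha surface; verify the flags
# against your installed gcloud version before relying on them).
gcloud alpha web-security-scanner scan-configs create \
    --display-name="appengine-scan" \
    --starting-urls="https://my-app.uc.r.appspot.com"
```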
Question 15 Single Choice
Your company's development teams wish to employ Cloud Build in their projects to build and push Docker images to Container Registry. However, the operations team mandates that all Docker images must be published to a centralized and securely managed Docker registry they oversee. What's the best approach to address this requirement?
Explanation

II. Create a dedicated project for the operations team, equipped with Container Registry. Grant appropriate permissions to the Cloud Build service account in each developer team's project to enable access to the operations team's registry.
Here's why this approach excels:
Centralization: A single registry under the operations team's control provides a unified location for managing and governing Docker images.
Security: IAM permissions ensure fine-grained access control to the centralized registry, aligning with operational security requirements.
Service Accounts: Using the Cloud Build service account for authentication is streamlined and avoids the security risks of storing credentials in source code repositories.
Maintainability: Simplifies the setup and reduces complexity for development teams.
Why Other Options Are Less Ideal:
I. Establish Container Registry within each development team's project ... This creates management overhead for the operations team, who would have to maintain access to numerous registries.
III. ... Generate a Service Account for each development team ... Storing the key file in the repository is a significant security risk and creates more management complexity.
IV. Deploy an open-source Docker Registry on a Compute Engine virtual machine instance ... This increases operational overhead compared to using a managed service like Container Registry and introduces additional security considerations in self-managing a registry.
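As a sketch under assumed project names, the grant amounts to giving each developer project's Cloud Build service account (PROJECT_NUMBER@cloudbuild.gserviceaccount.com) push access to the registry's underlying storage bucket in the operations project. The project ID and number below are placeholders; with Artifact Registry the equivalent grant would be roles/artifactregistry.writer on the repository instead.

```bash
# Hypothetical projects: ops-registry hosts Container Registry,
# 123456789012 is a developer project's number.
OPS_PROJECT=ops-registry
DEV_PROJECT_NUMBER=123456789012

# Container Registry stores images in a bucket named artifacts.<PROJECT_ID>.appspot.com.
gsutil iam ch \
    "serviceAccount:${DEV_PROJECT_NUMBER}@cloudbuild.gserviceaccount.com:roles/storage.admin" \
    "gs://artifacts.${OPS_PROJECT}.appspot.com"
```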
Question 16 Single Choice
To enable clients to download a file from your website for a limited period while adhering to Google's recommended best practices, how should you architect the application?
Explanation

II. Generate and allocate a Cloud Storage signed URL for the file, then provide the URL to the client for downloading.
Here's why this is the ideal choice:
Secure and Scalable: Signed URLs leverage Cloud Storage's security mechanisms and scale well without impacting your application's workload.
Time-limited Access: You can set an expiration time directly on the signed URL, controlling the download window.
Simplicity: Generating signed URLs is relatively straightforward, reducing implementation complexity.
Google Best Practice: Google recommends signed URLs for this use case.
Let's analyze the other options:
I. Email Attachment: This approach is less scalable and secure, especially for larger files compared to a direct download link.
III. Temporary Bucket: Creating and managing temporary buckets adds unnecessary overhead and complexity.
IV. HTTP Cookies: Cookies are designed to store client-side data and aren't ideal for managing file access and security.
Ref:
https://cloud.google.com/storage/docs/access-control/signed-urls
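For example, a time-limited URL can be generated with gsutil using a service account key (or in application code via the Cloud Storage client libraries). The key file, bucket, and object below are placeholders; -d sets the expiration.

```bash
# Hypothetical key file, bucket, and object; the URL expires 10 minutes after creation.
gsutil signurl -d 10m -m GET sa-key.json gs://example-downloads/report.pdf
```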
Question 17 Single Choice
You've recently launched a new service on Cloud Run, which authenticates via a custom service and records transactional data in a Cloud Spanner database. Your aim is to validate the application's capability to handle up to 5,000 read and 1,000 write transactions per second while pinpointing any bottlenecks. Additionally, your testing setup should be able to autoscale. What's the optimal approach?
Explanation

II. Set up a Google Kubernetes Engine cluster with Locust or JMeter images to dynamically generate load tests. Analyze outcomes with Cloud Trace.
Justification:
Suitable Load Testing Tools: Locust and JMeter are industry-standard load testing tools designed specifically to simulate heavy user traffic and stress test systems. They offer flexibility for defining custom request patterns and simulating realistic user behavior.
Scalability with Kubernetes: GKE provides scalable and self-healing infrastructure for running load testing tools. You can spin up multiple instances of JMeter or Locust pods as needed to achieve the desired load of 5,000 read/1,000 write requests per second.
In-depth Tracing with Cloud Trace: Cloud Trace integrates seamlessly with Cloud Run and Cloud Spanner. It provides detailed visibility into request latencies, error rates, and spans, allowing you to pinpoint bottlenecks at different layers of your system.
Why Other Options Are Less Ideal:
I. Develop a testing framework to generate requests and deploy it on Cloud Run. Analyze VPC Flow Logs using Cloud Logging. Cloud Run can scale during the tests, but building your own framework would be time-consuming. VPC Flow Logs don't provide the granular telemetry needed to identify bottlenecks.
III. Use Cloud Tasks to generate a test load. Schedule 60,000 Cloud Task transactions per minute for 10 minutes using Cloud Scheduler. Review results via Cloud Monitoring. While Cloud Tasks can generate load, they are not specialized load testing tools and lack the flexibility to customize request patterns. Cloud Monitoring on its own may not uncover performance bottlenecks.
IV. Deploy a Compute Engine instance with a LAMP stack image from the Marketplace. Utilize Apache Bench for load testing against the service. Analyze findings with Cloud Trace. Deploying and managing a Compute Engine instance for load testing adds overhead. Apache Bench is not as advanced as Locust or JMeter in terms of customizability.
References:
https://cloud.google.com/architecture/distributed-load-testing-using-gke
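A rough outline of this setup, following the distributed load testing reference architecture, might look like the commands below. The cluster name, zone, image path, and manifest directory are placeholders for the Locust image and Kubernetes Deployments you would define yourself.

```bash
# Hypothetical cluster for the load generators.
gcloud container clusters create load-test --zone=us-central1-a --num-nodes=3

# Build a Locust image containing your locustfile with the read/write scenarios.
gcloud builds submit --tag "gcr.io/${PROJECT_ID}/locust-tasks" ./locust-tasks

# Deploy Locust master and worker Deployments (manifests you define), then scale the
# workers until the target of 5,000 reads and 1,000 writes per second is reached.
kubectl apply -f kubernetes-config/
kubectl scale deployment locust-worker --replicas=10
```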
Question 18 Single Choice
Your operations team requires a script for listing Cloud Bigtable, Memorystore, and Cloud SQL databases within a project. The script should accommodate user-defined filter expressions to narrow down the presented results. How should you retrieve the data?
Explanation

IV. Execute gcloud bigtable instances list, gcloud redis instances list, and gcloud sql databases list commands. Apply the --filter flag with each command, then showcase the results.
Reasoning:
Efficiency and Simplicity: The gcloud CLI is the most efficient way to interact with Google Cloud services. These commands directly provide the desired information without complex API interactions.
Filtering at the Source: The --filter flag for each command allows filtering during data retrieval. This is the most efficient approach, reducing the processing required within your script.
Consistency: gcloud offers a consistent interface for these tasks, minimizing script complexity.
Disadvantages of other options:
I & II. Unnecessary Complexity: Using HBase, Redis, and MySQL APIs introduces unnecessary complexity. These APIs are designed for data interaction, not listing databases. Moreover, merging and filtering would add overhead to your script.
III. Filtering Overhead: Retrieving all data then filtering within the application is less efficient. It increases data transfer and puts more processing burden on your script.
Ref:
https://cloud.google.com/sdk/gcloud/reference/topic/filters
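A minimal sketch of such a script, assuming the user's filter expression is passed in as the first argument, is shown below. Note that gcloud redis instances list requires a --region and gcloud sql databases list requires an --instance; those placeholder values would come from your environment.

```bash
#!/usr/bin/env bash
# Hypothetical usage: ./list-databases.sh "name:prod*"
FILTER="$1"

gcloud bigtable instances list --filter="$FILTER"
gcloud redis instances list --region=us-central1 --filter="$FILTER"
gcloud sql databases list --instance=my-sql-instance --filter="$FILTER"
```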
Question 19 Single Choice
Your code, operating on Cloud Functions within project A, aims to write an object to a Cloud Storage bucket owned by project B. Unfortunately, the write operation fails with a "403 Forbidden" error. How can you address this issue effectively?
Explanation

III. Allocate the roles/storage.objectCreator role to the service-PROJECTA@gcf-admin-robot.iam.gserviceaccount.com service account for the Cloud Storage bucket.
Here's why this is the correct approach:
Service Account Permissions: Cloud Functions execute using a service account. It is this service account that needs the necessary permissions to write to the Cloud Storage bucket in project B.
Principle of Least Privilege: Granting the specific role roles/storage.objectCreator provides the minimal permission required, limiting the potential security impact.
Cross-Project Access: This approach addresses the issue of cross-project communication, the core reason why the write operation fails.
Why Other Options are Not Ideal:
I. Grant your user account the roles/storage.objectCreator role ... While this might work, it's overly broad and ties permissions to your personal user account rather than the function's identity.
II. Provide your user account with the roles/iam.serviceAccountUser ... This role grants you the ability to impersonate service accounts, but doesn't directly give the function's service account the needed bucket permissions.
IV. Ensure the Cloud Storage API is enabled for project B: This is a prerequisite for any interaction with Cloud Storage, but does not address the specific permissions issue.
Ref:
https://cloud.google.com/functions/docs/concepts/iam#troubleshooting_permission_errors
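The grant itself is a single bucket-level IAM binding, sketched below with a placeholder bucket name and project number. The member shown mirrors the service account named in the answer; in practice, the account that needs the role is whichever identity the function runs as.

```bash
# Hypothetical bucket in project B and project number for project A.
gsutil iam ch \
    "serviceAccount:service-111111111111@gcf-admin-robot.iam.gserviceaccount.com:roles/storage.objectCreator" \
    gs://project-b-logs-bucket
```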
Question 20 Single Choice
Your application is live in production, but certain issues only manifest once the new version starts receiving traffic from users. To mitigate both the impact and the number of users affected, which deployment strategy should you employ?
Explanation

II. Canary deployment
Here's why:
Traffic Control: Canary deployments allow you to route a small percentage of real user traffic to the new version, while the majority still gets routed to the stable version.
Minimized Impact: Any issues arising in the new version affect only a limited subset of users, reducing the overall impact on your application.
Live Testing: You can carefully monitor the new version under real-world conditions, detecting errors or performance issues before they affect your entire userbase.
Gradual Rollout: If the new version performs well, you can gradually increase the percentage of traffic routed to it, eventually replacing the old version.
Why Other Options Are Less Ideal:
Blue/green deployment: While offering a safe rollback option, blue/green involves switching entirely between versions. This lacks the gradual exposure offered by canary deployments when issues are subtle.
Rolling deployment: This sequentially updates instances, which can still affect a significant portion of users if an issue arises.
Recreate deployment: This has downtime involved and lacks the controlled testing that a canary deployment allows.
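As a concrete sketch, if the service runs on Cloud Run, a canary can be expressed with revision tags and traffic splitting; the service name, image, region, and percentage below are placeholders.

```bash
# Deploy the new revision without routing any user traffic to it yet (hypothetical names).
gcloud run deploy my-service \
    --image="gcr.io/${PROJECT_ID}/my-app:v2" \
    --region=us-central1 --no-traffic --tag=canary

# Send 5% of traffic to the canary revision; increase gradually as it proves healthy.
gcloud run services update-traffic my-service \
    --region=us-central1 --to-tags=canary=5
```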