
AWS Certified Developer - Associate - (DVA-C02) Exam Questions
Question 11 Single Choice
A company runs a legacy application that uses an XML-based SOAP interface. The company needs to expose the functionality of the service to external customers and plans to use Amazon API Gateway.
How can a Developer configure the integration?
Explanation

Click "Show Answer" to see the explanation here
In API Gateway, an API's method request can take a payload in a different format from the corresponding integration request payload, as required in the backend. Similarly, the backend may return an integration response payload different from the method response payload, as expected by the frontend.
API Gateway lets you use mapping templates to map the payload from a method request to the corresponding integration request and from an integration response to the corresponding method response.
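For example, a request mapping template written in Velocity Template Language (VTL) can transform the incoming JSON into the XML expected by the SOAP backend. The sketch below is illustrative only; the orderId and quantity fields and the CreateOrder operation are hypothetical:

## Parse the incoming JSON payload (field names are assumptions)
#set($body = $input.path('$'))
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <CreateOrder>
      <OrderId>$body.orderId</OrderId>
      <Quantity>$body.quantity</Quantity>
    </CreateOrder>
  </soapenv:Body>
</soapenv:Envelope>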
CORRECT: "Create a RESTful API using Amazon API Gateway. Transform the incoming JSON into a valid XML message for the SOAP interface using mapping templates" is the correct answer.
INCORRECT: "Create a RESTful API using Amazon API Gateway. Pass the incoming JSON to the SOAP interface through an Application Load Balancer" is incorrect. The API Gateway cannot process the XML SOAP data and cannot pass it through an ALB.
INCORRECT: "Create a SOAP API using Amazon API Gateway. Transform the incoming JSON into a valid XML message for the SOAP interface using AWS Lambda" is incorrect. API Gateway does not support SOAP APIs.
INCORRECT: "Create a SOAP API using Amazon API Gateway. Pass the incoming JSON to the SOAP interface through a Network Load Balancer" is incorrect. API Gateway does not support SOAP APIs.
References:
https://docs.aws.amazon.com/apigateway/latest/developerguide/request-response-data-mappings.html
Question 12 Single Choice
A company is using an AWS Step Functions state machine. During testing, errors were experienced in a Task state. To troubleshoot the issue, a developer requires that the state input be included along with the error message in the state output.
Which coding practice can preserve both the original input and the error for the state?
Explanation

Click "Show Answer" to see the explanation here
A Step Functions execution receives a JSON text as input and passes that input to the first state in the workflow. Individual states receive JSON as input and usually pass JSON as output to the next state.
In the Amazon States Language, these fields filter and control the flow of JSON from state to state:
• InputPath
• OutputPath
• ResultPath
• Parameters
• ResultSelector
Use ResultPath to combine a task result with task input, or to select one of these. The path you provide to ResultPath controls what information passes to the output. Use ResultPath in a Catch to include the error with the original input, instead of replacing it. The following code is an example of this tactic:
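In the sketch below (the state names and Lambda ARN are placeholders), the Catch uses ResultPath to append the error object to the original input under $.error instead of replacing the input:

{
  "Comment": "Illustrative only: the Catch preserves the original input and adds the error at $.error",
  "StartAt": "ProcessOrder",
  "States": {
    "ProcessOrder": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ProcessOrder",
      "Catch": [
        {
          "ErrorEquals": ["States.ALL"],
          "ResultPath": "$.error",
          "Next": "HandleFailure"
        }
      ],
      "End": true
    },
    "HandleFailure": {
      "Type": "Pass",
      "End": true
    }
  }
}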

CORRECT: "Use ResultPath in a Catch statement to include the original input with the error" is the correct answer (as explained above.)
INCORRECT: "Use InputPath in a Catch statement to include the original input with the error" is incorrect.
You can use InputPath to select a portion of the state input.
INCORRECT: "Use ErrorEquals in a Retry statement to include the original input with the error" is incorrect.
A retry is used to attempt to retry the process that caused the error based on the retry policy described by ErrorEquals.
INCORRECT: "Use OutputPath in a Retry statement to include the original input with the error" is incorrect.
OutputPath enables you to select a portion of the state output to pass to the next state. This enables you to filter out unwanted information and pass only the portion of JSON that you care about.
References:
https://docs.aws.amazon.com/step-functions/latest/dg/input-output-resultpath.html
https://docs.aws.amazon.com/step-functions/latest/dg/tutorial-handling-error-conditions.html
Question 13 Single Choice
An ecommerce company manages a storefront that uses an Amazon API Gateway API that exposes an AWS Lambda function. The Lambda function processes orders and stores them in an Amazon RDS for MySQL database. The number of transactions increases sporadically during marketing campaigns and then drops close to zero during quiet times.
How can a developer increase the elasticity of the system MOST cost-effectively?
Explanation

Click "Show Answer" to see the explanation here
The most efficient solution is to use Aurora Auto Scaling and configure scaling based on a target metric. The metric to use is the average connections of Aurora Replicas, which creates a policy based on the average number of connections to the Aurora Replicas.
This ensures that the Aurora Replicas scale based on the actual number of connections to the replicas, which varies with how busy the storefront is and how many transactions are being processed.
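A minimal sketch of such a policy using the AWS SDK for Python (boto3) and Application Auto Scaling is shown below; the cluster name, capacity limits, and target value are assumptions for illustration:

import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the Aurora cluster's read replica count as a scalable target
# (the cluster name is a placeholder).
autoscaling.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:storefront-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=15,
)

# Target tracking policy based on the average connections of Aurora Replicas.
autoscaling.put_scaling_policy(
    PolicyName="scale-on-replica-connections",
    ServiceNamespace="rds",
    ResourceId="cluster:storefront-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageDatabaseConnections"
        },
        "TargetValue": 500.0,
    },
)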
CORRECT: "Migrate from Amazon RDS to Amazon Aurora MySQL. Use an Aurora Auto Scaling policy to scale read replicas based on average connections of Aurora Replicas" is the correct answer (as explained above.)
INCORRECT: "Migrate from Amazon RDS to Amazon Aurora MySQL. Use an Aurora Auto Scaling policy to scale read replicas based on average CPU utilization" is incorrect.
The better metric to use for this situation would be the number of connections to Aurora Replicas as that is the metric that has the closest correlation to the number of transactions being executed.
INCORRECT: "Create an Amazon SNS topic. Publish transactions to the topic configure an SQS queue as a destination. Configure Lambda to process transactions from the queue" is incorrect.
This is highly inefficient. There is no need for an SNS topic in this situation.
INCORRECT: "Create an Amazon SQS queue. Publish transactions to the queue and set the queue to invoke the Lambda function. Set the reserved concurrency of the Lambda function to be equal to the max number of database connections" is incorrect.
This would be less cost effective as you would be paying for the reserved concurrency at all times.
References:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Integrating.AutoScaling.html
Question 14 Single Choice
An application asynchronously invokes an AWS Lambda function. The application has recently been experiencing occasional errors that result in failed invocations. A developer wants to store the messages that resulted in failed invocations such that the application can automatically retry processing them.
What should the developer do to accomplish this goal with the LEAST operational overhead?
Explanation

Click "Show Answer" to see the explanation here
Amazon SQS supports dead-letter queues (DLQ), which other queues (source queues) can target for messages that can't be processed (consumed) successfully. Dead-letter queues are useful for debugging your application or messaging system because they let you isolate unconsumed messages to determine why their processing doesn't succeed.
The redrive policy specifies the source queue, the dead-letter queue, and the conditions under which Amazon SQS moves messages from the former to the latter if the consumer of the source queue fails to process a message a specified number of times.
You can set your DLQ as an event source to the Lambda function to drain your DLQ. This will ensure that all failed invocations are automatically retried.
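A minimal sketch using the AWS SDK for Python (boto3) is shown below; the queue URL, queue ARN, function name, and maxReceiveCount are assumptions for illustration:

import json
import boto3

sqs = boto3.client("sqs")
lambda_client = boto3.client("lambda")

# Redrive policy: move messages to the dead-letter queue after 3 failed receives.
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/orders",
    Attributes={
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:orders-dlq",
            "maxReceiveCount": "3",
        })
    },
)

# Set the dead-letter queue as an event source for the Lambda function so the
# failed messages are automatically retried.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:orders-dlq",
    FunctionName="process-orders",
    BatchSize=10,
)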
CORRECT: "Configure a redrive policy on an Amazon SQS queue. Set the dead-letter queue as an event source to the Lambda function" is the correct answer (as explained above.)
INCORRECT: "Configure logging to an Amazon CloudWatch Logs group. Configure Lambda to read failed invocation events from the log group" is incorrect.
The information in the logs may not be sufficient for processing the event. This is not an automated or ideal solution.
INCORRECT: "Configure Amazon EventBridge to send the messages to Amazon SNS to initiate the Lambda function again" is incorrect.
Amazon EventBridge can be configured as a failure destination and can send to SNS. SNS can also be configured with Lambda as a target. However, this solution requires more operational overhead compared to using a DLQ.
INCORRECT: "Configure an Amazon S3 bucket as a destination for failed invocations. Configure event notifications to trigger the Lambda function to process the events" is incorrect.
S3 is not a supported failure destination. Supported destinations are Amazon SNS, Amazon SQS, and Amazon EventBridge.
References:
https://aws.amazon.com/blogs/compute/introducing-aws-lambda-destinations/
https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-rule-dlq.html
Question 15 Single Choice
An application serves customers in several different geographical regions. Information about the location users connect from is written to logs stored in Amazon CloudWatch Logs. The company needs to publish an Amazon CloudWatch custom metric that tracks connections for each location.
Which approach will meet these requirements?
Explanation

Click "Show Answer" to see the explanation here
You can search and filter the log data coming into CloudWatch Logs by creating one or more metric filters. Metric filters define the terms and patterns to look for in log data as it is sent to CloudWatch Logs. CloudWatch Logs uses these metric filters to turn log data into numerical CloudWatch metrics that you can graph or set an alarm on.
When you create a metric from a log filter, you can also choose to assign dimensions and a unit to the metric. In this case, the company can assign a dimension that uses the location information.
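A minimal sketch using the AWS SDK for Python (boto3) is shown below; the log group name, filter pattern, metric namespace, and the $.location field are assumptions for illustration:

import boto3

logs = boto3.client("logs")

# Metric filter that counts connection events and records the location field
# from the JSON log entry as a dimension on the custom metric.
logs.put_metric_filter(
    logGroupName="/app/connection-logs",
    filterName="ConnectionsByLocation",
    filterPattern='{ $.eventType = "connection" }',
    metricTransformations=[
        {
            "metricName": "Connections",
            "metricNamespace": "MyApp",
            "metricValue": "1",
            "dimensions": {"Location": "$.location"},
        }
    ],
)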
CORRECT: "Create a CloudWatch metric filter to extract metrics from the log files with location as a dimension" is the correct answer.
INCORRECT: "Create a CloudWatch Logs Insights query to extract the location information from the logs and to create a custom metric with location as a dimension" is incorrect. You cannot create a custom metric through CloudWatch Logs Insights.
INCORRECT: "Configure a CloudWatch Events rule that creates a custom metric from the CloudWatch Logs group" is incorrect. You cannot create a custom metric using a CloudWatch Events rule.
INCORRECT: "Stream data to an Amazon Elasticsearch cluster in near-real time and export a custom metric" is incorrect. This is not a valid way of creating a custom metric in CloudWatch.
References:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/MonitoringLogData.html
Question 16 Single Choice
An organization is hosting a website on an Amazon EC2 instance in a public subnet. The website should allow public access for HTTPS traffic on TCP port 443 but should only accept SSH traffic on TCP port 22 from a corporate address range accessible over a VPN.
Which security group configuration will support both requirements?
Explanation

Click "Show Answer" to see the explanation here
Allowing traffic from 0.0.0.0/0 to port 443 allows any traffic from the internet to access the website. Limiting the source range for port 22 to 192.168.0.0/16 allows only traffic from the corporate address range.
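A minimal sketch of this security group configuration using the AWS SDK for Python (boto3) is shown below; the security group ID is a placeholder:

import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {
            # HTTPS open to the internet
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        },
        {
            # SSH restricted to the corporate address range
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "192.168.0.0/16"}],
        },
    ],
)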
CORRECT: "Allow traffic to port 443 from 0.0.0.0/0 and allow traffic to port 22 from 192.168.0.0/16" is the correct answer (as explained above.)
INCORRECT: "Allow traffic to port 22 from 0.0.0.0/0 and allow traffic to port 443 from 192.168.0.0/16" is incorrect. This will allow traffic from the internet to port 22 and allow traffic to port 443 from the corporate address block only (192.168.0.0/16).
INCORRECT: "Allow traffic to both port 443 and port 22 from the VPC CIDR block" is incorrect. This would not satisfy either requirement as internet-based users will not be able to access the website and corporate users will not be able to manage the instance via SSH.
INCORRECT: "Allow traffic to both port 443 and port 22 from 0.0.0.0/0 and 192.168.0.0/16" is incorrect. This does not satisfy the requirement to restrict access to port 22 to only the corporate address block.
References:
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html
Question 17 Single Choice
A developer is using AWS CodeBuild to build an application into a Docker image. The buildspec file is used to run the application build. The developer needs to push the Docker image to an Amazon ECR repository only upon the successful completion of each build.
How should the developer configure the buildspec file to meet this requirement?
Explanation

Click "Show Answer" to see the explanation here
The post_build phase is an optional sequence. It represents the commands, if any, that CodeBuild runs after the build. For example, you might use Maven to package the build artifacts into a JAR or WAR file, or you might push a Docker image into Amazon ECR. Then you might send a build notification through Amazon SNS.
Here is an example of a buildspec file with a post_build phase that pushes a Docker image to Amazon ECR:
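The sketch below assumes that AWS_ACCOUNT_ID, IMAGE_REPO_NAME, and IMAGE_TAG are defined as environment variables in the CodeBuild project; the image is pushed to Amazon ECR in the post_build phase, after the build commands:

version: 0.2
phases:
  pre_build:
    commands:
      # Authenticate the Docker client to the ECR registry
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
  build:
    commands:
      # Build and tag the image
      - docker build -t $IMAGE_REPO_NAME:$IMAGE_TAG .
      - docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
  post_build:
    commands:
      # Push the image to Amazon ECR
      - docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG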

CORRECT: "Add a post_build phase to the buildspec file that uses the commands block to push the Docker image" is the correct answer (as explained above.)
INCORRECT: "Add a post_build phase to the buildspec file that uses the finally block to push the Docker image" is incorrect.
Commands specified in a finally block are run after commands in the commands block. The commands in a finally block are run even if a command in the commands block fails. This would not be ideal as this would push the image to ECR even if commands in previous sequences failed.
INCORRECT: "Add an install phase to the buildspec file that uses the commands block to push the Docker image" is incorrect.
These commands are run during installation, before the build. The developer would want to push the image only after the build has completed successfully. Therefore, the post_build phase should be used.
INCORRECT: "Add a post_build phase to the buildspec file that uses the artifacts sequence to find the build artifacts and push to Amazon ECR" is incorrect.
The artifacts sequence is not required if you are building and pushing a Docker image to Amazon ECR, or you are running unit tests on your source code, but not building it.
References:
https://docs.aws.amazon.com/codebuild/latest/userguide/sample-docker.html
https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html
Question 18 Single Choice
A company has an application that logs all information to Amazon S3. Whenever there is a new log file, an AWS Lambda function is invoked to process the log files. The code works, gathering all of the necessary information. However, when checking the Lambda function logs, duplicate entries with the same request ID are found.
What is the BEST explanation for the duplicate entries?
Explanation

Click "Show Answer" to see the explanation here
From the AWS documentation:
“When an error occurs, your function may be invoked multiple times. Retry behavior varies by error type, client, event source, and invocation type. For example, if you invoke a function asynchronously and it returns an error, Lambda executes the function up to two more times. For more information, see Retry Behavior.
For asynchronous invocation, Lambda adds events to a queue before sending them to your function. If your function does not have enough capacity to keep up with the queue, events may be lost. Occasionally, your function may receive the same event multiple times, even if no error occurs. To retain events that were not processed, configure your function with a dead-letter queue.”
Therefore, the most likely explanation is that the function failed, and Lambda retried the invocation.
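As the documentation notes, a dead-letter queue can be configured for the function's asynchronous invocations so that events that still fail after the automatic retries are retained. A minimal sketch using the AWS SDK for Python (boto3) is shown below; the function name and queue ARN are placeholders:

import boto3

lambda_client = boto3.client("lambda")

# Send events that fail all asynchronous retries to an SQS dead-letter queue.
lambda_client.update_function_configuration(
    FunctionName="process-log-files",
    DeadLetterConfig={
        "TargetArn": "arn:aws:sqs:us-east-1:123456789012:lambda-dlq"
    },
)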
CORRECT: "The Lambda function failed, and the Lambda service retried the invocation with a delay" is the correct answer.
INCORRECT: "The S3 bucket name was specified incorrectly" is incorrect. If this was the case all attempts would fail but this is not the case.
INCORRECT: "There was an S3 outage, which caused duplicate entries of the same log file" is incorrect. There cannot be duplicate log files in Amazon S3 as every object must be unique within a bucket. Therefore, if the same log file was uploaded twice it would just overwrite the previous version of the file. Also, if a separate request was made to Lambda it would have a different request ID.
INCORRECT: "The application stopped intermittently and then resumed" is incorrect. The issue is duplicate entries of the same request ID.
References:
https://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html
Question 19 Single Choice
An organization handles data that requires high availability in its relational database. The main headquarters for the organization is in Virginia with smaller offices located in California. The main headquarters uses the data more frequently than the smaller offices. How should the developer configure their databases to meet high availability standards?
Explanation

Click "Show Answer" to see the explanation here
Aurora is a relational database that provides high availability by allowing customers to create up to 15 Aurora Replicas in different Availability Zones. It also allows you to specify which Aurora Replica should be promoted to the primary database should the primary database become unavailable. Selecting an AZ that is close to the main headquarters should not negatively impact the smaller offices, but moving the primary database to California could negatively impact the main headquarters.
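A minimal sketch using the AWS SDK for Python (boto3) that adds an Aurora Replica in a second Availability Zone in the Virginia Region and gives it the highest promotion priority is shown below; the identifiers, instance class, and Availability Zone are assumptions for illustration:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Add an Aurora Replica in another AZ in Virginia and make it the preferred
# failover target by giving it the highest promotion priority (tier 0).
rds.create_db_instance(
    DBInstanceIdentifier="app-db-replica-1",
    DBClusterIdentifier="app-db-cluster",
    Engine="aurora-mysql",
    DBInstanceClass="db.r6g.large",
    AvailabilityZone="us-east-1b",
    PromotionTier=0,
)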
CORRECT: "Create an Aurora database with the primary database in Virginia and specify the failover to the Aurora replica in another AZ in Virginia" is the correct answer (as explained above.)
INCORRECT: "Create an Aurora database with the primary database in Virginia and specify the failover to the Aurora replica in another AZ in California" is incorrect. It could create some latency issues for the main headquarters in Virginia.
INCORRECT: "Create a DynamoDB database with the primary database in Virginia and specify the failover to the DynamoDB replica in another AZ in Virginia" is incorrect. DynamoDB is not a relational database.
INCORRECT: "Create an Athena database with the primary database in Virginia and specify the failover to the Athena replica in another AZ in Virginia" is incorrect. Athena analyzes data but is not a database service.
References:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Overview.html
Question 20 Single Choice
A business operates a web app on Amazon EC2 instances utilizing a bespoke Amazon Machine Image (AMI). They employ AWS CloudFormation for deploying their app, which is currently active in the us-east-1 Region. However, their goal is to extend the deployment to the us-west-1 Region.
During an initial attempt to create an AWS CloudFormation stack in us-west-1, the action fails, and an error message indicates that the AMI ID does not exist. A developer is tasked with addressing this error through a method that minimizes operational complexity.
Which action should the developer take?
Explanation

Click "Show Answer" to see the explanation here
This is the best option as it allows the developer to use the same AMI in a different region with minimal effort and maintenance.
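A minimal sketch using the AWS SDK for Python (boto3) is shown below; note that copy_image is called in the destination Region (us-west-1), and the source AMI ID and new AMI name are placeholders:

import boto3

# Create the EC2 client in the destination Region and copy the AMI from us-east-1.
ec2 = boto3.client("ec2", region_name="us-west-1")

response = ec2.copy_image(
    Name="web-app-ami-us-west-1",
    SourceImageId="ami-0123456789abcdef0",
    SourceRegion="us-east-1",
)

# Reference this new AMI ID in the CloudFormation template used in us-west-1.
print(response["ImageId"])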
CORRECT: "Copy the AMI from the us-east-1 Region to the us-west-1 Region and use the new AMI ID in the CloudFormation template" is the correct answer (as explained above.)
INCORRECT: "Create a new AMI in the us-west-1 Region and update the CloudFormation template with the new AMI ID" is incorrect.
This is incorrect as creating a new AMI would be operationally complex and time-consuming.
INCORRECT: "Modify the CloudFormation template to refer to the AMI in us-east-1 Region" is incorrect.
AMIs are regional resources and cannot be used directly in other regions.
INCORRECT: "Use AWS Lambda to create an AMI in the us-west-1 Region during stack creation" is incorrect.
This process would add unnecessary complexity and the new AMI would not be identical to the original one.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html



