
AWS Certified Security - Specialty - (SCS-C02) Exam Questions
Question 1 Single Choice
A data analytics company uses Amazon GuardDuty to identify unexpected, potentially unauthorized, and malicious activity within its AWS environment. The security team at the company wants all Medium/High Severity findings to automatically generate a ticket in a third-party ticketing system through email integration.
As an AWS Certified Security Specialist, what would you suggest as the most optimal solution?
Explanation

Click "Show Answer" to see the explanation here
Correct option:
Create an Amazon EventBridge rule that includes an event pattern that matches Medium/High severity GuardDuty findings. Set up an Amazon Simple Notification Service (Amazon SNS) topic. Configure the third-party ticketing email system as a subscriber to the SNS topic. Set the SNS topic as the target for the EventBridge rule
Amazon GuardDuty is a security monitoring service that analyzes and processes Foundational data sources, such as AWS CloudTrail management events, AWS CloudTrail event logs, VPC flow logs (from Amazon EC2 instances), and DNS logs.
GuardDuty informs you of the status of your AWS environment by producing security findings that you can view in the GuardDuty console or through Amazon EventBridge.
EventBridge is a serverless service that uses events to connect application components together, making it easier for you to build scalable event-driven applications. Event-driven architecture is a style of building loosely coupled software systems that work together by emitting and responding to events. Event-driven architecture can help you boost agility and build reliable, scalable applications. An EventBridge target is a resource or endpoint that EventBridge sends an event to when the event matches the event pattern defined for a rule. The rule processes the event data and sends the pertinent information to the target. To deliver event data to a target, EventBridge needs permission to access the target resource. You can define up to five targets for each rule.
For the given use case, you can use a custom event pattern with the EventBridge rule to match Medium/High severity GuardDuty findings. Then, route the response to an Amazon Simple Notification Service (Amazon SNS) topic. You also need to set the third-party ticketing email system as a subscriber to the given SNS topic.
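As a rough illustration, the correct option can be scripted with boto3 (Python). The rule name, topic name, and ticketing email address below are illustrative assumptions, not values from the question; the severity threshold of 4 reflects that GuardDuty reports Medium findings as 4.0-6.9 and High findings as 7.0-8.9.

# Minimal sketch: EventBridge rule for Medium/High GuardDuty findings with an SNS topic as the target.
import json
import boto3

events = boto3.client("events")
sns = boto3.client("sns")

# 1. SNS topic with the third-party ticketing email system as a subscriber.
topic_arn = sns.create_topic(Name="guardduty-medium-high-findings")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email",
              Endpoint="tickets@example-ticketing-system.com")

# 2. EventBridge rule matching GuardDuty findings with severity >= 4 (Medium/High).
pattern = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
    "detail": {"severity": [{"numeric": [">=", 4]}]},
}
events.put_rule(Name="guardduty-medium-high-to-sns",
                EventPattern=json.dumps(pattern), State="ENABLED")

# 3. Allow EventBridge to publish to the topic (this replaces the topic's access policy),
#    then set the topic as the rule's target.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "events.amazonaws.com"},
        "Action": "sns:Publish",
        "Resource": topic_arn,
    }],
}
sns.set_topic_attributes(TopicArn=topic_arn, AttributeName="Policy",
                         AttributeValue=json.dumps(policy))
events.put_targets(Rule="guardduty-medium-high-to-sns",
                   Targets=[{"Id": "sns-ticketing", "Arn": topic_arn}])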
Incorrect options:
Leverage the GuardDuty CreateFilter API operation to set up a filter in GuardDuty to monitor for Medium/High severity findings. Set up an SES endpoint as the target for the GuardDuty CreateFilter API so that SES can send out an email to the third-party ticketing email system -
Leverage the GuardDuty CreateFilter API operation to set up a filter in GuardDuty to monitor for Medium/High severity findings. Set up an Amazon Simple Notification Service (Amazon SNS) topic. Configure the third-party ticketing email system as a subscriber to the SNS topic. Set the SNS topic as the target for the GuardDuty CreateFilter API
GuardDuty CreateFilter API creates a filter using the specified finding criteria. If the action is successful, the service sends back an HTTP 200 response along with the name of the successfully created filter. There is no such thing as setting a target for the GuardDuty CreateFilter API. So both these options are incorrect.
Create an Amazon EventBridge rule that includes an event pattern that matches Medium/High severity GuardDuty findings. Set up an SES endpoint as the target for the EventBridge rule so that SES can send out an email to the third-party ticketing email system - EventBridge does not support an Amazon SES endpoint as a target, so this option is incorrect.
References:
https://docs.aws.amazon.com/guardduty/latest/ug/what-is-guardduty.html
https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_findings_cloudwatch.html
https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-targets.html
https://docs.aws.amazon.com/guardduty/latest/APIReference/API_CreateFilter.html
https://repost.aws/knowledge-center/guardduty-eventbridge-sns-rule
Question 2 Single Choice
A company exposes most of its business functions as container applications and uses the Amazon Elastic Container Registry (Amazon ECR) service for managing its container images. To strengthen the security backbone of its AWS architecture, the company is looking for a solution that provides automatic scanning for operating system and programming language package vulnerabilities. All images pushed to Amazon ECR should be continuously scanned, and scan updates should be sent as notifications to the specified teams.
Which solution is the right fit for this requirement?
Explanation

Click "Show Answer" to see the explanation here
Correct option:
Turn on enhanced scanning for your Amazon ECR registry. By default, the duration of the scans is set to Lifetime. When enhanced scanning is turned on, Amazon ECR sends scan events to EventBridge which can be configured for further notifications to specified teams
Amazon ECR enhanced scanning is an integration with Amazon Inspector which provides vulnerability scanning for your container images. Your container images are scanned for both operating systems and programming language package vulnerabilities. You can view the scan findings with both Amazon ECR and Amazon Inspector directly.
Amazon Inspector supports configuring the duration that your private repositories are continuously monitored for. By default, when enhanced scanning is turned on for your Amazon ECR private registry, the Amazon Inspector service continually monitors your repositories until either the image is deleted or enhanced scanning is disabled. The duration that Amazon Inspector scans your images can be changed using the Amazon Inspector settings. The available scan durations are Lifetime (default), 180 days, and 30 days. When the scan duration for a repository elapses, the scan status of SCAN_ELIGIBILITY_EXPIRED is displayed when listing your scan vulnerabilities.
If you want to know more about ECR-enhanced scanning, some key considerations are given below:
Considerations for enhanced scanning:
via - https://docs.aws.amazon.com/AmazonECR/latest/userguide/image-scanning-enhanced.html
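As a rough boto3 (Python) sketch of the correct option, enhanced scanning for the whole private registry can be turned on with a single API call; the wildcard repository filter below is an illustrative assumption, and narrower filters can be used instead.

# Minimal sketch: enable enhanced (Amazon Inspector) scanning with continuous re-scanning.
import boto3

ecr = boto3.client("ecr")

ecr.put_registry_scanning_configuration(
    scanType="ENHANCED",                      # enhanced scanning is powered by Amazon Inspector
    rules=[{
        "scanFrequency": "CONTINUOUS_SCAN",   # images are re-scanned as new vulnerabilities are published
        "repositoryFilters": [{"filter": "*", "filterType": "WILDCARD"}],
    }],
)

# Scan events can then be routed to the specified teams with an EventBridge rule
# and an SNS topic, following the same pattern shown for Question 1.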
Incorrect options:
Turn on basic scanning for your Amazon ECR registry. Configure the repository to scan on push, so every new image pushed is scanned immediately when uploaded to the registry. Amazon ECR is integrated with Amazon EventBridge and sends events to EventBridge about image scan updates. Configure EventBridge to send notifications to specified teams - Amazon ECR provides a basic scanning type that uses the Common Vulnerabilities and Exposures (CVEs) database from the open-source Clair project. Programming language package vulnerabilities are not scanned by the basic scanning type, so this option does not meet the requirement.
Turn on Amazon GuardDuty advanced findings to register for vulnerability checks of the Amazon ECR registry. GuardDuty's integration with AWS Security Hub can be used to send notifications to the concerned teams when findings are raised in GuardDuty against the ECR registry - Amazon GuardDuty is a security monitoring service that analyzes and processes Foundational data sources, such as AWS CloudTrail management events, AWS CloudTrail event logs, VPC flow logs (from Amazon EC2 instances), and DNS logs. This option is given only as a distractor. GuardDuty has no provision for checking against the Amazon ECR registry for vulnerabilities.
Turn on AWS Trusted Advisor checks for the Amazon ECR registry. When AWS Trusted Advisor refreshes your checks, Trusted Advisor publishes metrics about your results to CloudWatch. Create CloudWatch alarms to be notified of these changes - This option is given only as a distractor. AWS Trusted Advisor has no checks for the Amazon ECR registry.
Reference:
https://docs.aws.amazon.com/AmazonECR/latest/userguide/image-scanning-enhanced.html
Question 3 Single Choice
An e-commerce company's security team needs to receive a notification whenever an AWS access key has not been rotated in 30 or more days. You have been hired as an AWS Certified Security Specialist to develop a solution that provides these notifications automatically.
Which solution will you recommend to address these requirements with the LEAST effort?
Explanation

Click "Show Answer" to see the explanation here
Correct option:
Enable the AWS Config access-keys-rotated managed rule and configure the maxAccessKeyAge parameter to 30 days. Have AWS Config apply remediation using the AWS Systems Manager Automation document for every non-compliant resource. The Automation document, in turn, publishes a customized message to an SNS topic
AWS Config provides AWS-managed rules, which are predefined, customizable rules that AWS Config uses to evaluate whether your AWS resources comply with common best practices. You can customize the behavior of a managed rule to suit your needs. For example, you can define the rule's scope to constrain which resources trigger an evaluation for the rule, such as EC2 instances or volumes. You can customize the rule's parameters to define attributes that your resources must have to comply with the rule. For example, you can customize a parameter to specify that your security group should block incoming traffic to a specific port number.
The access-keys-rotated managed rule checks if active IAM access keys are rotated (changed) within the number of days specified in maxAccessKeyAge. The rule is NON_COMPLIANT if access keys are not rotated within the specified time period. The default value is 90 days. For the given use case, you need to set the maxAccessKeyAge value to 30.
For the given use case, the optimal solution is to use AWS Config’s automatic remediation feature. The remediation is declared in an AWS Systems Manager automation document, which is invoked by AWS Config when a resource is found to be non-compliant. As its name implies, these steps are meant to remediate non-compliant resources, but in our case, we use them as part of a notification system.
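A minimal boto3 (Python) sketch of this setup follows. The SNS topic ARN, automation role ARN, and rule name are placeholders; AWS-PublishSNSNotification is the AWS-managed Systems Manager Automation document that publishes a message to an SNS topic.

# Minimal sketch: access-keys-rotated managed rule plus automatic SSM remediation that notifies via SNS.
import json
import boto3

config = boto3.client("config")

# 1. Managed rule: flag access keys older than 30 days as NON_COMPLIANT.
config.put_config_rule(ConfigRule={
    "ConfigRuleName": "access-keys-rotated-30-days",
    "Source": {"Owner": "AWS", "SourceIdentifier": "ACCESS_KEYS_ROTATED"},
    "InputParameters": json.dumps({"maxAccessKeyAge": "30"}),
})

# 2. Automatic remediation: run the Automation document for each non-compliant
#    resource so that it publishes a message to the SNS topic.
config.put_remediation_configurations(RemediationConfigurations=[{
    "ConfigRuleName": "access-keys-rotated-30-days",
    "TargetType": "SSM_DOCUMENT",
    "TargetId": "AWS-PublishSNSNotification",
    "Parameters": {
        "TopicArn": {"StaticValue": {"Values": ["arn:aws:sns:us-east-1:111122223333:stale-access-keys"]}},
        "Message": {"StaticValue": {"Values": ["An IAM access key has not been rotated in 30 or more days."]}},
        "AutomationAssumeRole": {"StaticValue": {"Values": ["arn:aws:iam::111122223333:role/ConfigRemediationRole"]}},
    },
    "Automatic": True,
    "MaximumAutomaticAttempts": 3,
    "RetryAttemptSeconds": 60,
}])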
Incorrect options:
Enable the AWS Config access-keys-rotated managed rule and configure the maxAccessKeyAge parameter to 30 days. Have AWS Config apply remediation using an AWS Lambda function for every non-compliant resource. The Lambda function, in turn, publishes a customized message to an SNS topic - AWS Config cannot apply remediation using an AWS Lambda function, so this option is incorrect.
Enable the AWS Config access-keys-rotated managed rule and configure the maxAccessKeyAge parameter to 30 days. Create an Amazon EventBridge rule that runs on a daily schedule to trigger an AWS Lambda function that executes the AWS Config managed rule. Publish a customized message to an SNS topic when the Lambda function detects non-compliance - There is no need to configure an AWS Lambda function to execute an AWS Config managed rule, as AWS Config itself automatically executes the managed rules. So this option is incorrect.
Configure AWS Trusted Advisor to apply remediation using an AWS Lambda function for every non-compliant AWS access key. The Lambda function, in turn, publishes a customized message to an SNS topic - You cannot use AWS Trusted Advisor to apply remediation using an AWS Lambda function for any non-compliant AWS access key for the given use case.
References:
https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_use-managed-rules.html
https://aws.amazon.com/blogs/mt/managing-aged-access-keys-through-aws-config-remediations/
https://docs.aws.amazon.com/config/latest/developerguide/access-keys-rotated.html
Question 4 Single Choice
A company has two VPCs (VPC1 and VPC2) configured in two different AWS Regions that are part of the same AWS account. There is an active VPC peering connection between the VPCs that has been configured in the route tables for both VPCs.
The database is present in VPC1, and access to the database instance is controlled through a security group defined in VPC1. VPC2 contains an Auto Scaling group that scales Amazon EC2 instances in and out based on CPU usage. Each instance launched by the Auto Scaling group belongs to a security group defined specifically for the Auto Scaling group. The launched instances need seamless access to the database instance in VPC1.
Which additional step is needed for the solution to work if the route tables are already configured for VPC peering?
Explanation

Click "Show Answer" to see the explanation here
Correct option:
Add an inbound rule to the security group of the database instance in VPC1, with the source as the CIDR block of VPC2 (VPC for the instances launched by the Auto Scaling Group)
You cannot reference the security group of a peer VPC that's in a different AWS Region. Instead, use the CIDR block of the peer VPC.
Rules to update your security groups to reference peer security groups:
via - https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-security-groups.html
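A minimal boto3 (Python) sketch of the additional step is shown below. The security group ID, the VPC2 CIDR block, the Region, and the database port (3306, assuming MySQL) are illustrative assumptions.

# Minimal sketch: allow the peered VPC2 CIDR to reach the database security group in VPC1.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # Region of VPC1 / the database

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",        # security group of the database instance in VPC1
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "IpRanges": [{
            "CidrIp": "10.2.0.0/16",       # CIDR block of VPC2 (Auto Scaling group instances)
            "Description": "Auto Scaling group instances in peered VPC2",
        }],
    }],
)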
Incorrect options:
Configure an outbound rule on the security group of the instances launched in the Auto Scaling Group in VPC2, with the destination as the ID of the security group of the database instance
Configure an outbound rule on the security group of the instances launched in the Auto Scaling Group in VPC2, with the destination as the CIDR block of VPC1 (VPC for the database instance)
By default, security groups contain outbound rules that allow all outbound traffic. So, both these options just act as distractors.
Add an inbound rule to the security group of the database instance in VPC1, with the source as the ID of the security group of the instances launched in the Auto Scaling Group in VPC2 - You cannot reference the security group of a peer VPC that's in a different AWS Region. This option would be correct if both the VPCs belonged to the same AWS region.
References:
https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-security-groups.html
https://docs.aws.amazon.com/devicefarm/latest/developerguide/amazon-vpc-cross-region.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-rules.html
Question 5 Single Choice
A company wants to allow its developers to create temporary environments to test their code using the latest Amazon Linux distribution. To control costs, the company wants the teams to create Amazon EC2 instances using only small instance types while also restricting the size of the attached EBS volumes. To comply with security requirements, the developers are expected to create only encrypted volumes and use a non-standard port for secure shell access to the instances.
What is the most optimal way to proactively evaluate resource configurations in CloudFormation templates without writing custom code in Python or other languages?
Explanation

Click "Show Answer" to see the explanation here
Correct option:
Use AWS CloudFormation Guard (cfn-guard), an open-source tool that helps you write compliance rules and validate the CloudFormation templates against those rules
CloudFormation Guard (cfn-guard) is an open-source tool that helps you write compliance rules using a simple, policy as code language. It will help you validate CloudFormation templates against those rules to keep your AWS resources in compliance with your company policy guidelines. You can use cfn-guard to evaluate templates locally as you write them and after you submit them to be deployed in your CI/CD pipelines.
Although similar tools exist to create custom compliance rules, such as cfn-nag, cfripper, and checkov, cfn-guard uses a domain-specific language (DSL) to write rules. Learning the rule language is easier than learning a programming language like Python or Ruby, which is required to make custom rules in similar tools. Because cfn-guard is written in Rust, it can be compiled to a native binary to evaluate thousands of rules across templates.
CloudFormation Guard Rule Format:
via - https://aws.amazon.com/blogs/mt/write-preventive-compliance-rules-for-aws-cloudformation-templates-the-cfn-guard-way/
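A rough sketch of what such rules might look like, and how a local or CI/CD step could run them through a small Python helper, is given below. The rule names, instance types, and volume-size limit are illustrative assumptions, it assumes the cfn-guard CLI is installed, and the exact Guard syntax should be checked against the cfn-guard documentation.

# Rough sketch: write Guard rules to a file and validate a template with the cfn-guard CLI.
import pathlib
import subprocess

GUARD_RULES = """
let ec2_instances = Resources.*[ Type == 'AWS::EC2::Instance' ]
let ebs_volumes   = Resources.*[ Type == 'AWS::EC2::Volume' ]

rule ec2_instances_use_small_types when %ec2_instances !empty {
    %ec2_instances.Properties.InstanceType in ['t3.micro', 't3.small']
}

rule ebs_volumes_encrypted_and_small when %ebs_volumes !empty {
    %ebs_volumes.Properties.Encrypted == true
    %ebs_volumes.Properties.Size <= 20
}
"""

pathlib.Path("dev-env.guard").write_text(GUARD_RULES)

# Validate a template locally or as a CI/CD pipeline step.
subprocess.run(
    ["cfn-guard", "validate", "--rules", "dev-env.guard", "--data", "template.yaml"],
    check=True,
)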
Incorrect options:
Use AWS CloudFormation Linter (cfn-lint), an open-source tool that you can use to perform detailed validation on your AWS CloudFormation templates - AWS CloudFormation Guard doesn't validate CloudFormation templates for valid syntax or allowed property values. You use the cfn-lint tool to perform a thorough inspection of the template structure. Validating syntax is necessary but not sufficient for the given requirement, hence this is an incorrect option.
Use AWS CloudFormation Drift Detection to understand the difference between the expected configuration values of stack resources defined in CloudFormation templates and the actual configuration values of these resources in the corresponding CloudFormation stacks - Drift is the difference between the expected configuration values of stack resources defined in CloudFormation templates and the actual configuration values of these resources in the corresponding CloudFormation stacks. Drift detection helps you manage your CloudFormation stacks and ensure consistency in your resource configurations, but it only evaluates resources after a stack has been deployed, so it cannot proactively evaluate resource configurations in a template. Hence, this option is incorrect.
Use AWS Cloud Development Kit (AWS CDK), an open-source tool that helps you write compliance rules and validate the CloudFormation templates against those rules - AWS Cloud Development Kit (AWS CDK) is a framework for defining cloud infrastructure in code and provisioning it through AWS CloudFormation. It is not a tool for writing compliance rules and validating existing CloudFormation templates against them, so this option is incorrect.
References:
Question 6 Single Choice
A company maintains independent AWS accounts for its departments. For a specific requirement, a user in the Finance account needs full access to an Amazon S3 bucket in the Audit account. The security administrator has attached the necessary IAM permissions to the user of the Finance account. But, the user still has no access to the S3 bucket.
Which additional configuration is needed for the given requirement?
Explanation

Click "Show Answer" to see the explanation here
Correct option:
Create an S3 bucket policy in the Audit account that allows access to the S3 bucket for the user from the Finance account
Depending on the type of access that you want to provide, use one of the following solutions to grant cross-account access to objects:
AWS Identity and Access Management (IAM) policies and resource-based bucket policies for programmatic-only access to S3 bucket objects.
IAM policies and resource-based Access Control Lists (ACLs) for programmatic-only access to S3 bucket objects.
Cross-account IAM roles for programmatic and console access to S3 bucket objects.
While necessary permissions are needed for an IAM user or an IAM role to connect to the Amazon S3 bucket, it is not sufficient. The bucket policy of the Amazon S3 bucket should also allow access to the user or role for successful access to the data present in the S3 buckets.
Following is an example bucket policy for Account A to grant permissions to the IAM role or user that you created in Account B. Use this bucket policy to grant a user the permissions to GetObject and PutObject for objects in a bucket owned by Account A.
Example bucket policy for cross-account access:
via - https://repost.aws/knowledge-center/cross-account-access-s3
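A minimal boto3 (Python) sketch of such a bucket policy, applied in the Audit account, is shown below. The bucket name, Finance account ID, and user name are illustrative assumptions; the Finance user must also keep the IAM permissions already attached to it.

# Minimal sketch: bucket policy in the Audit account granting the Finance user access.
import json
import boto3

s3 = boto3.client("s3")   # credentials for the Audit account (bucket owner)

bucket = "audit-shared-bucket"
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowFinanceUserFullAccess",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:user/finance-analyst"},
        "Action": "s3:*",
        "Resource": [
            f"arn:aws:s3:::{bucket}",
            f"arn:aws:s3:::{bucket}/*",
        ],
    }],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))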
Incorrect options:
Enable the bucket owner enforced setting in the Audit account. Use Access Control Lists (ACLs) to grant cross-account access - This statement is incorrect. When the bucket owner enforced setting is enabled, all bucket and object ACLs are disabled. Therefore, you can't use ACLs to grant cross-account access. By default, all newly created buckets have the bucket owner enforced setting enabled.
Create an S3 bucket policy in the Finance account that allows access to the S3 bucket for the user from the Finance account - The S3 bucket is in the Audit account and hence creating the bucket policy in the Finance account does not make sense. So this option is incorrect.
Configure S3 bucket ARN as Principal for the IAM trust policy for the user - You cannot add S3 bucket ARN as a Principal in an IAM trust policy. You can specify any of the following principals in a policy: AWS account and root user, IAM roles, Role sessions, IAM users, Federated user sessions, AWS services, or All principals.
References:
https://repost.aws/knowledge-center/cross-account-access-s3
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html
Question 7 Single Choice
An open banking system enables secure open API integrations for financial institutions. The banking system needs mutual TLS (mTLS) authentication as part of its security standards. The application will be hosted on an Amazon EC2 server. The system has specific security compliance rules that need the server to terminate the client’s TLS connection.
As a Security Engineer, how will you configure this requirement to support mTLS if a load balancing service is needed for the instances?
Explanation

Click "Show Answer" to see the explanation here
Correct option:
Create a TCP listener using a Network Load Balancer and implement mTLS on the target
By default, the TLS protocol only requires a server to authenticate itself to the client. The authentication of the client to the server is managed by the application layer. The TLS protocol also offers the ability for the server to request that the client send an X.509 certificate to prove its identity. This is called mutual TLS (mTLS) as both parties are authenticated via certificates with TLS.
For mTLS support, you need to create a TCP listener using a Network Load Balancer or a Classic Load Balancer and implement mTLS on the target. The load balancer passes the request through as is, so you can implement mTLS on the target.
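A minimal boto3 (Python) sketch of the correct option follows: a Network Load Balancer with a TCP listener that passes the encrypted traffic through unchanged, so the EC2 target terminates TLS and performs mTLS itself. The subnet, VPC, and name values are illustrative assumptions.

# Minimal sketch: NLB with a TCP listener forwarding to EC2 targets that terminate TLS/mTLS.
import boto3

elbv2 = boto3.client("elbv2")

nlb_arn = elbv2.create_load_balancer(
    Name="open-banking-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
)["LoadBalancers"][0]["LoadBalancerArn"]

tg_arn = elbv2.create_target_group(
    Name="mtls-ec2-targets",
    Protocol="TCP", Port=443,                # TLS/mTLS is terminated on the instances
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)["TargetGroups"][0]["TargetGroupArn"]

# TCP listener: the NLB does not terminate TLS, it forwards the connection as-is.
elbv2.create_listener(
    LoadBalancerArn=nlb_arn,
    Protocol="TCP", Port=443,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)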
Incorrect options:
Configure an mTLS listener on an Application Load Balancer and enable mutual TLS authentication for better security of the application - Application Load Balancer does support mTLS. For an HTTPS application, AWS recommends considering ALB if you want to perform application-level routing, for example, weighted round robin load balancing for HTTPS requests, which allows you to create blue/green style deployments. ALB also allows you to offload the TLS/mTLS operations. Since the ALB terminates the client's TLS session, you need to upload certificates to the ALB. NLB, on the other hand, operates at the transport layer (layer 4 of the OSI model) and provides low-latency load balancing of TCP/UDP connections. For an HTTPS application, AWS recommends using a Network Load Balancer (NLB) if you have specific security compliance rules that require the server to terminate the client's TLS connection.
For the given use case, you cannot configure ALB such that the server terminates the client’s TLS connection, therefore this option is incorrect.
Network Load Balancers support TLS renegotiation and mutual TLS authentication (mTLS). Configure a TLS listener for a Network Load Balancer to use mutual TLS authentication - Network Load Balancers do not support TLS renegotiation or mutual TLS authentication (mTLS). For mTLS support, you need to create a TCP listener instead of a TLS listener.
Configure a TCP listener on an Application Load Balancer and enable mutual TLS authentication on it - Application Load Balancer only supports HTTP and HTTPS protocols. Network Load Balancer is used for TCP traffic.
References:
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-tls-listener.html
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/mutual-authentication.html
Question 8 Multiple Choice
A company uses Amazon EC2 instances (fronted by an Application Load Balancer) with Amazon RDS MySQL as the database. Now, the company wants to store sensitive client data and needs to follow strict security and compliance guidelines. Data must be secured end-to-end, both in transit and at rest. The company needs a solution that implements these strict security guidelines while keeping the cost and operational overhead to a minimum.
Which combination of steps will meet all the requirements? (Select three)
Explanation

Click "Show Answer" to see the explanation here
Correct options:
Use TLS certificates from AWS Certificate Manager (ACM) with an Application Load Balancer. Deploy self-signed certificates on the EC2 instances
Public ACM certificates can be installed on Amazon EC2 instances that are connected to a Nitro Enclave, but not to other Amazon EC2 instances. In general, to serve secure content over SSL/TLS, load balancers require that SSL/TLS certificates be installed on either the load balancer or the back-end Amazon EC2 instance. ACM is integrated with Elastic Load Balancing to deploy ACM certificates on the load balancer.
Ensure that the database client software uses a TLS connection to Amazon RDS. Enable encryption of the Amazon RDS DB instance
You can use Secure Socket Layer (SSL) or Transport Layer Security (TLS) from your application to encrypt a connection to a DB instance running MariaDB, Microsoft SQL Server, MySQL, Oracle, or PostgreSQL.
SSL/TLS connections provide a layer of security by encrypting data that moves between your client and DB instance.
Enable encryption on the Amazon Elastic Block Store (Amazon EBS) volumes that support the Amazon EC2 instances
Use Amazon EBS encryption as a straightforward encryption solution for your EBS resources associated with your EC2 instances. With Amazon EBS encryption, you aren't required to build, maintain, and secure your own key management infrastructure. Amazon EBS encryption uses AWS KMS keys when creating encrypted volumes and snapshots.
Encryption operations occur on the servers that host EC2 instances, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage.
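A minimal boto3 (Python) sketch of the encryption-at-rest pieces is shown below: default EBS encryption for the Region plus an encrypted RDS MySQL instance. The identifiers, instance class, and storage size are illustrative assumptions; the in-transit pieces (ACM certificate on the ALB, self-signed certificate on the instances, TLS to RDS) are configured separately.

# Minimal sketch: encryption at rest for EBS volumes and the RDS instance.
import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

# All new EBS volumes in this Region (including those backing the EC2 fleet)
# will be encrypted with the default AWS KMS key for EBS.
ec2.enable_ebs_encryption_by_default()

# Encryption of an RDS instance must be chosen at creation time.
rds.create_db_instance(
    DBInstanceIdentifier="client-data-mysql",
    Engine="mysql",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=100,
    MasterUsername="admin",
    ManageMasterUserPassword=True,     # let RDS manage the master password in Secrets Manager
    StorageEncrypted=True,             # encryption at rest with AWS KMS
)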
Incorrect options:
Use AWS CloudHSM to generate TLS certificates for the Amazon EC2 instances. Install the TLS certificates on the Amazon EC2 instances - Use AWS CloudHSM when you need to manage the HSMs that generate and store your encryption keys. In AWS CloudHSM, you create and manage HSMs, including creating users and setting their permissions. You also create the symmetric keys and asymmetric key pairs that the HSM stores. CloudHSM is neither cost-efficient nor operationally efficient for this use case.
Use Amazon CloudFront with AWS Web Application Firewall (AWS WAF). Send HTTP connections to the origin Amazon EC2 instances - You need to configure Amazon CloudFront to use HTTPS with your origin so that connections are encrypted when CloudFront communicates with your origin. The HTTP option is incorrect since data has to be encrypted end-to-end while in transit.
Use TLS certificates from a third-party vendor with an Application Load Balancer. Configure the same certificates on the Amazon EC2 instances - Using a third-party vendor adds to the operational overhead as opposed to using ACM. The TLS certificate from ACM should be configured on the ALB, and a self-signed certificate should be set up on the Amazon EC2 instances.
References:
https://docs.aws.amazon.com/acm/latest/userguide/acm-services.html
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
Question 9 Single Choice
A financial services company is running an Amazon RDS for MySQL DB instance in a virtual private cloud (VPC) to store sensitive customer data. Due to strict security policies, the company has implemented a VPC that does not allow any network traffic to or from the internet. A security engineer at the company wants to use AWS Secrets Manager to automatically rotate the DB instance credentials for increased security. However, due to the company's security policy, the engineer is not allowed to use the standard AWS Lambda function provided by Secrets Manager to rotate the credentials.
To address this issue, the security engineer deploys a custom Lambda function within the VPC. This function is responsible for rotating the secret in Secrets Manager. The security engineer also edits the DB instance's security group to allow connections from this custom Lambda function. However, when the function is invoked, it is unable to communicate with Secrets Manager and cannot rotate the secret.
Which of the following options will address the given scenario?
Explanation

Click "Show Answer" to see the explanation here
Correct option:
Add a VPC Interface Endpoint for Secrets Manager and configure the Lambda function's subnet to use it
A VPC interface endpoint is a VPC component that enables the communication between resources in a VPC and services powered by AWS PrivateLink, without the need for an internet gateway, NAT device, VPN connection or AWS Direct Connect link. It allows for communication between the VPC and the service over an Amazon-provided private IP address, eliminating exposure to the public internet.
AWS PrivateLink enables you to access services over an Amazon-provided IP address from within your VPC, without using public IPs or an internet gateway. With a VPC interface endpoint, you can create a private connection between your VPC and supported services using VPC endpoint services powered by AWS PrivateLink.
A service provider creates an endpoint service to make their service available in a Region. A service consumer creates a VPC endpoint to connect their VPC to an endpoint service. A service consumer must specify the service name of the endpoint service when creating a VPC endpoint.
How AWS PrivateLink works:
via - https://docs.aws.amazon.com/vpc/latest/privatelink/concepts.html
Secrets Manager enables you to replace hardcoded credentials in your code, including passwords, with an API call to Secrets Manager to retrieve the secret programmatically. This helps ensure the secret can't be compromised by someone examining your code, because the secret no longer exists in the code. Also, you can configure Secrets Manager to automatically rotate the secret for you according to a specified schedule.
via - https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html
This is the correct option as it allows the custom Lambda function in the VPC to communicate with Secrets Manager without going through the internet. A VPC endpoint for Secrets Manager is a VPC component that enables the communication between the VPC and Secrets Manager without going through the internet or a VPN connection.
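A minimal boto3 (Python) sketch of the correct option follows. The VPC, subnet, security group, and Region values are illustrative assumptions; the endpoint's security group must allow inbound HTTPS (port 443) from the Lambda function's security group.

# Minimal sketch: interface VPC endpoint so the rotation Lambda can reach Secrets Manager privately.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.secretsmanager",
    SubnetIds=["subnet-0123456789abcdef0"],          # subnet(s) used by the rotation Lambda
    SecurityGroupIds=["sg-0fedcba9876543210"],
    PrivateDnsEnabled=True,   # the standard Secrets Manager endpoint name resolves to the private endpoint
)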
Incorrect options:
Create a NAT Gateway in the VPC. Configure the Lambda function to use the NAT Gateway for connecting to the Secrets Manager - You can configure a Lambda function to connect to private subnets in a virtual private cloud (VPC) in your AWS account. Use Amazon Virtual Private Cloud (Amazon VPC) to create a private network for resources such as databases, cache instances, or internal services. Connect your function to the VPC to access private resources while the function is running. As mentioned in the explanation above, you can leverage the VPC endpoint to connect to Secrets Manager from within the VPC and thereby avoid the internet.
Internet access from a private subnet requires network address translation (NAT). To give your Lambda function access to the internet, you need to route outbound traffic to a NAT gateway in a public subnet. If you configure the Lambda function to use the NAT Gateway to connect to the Secrets Manager, you will end up using the internet. Therefore, this option serves as a distractor.
Create a Direct Connect connection between the VPC and Secrets Manager and configure the Lambda function's subnet to use it - AWS Direct Connect links your internal network to an AWS Direct Connect location over a standard Ethernet fiber-optic cable. One end of the cable is connected to your router, the other to an AWS Direct Connect router. This option is incorrect because Direct Connect is used to establish a dedicated network connection between an on-premises data center and a VPC, not between a VPC and a service like Secrets Manager.
Create a VPC Peering connection between the VPC and Secrets Manager and configure the Lambda function's subnet to use it - This option is incorrect because VPC Peering is used to connect two VPCs together, not a VPC and a service. Secrets Manager is not a VPC, it's a service, therefore it cannot be connected via VPC Peering.
References:
https://docs.aws.amazon.com/vpc/latest/privatelink/concepts.html
https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html
https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html
https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_vpc-endpoint.html
Question 10 Multiple Choice
The development team at a company deploys to their AWS production environment through a continuous integration/continuous deployment (CI/CD) pipeline. The pipeline itself has broad access to create AWS resources needed to run the application. The company's security team wants to allow the development team to deploy their own IAM principals and policies for their application. However, the security team also needs a control mechanism that requires all resources created by the pipeline to have minimum privileges that comply with the security guidelines. All teams at the company are only allowed to modify the AWS production environment through their CI/CD pipeline.
Which options will you combine to address this use case? (Select two)
Explanation

Click "Show Answer" to see the explanation here
Correct options:
Create an IAM role for the CI/CD pipeline to be used for deploying application resources
The CI/CD pipeline role has broad access to the account to create resources. Access for deployment through the CI/CD pipeline should be tightly controlled and monitored. The CI/CD pipeline is allowed to create new IAM roles for use with the application, but those roles are limited to only the actions allowed by the permissions boundary.
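One way to wire in this control (a sketch only, not part of the official answer; the account ID, role path, and names are invented for illustration) is to give the pipeline role an IAM policy that permits creating or modifying roles only under a dedicated path and only when the agreed permissions boundary is attached:

```python
import json
import boto3

iam = boto3.client("iam")

ACCOUNT_ID = "111122223333"  # illustrative account ID
BOUNDARY_ARN = f"arn:aws:iam::{ACCOUNT_ID}:policy/app-permissions-boundary"  # assumed policy name

# Inline policy for the pipeline role: it may create or modify IAM roles only under
# the /application/ path, and only if the agreed permissions boundary is attached.
pipeline_iam_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CreateRolesOnlyWithBoundary",
            "Effect": "Allow",
            "Action": ["iam:CreateRole", "iam:AttachRolePolicy", "iam:PutRolePolicy"],
            "Resource": f"arn:aws:iam::{ACCOUNT_ID}:role/application/*",
            "Condition": {"StringEquals": {"iam:PermissionsBoundary": BOUNDARY_ARN}},
        },
        {
            "Sid": "PassOnlyPipelineCreatedRoles",
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": f"arn:aws:iam::{ACCOUNT_ID}:role/application/*",
        },
    ],
}

iam.put_role_policy(
    RoleName="cicd-pipeline-role",            # assumed name of the pipeline's deployment role
    PolicyName="delegate-iam-with-boundary",
    PolicyDocument=json.dumps(pipeline_iam_policy),
)
```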
The security team should create a permissions boundary policy and attach it to the IAM role used by the CI/CD pipeline
A permissions boundary is a type of identity-based policy that doesn’t directly grant access. Instead, like an SCP, a permissions boundary acts as a guardrail for your IAM principals that allows you to set coarse-grained access controls. A permissions boundary is typically used to delegate the creation of IAM principals. Delegation enables other individuals in your accounts to create new IAM principals but limits the permissions that can be granted to the new IAM principals.
An example of the permissions boundary policy that the security team should attach to IAM roles created by the CI/CD pipeline is shown below. This same permissions boundary policy can be centrally managed and attached to IAM roles created by other pipelines at the company. The policy describes the maximum possible permissions that additional roles created by the development team are allowed to have, and it limits those permissions to some Amazon S3 and Amazon SQS data access actions. It's common for a permissions boundary policy to include data access actions when used to delegate role creation, because most applications only need permission to read and write data and only occasionally need permission to modify infrastructure.
The roles, policies, and EC2 instance profiles that the pipeline creates should also be restricted to specific role paths. This enables you to enforce that the pipeline can only modify roles and policies or pass roles that it has created. This helps prevent the pipeline, and roles created by the pipeline, from elevating privileges by modifying or passing a more privileged role.
Example permissions boundary policy attached to IAM roles created by the CI/CD pipeline:
via - https://aws.amazon.com/blogs/security/iam-policy-types-how-and-when-to-use-them/
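The policy document itself appears as an image in the source blog post and isn't reproduced here; as an illustrative stand-in consistent with the description above (limited to a few S3 and SQS data actions), it might look roughly like this, followed by the call the pipeline would make to create an application role under the boundary. The bucket, queue, account ID, and role names are all assumptions:

```python
import json
import boto3

iam = boto3.client("iam")

# Illustrative permissions boundary: caps any role the pipeline creates to a few
# S3 and SQS data actions, regardless of what the role's own policies grant.
boundary_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "S3DataAccess",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::example-app-bucket/*",  # assumed bucket
        },
        {
            "Sid": "SqsDataAccess",
            "Effect": "Allow",
            "Action": ["sqs:SendMessage", "sqs:ReceiveMessage", "sqs:DeleteMessage"],
            "Resource": "arn:aws:sqs:us-east-1:111122223333:example-app-queue",  # assumed queue
        },
    ],
}

boundary = iam.create_policy(
    PolicyName="app-permissions-boundary",
    PolicyDocument=json.dumps(boundary_policy),
)

# The pipeline then creates application roles under the restricted path, attaching
# the boundary; the pipeline role's own policy (see the earlier sketch) does not
# allow role creation without it.
iam.create_role(
    RoleName="example-app-role",              # assumed application role name
    Path="/application/",
    AssumeRolePolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }),
    PermissionsBoundary=boundary["Policy"]["Arn"],
)
```

Because the role is created under the /application/ path with the boundary attached, its effective permissions can never exceed the data access actions listed in the boundary, no matter what policies the development team later attaches to it.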
Incorrect options:
Create an IAM role for the CI/CD pipeline to be used for deploying application resources. Also, create resource-based policies for all the AWS resources created by the CI/CD pipeline - Within the same AWS account, access can be granted by either an identity-based policy or a resource-based policy, so creating resource-based policies for every resource the pipeline deploys is unnecessary for this use case. Note also that not all AWS resources support resource-based policies. This option acts as a distractor.
The development team should create a permissions boundary policy and attach it to all the IAM roles created by the CI/CD pipeline - The permissions boundary policy should be created by the security team, not the development team, so that it can be centrally managed and used to control the permissions configured by other teams.
Create a Service Control Policy (SCP) and attach it to all the member accounts to monitor and control the access privileges given to the IAM roles in the AWS accounts - Service control policies (SCPs) are a feature of AWS Organizations. AWS Organizations is a service for grouping and centrally managing the AWS accounts that your business owns. SCPs are policies that specify the maximum permissions for an organization, organizational unit (OU), or individual account. An SCP can limit permissions for principals in member accounts, including the AWS account root user. This option has been added as a distractor since the use case does not mention anything about using AWS Organizations.
Reference:
https://aws.amazon.com/blogs/security/iam-policy-types-how-and-when-to-use-them/



