

AWS DevOps Quiz

Test your knowledge on AWS DevOps practices with this comprehensive quiz designed for both beginners and experienced professionals. Dive into topics such as automation, security, and best practices to strengthen your understanding of AWS services.

Topics covered include:

  • AWS CloudFormation
  • AWS CodeBuild
  • Continuous Integration/Continuous Deployment (CI/CD)
  • Security Best Practices
  • Monitoring and Logging
10 Questions | 2 Minutes | Created by LearningCloud101

A company wants to automatically re-create its infrastructure using AWS CloudFormation as part of the company's quality assurance (QA) pipeline. For each QA run, a new VPC must be created in a single account, resources must be deployed into the VPC, and tests must be run against this new infrastructure. The company policy states that all VPCs must be peered with a central management VPC to allow centralized logging. The company has existing CloudFormation templates to deploy its VPC and associated resources.

Which combination of steps will achieve the goal in a way that is automated and repeatable? (Choose two.)

Create an AWS Lambda function that is invoked by an Amazon CloudWatch Events rule when a CreateVpcPeeringConnection API call is made. The Lambda function should check the source of the peering request, accept the request, and update the route tables for the management VPC to allow traffic to go over the peering connection.
In the CloudFormation template: invoke a custom resource to generate unique VPC CIDR ranges for the VPC and subnets, create a peering connection to the management VPC, and update route tables to allow traffic to the management VPC.
In the CloudFormation template: use the Fn::Cidr function to allocate an unused CIDR range for the VPC and subnets, create a peering connection to the management VPC, and update route tables to allow traffic to the management VPC.
Modify the CloudFormation template to include a mappings object that includes a list of /16 CIDR ranges for each account where the stack will be deployed.
Use CloudFormation StackSets to deploy the VPC and associated resources to multiple AWS accounts using a custom resource to allocate unique CIDR ranges. Create peering connections from each VPC to the central management VPC and accept those connections in the management VPC.
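
For reference, the Fn::Cidr option partitions a parent CIDR block inside the template itself. Below is a minimal sketch of that pattern, written as a Python dict that serializes to a CloudFormation JSON fragment; the 10.0.0.0/16 range, resource names, and peer VPC ID are illustrative placeholders, not taken from the question.

    import json

    # Minimal CloudFormation fragment: carve four /24 subnets out of the
    # VPC's /16 with Fn::Cidr, then peer the QA VPC to the management VPC.
    template = {
        "Resources": {
            "QaVpc": {
                "Type": "AWS::EC2::VPC",
                "Properties": {"CidrBlock": "10.0.0.0/16"},
            },
            "QaSubnet": {
                "Type": "AWS::EC2::Subnet",
                "Properties": {
                    "VpcId": {"Ref": "QaVpc"},
                    # Fn::Cidr takes [parent block, count, subnet bits];
                    # 8 bits on a /16 yields /24 subnets.
                    "CidrBlock": {"Fn::Select": [0, {"Fn::Cidr": [
                        {"Fn::GetAtt": ["QaVpc", "CidrBlock"]}, 4, 8]}]},
                },
            },
            "PeeringToMgmt": {
                "Type": "AWS::EC2::VPCPeeringConnection",
                "Properties": {
                    "VpcId": {"Ref": "QaVpc"},
                    "PeerVpcId": "vpc-0123456789abcdef0",  # placeholder
                },
            },
        }
    }
    print(json.dumps(template, indent=2))

Note that Fn::Cidr only subdivides a range the template already declares; guaranteeing a range that is unused across concurrent QA runs is what the custom-resource option addresses.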

A retail company has adopted AWS OpsWorks for managing its deployments. In the last three months, the company has discovered that some production instances have been restarting without reason. Upon inspection of the AWS CloudTrail logs, a DevOps Engineer determined that those instances were restarted by OpsWorks. The Engineer now wants automated email notifications whenever OpsWorks restarts an instance because the instance is deemed unhealthy or unable to communicate with the service endpoint.

How can the Engineer meet this requirement?

Create a Chef recipe to place a cron to run a custom script within the Amazon EC2 instances that sends an email to the team by using Amazon SES if the OpsWorks agent detects an instance failure.
Create an Amazon SNS topic and create a subscription for this topic that contains the destination email address. Create an Amazon CloudWatch Events rule: specify aws.opsworks as a source and specify auto-healing in the initiated_by details. Use the SNS topic as a target.
Create an Amazon SNS topic and create a subscription for this topic that contains the destination email address. Create an Amazon CloudWatch Events rule: specify aws.opsworks as a source and specify instance-replacement in the initiated_by details. Use the SNS topic as a target.
Create an Amazon SNS topic and create a subscription for this topic that contains the destination email address. Enable instance restart notifications within the OpsWorks layer and indicate the destination email address for the notification.
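
As a hedged illustration of the rule-plus-SNS pattern these options describe, the sketch below uses boto3 to create a CloudWatch Events rule that matches OpsWorks events initiated by auto-healing and targets an SNS topic; the rule name and topic ARN are placeholders.

    import json
    import boto3

    events = boto3.client("events")

    # Match OpsWorks events whose detail records auto-healing as the initiator.
    pattern = {
        "source": ["aws.opsworks"],
        "detail": {"initiated_by": ["auto-healing"]},
    }
    events.put_rule(Name="opsworks-auto-healing", EventPattern=json.dumps(pattern))

    # Send matching events to an SNS topic with an email subscription.
    events.put_targets(
        Rule="opsworks-auto-healing",
        Targets=[{"Id": "email-topic",
                  "Arn": "arn:aws:sns:us-east-1:123456789012:ops-alerts"}],
    )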

A company is using AWS Organizations to create separate AWS accounts for each of its departments. It needs to automate the following tasks:

  • Updating the Linux AMIs with new patches periodically and generating a golden image
  • Installing a new version of Chef agents in the golden image, if available
  • Enforcing the use of the newly generated golden AMIs in the department's account

Which option requires the LEAST management overhead?

Write a script to launch an Amazon EC2 instance from the previous golden AMI, apply the patch updates, install the new version of the Chef agent, generate a new golden AMI, and then modify the AMI permissions to share only the new image with the departments’ accounts.
Use an AWS Systems Manager Run Command to update the Chef agent first, use Amazon EC2 Systems Manager Automation to generate an updated AMI, and then assume an IAM role to copy the new golden AMI into the departments’ accounts.
Use AWS Systems Manager Automation to update the Linux AMI using the previous image, provide the URL for the script that will update the Chef agent, and then use AWS Organizations to replace the previous golden AMI into the departments’ accounts.
Use AWS Systems Manager Automation to update the Linux AMI from the previous golden image, provide the URL for the script that will update the Chef agent, and then share only the newly generated AMI with the departments’ accounts.
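
As a hedged sketch of the Systems Manager Automation step these options revolve around, the boto3 calls below run the AWS-UpdateLinuxAmi managed document and then share an AMI; the AMI IDs, role names, account IDs, and script URL are illustrative assumptions.

    import boto3

    ssm = boto3.client("ssm")

    # Patch the previous golden AMI and run a post-update script (for example,
    # a Chef agent upgrade) before the new AMI is created. Placeholder values.
    ssm.start_automation_execution(
        DocumentName="AWS-UpdateLinuxAmi",
        Parameters={
            "SourceAmiId": ["ami-0123456789abcdef0"],
            "IamInstanceProfileName": ["ManagedInstanceProfile"],
            "AutomationAssumeRole": [
                "arn:aws:iam::123456789012:role/AutomationServiceRole"],
            "PostUpdateScript": ["https://example.com/install-chef-agent.sh"],
        },
    )

    # Share the newly generated AMI with a department account (placeholder IDs).
    boto3.client("ec2").modify_image_attribute(
        ImageId="ami-0fedcba9876543210",
        LaunchPermission={"Add": [{"UserId": "210987654321"}]},
    )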

A DevOps engineer is researching the least expensive way to implement an image batch processing cluster on AWS. The application cannot run in Docker containers and must run on Amazon EC2. The batch job stores checkpoint data on an NFS and can tolerate interruptions. Configuring the cluster software from a generic EC2 Linux image takes 30 minutes.

What is the MOST cost-effective solution?

Use Amazon EFS for checkpoint data. To complete the job, use an EC2 Auto Scaling group and an On-Demand pricing model to provision EC2 instances temporarily.
Use GlusterFS on EC2 instances for checkpoint data. To run the batch job, configure EC2 instances manually. When the job completes, shut down the instances manually.
Use Amazon EFS for checkpoint data. Use EC2 Fleet to launch EC2 Spot Instances, and utilize user data to configure the EC2 Linux instance on startup.
Use Amazon EFS for checkpoint data. Use EC2 Fleet to launch EC2 Spot Instances. Create a custom AMI for the cluster and use the latest AMI when creating instances.
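
To make the Spot-based options concrete, here is a hedged boto3 sketch that launches Spot capacity through EC2 Fleet from a launch template that would reference the custom cluster AMI; the template name, capacity, and version are assumptions.

    import boto3

    ec2 = boto3.client("ec2")

    # One-shot ("instant") fleet of Spot instances. The launch template is
    # assumed to reference the latest custom AMI and any user data needed.
    ec2.create_fleet(
        Type="instant",
        LaunchTemplateConfigs=[{
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "batch-cluster",
                "Version": "$Latest",
            },
        }],
        TargetCapacitySpecification={
            "TotalTargetCapacity": 10,
            "DefaultTargetCapacityType": "spot",
        },
    )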

A company using AWS CodeCommit for source control wants to automate its continuous integration and continuous deployment pipeline on AWS in its development environment. The company has three requirements:

1. There must be a legal and a security review of any code change to make sure sensitive information is not leaked through the source code.
2. Every change must go through unit testing.
3. Every change must go through a suite of functional testing to ensure functionality.

In addition, the company has the following requirements for automation:

1. Code changes should automatically trigger the CI/CD pipeline.
2. Any failure in the pipeline should notify devops-admin@xyz.com.
3. There must be an approval to stage the assets to Amazon S3 after tests have been performed.
What should a DevOps Engineer do to meet all of these requirements while following CI/CD best practices?

Commit to the development branch and trigger AWS CodePipeline from the development branch. Make an individual stage in CodePipeline for security review, unit tests, functional tests, and manual approval. Use Amazon CloudWatch metrics to detect changes in pipeline stages and Amazon SES for emailing devops-admin@xyz.com.
Commit to mainline and trigger AWS CodePipeline from mainline. Make an individual stage in CodePipeline for security review, unit tests, functional tests, and manual approval. Use AWS CloudTrail logs to detect changes in pipeline stages and Amazon SNS for emailing devops-admin@xyz.com.
Commit to the development branch and trigger AWS CodePipeline from the development branch. Make an individual stage in CodePipeline for security review, unit tests, functional tests, and manual approval. Use Amazon CloudWatch Events to detect changes in pipeline stages and Amazon SNS for emailing devops-admin@xyz.com.
Commit to mainline and trigger AWS CodePipeline from mainline. Make an individual stage in CodePipeline for security review, unit tests, functional tests, and manual approval. Use Amazon CloudWatch Events to detect changes in pipeline stages and Amazon SES for emailing devops-admin@xyz.com.
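
For the notification mechanism these options differ on, below is a hedged boto3 sketch of a CloudWatch Events rule that matches failed pipeline stages and forwards them to an SNS topic; the rule name and topic ARN are placeholders. An email subscription on the topic would then deliver the failure to devops-admin@xyz.com.

    import json
    import boto3

    events = boto3.client("events")

    # Fire whenever any stage of a pipeline enters the FAILED state.
    pattern = {
        "source": ["aws.codepipeline"],
        "detail-type": ["CodePipeline Stage Execution State Change"],
        "detail": {"state": ["FAILED"]},
    }
    events.put_rule(Name="pipeline-stage-failed", EventPattern=json.dumps(pattern))

    events.put_targets(
        Rule="pipeline-stage-failed",
        Targets=[{"Id": "notify",
                  "Arn": "arn:aws:sns:us-east-1:123456789012:devops-admin"}],
    )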

The Security team depends on AWS CloudTrail to detect sensitive security issues in the company's AWS account. The DevOps Engineer needs a solution to auto-remediate CloudTrail being turned off in an AWS account.

What solution ensures the LEAST amount of downtime for the CloudTrail log deliveries?

Create an Amazon CloudWatch Events rule for the CloudTrail StopLogging event. Create an AWS Lambda function that uses the AWS SDK to call StartLogging on the ARN of the resource in which StopLogging was called. Add the Lambda function ARN as a target to the CloudWatch Events rule.
Deploy the AWS-managed CloudTrail-enabled AWS Config rule, set with a periodic interval of 1 hour. Create an Amazon CloudWatch Events rule for AWS Config rules compliance change. Create an AWS Lambda function that uses the AWS SDK to call StartLogging on the ARN of the resource in which StopLogging was called. Add the Lambda function ARN as a target to the CloudWatch Events rule.
Create an Amazon CloudWatch Events rule for a scheduled event every 5 minutes. Create an AWS Lambda function that uses the AWS SDK to call StartLogging on a CloudTrail trail in the AWS account. Add the Lambda function ARN as a target to the CloudWatch Events rule.
Launch a t2.nano instance with a script running every 5 minutes that uses the AWS SDK to query CloudTrail in the current account. If the CloudTrail trail is disabled, have the script re-enable the trail.
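
The event-driven remediation option maps to a small Lambda handler. A minimal sketch, assuming the function is triggered by a rule matching the StopLogging API call as delivered through CloudTrail:

    import boto3

    cloudtrail = boto3.client("cloudtrail")

    def handler(event, context):
        # For "AWS API Call via CloudTrail" events, the trail that was stopped
        # is recorded in the request parameters of the StopLogging call.
        trail = event["detail"]["requestParameters"]["name"]
        cloudtrail.start_logging(Name=trail)
        return {"restarted": trail}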

Your application stores sensitive information on an EBS volume attached to your EC2 instance. How can you protect your information? Choose two answers from the options given below.

Unmount the EBS volume, take a snapshot, and encrypt the snapshot. Re-mount the Amazon EBS volume.
It is not possible to encrypt an EBS volume; you must use a lifecycle policy to transfer data to S3 for encryption.
Copy the unencrypted snapshot and check the box to encrypt the new snapshot. Volumes restored from this encrypted snapshot will also be encrypted.
Create and mount a new, encrypted Amazon EBS volume. Move the data to the new volume. Delete the old Amazon EBS volume.
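
As a hedged sketch of the snapshot-copy approach described above, using boto3; the snapshot ID, region, Availability Zone, and KMS key are placeholders.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Copy an unencrypted snapshot, requesting encryption on the copy.
    copy = ec2.copy_snapshot(
        SourceSnapshotId="snap-0123456789abcdef0",
        SourceRegion="us-east-1",
        Encrypted=True,
        KmsKeyId="alias/aws/ebs",
    )

    # Volumes restored from the encrypted copy are themselves encrypted.
    ec2.create_volume(
        SnapshotId=copy["SnapshotId"],
        AvailabilityZone="us-east-1a",
    )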

A security review has identified that an AWS CodeBuild project is downloading a database population script from an Amazon S3 bucket using an unauthenticated request. The security team does not allow unauthenticated requests to S3 buckets for this project.

How can this issue be corrected in the MOST secure manner?

Add the bucket name to the AllowedBuckets section of the CodeBuild project settings. Update the build spec to use the AWS CLI to download the database population script.
Modify the S3 bucket settings to enable HTTPS basic authentication and specify a token. Update the build spec to use cURL to pass the token and download the database population script.
Remove unauthenticated access from the S3 bucket with a bucket policy. Modify the service role for the CodeBuild project to include Amazon S3 access. Use the AWS CLI to download the database population script.
Remove unauthenticated access from the S3 bucket with a bucket policy. Use the AWS CLI to download the database population script using an IAM access key and a secret access key.
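
To illustrate the service-role half of the bucket-policy option, here is a hedged boto3 sketch that attaches an inline policy granting the CodeBuild service role read access to the script object; the role, policy, bucket, and key names are placeholders.

    import json
    import boto3

    iam = boto3.client("iam")

    # Grant the CodeBuild service role read access to the one object it needs.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::build-assets/scripts/populate-db.sql",
        }],
    }
    iam.put_role_policy(
        RoleName="codebuild-project-service-role",
        PolicyName="read-db-population-script",
        PolicyDocument=json.dumps(policy),
    )

The buildspec can then fetch the object with the AWS CLI, which signs the request with the role's credentials instead of making it anonymously.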

A DevOps team needs to query information in application logs that are generated by an application running on multiple Amazon EC2 instances deployed with AWS Elastic Beanstalk. Instance log streaming to Amazon CloudWatch Logs was enabled on Elastic Beanstalk.

Which approach would be the MOST cost-efficient?

Use a CloudWatch Logs subscription to trigger an AWS Lambda function to send the log data to an Amazon Kinesis Data Firehose delivery stream that has an Amazon S3 bucket destination. Use Amazon Athena to query the log data from the bucket.
Use a CloudWatch Logs subscription to trigger an AWS Lambda function to send the log data to an Amazon Kinesis Data Firehose delivery stream that has an Amazon S3 bucket destination. Use a new Amazon Redshift cluster and Amazon Redshift Spectrum to query the log data from the bucket.
Use a CloudWatch Logs subscription to send the log data to an Amazon Kinesis Data Firehose delivery stream that has an Amazon S3 bucket destination. Use Amazon Athena to query the log data from the bucket.
Use a CloudWatch Logs subscription to send the log data to an Amazon Kinesis Data Firehose delivery stream that has an Amazon S3 bucket destination. Use a new Amazon Redshift cluster and Amazon Redshift Spectrum to query the log data from the bucket.
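
A hedged sketch of the direct subscription the Lambda-free options describe, wiring a log group straight to a Kinesis Data Firehose delivery stream with boto3; the log group, stream, and role names are placeholders.

    import boto3

    logs = boto3.client("logs")

    # Stream all events from the Beanstalk log group to Firehose, which
    # delivers them to S3 for querying with Athena. No Lambda hop is needed.
    logs.put_subscription_filter(
        logGroupName="/aws/elasticbeanstalk/my-env/var/log/web.stdout.log",
        filterName="to-firehose",
        filterPattern="",  # an empty pattern forwards every event
        destinationArn="arn:aws:firehose:us-east-1:123456789012:"
                       "deliverystream/app-logs",
        roleArn="arn:aws:iam::123456789012:role/cwl-to-firehose",
    )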

A DevOps Engineer has been asked by the Security team to ensure that AWS CloudTrail files are not tampered with after being created. Currently, there is a process with multiple trails, using AWS IAM to restrict access to specific trails. The Security team wants to ensure they can trace the integrity of each file and make sure there has been no tampering.

Which option will require the LEAST effort to implement and ensure the legitimacy of the file while allowing the Security team to prove the authenticity of the logs?

Create an Amazon CloudWatch Events rule that triggers an AWS Lambda function when a new file is delivered. Configure the Lambda function to perform an MD5 hash check on the file, store the name and location of the file, and post the returned hash to an Amazon DynamoDB table. The Security team can use the values stored in DynamoDB to verify the file authenticity.
Enable the CloudTrail file integrity feature on an Amazon S3 bucket. Create an IAM policy that grants the Security team access to the file integrity logs stored in the S3 bucket.
Enable the CloudTrail file integrity feature on the trail. Use the digest file created by CloudTrail to verify the integrity of the delivered CloudTrail files.
Create an AWS Lambda function that is triggered each time a new file is delivered to the CloudTrail bucket. Configure the Lambda function to execute an MD5 hash check on the file, and store the result on a tag in an Amazon S3 object. The Security team can use the information on the tag to verify the integrity of the file.
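
For reference, the built-in digest mechanism these options refer to is CloudTrail log file validation, which is enabled per trail. A minimal boto3 sketch, with a placeholder trail name:

    import boto3

    cloudtrail = boto3.client("cloudtrail")

    # With validation enabled, CloudTrail delivers hourly digest files whose
    # signed hashes can prove that delivered log files were not altered.
    cloudtrail.update_trail(
        Name="security-trail",
        EnableLogFileValidation=True,
    )

The Security team can then verify delivered files against those digests with the AWS CLI's "aws cloudtrail validate-logs" command.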
{"name":"6", "url":"https://www.quiz-maker.com/QPREVIEW","txt":"Test your knowledge on AWS DevOps practices with this comprehensive quiz designed for both beginners and experienced professionals. Dive into topics such as automation, security, and best practices to strengthen your understanding of AWS services.Topics covered include:AWS CloudFormationAWS CodeBuildContinuous Integration\/Continuous Deployment (CI\/CD)Security Best PracticesMonitoring and Logging","img":"https:/images/course4.png"}
Powered by: Quiz Maker