
[Header image: a futuristic digital landscape featuring cloud computing elements, with representations of AWS services like EC2 and RDS and a DevOps pipeline, set against a blue and white color scheme.]

AWS DevOps Challenge

Test your knowledge and skills in AWS DevOps with this comprehensive quiz designed for professionals aiming to excel in cloud solutions.

Whether you're looking to validate your expertise or learn new concepts, this quiz covers essential topics:

  • Disaster Recovery Strategies
  • CloudFormation Security
  • API Performance Optimization
  • AWS Security Best Practices
10 Questions · 2 Minutes · Created by CodingEagle732

A DevOps Engineer manages an application that has a cross-region failover requirement. The application stores its data in an Amazon Aurora on Amazon RDS database in the primary region with a read replica in the secondary region. The application uses Amazon Route 53 to direct customer traffic to the active region.

Which steps should be taken to MINIMIZE downtime if the primary database fails?

Use Amazon CloudWatch to monitor the status of the RDS instance. In the event of a failure, use a CloudWatch Events rule to send a short message service (SMS) to the Systems Operator using Amazon SNS. Have the Systems Operator redirect traffic to an Amazon S3 static website that displays a downtime message. Promote the RDS read replica to the master. Confirm that the application is working normally, then redirect traffic from the Amazon S3 website to the secondary region.
Use RDS Event Notification to publish status updates to an Amazon SNS topic. Use an AWS Lambda function subscribed to the topic to monitor database health. In the event of a failure, the Lambda function promotes the read replica, then updates Route 53 to redirect traffic from the primary region to the secondary region.
Set up an Amazon CloudWatch Events rule to periodically invoke an AWS Lambda function that checks the health of the primary database. If a failure is detected, the Lambda function promotes the read replica. Then, update Route 53 to redirect traffic from the primary to the secondary region.
Set up Route 53 to balance traffic between both regions equally. Enable the Aurora multi-master option, then set up a Route 53 health check to analyze the health of the databases. Configure Route 53 to automatically direct all traffic to the secondary region when a primary database fails.
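
For illustration only, here is a minimal sketch of the event-driven promotion approach described in the options above: a Lambda function, triggered via an RDS event notification delivered through SNS, promotes the cross-region Aurora read replica and repoints the Route 53 record. The cluster identifier, hosted zone ID, record name, endpoint, and region are hypothetical placeholders.

```python
import boto3

# Clients in the secondary (failover) region; the region name is an assumption.
rds = boto3.client("rds", region_name="us-west-2")
route53 = boto3.client("route53")

REPLICA_CLUSTER_ID = "app-aurora-secondary"   # hypothetical cluster identifier
HOSTED_ZONE_ID = "Z0000000000EXAMPLE"         # hypothetical hosted zone
RECORD_NAME = "db.example.com"                # hypothetical record name
SECONDARY_ENDPOINT = "app-aurora-secondary.cluster-abc123.us-west-2.rds.amazonaws.com"

def handler(event, context):
    # Invoked via SNS when an RDS event notification reports a failure.
    # 1. Promote the cross-region Aurora read replica to a standalone cluster.
    rds.promote_read_replica_db_cluster(DBClusterIdentifier=REPLICA_CLUSTER_ID)

    # 2. Repoint the database record so traffic flows to the secondary region.
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD_NAME,
                    "Type": "CNAME",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": SECONDARY_ENDPOINT}],
                },
            }]
        },
    )
```

In practice the promotion is asynchronous, so a production version would wait for the promoted cluster to become available before switching DNS.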

A healthcare company has a critical application running in AWS. Recently, the company experienced some downtime. If it happens again, the company needs to be able to recover its application in another AWS Region. The application uses Elastic Load Balancing and Amazon EC2 instances. The company also maintains a custom AMI that contains its application. This AMI is changed frequently.

The workload is required to run in the primary region, unless there is a regional service disruption, in which case traffic should fail over to the new region.

Additionally, the cost for the second region needs to be low. The RTO is 2 hours.

Which solution allows the company to fail over to another region in the event of a failure and also meets the above requirements?

Maintain a copy of the AMI from the main region in the backup region. Create an Auto Scaling group with one instance using a launch configuration that contains the copied AMI. Use an Amazon Route 53 record to direct traffic to the load balancer in the backup region in the event of failure, as required. Allow the Auto Scaling group to scale out as needed during a failure.
Automate the copying of the AMI in the main region to the backup region. Generate an AWS Lambda function that will create an EC2 instance from the AMI and place it behind a load balancer. Using the same Lambda function, point the Amazon Route 53 record to the load balancer in the backup region. Trigger the Lambda function in the event of a failure.
Place the AMI in a replicated Amazon S3 bucket. Generate an AWS Lambda function that can create a launch configuration and assign it to an already created Auto Scaling group. Have one instance in this Auto Scaling group ready to accept traffic. Trigger the Lambda function in the event of a failure. Use an Amazon Route 53 record and modify it with the same Lambda function to point to the load balancer in the backup region.
Automate the copying of the AMI to the backup region. Create an AWS Lambda function that can create a launch configuration and assign it to an already created Auto Scaling group. Set the Auto Scaling group maximum size to 0 and only increase it with the Lambda function during a failure. Trigger the Lambda function in the event of a failure. Use an Amazon Route 53 record and modify it with the same Lambda function to point to the load balancer in the backup region.
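
As a sketch of the AMI-copy automation that several of these options rely on, the Lambda handler below finds the most recent application AMI in the primary region and copies it to the backup region. The tag name and region names are assumptions.

```python
import boto3

SOURCE_REGION = "us-east-1"      # primary region (assumed)
BACKUP_REGION = "us-west-2"      # backup region (assumed)
AMI_NAME_TAG = "app-golden-ami"  # hypothetical tag identifying the application AMI

def handler(event, context):
    source_ec2 = boto3.client("ec2", region_name=SOURCE_REGION)
    backup_ec2 = boto3.client("ec2", region_name=BACKUP_REGION)

    # Find the most recently created application AMI in the primary region.
    images = source_ec2.describe_images(
        Owners=["self"],
        Filters=[{"Name": "tag:Name", "Values": [AMI_NAME_TAG]}],
    )["Images"]
    latest = max(images, key=lambda image: image["CreationDate"])

    # Copy it into the backup region so the failover Auto Scaling group can
    # launch instances from an up-to-date image. Colons are not valid in AMI
    # names, so the timestamp is sanitized.
    name = f"{AMI_NAME_TAG}-{latest['CreationDate'].replace(':', '-')}"
    copy = backup_ec2.copy_image(
        SourceRegion=SOURCE_REGION,
        SourceImageId=latest["ImageId"],
        Name=name,
    )
    return {"CopiedImageId": copy["ImageId"]}
```

Running this on a schedule keeps the backup region's image close to the frequently changing AMI without running any instances there.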

A company requires an RPO of 2 hours and an RTO of 10 minutes for its data and application at all times. The application uses a MySQL database and Amazon EC2 web servers. The development team needs a strategy for failover and disaster recovery.

Which combination of deployment strategies will meet these requirements? (Select TWO.)

Create an Amazon Aurora cluster in one Availability Zone across multiple Regions as the data store. Use Aurora's automatic recovery capabilities in the event of a disaster.
Create an Amazon Aurora global database in two Regions as the data store. In the event of a failure, promote the secondary Region as the master for the application.
Create an Amazon Aurora multi-master cluster across multiple Regions as the data store. Use a Network Load Balancer to balance the database traffic in different Regions.
Set up the application in two Regions and use Amazon Route 53 failover-based routing that points to the Application Load Balancers in both Regions. Use health checks to determine the availability in a given Region. Use Auto Scaling groups in each Region to adjust capacity based on demand.
Set up the application in two Regions and use a multi-Region Auto Scaling group behind Application Load Balancers to manage the capacity based on demand. In the event of a disaster, adjust the Auto Scaling group's desired instance count to increase baseline capacity in the failover Region.
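
Several options above rely on Route 53 failover routing with health checks. A rough sketch of creating such a record pair is shown below; the hosted zone ID, record name, load balancer DNS names, and health check IDs are placeholders.

```python
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0000000000EXAMPLE"  # hypothetical hosted zone
RECORD_NAME = "app.example.com"        # hypothetical record name

def upsert_failover_record(role, set_identifier, alb_dns_name, health_check_id):
    # role is "PRIMARY" or "SECONDARY"; Route 53 answers with the secondary
    # record only when the primary record's health check is failing.
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": RECORD_NAME,
                "Type": "CNAME",
                "TTL": 60,
                "SetIdentifier": set_identifier,
                "Failover": role,
                "HealthCheckId": health_check_id,
                "ResourceRecords": [{"Value": alb_dns_name}],
            },
        }]},
    )

# Placeholder ALB DNS names and health check IDs.
upsert_failover_record("PRIMARY", "region-a",
                       "alb-a-123.us-east-1.elb.amazonaws.com",
                       "11111111-1111-1111-1111-111111111111")
upsert_failover_record("SECONDARY", "region-b",
                       "alb-b-456.us-west-2.elb.amazonaws.com",
                       "22222222-2222-2222-2222-222222222222")
```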

A company uses AWS Organizations to manage multiple accounts. Information security policies require that all unencrypted Amazon EBS volumes be marked as non-compliant. A DevOps engineer needs to automatically deploy the solution and ensure that this compliance check is always present.

Which solution will accomplish this?

Create an AWS CloudFormation template that defines an Amazon Inspector rule to check whether EBS encryption is enabled. Save the template to an Amazon S3 bucket that has been shared with all accounts within the company. Update the account creation script to point to the CloudFormation template in Amazon S3.
Create an AWS Config organizational rule to check whether EBS encryption is enabled and deploy the rule using the AWS CLI. Create and apply an SCP to prohibit stopping and deleting AWS Config across the organization.
Create an SCP in Organizations. Set the policy to prevent the launch of Amazon EC2 instances without encryption on the EBS volumes using a conditional expression. Apply the SCP to all AWS accounts. Use Amazon Athena to analyze the AWS CloudTrail output, looking for events that deny an ec2:RunInstances action.
Deploy an IAM role to all accounts from a single trusted account. Build a pipeline with AWS CodePipeline with a stage in AWS Lambda to assume the IAM role, and list all EBS volumes in the account. Publish a report to Amazon S3.
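
As a sketch of the organization-wide AWS Config rule mentioned above, the call below deploys the AWS managed rule ENCRYPTED_VOLUMES, which flags unencrypted EBS volumes as non-compliant, across the organization. It assumes it runs from the management account or a delegated AWS Config administrator account; the rule name is a placeholder.

```python
import boto3

config = boto3.client("config")

# Deploy a managed rule to every account in the organization.
config.put_organization_config_rule(
    OrganizationConfigRuleName="org-ebs-encryption-check",  # hypothetical name
    OrganizationManagedRuleMetadata={
        "RuleIdentifier": "ENCRYPTED_VOLUMES",  # AWS managed rule for EBS encryption
        "Description": "Mark unencrypted EBS volumes as NON_COMPLIANT",
        "ResourceTypesScope": ["AWS::EC2::Volume"],
    },
)
```

The same deployment can be done with the AWS CLI's put-organization-config-rule command.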

An application running on a set of Amazon EC2 instances in an Auto Scaling group requires a configuration file to operate. The instances are created and maintained with AWS CloudFormation. A DevOps engineer wants the instances to have the latest configuration file when launched, and wants changes to the configuration file to be reflected on all the instances with a minimal delay when the CloudFormation template is updated. Company policy requires that application configuration files be maintained along with AWS infrastructure configuration files in source control.

Which solution will accomplish this?

In the CloudFormation template, add an AWS Config rule. Place the configuration file content in the rule's InputParameters property, and set the Scope property to the EC2 Auto Scaling group. Add an AWS Systems Manager Resource Data Sync resource to the template to poll for updates to the configuration.
In the CloudFormation template, add an EC2 launch template resource. Place the configuration file content in the launch template. Configure the cfn-init script to run when the instance is launched, and configure the cfn-hup script to poll for updates to the configuration.
In the CloudFormation template, add an EC2 launch template resource. Place the configuration file content in the launch template. Add an AWS Systems Manager Resource Data Sync resource to the template to poll for updates to the configuration.
In the CloudFormation template, add CloudFormation init metadata. Place the configuration file content in the metadata. Configure the cfn-init script to run when the instance is launched, and configure the cfn-hup script to poll for updates to the configuration.

A defect was discovered in production and a new sprint item has been created for deploying a hotfix. However, any code change must go through the following steps before going into production:

  • Scan the code for security breaches, such as password and access key leaks.
  • Run the code through extensive, long-running unit tests.

Which source control strategy should a DevOps Engineer use in combination with AWS CodePipeline to complete this process?

Create a hotfix tag on the last commit of the master branch. Trigger the development pipeline from the hotfix tag. Use AWS CodeDeploy with Amazon ECS to do a content scan and run unit tests. Add a manual approval stage that merges the hotfix tag into the master branch.
Create a hotfix branch from the master branch. Trigger the development pipeline from the hotfix branch. Use AWS CodeBuild to do a content scan and run unit tests. Add a manual approval stage that merges the hotfix branch into the master branch.
Create a hotfix branch from the master branch. Trigger the development pipeline from the hotfix branch. Use AWS Lambda to do a content scan and run unit tests. Add a manual approval stage that merges the hotfix branch into the master branch.
Create a hotfix branch from the master branch. Create a separate source stage for the hotfix branch in the production pipeline. Trigger the pipeline from the hotfix branch. Use AWS Lambda to do a content scan and use AWS CodeBuild to run unit tests. Add a manual approval stage that merges the hotfix branch into the master branch.

A company is deploying a new mobile game on AWS for its customers around the world. The Development team uses AWS Code services and must meet the following requirements:

- Clients need to send/receive real-time playing data from the backend frequently and with minimal latency

- Game data must meet the data residency requirement

Which strategy can a DevOps Engineer implement to meet these needs?

Deploy the backend application to multiple regions. Any update to the code repository triggers a two-stage build and deployment pipeline. A successful deployment in one region invokes an AWS Lambda function to copy the build artifacts to an Amazon S3 bucket in another region. After the artifact is copied, it triggers a deployment pipeline in the new region.
Deploy the backend application to multiple Availability Zones in a single region. Create an Amazon CloudFront distribution to serve the application backend to global customers. Any update to the code repository triggers a two-stage build-and-deployment pipeline. The pipeline deploys the backend application to all Availability Zones.
Deploy the backend application to multiple regions. Use AWS Direct Connect to serve the application backend to global customers. Any update to the code repository triggers a two-stage build-and-deployment pipeline in the region. After a successful deployment in the region, the pipeline continues to deploy the artifact to another region.
Deploy the backend application to multiple regions. Any update to the code repository triggers a two-stage build-and-deployment pipeline in the region. After a successful deployment in the region, the pipeline invokes the pipeline in another region and passes the build artifact location. The pipeline uses the artifact location and deploys applications in the new region.
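
To make the cross-Region hand-off concrete, here is a rough sketch of a Lambda function that copies a build artifact into the next Region's artifact bucket and then starts that Region's pipeline. The bucket names, artifact key, Region, and pipeline name are placeholders, and the target pipeline's source stage is assumed to watch that S3 location.

```python
import boto3

SOURCE_BUCKET = "game-artifacts-us-east-1"   # hypothetical source artifact bucket
TARGET_BUCKET = "game-artifacts-eu-west-1"   # hypothetical bucket in the next Region
ARTIFACT_KEY = "backend/build.zip"           # hypothetical artifact key
TARGET_REGION = "eu-west-1"                  # hypothetical next Region
TARGET_PIPELINE = "game-backend-eu-west-1"   # hypothetical pipeline name

def handler(event, context):
    s3 = boto3.client("s3", region_name=TARGET_REGION)
    codepipeline = boto3.client("codepipeline", region_name=TARGET_REGION)

    # Copy the successful build artifact into the next Region.
    s3.copy_object(
        Bucket=TARGET_BUCKET,
        Key=ARTIFACT_KEY,
        CopySource={"Bucket": SOURCE_BUCKET, "Key": ARTIFACT_KEY},
    )

    # Kick off the deployment pipeline in that Region.
    codepipeline.start_pipeline_execution(name=TARGET_PIPELINE)
```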

A devops team uses AWS CloudFormation to build their infrastructure. The security team is concerned about sensitive parameters, such as passwords, being exposed.

Which combination of steps will enhance the security of AWS CloudFormation? (Select THREE.)

Create a secure string with AWS KMS and choose a KMS encryption key. Reference the ARN of the secure string, and give AWS CloudFormation permission to the KMS key for decryption.
Create secrets using the AWS Secrets Manager AWS::SecretsManager::Secret resource type. Reference the secret resource return attributes in resources that need a password, such as an Amazon RDS database.
Store sensitive static data as secure strings in the AWS Systems Manager Parameter Store. Use dynamic references in the resources that need access to the data.
Store sensitive static data in the AWS Systems Manager Parameter Store as strings. Reference the stored values using Systems Manager parameter types.
Use AWS KMS to encrypt the CloudFormation template.
Use the CloudFormation NoEcho parameter property to mask the parameter value.
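
For context on the Parameter Store option, the snippet below stores a value as a SecureString; the parameter name and value are placeholders. In a CloudFormation template the value can then be pulled in with a dynamic reference of the form {{resolve:ssm-secure:/app/db/password}} instead of being passed as a plaintext parameter.

```python
import boto3

ssm = boto3.client("ssm")

# Store the sensitive value once, encrypted with the account's default KMS key.
ssm.put_parameter(
    Name="/app/db/password",            # hypothetical parameter name
    Value="example-placeholder-value",  # never hard-code a real secret
    Type="SecureString",
    Overwrite=True,
)
```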

A DevOps team manages an API running on-premises that serves as a backend for an Amazon API Gateway endpoint. Customers have been complaining about high response latencies, which the development team has verified using the API Gateway latency metrics in Amazon CloudWatch. To identify the cause, the team needs to collect relevant data without introducing additional latency.

Which actions should be taken to accomplish this? (Select TWO.)

Install the CloudWatch agent server side and configure the agent to upload relevant logs to CloudWatch.
Enable AWS X-Ray tracing in API Gateway, modify the application to capture request segments, and upload those segments to X-Ray during each request.
Enable AWS X-Ray tracing in API Gateway, modify the application to capture request segments, and use the X-Ray daemon to upload segments to X-Ray.
Modify the on-premises application to send log information back to API Gateway with each request.
Modify the on-premises application to calculate and upload statistical data relevant to the API service requests to CloudWatch metrics.
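
As a sketch of the daemon-based tracing approach from the options above, the backend process can emit X-Ray segments to a locally running X-Ray daemon over UDP, so no synchronous call to the X-Ray API is added to the request path. The service name and request-handling code are placeholders.

```python
from aws_xray_sdk.core import xray_recorder

# Send segments to the local X-Ray daemon instead of calling the X-Ray API
# directly, keeping tracing overhead off the request path.
xray_recorder.configure(
    service="onprem-backend-api",     # hypothetical service name
    daemon_address="127.0.0.1:2000",  # default daemon address
)

def process(request):
    # Placeholder for the real backend work.
    return {"status": 200}

def handle_request(request):
    # In a real setup, the trace ID from the incoming X-Amzn-Trace-Id header
    # would be passed to begin_segment so this segment joins the API Gateway trace.
    xray_recorder.begin_segment("backend-request")
    try:
        return process(request)
    finally:
        xray_recorder.end_segment()
```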

A company uses AWS KMS with CMKs and manual key rotation to meet regulatory compliance requirements. The security team wants to be notified when any keys have not been rotated after 90 days.

Which solution will accomplish this?

Configure AWS KMS to publish to an Amazon SNS topic when keys are more than 90 days old.
Configure an Amazon CloudWatch Events event to launch an AWS Lambda function to call the AWS Trusted Advisor API and publish to an Amazon SNS topic.
Develop an AWS Config custom rule that publishes to an Amazon SNS topic when keys are more than 90 days old.
Configure AWS Security Hub to publish to an Amazon SNS topic when keys are more than 90 days old.
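
To illustrate the custom-rule option, a periodic AWS Config custom rule Lambda could age-check customer managed keys and report them as non-compliant after 90 days, as sketched below. It assumes the key's creation date stands in for its last manual rotation; pagination and error handling are omitted.

```python
import json
from datetime import datetime, timezone, timedelta

import boto3

kms = boto3.client("kms")
config = boto3.client("config")

MAX_KEY_AGE = timedelta(days=90)

def handler(event, context):
    # AWS Config invokes this handler on a schedule for a periodic custom rule.
    invoking_event = json.loads(event["invokingEvent"])
    now = datetime.now(timezone.utc)

    evaluations = []
    for key in kms.list_keys()["Keys"]:  # pagination omitted for brevity
        metadata = kms.describe_key(KeyId=key["KeyId"])["KeyMetadata"]
        if metadata["KeyManager"] != "CUSTOMER":
            continue  # skip AWS managed keys

        # Assumption: CreationDate approximates the last manual rotation.
        compliance = (
            "NON_COMPLIANT" if now - metadata["CreationDate"] > MAX_KEY_AGE
            else "COMPLIANT"
        )
        evaluations.append({
            "ComplianceResourceType": "AWS::KMS::Key",
            "ComplianceResourceId": metadata["KeyId"],
            "ComplianceType": compliance,
            "OrderingTimestamp": invoking_event["notificationCreationTime"],
        })

    # Report results back to AWS Config; a notification can then be driven off
    # the rule's compliance change events.
    config.put_evaluations(
        Evaluations=evaluations,
        ResultToken=event["resultToken"],
    )
```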
{"name":"4", "url":"https://www.quiz-maker.com/QPREVIEW","txt":"Test your knowledge and skills in AWS DevOps with this comprehensive quiz designed for professionals aiming to excel in cloud solutions.Whether you're looking to validate your expertise or learn new concepts, this quiz covers essential topics:Disaster Recovery StrategiesCloudFormation SecurityAPI Performance OptimizationAWS Security Best Practices","img":"https:/images/course2.png"}
Powered by: Quiz Maker