The AWS Certified DevOps Engineer – Professional exam is intended for
individuals who perform a DevOps engineer role with two or more years of
experience provisioning, operating, and managing AWS environments.
Abilities Validated by the Certification
Implement and manage continuous delivery systems and methodologies on AWS
Implement and automate security controls, governance processes, and compliance
validation
Define and deploy monitoring, metrics, and logging systems on AWS
Implement systems that are highly available, scalable, and self-healing on the
AWS platform
Design, manage, and maintain tools to automate operational processes
Recommended Knowledge and Experience
Experience developing code in at least one high-level programming language
Experience building highly automated infrastructures
Experience administering operating systems
Understanding of modern development and operations processes and methodologies
Prepare for Your Exam
There is no better preparation than hands-on experience. There are many
relevant AWS Training courses and other resources to assist you with acquiring
additional knowledge and skills to prepare for certification. Please review the
exam guide for information about the competencies assessed on the certification
exam.
Introduction
The AWS Certified DevOps Engineer – Professional (DOP-C01) exam validates
technical expertise in provisioning, operating, and managing distributed
application systems on the AWS platform. It is intended for individuals who
perform a DevOps Engineer role.
It validates an examinee’s ability to:
Implement and manage continuous delivery systems and methodologies on AWS
Implement and automate security controls, governance processes, and compliance
validation
Define and deploy monitoring, metrics, and logging systems on AWS
Implement systems that are highly available, scalable, and self-healing on the
AWS platform
Design, manage, and maintain tools to automate operational processes
Recommended AWS Knowledge
Two or more years’ experience provisioning, operating, and managing AWS
environments
Experience developing code in at least one high-level programming language
Experience building highly automated infrastructures
Experience administering operating systems
Understanding of modern development and operations processes and methodologies
Exam Preparation
These training courses and materials may be helpful for examination
preparation:
AWS Training (aws.amazon.com/training)
DevOps Engineering on AWS - https://aws.amazon.com/training/course-descriptions/devops-engineering/
There are two types of questions on the examination:
Multiple-choice: Has one correct response option and three incorrect
responses (distractors).
Multiple-response: Has two or more correct responses out of five or more
options.
Select one or more responses that best complete the statement or answer the
question. Distractors, or incorrect answers, are response options that an
examinee with incomplete knowledge or skill would likely choose. However, they
are generally plausible responses that fit in the content area defined by the
test objective.
Unanswered questions are scored as incorrect; there is no penalty for guessing.
Unscored Content
Your examination may include unscored items that are placed on the test to
gather statistical information. These items are not identified on the form and
do not affect your score.
Exam Results
The AWS Certified DevOps Engineer – Professional (DOP-C01) is a pass or fail
exam. The examination is scored against a minimum standard established by AWS
professionals who are guided by certification industry best practices and
guidelines.
Your results for the examination are reported as a score from 100-1000, with a
minimum passing score of 750. Your score shows how you performed on the
examination as a whole and whether or not you passed. Scaled scoring models are
used to equate scores across multiple exam forms that may have slightly
different difficulty levels.
Your score report contains a table of classifications of your performance at
each section level. This information is designed to provide general feedback
concerning your examination performance. The examination uses a compensatory
scoring model, which means that you do not need to “pass” the individual
sections, only the overall examination. Each section of the examination has a
specific weighting, so some sections have more questions than others. The table
contains general information, highlighting your strengths and weaknesses.
Exercise caution when interpreting section-level feedback.
Content Outline
This exam guide includes weightings, test domains, and objectives only. It
is not a comprehensive listing of the content on this examination. The table
below lists the main content domains and their weightings.
Domain 1: SDLC Automation 22%
Domain 2: Configuration Management and Infrastructure as Code 19%
Domain 3: Monitoring and Logging 15%
Domain 4: Policies and Standards Automation 10%
Domain 5: Incident and Event Response 18%
Domain 6: High Availability, Fault Tolerance, and Disaster Recovery 16%
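As a quick sanity check, the six domain weightings above should cover the whole exam. A small script, with the weights transcribed from the table, confirms they sum to 100%:

```python
# Domain weightings transcribed from the content outline table above.
weights = {
    "SDLC Automation": 22,
    "Configuration Management and Infrastructure as Code": 19,
    "Monitoring and Logging": 15,
    "Policies and Standards Automation": 10,
    "Incident and Event Response": 18,
    "High Availability, Fault Tolerance, and Disaster Recovery": 16,
}

total = sum(weights.values())
print(total)  # 100
```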
Domain 1: SDLC Automation
1.1 Apply concepts required to automate a CI/CD pipeline
1.2 Determine source control strategies and how to implement them
1.3 Apply concepts required to automate and integrate testing
1.4 Apply concepts required to build and manage artifacts securely
1.5 Determine deployment/delivery strategies (e.g., A/B, Blue/green, Canary,
Red/black) and how to implement them using AWS Services
Domain 2: Configuration Management and Infrastructure as Code
2.1 Determine deployment services based on deployment needs
2.2 Determine application and infrastructure deployment models based on business
needs
2.3 Apply security concepts in the automation of resource provisioning
2.4 Determine how to implement lifecycle hooks on a deployment
2.5 Apply concepts required to manage systems using AWS configuration management
tools and services
Domain 3: Monitoring and Logging
3.1 Determine how to set up the aggregation, storage, and analysis of logs
and metrics
3.2 Apply concepts required to automate monitoring and event management of an
environment
3.3 Apply concepts required to audit, log, and monitor operating systems,
infrastructures, and applications
3.4 Determine how to implement tagging and other metadata strategies
Domain 4: Policies and Standards Automation
4.1 Apply concepts required to enforce standards for logging, metrics,
monitoring, testing, and security
4.2 Determine how to optimize cost through automation
4.3 Apply concepts required to implement governance strategies
Domain 5: Incident and Event Response
5.1 Troubleshoot issues and determine how to restore operations
5.2 Determine how to automate event management and alerting
5.3 Apply concepts required to implement automated healing
5.4 Apply concepts required to set up event-driven automated actions
Domain 6: High Availability, Fault Tolerance, and Disaster Recovery
6.1 Determine appropriate use of multi-AZ versus multi-region architectures
6.2 Determine how to implement high availability, scalability, and fault
tolerance
6.3 Determine the right services based on business needs (e.g., RTO/RPO, cost)
6.4 Determine how to design and automate disaster recovery strategies
6.5 Evaluate a deployment for points of failure
QUESTION: 1
You have an application which consists of EC2 instances in an Auto Scaling
group. During a particular time frame every day, there is an increase in
traffic to your website, and users are complaining of poor response times.
You have configured your Auto Scaling group
to deploy one new EC2 instance when CPU utilization is greater than 60% for 2
consecutive periods of 5 minutes.
What is the least cost-effective way to resolve this problem?
A. Decrease the consecutive number of collection periods
B. Increase the minimum number of instances in the Auto Scaling group
C. Increase the collection period to ten minutes
D. Decrease the threshold CPU utilization percentage at which to deploy a new
instance
Answer: B
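The trigger described in the question (CPU above 60% for 2 consecutive 5-minute periods) maps directly onto CloudWatch alarm parameters. A minimal sketch, building the request for boto3's put_metric_alarm as a plain dict; the alarm and group names are hypothetical placeholders, and no API call is made:

```python
# Parameters for cloudwatch.put_metric_alarm(), expressed as a dict.
# "asg-cpu-high" and "my-asg" are illustrative placeholder names.
scale_out_alarm = {
    "AlarmName": "asg-cpu-high",
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Dimensions": [{"Name": "AutoScalingGroupName", "Value": "my-asg"}],
    "Statistic": "Average",
    "Period": 300,                     # one 5-minute collection period
    "EvaluationPeriods": 2,            # 2 consecutive periods must breach
    "Threshold": 60.0,                 # CPU utilization > 60%
    "ComparisonOperator": "GreaterThanThreshold",
    # "AlarmActions" would reference a scale-out policy ARN in practice.
}

# Option B (the least cost-effective choice) instead keeps extra capacity
# running at all times by raising the group's minimum size:
min_capacity_update = {"AutoScalingGroupName": "my-asg", "MinSize": 2}

# Worst-case delay before the alarm fires: 2 periods x 300 s = 600 s.
print(scale_out_alarm["Period"] * scale_out_alarm["EvaluationPeriods"])  # 600
```

The other options all shorten the reaction time of the existing alarm, which costs nothing extra; raising MinSize pays for idle instances outside the traffic peak.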
QUESTION: 2
You have decided that you need to change the instance type of your production
instances, which are running as part of an Auto Scaling group. The entire
architecture is deployed using a CloudFormation template. You currently have 4
instances in production. You cannot have any interruption in service and need
to ensure 2 instances are always running during the update. Which of the
options listed below can be used for this?
A. AutoScalingRollingUpdate
B. AutoScalingScheduledAction
C. AutoScalingReplacingUpdate
D. AutoScalingIntegrationUpdate
Answer: A
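An AutoScalingRollingUpdate policy is attached to the Auto Scaling group resource in the CloudFormation template. A sketch of the relevant fragment, built as a Python dict (the logical resource name is illustrative); MinInstancesInService set to 2 is what keeps two of the four instances serving traffic throughout the update:

```python
import json

# CloudFormation resource fragment with an UpdatePolicy, as a Python dict.
# "WebServerGroup" is an illustrative logical resource name.
asg_resource = {
    "WebServerGroup": {
        "Type": "AWS::AutoScaling::AutoScalingGroup",
        "Properties": {
            "MinSize": "2",
            "MaxSize": "4",
            "DesiredCapacity": "4",
            # Launch configuration and subnets omitted for brevity.
        },
        "UpdatePolicy": {
            "AutoScalingRollingUpdate": {
                "MinInstancesInService": "2",  # 2 instances always in service
                "MaxBatchSize": "1",           # replace one instance at a time
                "PauseTime": "PT5M",           # wait between batches
            }
        },
    }
}

print(json.dumps(asg_resource, indent=2))
```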
QUESTION: 3
You currently have the following setup in AWS:
1) An Elastic Load Balancer
2) Auto Scaling Group which launches EC2 Instances
3) AMIs with your code pre-installed
You want to deploy the updates of your app to only a certain number of users,
you want a cost-effective solution, and you should be able to revert quickly.
Which of the solutions below is the most feasible one?
A. Create a second ELB, and a new Auto Scaling Group assigned a new Launch
Configuration. Create a
new AMI with the updated app. Use Route53 Weighted Round Robin records to adjust
the proportion of traffic hitting the two ELBs.
B. Create new AMIs with the new app. Then use the new EC2 instances in half
proportion to the older instances.
C. Redeploy with AWS Elastic Beanstalk and Elastic Beanstalk versions. Use Route
53 Weighted Round
Robin records to adjust the proportion of traffic hitting the two ELBs
D. Create a full second stack of instances, cut the DNS over to the new stack of
instances, and change the DNS back if a rollback is needed.
Answer: A
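The weighted-routing setup from answer A can be expressed as a Route 53 change batch: two records with the same name, each pointing at one ELB, with weights controlling the traffic split. A sketch of the change_resource_record_sets payload as a plain dict; the domain and ELB DNS names are hypothetical, and no API call is made:

```python
# Change batch for route53.change_resource_record_sets(); all names below
# are illustrative. Weights 90/10 expose ~10% of users to the new stack.
def weighted_record(set_id, elb_dns, weight):
    """Build one weighted CNAME record pointing at an ELB."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "CNAME",
            "SetIdentifier": set_id,
            "Weight": weight,     # relative share of traffic
            "TTL": 60,            # low TTL so a rollback takes effect fast
            "ResourceRecords": [{"Value": elb_dns}],
        },
    }

change_batch = {
    "Changes": [
        weighted_record("old-stack", "old-elb.example.com", 90),
        weighted_record("new-stack", "new-elb.example.com", 10),
    ]
}

total = sum(c["ResourceRecordSet"]["Weight"] for c in change_batch["Changes"])
print(total)  # 100
```

Reverting is a matter of setting the new stack's weight back to 0, which is what makes this approach quick to roll back.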
QUESTION: 4
Your application is currently running on Amazon EC2 instances behind a load
balancer. Your
management has decided to use a Blue/Green deployment strategy. How should you
implement this for each deployment?
A. Set up Amazon Route 53 health checks to fail over from any Amazon EC2
instance that is currently being deployed to.
B. Using AWS CloudFormation, create a test stack for validating the code, and
then deploy the code
to each production Amazon EC2 instance.
C. Create a new load balancer with new Amazon EC2 instances, carry out the
deployment, and then
switch DNS over to the new load balancer using Amazon Route 53 after testing.
D. Launch more Amazon EC2 instances to ensure high availability, de-register
each Amazon EC2
instance from the load balancer, upgrade it, and test it, and then register it
again with the load balancer.
Answer: C
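The Blue/Green cutover in answer C is a DNS-level switch: once the green stack behind the new load balancer passes testing, the application's record is repointed from the blue load balancer to the green one. A minimal sketch of that flip (all DNS names are hypothetical; a real implementation would issue a Route 53 UPSERT for an alias record):

```python
# Simulated record set; in practice this would be a Route 53 alias record.
record = {"Name": "app.example.com", "Target": "blue-lb.example.com"}

def cut_over(record, new_target):
    """Repoint the DNS record at the new (green) load balancer,
    returning the old target so a rollback can restore it."""
    old_target = record["Target"]
    record["Target"] = new_target
    return old_target

previous = cut_over(record, "green-lb.example.com")
print(record["Target"])   # green-lb.example.com
print(previous)           # blue-lb.example.com, kept for rollback
```

Because the blue environment stays running until the green one is proven, rolling back is just the reverse cutover.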
QUESTION: 5
You have an application running a specific process that is critical to the
application's functionality, and you have added the health check process to
your Auto Scaling group. The instances are showing as healthy, but the
application itself is not working as it should. What could be the issue with
the health check, since it is still showing the instances as healthy?
A. You do not have the time range in the health check properly configured
B. It is not possible for a health check to monitor a process that involves the
application
C. The health check is not configured properly
D. The health check is not checking the application process

Answer: D
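This failure mode usually comes down to the Auto Scaling group using the default EC2 status checks, which only verify the instance, not the application process. A sketch of the fix, expressed as parameters for boto3's update_auto_scaling_group; the group name is illustrative and no API call is made:

```python
# With HealthCheckType "EC2" (the default), only instance status checks run,
# so a dead application process can still report as healthy.
default_config = {"AutoScalingGroupName": "my-asg", "HealthCheckType": "EC2"}

# Switching to "ELB" makes the group honor the load balancer's health check,
# which can probe an application endpoint (e.g. a hypothetical /healthz).
fixed_config = {
    "AutoScalingGroupName": "my-asg",   # illustrative name
    "HealthCheckType": "ELB",
    "HealthCheckGracePeriod": 300,      # seconds before checks begin
}

print(fixed_config["HealthCheckType"])  # ELB
```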