chore(devops projects): run prettier

Commit b4db2b65e9, authored by nprimo 6 months ago, committed by MSilva95 (pull/2333/head^2)
1. 431 subjects/devops/cloud-design/README.md
2. 204 subjects/devops/cloud-design/audit/README.md
3. 482 subjects/devops/code-keeper/README.md
4. 244 subjects/devops/code-keeper/audit/README.md
5. 15 subjects/devops/crud-master/README.md
6. 19 subjects/devops/crud-master/audit/README.md
7. 343 subjects/devops/orchestrator/README.md
8. 18 subjects/devops/orchestrator/audit/README.md
9. 1 subjects/devops/play-with-containers/README.md
10. 19 subjects/devops/play-with-containers/audit/README.md
11. 130 subjects/devops/road-to-dofd/README.md
12. 78 subjects/devops/road-to-dofd/audit/README.md

431 subjects/devops/cloud-design/README.md

@@ -1,215 +1,216 @@
## Cloud-Design
<center>
<img src="./resources/cloud-design.jpg?raw=true"
     style="width: 600px !important; height: 600px !important;"/>
</center>
### Objective
The objective of this project is to challenge your understanding of DevOps and
cloud technologies by providing hands-on experience in deploying and managing a
microservices-based application on the Amazon Web Services (AWS) cloud
platform. Your mission is to:
- Set up and configure an AWS environment for deploying microservices.
- Deploy the provided microservices application to the AWS environment.
- Implement monitoring, logging, and scaling to ensure that the application
  runs efficiently.
- Implement security measures, such as securing the databases and making
  private resources accessible only from the Amazon Virtual Private Cloud
  (VPC).
- Incorporate managed authentication for publicly accessible applications
  using *AWS Cognito* or a similar service.
- Optimize the application to handle varying workloads and unexpected events.
### Hints
Before starting this project, you should know the following:
- Basic DevOps concepts and practices.
- Familiarity with containerization and orchestration tools, such as Docker and
Kubernetes.
- Understanding of AWS cloud platform.
- Familiarity with Terraform as an Infrastructure as Code (IaC) tool.
- Knowledge of monitoring and logging tools, such as Prometheus, Grafana, and
ELK stack.
> Any lack of understanding of the concepts of this project may affect the
> difficulty of future projects, so take your time to understand all concepts.
> Be curious and never stop searching!
### Role play
To enhance the learning experience and assess your knowledge, a role play
question session will be included as part of the Cloud-Design Project. This
section will involve answering a series of questions in a simulated real-world
scenario where you assume the role of a Cloud engineer explaining your solution
to a team or stakeholder.
The goal of the role play question session is to:
- Assess your understanding of the concepts and technologies used in the
project.
- Test your ability to communicate effectively and explain your decisions.
- Challenge you to think critically about your solution and consider
alternative approaches.
Prepare for a role play question session where you will assume the role of a
Cloud engineer presenting your solution to your team or a stakeholder. You
should be ready to answer questions and provide explanations about your
decisions, architecture, and implementation.
### Architecture
Using your solutions from the previous projects `crud-master`,
`play-with-containers`, and `orchestrator`, you have to design and deploy the
infrastructure on AWS in line with the project requirements. It consists of the
following components:
- `inventory-database container`: a PostgreSQL database server that contains
  your inventory database; it must be accessible via port `5432`.
- `billing-database container`: a PostgreSQL database server that contains
  your billing database; it must be accessible via port `5432`.
- `inventory-app container`: a server running your inventory-app code,
  connected to the inventory database and accessible via port `8080`.
- `billing-app container`: a server running your billing-app code, connected
  to the billing database, consuming messages from the RabbitMQ queue, and
  accessible via port `8080`.
- `RabbitMQ container`: a RabbitMQ server that hosts the queue.
- `api-gateway-app container`: a server running your API Gateway code,
  forwarding requests to the other services and accessible via port `3000`.
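Before wiring these components into AWS services, it can help to pin down the
container layout and ports locally. A minimal `docker-compose` sketch of the
components above (the app image names are placeholders for your own builds,
and `DB_PASSWORD` is an assumed environment variable):

```yaml
services:
  inventory-database:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD:?set DB_PASSWORD}
    expose: ["5432"] # reachable by other services, not published to the host

  billing-database:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD:?set DB_PASSWORD}
    expose: ["5432"]

  rabbitmq:
    image: rabbitmq:3
    expose: ["5672"]

  inventory-app:
    image: inventory-app:local # placeholder for your own image
    expose: ["8080"]
    depends_on: [inventory-database]

  billing-app:
    image: billing-app:local # placeholder for your own image
    expose: ["8080"]
    depends_on: [billing-database, rabbitmq]

  api-gateway-app:
    image: api-gateway-app:local # placeholder for your own image
    ports: ["3000:3000"] # the only component published externally
    depends_on: [inventory-app, rabbitmq]
```

Only the gateway is published here; on AWS the same shape maps naturally onto
private subnets for the databases and a load balancer in front of the gateway.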
Design the architecture for your cloud-based microservices application. You
are free to choose the services and architectural patterns that best suit your
needs, as long as they meet the project requirements and remain within a
reasonable cost range. Consider the following when designing your architecture:
1. `Scalability`: Ensure that your architecture can handle varying workloads
and can scale up or down as needed. AWS offers services like Auto Scaling
that can be used to achieve this.
2. `Availability`: Design your architecture to be fault-tolerant and maintain
high availability, even in the event of component failures.
3. `Security`: Incorporate security best practices into your architecture, such
as encrypting data at rest and in transit, using private networks, and
securing API endpoints. Also, ensure that the databases and private
resources are accessible only from the AWS VPC and use AWS managed
authentication for publicly accessible applications.
4. `Cost-effectiveness`: Be mindful of the costs associated with the services
and resources you select. Aim to design a cost-effective architecture
without compromising performance, security, or scalability.
5. `Simplicity`: Keep your architecture as simple as possible, while still
meeting the project requirements. Avoid overcomplicating the design with
unnecessary components or services.
### Cost management:
1. `Understand the pricing model`: Familiarize yourself with the pricing model
of the cloud provider and services you are using. Be aware of any free
tiers, usage limits, and pay-as-you-go pricing structures.
2. `Monitor your usage`: Regularly check your cloud provider's billing
dashboard to keep track of your usage and spending. Set up billing alerts to
notify you when your spending exceeds a certain threshold.
3. `Clean up resources`: Remember to delete or stop any resources that you no
longer need, such as virtual machines, storage services, and load balancers.
This will help you avoid ongoing charges for idle resources.
4. `Optimize resource allocation`: Use the appropriate resource sizes for your
needs and experiment with different configurations to find the most
cost-effective solution. Consider using spot instances, reserved instances,
or committed use contracts to save on costs, if applicable.
5. `Leverage cost management tools`: Many cloud providers offer cost management
tools and services to help you optimize your spending. Use these tools to
analyze your usage patterns and identify opportunities for cost savings.
> By being aware of your cloud usage and proactively managing your resources,
> you can avoid unexpected costs and make the most of your cloud environment.
> Remember that the responsibility for cost management lies with you, and it is
> crucial to stay vigilant and proactive throughout the project.
### Infrastructure as Code:
Provision the necessary resources for your AWS environment using Terraform as
an Infrastructure as Code (IaC) tool. This includes setting up EC2 instances,
containers, networking components, and storage services using AWS S3 or
similar services.
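Terraform (HCL) is the required tool here; purely to illustrate the IaC idea
in YAML, the same kind of declaration can be sketched as an AWS CloudFormation
template, a comparable AWS-managed IaC service (the logical names and CIDR
ranges are hypothetical):

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Sketch of a VPC, a private subnet, and an encrypted S3 bucket.

Resources:
  AppVpc: # hypothetical logical name
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      EnableDnsSupport: true
      EnableDnsHostnames: true

  PrivateSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref AppVpc
      CidrBlock: 10.0.1.0/24 # databases and private resources live here

  ArtifactBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketEncryption: # encryption at rest, per the security requirements
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256
```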
### Containerize the microservices:
Use Docker to build container images for each microservice. Make sure to
optimize the Dockerfile for each service to reduce the image size and build
time.
### Deployment:
Deploy the containerized microservices on AWS using an orchestration tool like
AWS ECS or EKS. Ensure that the services are load-balanced (consider using AWS
Elastic Load Balancer) and can communicate with each other securely.
<!--TODO: add link to solution for orchestrator-->
> Use [this solution]() to kick-start your Kubernetes deployment.
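If you choose EKS, a Kubernetes Deployment plus Service per microservice is
the usual starting point. A minimal sketch for the inventory service (the
image reference is a placeholder for your registry):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory-app
spec:
  replicas: 2 # at least two pods for availability
  selector:
    matchLabels:
      app: inventory-app
  template:
    metadata:
      labels:
        app: inventory-app
    spec:
      containers:
        - name: inventory-app
          image: registry.example.com/inventory-app:latest # placeholder
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: inventory-app
spec:
  selector:
    app: inventory-app
  ports:
    - port: 8080
      targetPort: 8080
```

An Ingress or the AWS load balancer controller would then route external
traffic to the `api-gateway-app` Service only.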
### Monitoring and logging:
Set up monitoring and logging tools to track the performance and health of your
application. Use tools like CloudWatch, Prometheus, Grafana, and ELK stack to
visualize metrics and logs.
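As a starting point for Prometheus, a minimal `prometheus.yml` scrape
configuration, assuming each service exposes metrics on its application port
at `/metrics` (service names match the containers listed earlier):

```yaml
global:
  scrape_interval: 15s # how often targets are scraped

scrape_configs:
  - job_name: api-gateway
    static_configs:
      - targets: ["api-gateway-app:3000"]
  - job_name: inventory
    static_configs:
      - targets: ["inventory-app:8080"]
  - job_name: billing
    static_configs:
      - targets: ["billing-app:8080"]
```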
### Optimization:
Implement auto-scaling policies to handle varying workloads and ensure high
availability. Test the application under different load scenarios and adjust
the resources accordingly.
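On EKS, one way to express such a policy is a HorizontalPodAutoscaler; a
minimal sketch targeting the `inventory-app` Deployment from the earlier
example, scaling on CPU utilization (the replica bounds and threshold are
assumptions to tune during load testing):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: inventory-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: inventory-app
  minReplicas: 2
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70 # scale out when average CPU exceeds 70%
```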
### Security:
Implement security best practices such as using AWS Certificate Manager for
HTTPS, securing API endpoints with Amazon API Gateway, regularly scanning for
vulnerabilities with AWS Inspector, and implementing managed authentication for
publicly accessible applications with AWS Cognito or a similar service. Ensure
that the databases and private resources are secure and accessible only from
the AWS VPC.
### Documentation
Create a `README.md` file that provides comprehensive documentation for your
architecture, which must include well-structured diagrams, thorough
descriptions of components, and an explanation of your design decisions,
presented in a clear and concise manner. Make sure it contains all the
necessary information about the solution (prerequisites, setup, configuration,
usage, ...). This file must be submitted as part of the solution for the
project.
### Bonus
If you complete the mandatory part successfully and you still have free time,
you can implement anything that you feel deserves to be a bonus, for example:
- Use your own `crud-master`, `play-with-containers`, and `orchestrator`
solution instead of the provided ones.
- Use `Function as a Service (FaaS)` in your solution.
- Use `Content Delivery Network (CDN)` to optimize your solution.
- Implement alert systems to ensure your application runs smoothly.
Challenge yourself!
### Submission and audit
Upon completing this project, you should submit the following:
- Your documentation in the `README.md` file.
- Source code for the microservices and any scripts used for deployment.
- Configuration files for your Infrastructure as Code (IaC), containerization,
and orchestration tools.

204 subjects/devops/cloud-design/audit/README.md

@@ -1,102 +1,102 @@
#### General
##### Check the Repo content.
Files that must be inside the repository:
- Detailed documentation in the `README.md` file.
- Source code for the microservices and scripts required for deployment.
- Configuration files for AWS Infrastructure as Code (IaC), containerization, and orchestration tools.
###### Are all the required files present?
##### Play the role of a stakeholder.
Organize a simulated scenario where the students take on the role of AWS Cloud engineers and explain their solution to a team or stakeholder. Evaluate their grasp of the concepts and technologies used in the project, their communication efficacy, and their critical thinking about their solution.
Suggested roleplay questions include:
- What is the cloud and its associated benefits?
- Why is deploying the solution in the cloud preferred over on-premises?
- How would you differentiate between public, private, and hybrid cloud?
- What drove your decision to select AWS for this project, and what factors did you consider?
- Can you describe your microservices application's AWS-based architecture and the interaction between its components?
- How did you manage and optimize the cost of your AWS solution?
- What measures did you implement to ensure application security on AWS, and what AWS security best practices did you adhere to?
- What AWS monitoring and logging tools did you utilize, and how did they assist in identifying and troubleshooting application issues?
- Can you describe the AWS auto-scaling policies you implemented and how they help your application accommodate varying workloads?
- How did you optimize Docker images for each microservice, and how did it influence build times and image sizes?
- If you had to redo this project, what modifications would you make to your approach or the technologies you used?
- How can your AWS solution be expanded or altered to cater to future requirements like adding new microservices or migrating to a different cloud provider?
- What challenges did you face during the project and how did you address them?
- How did you ensure your documentation's clarity and completeness, and what measures did you take to make it easily understandable and maintainable?
###### Were the students able to answer all the questions correctly?
###### Did the students demonstrate a thorough understanding of the concepts and technologies used in the project?
###### Were the students able to communicate effectively and justify their decisions?
###### Could the students critically evaluate their solution and consider alternative strategies?
##### Review the Architecture Design.
Review the student's architecture design, ensuring that it meets the project requirements:
1. `Scalability`: Does the architecture utilize AWS services to manage varying workloads and scale as required?
2. `Availability`: Is the architecture designed to be fault-tolerant and to maintain high availability, even during component failures?
3. `Security`: Does the architecture integrate AWS security best practices, such as data encryption, use of AWS VPC, and secure API endpoints with managed authentication?
4. `Cost-effectiveness`: Is the architecture designed to be cost-effective on AWS without compromising performance, security, or scalability?
5. `Simplicity`: Is the AWS architecture straightforward and free of unnecessary complexity while still fulfilling project requirements?
###### Did the architecture design and choice of services align with the project requirements?
###### Were the students able to design a cost-effective architecture that meets the project requirements?
##### Check the student documentation in the `README.md` file.
###### Does the `README.md` file contain all the necessary information about the solution (prerequisites, setup, configuration, usage, ...)?
###### Is the documentation provided by the student clear and complete, including well-structured diagrams and thorough descriptions?
##### Verify the deployment.
###### Are all the microservices running as expected in the cloud environment, with no errors or connectivity issues?
###### Is the load balancing configured correctly, effectively distributing traffic across the services?
###### Are the microservices communicating with each other securely, using proper authentication and encryption methods?
##### Evaluate the infrastructure setup.
###### Is `Terraform` used effectively to provision and manage resources in the cloud environment?
###### Does the infrastructure setup follow the architecture design and the project requirements?
##### Assess containerization and orchestration.
###### Are the Dockerfiles optimized for efficient container builds?
###### Is the orchestration setup (e.g., Kubernetes manifests or AWS ECS task definitions) configured correctly?
##### Evaluate monitoring and logging.
###### Do monitoring and logging dashboards provide useful insights into the application performance and health?
##### Assess optimization efforts.
###### Are the auto-scaling policies configured correctly to handle varying workloads?
###### Do the application and resource allocation remain efficient under different load scenarios?
##### Check security best practices.
###### Has the student implemented security best practices, such as using HTTPS, securing API endpoints, and regularly scanning for vulnerabilities?
#### Bonus
###### +Did the student use their own `orchestrator` solution instead of the provided one?
###### +Did the student add any optional bonus?
###### +Is this project an outstanding project?

482 subjects/devops/code-keeper/README.md

@@ -1,241 +1,241 @@
## Code-Keeper
<center>
<img src="./resources/cloud-design.jpg?raw=true"
     style="width: 400px !important; height: 400px !important;"/>
</center>
### Objective
In this project, you will create a complete pipeline to scan and deploy a
microservices-based application. Your challenge is to design, implement, and
optimize a pipeline that incorporates industry best practices for continuous
integration, continuous deployment, and security. Your mission is to:
- Set up a source control system for the microservices source code and the
infrastructure configuration.
- Create a pipeline to `create`, `update`, or `delete` the infrastructure for
  the staging and production environments.
- Create a `continuous integration (CI)` pipeline to build, test, and scan the
source code.
- Create a `continuous deployment (CD)` pipeline to deploy the application to
  the staging and production environments.
- Ensure the `security` and `reliability` of the application throughout the
pipeline stages.
### Prerequisites
To complete this project, you should have a good understanding of the
following:
- GitLab and GitLab CI
- Ansible as a configuration management and automation tool
- Docker and containerization
- Terraform as an Infrastructure as Code (IaC) tool
- Cloud platforms (e.g., AWS, Azure, or Google Cloud)
### Tips
- Spend time on the theory before rushing into the practice.
- Read the official documentation.
> Any lack of understanding of the concepts of this project may affect the
> difficulty of future projects, so take your time to understand all concepts.
> Be curious and never stop searching!
### Role play
To further enhance the learning experience and assess your knowledge of DevOps
concepts and practices, a role play question session is included as part of
this project. This exercise will require you to apply your knowledge in
various real-life scenarios, helping you solidify your understanding of the
material and prepare for real-world situations.
The goal of the role play question session is to:
- Assess your understanding of the concepts and technologies used in the
project.
- Test your ability to communicate effectively and explain your decisions.
- Challenge you to think critically about your solution and consider
alternative approaches.
Prepare for a role play question session where you will assume the role of a
DevOps engineer presenting your solution to your team or a stakeholder. You
should be ready to answer questions and provide explanations about your
decisions, architecture, and implementation.
### Deploy GitLab and Runners for Pipeline Execution
You must deploy a `GitLab` instance using `Ansible`. This hands-on exercise
will help you gain a deeper understanding of `Ansible` as a configuration
management and automation tool while also giving you experience in deploying
and configuring `GitLab`.
1. Create an `Ansible` playbook to deploy and configure a `GitLab` instance.
The playbook should automate the installation of `GitLab` and any required
dependencies. It should also configure `GitLab` settings such as user
authentication, project settings, and CI/CD settings.
2. Deploy a `GitLab` instance on a cloud platform (e.g., AWS, Azure, or Google
Cloud) or in a local environment using the `Ansible` playbook. Ensure that
the instance is accessible to all team members and is configured to support
collaboration and code reviews.
3. Configure the `GitLab` instance to support `CI/CD pipelines` by setting up
`GitLab` Runners and integrating them with your existing pipeline. Update
your pipeline configuration to utilize `GitLab CI/CD` features and execute
tasks on the deployed Runners.
> You will need to demonstrate the successful deployment and configuration of
> `GitLab` using `Ansible` in the audit.
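A minimal playbook sketch for the installation step, assuming a Debian/Ubuntu
host in an inventory group named `gitlab` and a hypothetical `external_url`;
user, project, and CI/CD settings would be further tasks on top of this:

```yaml
- name: Deploy a GitLab CE instance
  hosts: gitlab # assumed inventory group
  become: true
  vars:
    gitlab_external_url: "https://gitlab.example.com" # placeholder URL
  tasks:
    - name: Install prerequisites
      ansible.builtin.apt:
        name: [curl, ca-certificates, openssh-server]
        state: present
        update_cache: true

    - name: Add the GitLab package repository
      ansible.builtin.shell: >
        curl -fsSL https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | bash
      args:
        creates: /etc/apt/sources.list.d/gitlab_gitlab-ce.list

    - name: Install GitLab CE (configures itself against EXTERNAL_URL)
      ansible.builtin.apt:
        name: gitlab-ce
        state: present
      environment:
        EXTERNAL_URL: "{{ gitlab_external_url }}"

    - name: Reconfigure GitLab to apply configuration changes
      ansible.builtin.command: gitlab-ctl reconfigure
      changed_when: true
```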
### The pipelines
You are a DevOps engineer at a company that is transitioning to an Agile
approach and wants to achieve a high delivery rate for its microservices
architecture. As the DevOps engineer, your manager has tasked you with creating
a pipeline that supports Agile methodologies and enables faster, more
consistent deployments of the microservices.
![code-keeper](resources/code-keeper.png)
1. You will use your `crud-master` source code and `cloud-design`
   infrastructure to create a complete pipeline for the following
   applications:
- `Inventory application`: a server running your inventory-app code, connected
  to the inventory database.
- `Billing application`: a server running your billing-app code, connected to
  the billing database and consuming messages from the RabbitMQ queue.
- `API gateway application`: a server running your API gateway code, forwarding
  requests to the other services.
> Each application must live in its own repository.
2. You must provision your `cloud-design` infrastructure for two environments
on a cloud platform (e.g., AWS, Azure, or Google Cloud) using `Terraform`.
- `Production Environment`: The live infrastructure where the software is
  deployed and used by end users; it requires stable and thoroughly tested
  updates to ensure optimal performance and functionality.
- `Staging Environment`: A replica of the production environment used for
testing and validating software updates in a controlled setting before
deployment to the live system.
> The two environments should be similar in design, resources, and services
> used! Your infrastructure configuration must exist in an independent
> repository with a configured pipeline!
The pipeline should include the following stages:
- `Init`: Initialize the Terraform working directory and backend. This job
downloads the required provider plugins and sets up the backend for storing
the Terraform state.
- `Validate`: Validate the Terraform configuration files to ensure correct
syntax and adherence to best practices. This helps catch any issues early in
the pipeline.
- `Plan`: Generate an execution plan that shows the changes to be made to your
infrastructure, including the resources that will be created, updated, or
deleted. This job provides a preview of the changes and enables you to review
them before applying.
- `Apply to Staging`: Apply the Terraform configuration to `create`, `update`,
or `delete` the resources as specified in the execution plan. This job
provisions and modifies the infrastructure in the staging environment.
- `Approval`: Require manual approval to proceed with deployment to the
`production environment`. This step should involve stakeholders and ensure
the application is ready for production.
- `Apply to Production`: Apply the Terraform configuration to `create`,
`update`, or `delete` the resources as specified in the execution plan. This
job provisions and modifies the infrastructure in the production environment.
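Put together, the stages above map onto a `.gitlab-ci.yml` along these lines;
a minimal sketch that assumes a configured remote backend and a single set of
variables (a real setup would select staging or production via separate
workspaces or `-var-file` arguments):

```yaml
stages: [validate, plan, apply_staging, apply_production]

default:
  image:
    name: hashicorp/terraform:1.5
    entrypoint: [""]
  before_script:
    - terraform init -input=false # the Init stage, run before every job

validate:
  stage: validate
  script:
    - terraform validate

plan:
  stage: plan
  script:
    - terraform plan -input=false -out=tfplan
  artifacts:
    paths: [tfplan] # hand the reviewed plan to the apply jobs

apply_staging:
  stage: apply_staging
  script:
    - terraform apply -input=false tfplan
  environment: staging

apply_production:
  stage: apply_production
  when: manual # the Approval step: a human must trigger this job
  script:
    - terraform apply -input=false tfplan
  environment: production
```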
3. Design and implement a `CI pipeline` for each repository that will be
triggered on every code push or pull request. The pipeline should include
the following stages:
- `Build`: Compile and package the application.
- `Test`: Run unit and integration tests to ensure code quality and
functionality.
- `Scan`: Analyze the source code and dependencies for security vulnerabilities
and coding issues. Consider using tools such as `SonarQube`, `Snyk`, or
`WhiteSource`.
- `Containerization`: Package the applications into Docker images using a
Dockerfile, and push the images to a container registry (e.g., Docker Hub,
Google Container Registry, or AWS ECR).
4. Design and implement a `CD pipeline` that will be triggered after the `CI
pipeline` has been completed. The pipeline should include the following stages:
- `Deploy to Staging`: Deploy the application to a `staging environment` for
further testing and validation.
- `Approval`: Require manual approval to proceed with deployment to the
`production environment`. This step should involve stakeholders and ensure
the application is ready for production.
- `Deploy to Production`: Deploy the application to the `production
environment`, ensuring zero downtime and a smooth rollout.
> Each repository must have a pipeline!
> Any modification to the application's source code must trigger a rebuild and
> redeploy of the new version to the `Staging Environment`, and then to the
> `Production Environment` after manual approval.
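As a shape reference for items 3 and 4 above, one application repository's
pipeline could look like the following sketch. It assumes a Node.js service
and a `deploy.sh` helper script, both placeholders for your actual stack, and
a runner configured for Docker-in-Docker:

```yaml
stages: [build, test, scan, containerize, deploy_staging, deploy_production]

build:
  stage: build
  image: node:20 # assumed stack; swap for your language
  script:
    - npm ci

test:
  stage: test
  image: node:20
  script:
    - npm test

scan:
  stage: scan
  image: node:20
  script:
    - npm audit --audit-level=high # dependency scan; SonarQube or Snyk go deeper

containerize:
  stage: containerize
  image: docker:24
  services: [docker:24-dind]
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

deploy_staging:
  stage: deploy_staging
  script:
    - ./deploy.sh staging # placeholder deploy script
  environment: staging

deploy_production:
  stage: deploy_production
  when: manual # the Approval gate before production
  script:
    - ./deploy.sh production # placeholder deploy script
  environment: production
```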
### Cybersecurity
Your pipelines and infrastructure should adhere to the following cybersecurity
guidelines:
- `Restrict triggers to protected branches`: Prevent unauthorized users from
deploying or tampering by triggering pipelines only on protected branches,
controlling access, and minimizing risk.
- `Separate credentials from code`: Avoid storing credentials in application
code or infrastructure files. Use secure methods like secret management tools
or environment variables to prevent exposure or unauthorized access.
- `Apply the least privilege principle`: Limit user and service access to the
minimum required, reducing potential damage in case of breaches or
compromised credentials.
- `Update dependencies and tools regularly`: Minimize security vulnerabilities
by keeping dependencies and pipeline tools updated. Automate updates and
monitor for security advisories and patches.
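The first guideline can be enforced directly in GitLab CI with `rules` and the
predefined `CI_COMMIT_REF_PROTECTED` variable; a sketch for a production
deploy job (the script is a placeholder):

```yaml
deploy_production:
  rules:
    # run only for protected branches, and only after a manual approval
    - if: '$CI_COMMIT_REF_PROTECTED == "true"'
      when: manual
  script:
    - ./deploy.sh production # placeholder deploy script
  environment: production
```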
### Documentation
You must push a `README.md` file containing full documentation of your solution
(prerequisites, configuration, setup, usage, ...).
### Bonus
If you complete the mandatory part successfully and you still have free time,
you can implement anything that you feel deserves to be a bonus, for example:
- Security scan for the infrastructure configuration using `tfsec`.
- Add `Infracost` to your infrastructure pipeline to estimate the
  infrastructure cost.
- Use `Terragrunt` to create multiple environments.
Challenge yourself!
### Submission and audit
You must submit:
- CI/CD pipeline configuration files, scripts, and any other required
artifacts.
- An Ansible playbook and the scripts used for deploying and configuring a
  GitLab instance.
- A well-documented README file that explains the pipeline design, the tools
used, and how to set up and use the pipeline.
Your solution must be running, and your users, application repositories, and
CI/CD pipelines must be configured correctly for the audit session.
> In the audit you will be asked various questions about the concepts and the
> practice of this project. Prepare yourself!

244 subjects/devops/code-keeper/audit/README.md

@@ -1,122 +1,122 @@
#### General
##### Check the Repo content:
Files that must be inside the repository:
- CI/CD pipeline configuration files, scripts, and any other required artifacts.
- An Ansible playbook and used scripts for deploying and configuring a GitLab instance.
- A well-documented README file that explains the pipeline design, the tools used, and how to set up and use the pipeline.
###### Are all the required files present?
##### Play the role of a stakeholder:
As part of the evaluation process, conduct a simulated real-world scenario where the students assume the role of a DevOps engineer and explain their solution to a team or stakeholder. Evaluate their understanding of the concepts and technologies used in the project, as well as their ability to communicate effectively and think critically about their solution.
During the roleplay, ask them the following questions:
- Can you explain the concept of DevOps and its benefits for the software development lifecycle?
- How do DevOps principles help improve collaboration between development and operations teams?
- What are some common DevOps practices, and how did you incorporate them into your project?
- How does automation play a key role in the DevOps process, and what tools did you use to automate different stages of your project?
- Can you discuss the role of continuous integration and continuous deployment (CI/CD) in a DevOps workflow, and how it helps improve the quality and speed of software delivery?
- Can you explain the importance of infrastructure as code (IaC) in a DevOps environment, and how it helps maintain consistency and reproducibility in your project?
- How do DevOps practices help improve the security of an application, and what steps did you take to integrate security into your development and deployment processes?
- What challenges did you face when implementing DevOps practices in your project, and how did you overcome them?
- How can DevOps practices help optimize resource usage and reduce costs in a cloud-based environment?
- Can you explain the purpose and benefits of using GitLab and GitLab Runners in your project, and how they improve the development and deployment processes?
- What are the advantages of using Ansible for automation in your project, and how did it help you streamline the deployment of GitLab and GitLab Runners?
- Can you explain the concept of Infrastructure as Code (IaC) and how you implemented it using Terraform in your project?
- What is the purpose of using continuous integration and continuous deployment (CI/CD) pipelines, and how did it help you automate the building, testing, and deployment of your application?
- How did you ensure the security of the application throughout the pipeline stages?
- Can you explain the continuous integration (CI) pipeline you've implemented for each repository?
- Can you explain the continuous deployment (CD) pipeline you've implemented for each repository?
###### Do all of the students have a good understanding of the concepts and technologies used in the project?
###### Do all of the students have the ability to communicate effectively and explain their decisions?
###### Are all of the students capable of thinking critically about their solution and considering alternative approaches?
##### Review the GitLab and Runners Deployment:
###### Was the GitLab instance deployed and configured successfully using Ansible?
###### Are the GitLab Runners integrated with the existing pipeline and executing tasks as expected for all repositories?
##### Review the Infrastructure Pipeline:
###### Does the student deploy the infrastructure of the `cloud-design` project and the source code of `crud-master` project for two environments (staging, prod) on a cloud platform (e.g., AWS, Azure, or Google Cloud) using `Terraform`?
###### Are the two environments similar in design, resources, and services used?
###### Does the student's infrastructure configuration exist in an independent repository with a configured pipeline?
###### Are the "Init", "Validate", "Plan", "Apply to Staging", "Approval", and "Apply to production environment" stages implemented correctly in the infrastructure pipeline?
##### Review the CI Pipeline:
- `Build`: Compile and package the application.
- `Test`: Run unit and integration tests to ensure code quality and functionality.
- `Scan`: Analyze the source code and dependencies for security vulnerabilities and coding issues. Consider using tools such as `SonarQube`, `Snyk`, or `WhiteSource`.
- `Containerization`: Package the applications into Docker images using a Dockerfile, and push the images to a container registry (e.g., Docker Hub, Google Container Registry, or AWS ECR).
###### Are the Build, Test, Scan, and Containerization stages implemented correctly in the CI pipeline for each repository?
##### Review the CD Pipeline:
- `Deploy to Staging`: Deploy the application to a `staging environment` for further testing and validation.
- `Approval`: Require manual approval to proceed with deployment to the `production environment`. This step should involve stakeholders and ensure the application is ready for production.
- `Deploy to Production`: Deploy the application to the `production environment`, ensuring zero downtime and a smooth rollout.
###### Are the "Deploy to Staging", "Approval", and "Deploy to Production" stages implemented correctly in the CD pipeline for each repository?
##### Review the functionality of pipelines:
###### Are the pipelines working properly and updating the application and infrastructure after each modification in each repository?
##### Check whether the students have effectively implemented the following cybersecurity guidelines:
- `Restrict triggers to protected branches`: Ensure that the pipelines are triggered only on protected branches, preventing unauthorized users from deploying or tampering with the application. Check that access control measures are in place to minimize risk.
- `Separate credentials from code`: Confirm that the students have not stored credentials in application code or infrastructure files. Look for the use of secure methods like secret management tools or environment variables to prevent exposure or unauthorized access.
- `Apply the least privilege principle`: Assess if the students have limited user and service access to the minimum required level. This reduces potential damage in case of breaches or compromised credentials.
- `Update dependencies and tools regularly`: Check if the students have a process for keeping dependencies and pipeline tools updated. Verify if they have automated updates and monitored for security advisories and patches to minimize security vulnerabilities.
###### Are triggers restricted to protected branches, ensuring unauthorized users cannot deploy or tamper with the application?
###### Have the students separated credentials from code, using secure methods like secret management tools or environment variables?
###### Did the students apply the least privilege principle to limit user and service access to the minimum required level?
###### Do the students have a process for updating dependencies and tools regularly, automating updates, and monitoring for security advisories and patches?
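As a concrete reference, the first two guidelines often surface together in a job definition like the sketch below, where `DEPLOY_TOKEN` is a hypothetical masked and protected CI/CD variable configured in the GitLab UI rather than stored in the repository:

```yaml
deploy_production:
  stage: deploy_production
  script:
    # the credential is injected from a masked, protected CI/CD variable
    - ./scripts/deploy.sh production --token "$DEPLOY_TOKEN"
  rules:
    # run only for the protected default branch, never for forks or feature branches
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
```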
##### Review the Documentation:
###### Does the `README.md` file contain all the necessary information about the solution (prerequisites, setup, configuration, usage, ...)?
###### Is the documentation provided by the student clear and complete, including well-structured diagrams and thorough descriptions?
#### Bonus
###### +Did the student implement any feature or anything else that you would consider a bonus?
###### +Is this project an outstanding project?

15 subjects/devops/crud-master/README.md

@@ -74,11 +74,10 @@ The message it receives should be a stringified JSON object as in this example:
```json
{
"user_id": "3",
"number_of_items": "5",
"total_amount": "180"
"user_id": "3",
"number_of_items": "5",
"total_amount": "180"
}
```
It will parse the message and create a new entry in the `orders` database.
@@ -130,9 +129,9 @@ An example of POST request to `http://[API_GATEWAY_URL]:[API_GATEWAY_PORT]/api/b
```json
{
"user_id": "3",
"number_of_items": "5",
"total_amount": "180"
"user_id": "3",
"number_of_items": "5",
"total_amount": "180"
}
```
@@ -192,7 +191,7 @@ PM2 can be used to start, stop, and list Node.js applications, as well as monito
Additionally, PM2 provides a number of features for managing multiple applications, such as load balancing and automatic restarts.
In our situation, we will use it mainly to test resilience for messages sent to the Billing API when the API is not up and running.
After entering your VM via SSH, you may run the following commands:

19 subjects/devops/crud-master/audit/README.md

@@ -25,10 +25,11 @@
#### Inventory API Endpoints
##### Open Postman and make a `POST` request to `http://[GATEWAY_IP]:[GATEWAY_PORT]/api/movies/` address with the following body as `Content-Type: application/json`:
```json
{
"title": "A new movie",
"description": "Very short description"
"title": "A new movie",
"description": "Very short description"
}
```
@@ -55,11 +56,12 @@
#### Billing API Endpoints
##### Open Postman and make a `POST` request to `http://[GATEWAY_IP]:[GATEWAY_PORT]/api/billing/` address with the following body as `Content-Type: application/json`:
```json
{
"user_id": "20",
"number_of_items": "99",
"total_amount": "250"
"user_id": "20",
"number_of_items": "99",
"total_amount": "250"
}
```
@@ -70,11 +72,12 @@
###### Can you confirm the `billing_app` API was correctly stopped?
##### Open Postman and make a `POST` request to `http://[GATEWAY_IP]:[GATEWAY_PORT]/api/billing/` address with the following body as `Content-Type: application/json`:
```json
{
"user_id": "22",
"number_of_items": "10",
"total_amount": "50"
"user_id": "22",
"number_of_items": "10",
"total_amount": "50"
}
```

343 subjects/devops/orchestrator/README.md

@@ -1,171 +1,172 @@
## Orchestrator
![Orchestrator](pictures/Orchestrator.jpg)
### Objectives
In this project, you will deploy a microservices architecture on Kubernetes and
gain experience with key technologies and concepts such as Kubernetes
architecture, deployments, services, ingresses, and API gateways. Additionally,
this project will provide you with an opportunity to practice DevOps skills
such as containerization, continuous integration, and deployment (CI/CD), and
infrastructure as code (_IaC_) using Kubernetes manifests. By completing this
project, you will have a solid understanding of microservices architecture and
the tools and techniques used to deploy and manage such systems using
Kubernetes.
### Tips
- Spend time on the theory before rushing into the practice.
- Read the official documentation.
- You must understand the K8s components.
> Any lack of understanding of the concepts of this project may affect the
> difficulty of future projects, take your time to understand all concepts.
> Be curious and never stop searching!
### Architecture
![Architecture](pictures/Architecture.png)
You have to deploy this microservices architecture in a K3s cluster
consisting of the following components:
- `inventory-database container` is a PostgreSQL database server that contains
  your inventory database; it must be accessible via port `5432`.
- `billing-database container` is a PostgreSQL database server that contains
  your billing database; it must be accessible via port `5432`.
- `inventory-app container` is a server that runs your inventory-app code,
  connected to the inventory database and accessible via port `8080`.
- `billing-app container` is a server that runs your billing-app code,
  connected to the billing database, consuming the messages from the RabbitMQ
  queue, and accessible via port `8080`.
- `RabbitMQ container` is a RabbitMQ server that contains the queue.
- `api-gateway-app container` is a server that runs your API gateway code,
  forwarding requests to the other services; it's accessible via port `3000`.
<!--TODO: add link to solution-->
> Use the Dockerfiles provided [here](...)
### The cluster
Using K3s with Vagrant, you must create two virtual machines:
1. `Master`: the master in the K3s cluster.
2. `Agent`: an agent in the K3s cluster.
You must install `kubectl` on your machine to manage your cluster.
The nodes must be connected and available!
```console
$> kubectl get nodes -A
NAME            STATUS   ROLES    AGE    VERSION
<master-node>   Ready    <none>   XdXh   vX
<agent1-node>   Ready    <none>   XdXh   vX
$>
```
You must provide an `orchestrator.sh` script that creates and manages the
infrastructure:
```console
$> ./orchestrator.sh create
cluster created
$> ./orchestrator.sh start
cluster started
$> ./orchestrator.sh stop
cluster stopped
$>
```
### Docker Hub
You will need to push the Docker images for each component to Docker Hub.
> You will use it in your Kubernetes manifests.
![Docker Hub example](pictures/dockerhub-example.jpg)
### Manifests
You should create YAML manifests that describe each component or resource of
your deployment.
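For example, a single component is typically described by a Deployment plus a Service; the image name, labels, and ports below are placeholders to adapt to your own setup:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-gateway-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api-gateway-app
  template:
    metadata:
      labels:
        app: api-gateway-app
    spec:
      containers:
        - name: api-gateway-app
          image: <your-dockerhub-user>/api-gateway:latest
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: api-gateway-app
spec:
  selector:
    app: api-gateway-app
  ports:
    - port: 3000
      targetPort: 3000
```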
### Secrets
You must store your passwords and credentials as K8s Secrets.
> It's forbidden to put your passwords and credentials in the YAML manifests,
> except the secret manifests!
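A minimal sketch of such a Secret (the values are placeholders; keep real secret manifests out of shared repositories or apply them out-of-band):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: billing-db-credentials # hypothetical name
type: Opaque
stringData:
  POSTGRES_USER: admin # placeholder value
  POSTGRES_PASSWORD: change-me # placeholder value
```

Containers can then consume these keys through `envFrom` with a `secretRef`, which keeps the credentials out of every other manifest.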
### Applications deployment instructions
The following applications must be deployed as a _Deployment_, and they must
scale horizontally and automatically, depending on CPU consumption (a manifest
sketch follows the list):
- `api-gateway`: max replication: 3, min replication: 1, CPU percent trigger: 60%
- `inventory-app`: max replication: 3, min replication: 1, CPU percent trigger: 60%
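The manifest sketch mentioned above: a HorizontalPodAutoscaler matching these numbers for `api-gateway` (note that the target Deployment must declare CPU requests for the utilization metric to work):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-gateway-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-gateway-app
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60
```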
The `billing-app` must be deployed as _StatefulSet_.
### Databases
Your databases must be deployed as _StatefulSet_ in your K3s cluster, and you
must create volumes that enable containers to move across infrastructure
without losing the data.
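A sketch of what this can look like for the inventory database; the storage size and the Secret name are assumptions:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: inventory-database
spec:
  serviceName: inventory-database
  replicas: 1
  selector:
    matchLabels:
      app: inventory-database
  template:
    metadata:
      labels:
        app: inventory-database
    spec:
      containers:
        - name: postgres
          image: postgres:15
          ports:
            - containerPort: 5432
          envFrom:
            - secretRef:
                name: inventory-db-credentials # hypothetical Secret name
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```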
### Documentation
You must push a `README.md` file containing full documentation of your solution
(prerequisites, configuration, setup, usage, ...).
### Bonus
If you complete the mandatory part successfully, and you still have free time,
you can implement anything that you feel deserves to be a bonus, for example:
- Use the `Dockerfile` you have defined in your solution for
`play-with-containers`
- Deploy a Kubernetes Dashboard to monitor the cluster
- Deploy a dashboard for applications logs
- Kubernetes in the cloud?!
Challenge yourself!
### Submission and audit
You must submit the `README.md` file and all files used to create and delete
and manage your infrastructure: Vagrantfile, Dockerfiles, Manifests, ...
```console
.
├── Manifests
│ └── [...]
├── Scripts
│ └── [...]
├── Dockerfiles
│ └── [...]
└── Vagrantfile
```
If you decide to use a different structure for your project, remember that you
should be able to explain and justify your decision during the audit.
> In the audit you will be asked different questions about the concepts and the
> practice of this project, prepare yourself!
#### What's next?
In order to develop your knowledge and career as a DevOps engineer, we highly
recommend that you learn and practice more about Kubernetes and even pursue a
Kubernetes certification.
[https://kubernetes.io/training/](https://kubernetes.io/training/)

18 subjects/devops/orchestrator/audit/README.md

@@ -4,11 +4,10 @@
The repo contains a `README.md`, an `orchestrator.sh` script, a `Vagrantfile`
and all the additional files used to create, delete and manage the submitted
infrastructure.
###### Are all the required files present?
###### Does the project have a structure similar to the one below? If not, can the student provide a justification for the chosen project structure?
@@ -22,7 +21,6 @@
```console
└── Vagrantfile
```
##### Ask the following questions to the group or student
- What is container orchestration, and what are its benefits?
@@ -67,7 +65,7 @@ $>
###### Was the cluster created by a Vagrantfile?
###### Does the cluster contain two nodes (_master_ and _agent_)?
###### Are the nodes connected and ready for usage?
@@ -133,7 +131,7 @@ user:~$
###### Are all the required applications deployed?
- Databases must be deployed as _StatefulSet_, and volumes that enable containers to move across infrastructure without losing the data must be created.
- The following applications must be deployed as a deployment, and they must be scaled horizontally automatically, depending on CPU consumption:
@@ -147,17 +145,17 @@ user:~$
min replication: 1
CPU percent trigger: 60%
The `billing-app` must be deployed as _StatefulSet_.
###### Do all apps deploy with the correct configuration?
##### Ask the following questions to the group or student
- What is _StatefulSet_ in K8s?
- What is _deployment_ in K8s?
- What is the difference between _deployment_ and _StatefulSet_ in K8s?
- What is scaling, and why do we use it?
@@ -228,7 +226,7 @@ In less than 15 minutes and with the help of Google the student must explain all
#### Bonus
###### +Did the student use his/her own `play-with-containers` solution instead of the provided one?
###### +Did the student add any optional bonus?

1 subjects/devops/play-with-containers/README.md

@@ -50,6 +50,7 @@ You have to implement this architecture:
![architecture](./resources/play-with-containers-py.png)
<!--TODO: add link to solution-->
You will use the services described in the `crud-master` project. [Here](...)
is a working solution that you can use to solve this project.

19 subjects/devops/play-with-containers/audit/README.md

@@ -173,10 +173,11 @@ user:~$
#### Inventory API Endpoints
##### Open Postman and make a `POST` request to `http://[GATEWAY_IP]:[GATEWAY_PORT]/api/movies/` address with the following body as `Content-Type: application/json`:
```json
{
"title": "A new movie",
"description": "Very short description"
"title": "A new movie",
"description": "Very short description"
}
```
@@ -189,11 +190,12 @@ user:~$
#### Billing API Endpoints
##### Open Postman and make a `POST` request to `http://[GATEWAY_IP]:[GATEWAY_PORT]/api/billing/` address with the following body as `Content-Type: application/json`:
```json
{
"user_id": "20",
"number_of_items": "99",
"total_amount": "250"
"user_id": "20",
"number_of_items": "99",
"total_amount": "250"
}
```
@@ -204,11 +206,12 @@ user:~$
###### Can you confirm the `billing-app` container was correctly stopped?
##### Open Postman and make a `POST` request to `http://[GATEWAY_IP]:[GATEWAY_PORT]/api/billing/` address with the following body as `Content-Type: application/json`:
```json
{
"user_id": "22",
"number_of_items": "10",
"total_amount": "50"
"user_id": "22",
"number_of_items": "10",
"total_amount": "50"
}
```

130 subjects/devops/road-to-dofd/README.md

@@ -1,65 +1,65 @@
## Road-To-DOFD
### Objective
The objective of this subject is to prepare you for the DevOps Foundation Certification exam from the DevOps Institute. The DevOps Foundation Certification is designed for individuals who have a basic understanding of DevOps principles and practices. It validates their knowledge of DevOps concepts, culture, collaboration, and tools.
### Why?
Certification in DevOps is crucial as it validates individuals' expertise, enhances their credibility, and opens up new career opportunities. Certification also establishes industry standards and promotes consistency within the DevOps community, ensuring a high level of competence and professionalism.
### Exam Overview
The DevOps Foundation Certification exam covers the following domains:
1. DevOps Basics
2. Culture and Organization
3. Processes and Practices
4. Automation and Tooling
5. Measurement and Metrics
6. Sharing and Communication
### Study Materials
You should refer to the following study materials to prepare for the exam:
- DevOps Institute DevOps Foundation Exam Guide English version: [DOFD-v3.3-English-Exam-Study-Guide_01Mar2021.pdf](https://www.devopsinstitute.com/wp-content/uploads/2021/03/DOFD-v3.3-English-Exam-Study-Guide_01Mar2021.pdf)
This Exam Guide provides an overview of the topics covered in the exam, learning objectives, questions, and recommended study resources.
### Exam Preparation Tips
We recommend you follow these tips to enhance your exam preparation:
1. Understand DevOps Principles: Gain a clear understanding of DevOps principles, values, and goals. Learn how DevOps promotes collaboration, automation, and continuous improvement in software development and IT operations.
2. Study the Exam Syllabus: Review the exam syllabus provided by the DevOps Institute. Understand the topics, learning objectives, and weightage of each domain. Focus your study efforts accordingly.
3. Read Recommended Books and Resources: Read books, articles, and whitepapers recommended by the DevOps Institute. These resources provide in-depth knowledge of DevOps practices, tools, and case studies.
4. Practice with Sample Questions: Solve sample questions regularly to familiarize yourself with the exam format and types of questions. Identify areas where you need improvement and revisit the relevant study materials.
5. Join DevOps Communities: Engage with DevOps communities, forums, and discussion groups. Participate in discussions, ask questions, and learn from the experiences of others in the field.
6. Hands-on Experience: Gain practical experience by working on real-world projects or simulations. Practice implementing DevOps practices, automation, and tools in a test environment.
7. Time Management: Develop a study plan and allocate sufficient time for each domain. Practice time management during the exam to ensure you answer all questions within the given time limit.
### Certification Exam
You can get more information or schedule the DevOps Foundation Certification exam through this link:
https://www.devopsinstitute.com/certifications/devops-foundation/
### Exam Tips
- If you are using a voucher for the certification exam, make sure that you choose the right certification when redeeming the voucher on the exam platform (you could mistakenly redeem the DOFD voucher against a different certification).
- Using an AI tool to answer the exam questions is not recommended (it may answer them incorrectly).
- Take the exam in a clean, quiet room so that you can concentrate.
- Read the questions carefully during the exam.
### Audit
Your effort for the certification preparation will be audited through a series of questions and topics covered in an audit session. You will be asked to provide explanations, demonstrate your understanding, and answer questions related to the DevOps Foundation Certification exam. The audit will assess your knowledge and proficiency in the domains covered by the exam guide, ensuring that you are well-prepared to successfully pass the certification exam.

78 subjects/devops/road-to-dofd/audit/README.md

@@ -1,39 +1,39 @@
#### General
##### Ask the student to describe what they have done to prepare for the DevOps Institute Foundation Examination.
###### Did the student utilize the recommended study materials for the exam?
###### Was the student familiar with the DevOps Institute DevOps Foundation Exam Guide?
###### Did the student read relevant books, articles, and whitepapers on DevOps?
###### Did the student show proficiency in the domains covered in the exam guide?
##### Ask the student to describe what they have learned while preparing for the DevOps Institute Foundation Examination.
###### Does the student demonstrate a clear understanding of DevOps principles and concepts?
###### Is the student able to explain the benefits of DevOps practices in software development and IT operations?
###### Is the student able to explain the importance of DevOps culture and collaboration?
###### Does the student showcase knowledge of key DevOps processes and practices?
###### Is the student able to identify common automation and tooling used in DevOps?
###### Does the student understand the significance of measurement and metrics in DevOps?
###### Is the student able to explain the importance of sharing and communication in a DevOps environment?
###### Is the student able to provide examples of real-world implementations of DevOps practices?
##### Choose 15 questions from the Sample Examination 1 and 2 sections in the [Exam guide](https://www.devopsinstitute.com/wp-content/uploads/2021/03/DOFD-v3.3-English-Exam-Study-Guide_01Mar2021.pdf).
You can find the answers in the ANSWER KEY after each Sample Examination section.
###### Did the learner answer at least 10 questions correctly?
#### Bonus
###### +Did the learner answer more than 14 questions correctly?
