
docs(crud-master): include api-gateway part

move documentation to gateway part
add tree vue of file structure
DEV-4415-crud-master-api-definition
Michele Sessa 1 year ago
commit b00ff75247
  1. subjects/devops/crud-master/README.md (114)
  2. subjects/devops/crud-master/audit/README.md (160)

subjects/devops/crud-master/README.md

### Instructions
APIs are a very common and convenient way to deploy services in a modular way.
In this exercise we will create a simple API infrastructure, with an API Gateway connected to two other APIs.
Those two APIs will in turn get data from two distinct databases.
The communication between the APIs will be done using HTTP and a message queuing system.
All those services will in turn be encapsulated in different virtual machines.
#### General overview
We will set up a movie streaming platform, where one API (`inventory`) will hold information on the movies available and another one (`orders`) will process the payments.
The API Gateway will communicate with `inventory` using HTTP and with `orders` using RabbitMQ.
In this exercise you will need to install Node.js (with Express, Sequelize and other packages), PostgreSQL and RabbitMQ.
While it may seem overwhelming at first, there are plenty of resources available about setting up those tools, both on the official websites and on community blogs.
Also, the specific configuration details may change from platform to platform, so don't hesitate to experiment, and be sure to check that everything is installed correctly before moving on.
#### API 1: Inventory
##### Definition of the API
This API will be a CRUD (Create, Read, Update, Delete) RESTful API.
It will use a PostgreSQL database.
It will provide information about the movies present in the inventory and allow users to perform basic operations on them.
A common way to do so is to use Express (TODO add a link?), a popular Node.js web framework.
We will couple it with Sequelize (TODO add a link?), an ORM (TODO add a link?) that will abstract and simplify the interactions between our API and the database.
Here are the endpoints with the possible HTTP requests:
- `/movies`: GET, POST, DELETE
- `/movies/:id`: GET, PUT, DELETE
- `/movies/available`: GET
Some details about each one of them:
- `GET /movies` retrieve all the movies.
- `GET /movies?title=[name]` retrieve all the movies with `name` in the title.
- `POST /movies` create a new movie entry.
- `DELETE /movies` delete all the movies in the database.
- `GET /movies/:id` retrieve a single movie by `id`.
- `PUT /movies/:id` update a single movie by `id`.
- `DELETE /movies/:id` delete a single movie by `id`.
- `GET /movies/available` retrieve all the available movies.
The API should work on `http://localhost:8080/`.
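To make the query behaviour above concrete, here is a small sketch in plain JavaScript of how the `title` filter could work; the `movies` array and the `filterByTitle` helper are illustrative names, not part of the subject:

```javascript
// Illustrative sketch of the `GET /movies?title=[name]` behaviour:
// return every movie whose title contains the given substring,
// case-insensitively.
function filterByTitle(movies, name) {
  const needle = String(name).toLowerCase();
  return movies.filter((movie) => movie.title.toLowerCase().includes(needle));
}

const movies = [
  { id: 1, title: 'A New Hope', description: '...' },
  { id: 2, title: 'The Empire Strikes Back', description: '...' },
];

console.log(filterByTitle(movies, 'hope').map((m) => m.id)); // [ 1 ]
```

In an Express handler this logic would simply read `name` from `req.query.title` before querying the database.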
##### Defining the Database
The database will be built with PostgreSQL and it will be called `movies`.
Each movie will have the following columns:
- `id`: autogenerated unique identifier.
- `title`: the title of the movie.
- `description`: the description of the movie.
(TODO add a link on how to set it up?)
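To make the row shape concrete, here is a small illustrative sketch in plain JavaScript; in the real project the schema would be defined through Sequelize or SQL, and `makeMovie` with its counter is a hypothetical stand-in for the database's auto-increment:

```javascript
// Illustrative sketch of the `movies` row shape described above.
// `id` is autogenerated here with a counter to mimic PostgreSQL's
// auto-increment behaviour; a real setup would let the database do it.
let nextId = 1;
function makeMovie(title, description) {
  return { id: nextId++, title, description };
}

const movie = makeMovie('Alien', 'A classic sci-fi horror film.');
console.log(movie);
// { id: 1, title: 'Alien', description: 'A classic sci-fi horror film.' }
```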
##### Testing the API
In order to test the correctness of your API you should use Postman (TODO add a link?). You could create one or more tests for every endpoint and then export the configuration, so you will be able to reproduce the tests easily on different machines.
#### API 2: Orders
#### The API Gateway
The Gateway will be the only service accessible by the user; it will take care of routing the requests to the appropriate API using the right protocol (HTTP for API1 or RabbitMQ for API2).
The API Gateway should work on `http://localhost:3000/`.
##### Interfacing with API1
The gateway will route all requests made to `/api/movies` to API1, without any need to inspect the information passed through it.
It will return the exact response received from API1.
In order to achieve this it will be necessary to set up a proxy system.
You could check the `http-proxy-middleware` npm package for this purpose.
##### Interfacing with API2
The gateway will receive POST requests on `/api/orders` and send a message using RabbitMQ in a queue called `task_queue`.
The content of the message will be the POST request body stringified with `JSON.stringify`.
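To make the message format explicit, here is a small sketch; the `buildMessage` helper and the order fields are illustrative, not prescribed by the subject:

```javascript
// The gateway publishes the POST body stringified with JSON.stringify;
// the consumer on the other side of the queue parses it back.
function buildMessage(body) {
  return JSON.stringify(body);
}

// Hypothetical order body, for illustration only.
const order = { user_id: 3, number_of_items: 5, total_amount: 180 };
const message = buildMessage(order);

console.log(message); // {"user_id":3,"number_of_items":5,"total_amount":180}
console.log(JSON.parse(message).total_amount); // 180
```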
##### Documenting the API
Good documentation is a very critical feature of every API. By design, APIs are meant for others to use, so there have been very good efforts to create standard and easy-to-implement ways to document them.
As an introduction to the art of great documentation you must create an OpenAPI documentation file for the API Gateway. There are many different ways to do so; a good start could be using SwaggerHub (TODO add a link?) with at least a meaningful description for each endpoint. Feel free to implement further, more complex features.
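As a starting point, a minimal OpenAPI 3 skeleton for the Gateway could look like the following; the titles, summaries, and status codes are placeholders to adapt to your own implementation:

```yaml
openapi: 3.0.3
info:
  title: crud-master API Gateway
  description: Routes requests to the inventory and orders APIs.
  version: 1.0.0
servers:
  - url: http://localhost:3000
paths:
  /api/movies:
    get:
      summary: Retrieve all the movies (proxied to the inventory API).
      responses:
        "200":
          description: A list of movies.
  /api/orders:
    post:
      summary: Queue an order through RabbitMQ.
      responses:
        "200":
          description: Order accepted.
```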
#### Overall file structure
You can organize your internal file structure as you prefer. That said, here is a common way to structure this kind of project that may help you:
```console
.
├── inventory
│   ├── app
│   │   ├── config
│   │   ├── controllers
│   │   ├── models
│   │   └── routes
│   ├── node_modules
│   ├── package.json
│   ├── package-lock.json
│   └── server.js
├── orders
│   ├── app
│   │   ├── config
│   │   ├── controllers
│   │   └── models
│   ├── node_modules
│   ├── package.json
│   ├── package-lock.json
│   └── server.js
└── api-gateway
    ├── node_modules
    ├── package.json
    ├── package-lock.json
    ├── proxy.js
    ├── routes.js
    └── server.js
```
You should be able to start the API Gateway and the two APIs by using the command `node server.js` inside their respective directories.
The Gateway should be able to send messages to API2 even if that API is not running. When API2 is started, it should be able to process that message and send an acknowledgement back.

subjects/devops/crud-master/audit/README.md

#### Functional
##### Before we start, let's make sure we have a fresh environment by running this command `rm -dr logs backups backup_schedules.txt`.
###### Can you confirm that the `backup_manager.py` and `backup_service.py` are present?
##### Run the following command `python3 ./backup_manager.py create "test2;18:15;backup_test2"`.
###### Can you confirm that the `backup_schedules.txt` was created?
##### Run the following command `cat backup_schedules.txt`.
```bash
$ cat backup_schedules.txt
test2;18:15;backup_test2
```
###### Was the schedule created properly, with the same format as the example above?
##### Run the following command `python3 ./backup_manager.py stop`.
###### Can you confirm that the `logs` folder was created and inside of it is the `backup_manager.log`?
##### Now run the command `cat logs/backup_manager.log`.
```bash
$ cat logs/backup_manager.log
[13/02/2023 17:14] Error: cannot kill the service
```
###### Can you confirm that the `backup_manager.log` file contains an error like the one above, stating that the service is already stopped?
##### Create a couple more backup schedules using the command `python3 ./backup_manager.py create "file;HH:MM;backup_name"`.
##### Run the command `python3 ./backup_manager.py list`.
```bash
$ python3 ./backup_manager.py list
0: test2;18:15;backup_test2
1: test3;18:16;backup_test3
2: test4;18:17;backup_test4
```
###### Did you get a result like the one above in your terminal, with each schedule you created listed with an index at the beginning?
##### Run the command `python3 ./backup_manager.py delete 1`.
```bash
$ python3 ./backup_manager.py list
0: test2;18:15;backup_test2
1: test4;18:17;backup_test4
```
###### Was the task with the index `1` removed from the list like in the example above?
###### Did the task with the index `2` become the task with the index `1` after you removed the old one from the list?
##### Verify that the following command runs without errors: `python3 backup_manager.py start`.
###### Can you confirm that the `backup_service.py` process is running? (For example you could use `ps -ef | grep backup_service`).
##### Now run the command `python3 backup_manager.py start` again.
```bash
$ python3 ./backup_manager.py start
$ cat logs/backup_manager.log
...
[13/02/2023 17:14] Error: service already running
```
###### Can you confirm that the last log states that the service is already running? (You can use `cat logs/backup_manager.log` to confirm.)
##### Run `python3 backup_manager.py stop` and then `rm -dr logs backups backup_schedules.txt`. Let's create the backup manually.
##### Run the following commands in order:
```bash
mkdir testing testing2; touch testing/file1 testing/file2 testing/file3
python3 ./backup_manager.py create "testing;[Current_hour];backup_test"
python3 ./backup_manager.py create "testing2;13:11;passed_time_backup"
python3 ./backup_manager.py start
python3 ./backup_manager.py backups
```
Supposing the commands were run at 18:21, here is an example run of the commands above:
```console
$ ls
backup_manager.py backup_service.py testing/
$ python3 ./backup_manager.py create "testing;18:21;backup_test"
$ python3 ./backup_manager.py create "testing2;13:11;passed_time_backup"
$ python3 ./backup_manager.py list
0: testing;18:21;backup_test
1: testing2;13:11;passed_time_backup
$ python3 ./backup_manager.py start
$ python3 ./backup_manager.py backups
backup_test.tar
$ ls
backup_manager.py backups/ backup_schedules.txt backup_service.py logs/ testing/
$ cat logs/backup_manager.log
[13/02/2023 18:21] Schedule created
[13/02/2023 18:21] Show schedule list
[13/02/2023 18:21] Backup list
$ cat backup_schedules.txt
testing;18:21;backup_test
$ cat logs/backup_service.log
[13/02/2023 18:21] Backup done for testing in backups/backup_test.tar
```
##### Follow the example above and then open the `backup_test.tar` file to ensure that the backup process was successful. Verify that the files are not empty or damaged and that it matches the original directory.
###### Was the backup created successfully?
###### Can you confirm that the `passed_time_backup` was not created successfully because the time has already passed?
##### Run `python3 backup_manager.py stop` and then `rm -dr logs backups backup_schedules.txt`.
##### Create a `.zip` with some folders and files inside it and then replicate the steps you just did above to check if the backup is created successfully.
###### Was the backup created successfully? (open the backup and verify that the files are not empty or damaged and that it matches the original directory.)
#### Error handling
##### Verify error handling for incorrect commands and incorrect arguments. For example, confirm that an error message is logged when attempting to run the command `python3 backup_manager.py invalid_command`.
```bash
$ python3 backup_manager.py invalid_command
$ cat logs/backup_service.log
...
[13/02/2023 18:21] Error: unknown instruction
```
###### Was the error message logged correctly?
##### Run `python3 backup_manager.py stop` and then `rm -dr logs backups backup_schedules.txt`. After that run the following commands:
```bash
python3 backup_manager.py stop
python3 ./backup_manager.py create "wrong_format"
python3 ./backup_manager.py backups
python3 ./backup_manager.py start
python3 ./backup_manager.py start
```
```console
$ cat logs/backup_manager.log
[14/02/2023 15:07] Error: cannot kill the service
[14/02/2023 15:07] Error: invalid schedule format
[14/02/2023 15:08] Error: [Errno 2] No such file or directory: "./backups"
[14/02/2023 15:08] Error: service already running
$ cat logs/backup_service.log
[14/02/2023 15:11] Error: cannot open backup_schedules
$
```
###### Can you confirm that all error messages have been saved in the log files like in the example above?
##### Go to the code of the project and check if the creator used `try` and `except` to handle the errors.
###### Did they use `try` and `except` to handle the errors?