From b37fa9c39eba4c1b972e2fe2fa852c718a6f8cc8 Mon Sep 17 00:00:00 2001 From: Badr Ghazlane Date: Sat, 25 Sep 2021 19:13:30 +0200 Subject: [PATCH] fix: structure day03 exercises --- .../week02/day03/audit/readme.md | 0 .../week02/day03/ex01/audit/readme.md | 21 +++ .../week02/day03/ex01/readme.md | 21 +++ .../week02/day03/ex02/audit/readme.md | 20 +++ .../week02/day03/ex02/readme.md | 31 +++++ .../week02/day03/ex03/audit/readme.md | 17 +++ .../week02/day03/ex03/readme.md | 35 +++++ .../week02/day03/ex04/audit/readme.md | 17 +++ .../week02/day03/ex04/readme.md | 19 +++ .../week02/day03/ex05/audit/readme.md | 59 +++++++++ .../week02/day03/ex05/readme.md | 123 ++++++++++++++++++ .../week02/day03/ex06/audit/readme.md | 11 ++ .../week02/day03/ex06/readme.md | 28 ++++ one_exercise_per_file/week02/day03/readme.md | 35 +++++ 14 files changed, 437 insertions(+) delete mode 100644 one_exercise_per_file/week02/day03/audit/readme.md create mode 100644 one_exercise_per_file/week02/day03/ex01/audit/readme.md create mode 100644 one_exercise_per_file/week02/day03/ex01/readme.md create mode 100644 one_exercise_per_file/week02/day03/ex02/audit/readme.md create mode 100644 one_exercise_per_file/week02/day03/ex02/readme.md create mode 100644 one_exercise_per_file/week02/day03/ex03/audit/readme.md create mode 100644 one_exercise_per_file/week02/day03/ex03/readme.md create mode 100644 one_exercise_per_file/week02/day03/ex04/audit/readme.md create mode 100644 one_exercise_per_file/week02/day03/ex04/readme.md create mode 100644 one_exercise_per_file/week02/day03/ex05/audit/readme.md create mode 100644 one_exercise_per_file/week02/day03/ex05/readme.md create mode 100644 one_exercise_per_file/week02/day03/ex06/audit/readme.md create mode 100644 one_exercise_per_file/week02/day03/ex06/readme.md diff --git a/one_exercise_per_file/week02/day03/audit/readme.md b/one_exercise_per_file/week02/day03/audit/readme.md deleted file mode 100644 index e69de29..0000000 diff --git 
a/one_exercise_per_file/week02/day03/ex01/audit/readme.md b/one_exercise_per_file/week02/day03/ex01/audit/readme.md new file mode 100644 index 0000000..8ecb4b3 --- /dev/null +++ b/one_exercise_per_file/week02/day03/ex01/audit/readme.md @@ -0,0 +1,21 @@ +1. This question is validated if `imp_mean.statistics_` returns: + + ```console + array([ 4., 13., 6.]) + ``` + +2. This question is validated if the filled train set is: + + ```console + array([[ 7., 6., 5.], + [ 4., 13., 5.], + [ 1., 20., 8.]]) + ``` + +3. This question is validated if the filled test set is: + + ```console + array([[ 4., 1., 2.], + [ 7., 13., 9.], + [ 4., 2., 4.]]) + ``` \ No newline at end of file diff --git a/one_exercise_per_file/week02/day03/ex01/readme.md b/one_exercise_per_file/week02/day03/ex01/readme.md new file mode 100644 index 0000000..232533a --- /dev/null +++ b/one_exercise_per_file/week02/day03/ex01/readme.md @@ -0,0 +1,21 @@ +# Exercise 1 Imputer 1 + +The goal of this exercise is to learn how to use an Imputer to fill missing values on a basic example. + +```python +train_data = [[7, 6, 5], + [4, np.nan, 5], + [1, 20, 8]] +``` + +1. Fit the `SimpleImputer` on the data. Print the `statistics_`. Check that the statistics match `np.nanmean(train_data, axis=0)`. + +2. Fill the missing values in `train_data` using the fitted imputer and `transform`. + +3. Fill the missing values in `test_data` using the fitted imputer and `transform`. + +```python +test_data = [[np.nan, 1, 2], + [7, np.nan, 9], + [np.nan, 2, 4]] +``` \ No newline at end of file diff --git a/one_exercise_per_file/week02/day03/ex02/audit/readme.md b/one_exercise_per_file/week02/day03/ex02/audit/readme.md new file mode 100644 index 0000000..ec274ba --- /dev/null +++ b/one_exercise_per_file/week02/day03/ex02/audit/readme.md @@ -0,0 +1,20 @@ +1. This question is validated if the scaled train set is: + +```console +array([[ 0. , -1.22474487, 1.33630621], + [ 1.22474487, 0.
, -0.26726124], + [-1.22474487, 1.22474487, -1.06904497]]) +``` + +- The mean on axis 0 should return: + - array([0., 0., 0.]) +- The std on axis 0 should return: + - array([1., 1., 1.]) + +2. This question is validated if the scaled test set is: + +```console +array([[ 1.22474487, -1.22474487, 0.53452248], + [ 2.44948974, 3.67423461, -1.06904497], + [ 0. , 1.22474487, 0.53452248]]) +``` diff --git a/one_exercise_per_file/week02/day03/ex02/readme.md b/one_exercise_per_file/week02/day03/ex02/readme.md new file mode 100644 index 0000000..3ff51b2 --- /dev/null +++ b/one_exercise_per_file/week02/day03/ex02/readme.md @@ -0,0 +1,31 @@ + +# Exercise 2 Scaler +
+The goal of this exercise is to learn to scale a data set. There are various scaling techniques; we will focus on `StandardScaler` from scikit-learn. + +We will use a tiny data set for this exercise that we will generate ourselves: + +```python +X_train = np.array([[ 1., -1., 2.], + [ 2., 0., 0.], + [ 0., 1., -1.]]) +``` + +1. Fit the `StandardScaler` on the data and scale X_train using `fit_transform`. Compute the `mean` and `std` on `axis 0`. + +2. Scale the test set using the `StandardScaler` fitted on the train set. + +```python +X_test = np.array([[ 2., -1., 1.], + [ 3., 3., -1.], + [ 1., 1., 1.]]) +``` + +**WARNING: +If the data is split into a train and a test set, it is extremely important to apply the same scaling to the test data.
As the model is trained on scaled data, it returns incorrect values if it is given unscaled input.** + +Resources: + +- https://medium.com/technofunnel/what-when-why-feature-scaling-for-machine-learning-standard-minmax-scaler-49e64c510422 + +- https://scikit-learn.org/stable/modules/preprocessing.html \ No newline at end of file diff --git a/one_exercise_per_file/week02/day03/ex03/audit/readme.md b/one_exercise_per_file/week02/day03/ex03/audit/readme.md new file mode 100644 index 0000000..b9b7b2b --- /dev/null +++ b/one_exercise_per_file/week02/day03/ex03/audit/readme.md @@ -0,0 +1,17 @@ +1. This question is validated if the output is: + + | | ('C++',) | ('Java',) | ('Python',) | + |---:|-----------:|------------:|--------------:| + | 0 | 0 | 0 | 1 | + | 1 | 0 | 1 | 0 | + | 2 | 0 | 1 | 0 | + | 3 | 1 | 0 | 0 | + +2. This question is validated if the output is: + + | | ('C++',) | ('Java',) | ('Python',) | + |---:|-----------:|------------:|--------------:| + | 0 | 0 | 0 | 1 | + | 1 | 0 | 1 | 0 | + | 2 | 0 | 0 | 0 | + | 3 | 1 | 0 | 0 | diff --git a/one_exercise_per_file/week02/day03/ex03/readme.md b/one_exercise_per_file/week02/day03/ex03/readme.md new file mode 100644 index 0000000..a51ba92 --- /dev/null +++ b/one_exercise_per_file/week02/day03/ex03/readme.md @@ -0,0 +1,35 @@ +# Exercise 3 One hot Encoder + +The goal of this exercise is to learn how to deal with categorical variables using the One Hot Encoder. + +```python +X_train = [['Python'], ['Java'], ['Java'], ['C++']] +``` + +1. Using `OneHotEncoder` with `handle_unknown='ignore'`, fit the One Hot Encoder and transform X_train. The expected output is: + + | | ('C++',) | ('Java',) | ('Python',) | + |---:|-----------:|------------:|--------------:| + | 0 | 0 | 0 | 1 | + | 1 | 0 | 1 | 0 | + | 2 | 0 | 1 | 0 | + | 3 | 1 | 0 | 0 | + + To get this output, create a DataFrame from the transformed X_train and the attribute `categories_`. + +2. Transform X_test using the One Hot Encoder fitted on the train set.
+ +```python +X_test = [['Python'], ['Java'], ['C'], ['C++']] +``` + +The expected output is: + + | | ('C++',) | ('Java',) | ('Python',) | + |---:|-----------:|------------:|--------------:| + | 0 | 0 | 0 | 1 | + | 1 | 0 | 1 | 0 | + | 2 | 0 | 0 | 0 | + | 3 | 1 | 0 | 0 | + +- https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html diff --git a/one_exercise_per_file/week02/day03/ex04/audit/readme.md b/one_exercise_per_file/week02/day03/ex04/audit/readme.md new file mode 100644 index 0000000..b590567 --- /dev/null +++ b/one_exercise_per_file/week02/day03/ex04/audit/readme.md @@ -0,0 +1,17 @@ +1. This question is validated if the output of the Ordinal Encoder on the train set is: + +```console +array([[2.], + [0.], + [1.]]) +``` + +Check that `enc.categories_` returns `[array(['bad', 'neutral', 'good'], dtype=object)]`. + +2. This question is validated if the output of the Ordinal Encoder on the test set is: + +```console +array([[2.], + [2.], + [0.]]) +``` \ No newline at end of file diff --git a/one_exercise_per_file/week02/day03/ex04/readme.md b/one_exercise_per_file/week02/day03/ex04/readme.md new file mode 100644 index 0000000..6f5d33f --- /dev/null +++ b/one_exercise_per_file/week02/day03/ex04/readme.md @@ -0,0 +1,19 @@ +# Exercise 4 Ordinal Encoder + +The goal of this exercise is to learn how to deal with categorical variables using the Ordinal Encoder. + +In this case, we want the model to consider that: **good > neutral > bad** + +```python +X_train = [['good'], ['bad'], ['neutral']] +``` + +1. Fit the `OrdinalEncoder` by specifying the categories in the following order: `categories=[['bad', 'neutral', 'good']]`. Transform the train set. Print the `categories_`. + +2. Transform X_test using the Ordinal Encoder fitted on the train set. + +```python +X_test = [['good'], ['good'], ['bad']] +``` + +*Note: In version 0.22 of Scikit-learn, the Ordinal Encoder doesn't handle new values in the test set.
This will be possible in version 0.24!* \ No newline at end of file diff --git a/one_exercise_per_file/week02/day03/ex05/audit/readme.md b/one_exercise_per_file/week02/day03/ex05/audit/readme.md new file mode 100644 index 0000000..bbd1d94 --- /dev/null +++ b/one_exercise_per_file/week02/day03/ex05/audit/readme.md @@ -0,0 +1,59 @@ +1. This question is validated if the number of unique values per feature is: + +```console +age 3 +menopause 11 +tumor-size 7 +inv-nodes 2 +node-caps 3 +deg-malig 2 +breast 5 +breast-quad 2 +irradiat 2 +dtype: int64 +``` + +2. This question is validated if the test set transformed by the `OneHotEncoder` fitted on the train set is: + + ```console + First 10 rows: + + array([[1., 0., 1., 0., 0., 1., 0., 0., 0., 1., 0., 1., 0.], + [1., 0., 1., 0., 0., 1., 0., 0., 0., 1., 0., 1., 0.], + [0., 1., 1., 0., 0., 1., 0., 0., 0., 0., 1., 0., 1.], + [0., 1., 1., 0., 0., 1., 0., 0., 0., 0., 1., 1., 0.], + [1., 0., 1., 0., 0., 0., 1., 0., 0., 1., 0., 0., 1.], + [1., 0., 1., 0., 0., 0., 0., 1., 0., 1., 0., 1., 0.], + [1., 0., 0., 1., 0., 0., 0., 0., 1., 1., 0., 1., 0.], + [1., 0., 0., 1., 0., 1., 0., 0., 0., 1., 0., 1., 0.], + [1., 0., 1., 0., 0., 0., 0., 1., 0., 0., 1., 0., 1.], + [1., 0., 0., 1., 0., 1., 0., 0., 0., 1., 0., 0., 1.]]) + ``` + +3. This question is validated if the test set transformed by the `OrdinalEncoder` fitted on the train set is: + + ```console + First 10 rows: + + array([[2., 2., 0., 1.], + [2., 2., 0., 0.], + [2., 4., 5., 2.], + [1., 5., 1., 1.], + [2., 5., 0., 2.], + [1., 1., 0., 1.], + [1., 8., 0., 1.], + [2., 2., 0., 0.], + [2., 5., 0., 2.], + [1., 3., 0., 0.]]) + ``` + +4.
This question is validated if the column transformer, fitted on X_train, transforms X_test as: + +```console +# First 2 rows: + +array([[1., 0., 1., 0., 0., 1., 0., 0., 0., 1., 0., 1., 0., 2., 2., 0., + 1.], + [1., 0., 1., 0., 0., 1., 0., 0., 0., 1., 0., 1., 0., 2., 2., 0., + 0.]]) +``` \ No newline at end of file diff --git a/one_exercise_per_file/week02/day03/ex05/readme.md b/one_exercise_per_file/week02/day03/ex05/readme.md new file mode 100644 index 0000000..8972e1f --- /dev/null +++ b/one_exercise_per_file/week02/day03/ex05/readme.md @@ -0,0 +1,123 @@ +# Exercise 5 Categorical variables + +The goal of this exercise is to learn how to deal with categorical variables with the Ordinal Encoder, the Label Encoder and the One Hot Encoder. + +Preliminary: + +- Load the breast-cancer.csv file +- Drop the `Class` column +- Drop NaN values +- Split the data into a train set and a test set (test set size = 20% of the total size) with `random_state=43`. + +1. Count the number of unique values per feature in the train set. + +2. Identify the ordinal variables, the nominal variables and the target. Create one One Hot Encoder for all non-ordinal categorical features.
Here are the assumptions made on the variables: + +```console +age: Ordinal +['ge40' > 'premeno' > 'lt40'] + +menopause: Ordinal +['50-54' > '45-49' > '40-44' > '35-39' > '30-34' > '25-29' > '20-24' > '15-19' > '10-14' > '5-9' > '0-4'] + +tumor-size: Ordinal +['15-17' > '12-14' > '9-11' > '6-8' > '3-5' > '0-2'] + +inv-nodes: One Hot +['yes' 'no'] + +node-caps: Ordinal +[3 > 2 > 1] + +deg-malig: One Hot +['left' 'right'] + +breast: One Hot +['right_low' 'left_low' 'left_up' 'central' 'right_up'] + +breast-quad: One Hot +['yes' 'no'] + +irradiat: One Hot +['recurrence-events' 'no-recurrence-events'] +``` + +- Fit on the train set + +- Transform the test set + +Example of expected output: + +```console +# One Hot encoder on: ['inv-nodes', 'deg-malig', 'breast', 'breast-quad', 'irradiat'] + +input: ohe.transform(df[ohe_cols]) +output: +array([[0., 1., 0., ..., 0., 0., 1.], + [1., 0., 0., ..., 0., 1., 0.], + [1., 0., 1., ..., 0., 0., 1.], + ..., + [0., 1., 0., ..., 0., 1., 0.], + [1., 0., 0., ..., 0., 1., 0.], + [1., 0., 1., ..., 0., 1., 0.]]) + +input: ohe.get_feature_names(ohe_cols) +output: +array(['inv-nodes_no', 'inv-nodes_yes', 'deg-malig_left', + 'deg-malig_right', 'breast_central', 'breast_left_low', + 'breast_left_up', 'breast_right_low', 'breast_right_up', + 'breast-quad_no', 'breast-quad_yes', + 'irradiat_no-recurrence-events', 'irradiat_recurrence-events'], + dtype=object) + +``` + +3. Create one Ordinal Encoder for all ordinal features. The Scikit-learn documentation is not clear on how to apply this to many columns at once.
Here's a **hint**: + +If the ordinal data set is (a subset of two columns, keeping all rows for this example): + + | | age | node-caps | + |---:|:--------|------------:| + | 0 | premeno | 3 | + | 1 | ge40 | 1 | + | 2 | ge40 | 2 | + | 3 | premeno | 3 | + | 4 | premeno | 2 | + +The first step is to create a dictionary: + +```python +dict_ = {0: ['lt40', 'premeno', 'ge40'], 1: [1, 2, 3]} +``` + +Then instantiate an `OrdinalEncoder`: + +```python +oe = OrdinalEncoder(dict_) +``` + +Now that you have enough information: + +- Fit on the train set +- Transform the test set + +4. Use a `make_column_transformer` to combine the two Encoders. + +- Fit on the train set +- Transform the test set + +*Hint: Check the first resource.* + +**Note: Version 0.22 of Scikit-learn can't handle `get_feature_names` on `OrdinalEncoder`. If the column transformer contains an `OrdinalEncoder`, the method returns this error**: + +```console +AttributeError: Transformer ordinalencoder (type OrdinalEncoder) does not provide get_feature_names. +``` + +**It means that if you want to use the Ordinal Encoder, you will have to create a variable that contains the column names in the right order. This step is not required in this exercise.** + +Resources: + +- https://towardsdatascience.com/guide-to-encoding-categorical-features-using-scikit-learn-for-machine-learning-5048997a5c79 + +- https://machinelearningmastery.com/one-hot-encoding-for-categorical-data/ \ No newline at end of file diff --git a/one_exercise_per_file/week02/day03/ex06/audit/readme.md b/one_exercise_per_file/week02/day03/ex06/audit/readme.md new file mode 100644 index 0000000..c365502 --- /dev/null +++ b/one_exercise_per_file/week02/day03/ex06/audit/readme.md @@ -0,0 +1,11 @@ +1.
This question is validated if the predictions on the test set are: + +```console +array([0, 0, 2, 1, 2, 0, 2, 1, 1, 1, 0, 1, 2, 0, 1, 1, 0, 0, 2, 2, 0, 0, + 0, 2, 2, 2, 0, 1, 0, 0, 1, 0, 1, 1, 2, 2, 1, 2, 1, 1, 1, 2, 1, 2, + 0, 1, 1, 1, 1, 1]) +``` + +and the score on the test set is **98%**. + +**Note: Keep in mind that having a 98% accuracy is not common on real-life data. Every time you have a score > 97%, check that there's no leakage in the data. On financial data sets, the signal-to-noise ratio is low. Trying to forecast stock prices is a difficult problem. Having an accuracy higher than 70% should be interpreted as a warning to check for data leakage!** diff --git a/one_exercise_per_file/week02/day03/ex06/readme.md b/one_exercise_per_file/week02/day03/ex06/readme.md new file mode 100644 index 0000000..24d1c11 --- /dev/null +++ b/one_exercise_per_file/week02/day03/ex06/readme.md @@ -0,0 +1,28 @@ +# Exercise 6 Pipeline + +The goal of this exercise is to learn to use the Scikit-learn object: Pipeline. The data set used for this exercise is the `iris` data set. + +Preliminary: + +- Run the code below. + + ```python + iris = load_iris() + X, y = iris['data'], iris['target'] + + # add missing values + X[[1,20,50,100,135], 0] = np.nan + X[[2,5,88,135], 1] = np.nan + X[[4,15], 2] = np.nan + X[[40,135], 3] = np.nan + ``` + +- Split the data set into a train set and a test set (33%), fit the Pipeline on the train set and predict on the test set. Use `random_state=43`. + +The pipeline you will implement has to contain 3 steps: + +- Imputer (median) +- Standard Scaler +- LogisticRegression + +1. Train the pipeline on the train set and predict on the test set. Give the score of the model on the test set.
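
The three steps above can be sketched as follows (a minimal sketch; the step names `'imputer'`, `'scaler'` and `'lr'` are illustrative choices, not imposed by the exercise):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

iris = load_iris()
X, y = iris['data'], iris['target']

# add missing values as in the preliminary step
X[[1, 20, 50, 100, 135], 0] = np.nan
X[[2, 5, 88, 135], 1] = np.nan
X[[4, 15], 2] = np.nan
X[[40, 135], 3] = np.nan

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=43)

# imputer -> scaler -> classifier, fitted and applied as one object
pipe = Pipeline([
    ('imputer', SimpleImputer(strategy='median')),
    ('scaler', StandardScaler()),
    ('lr', LogisticRegression()),
])
pipe.fit(X_train, y_train)
print(pipe.predict(X_test))
print(pipe.score(X_test, y_test))
```

Calling `fit` on the pipeline fits each preprocessing step on the train set only; calling `predict` or `score` on the test set reuses the statistics learned during `fit`.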
\ No newline at end of file diff --git a/one_exercise_per_file/week02/day03/readme.md b/one_exercise_per_file/week02/day03/readme.md index e69de29..3aa7898 100644 --- a/one_exercise_per_file/week02/day03/readme.md +++ b/one_exercise_per_file/week02/day03/readme.md @@ -0,0 +1,35 @@ +# W2D03 Piscine AI - Data Science + +# Table of Contents: + +# Introduction + +Today we will focus on data preprocessing and discover the Pipeline object from scikit-learn. + +1. Manage categorical variables with Integer encoding and One Hot Encoding +2. Impute the missing values +3. Reduce the dimension of the data +4. Scale the data + +- **Step 1** is always necessary: models use numbers, so string data, for instance, can't be processed raw. +- **Step 2** is always necessary: machine learning models use numbers, and missing values have no mathematical representation, which is why they have to be imputed. +- **Step 3** is required when the dimension of the data set is high. Dimension reduction algorithms reduce the dimensionality of the data either by selecting the variables that contain most of the information (SelectKBest) or by transforming the data. Depending on the signal in the data and the data set size, dimension reduction is not always required. This step is not covered because of its complexity; understanding the theory behind it is important, however, and I suggest giving it a try during the projects. This article gives an introduction. + +- https://towardsdatascience.com/dimensionality-reduction-for-machine-learning-80a46c2ebb7e + +- **Step 4** is required for some types of Machine Learning algorithms. The algorithms that require feature scaling are mostly KNN (K-Nearest Neighbors), Neural Networks, Linear Regression, and Logistic Regression.
The reason why some algorithms work better with feature scaling is that the minimization of the loss function may be more difficult if each feature's range is completely different. More details: + +- https://medium.com/@societyofai/simplest-way-for-feature-scaling-in-gradient-descent-ae0aaa383039#:~:text=Feature%20scaling%20is%20an%20idea,of%20convergence%20of%20gradient%20descent. + +These steps are sequential. The output of step 1 is used as input for step 2 and so on, and the output of step 4 is used as input for the Machine Learning model. +Scikit-learn provides an object for this: the Pipeline. + +As we know, the model evaluation methodology requires splitting the data set into a train set and a test set. **The preprocessing is learned/fitted on the training set and applied to the test set**. + +- https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html + +This object takes as input the preprocessing transforms and a Machine Learning model. It can then be called the same way a Machine Learning model is called. This is practical because we no longer need to carry many objects around. + +## Resources + +TODO \ No newline at end of file
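
The fit-on-train / apply-on-test principle above can be sketched with a `StandardScaler` (a minimal sketch; the toy arrays are illustrative):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[1., -1.], [2., 0.], [0., 1.]])
X_test = np.array([[2., -1.], [3., 3.]])

scaler = StandardScaler()
# the statistics (mean, std) are learned on the train set only
X_train_scaled = scaler.fit_transform(X_train)
# the same train statistics are reused to scale the test set
X_test_scaled = scaler.transform(X_test)

print(scaler.mean_)  # per-feature means learned from X_train
```

Calling `fit_transform` on the test set instead would leak test statistics into the preprocessing; the Pipeline object handles this distinction automatically.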