

How to win a Kaggle competition in Data Science (via Coursera): part 1/5
Eric Perbos-Brinck
9 min read · Apr 25, 2018


Source: Coursera

These are my notes from the 5-week course on Coursera, as taught by a team of
data scientists and Kaggle GrandMasters.

## Week 1 ##
by Alexander Guschin, GM #5, Yandex, lecturer at MIPT
Mikhail Trofimov, PhD student at CCAS

Learning Objectives
Describe competition mechanics

Compare real life applications and competitions

Summarize reasons to participate in data science competitions

Describe main types of ML algorithms

Describe typical hardware and software requirements

Analyze decision boundaries of different classifiers

Use standard ML libraries

1. Introduction and course overview


Among all topics of data science, competitive data analysis is especially interesting.
For an experienced specialist, it is a great arena to test their skills against other people
and learn new tricks; for a novice, it is a good way to quickly and playfully
learn the basics of practical data science. For both, engaging in a competition is a good
chance to expand one's knowledge and get acquainted with new people.

Week #1:
. Describe competition mechanics
. Compare real life applications and competitions
. Summarize reasons to participate in data science competitions
. Describe main types of ML algorithms
. Describe typical hardware and software requirements
. Analyze decision boundaries of different classifiers
. Feature preprocessing and generation with respect to models
. Feature extractions from text and images


Week #2:
. Exploratory Data Analysis (EDA)
. EDA examples and visualizations
. Inspect the data and find golden features
. Validation: risk of overfitting, strategies and problems
. Data leakages

Week #3:
. Metrics optimization in a competition, new metrics
. Advanced Feature Engineering I: mean encoding, regularization,
generalizations

Week #4:
. Hyperparameter Optimization
. Tips and Tricks
. Advanced Feature Engineering II: matrix factorization for feature extraction,
tSNE, feature interactions
. Ensembling

Week #5:
. Competition “walk-through” examples
. Final project

2. Competition mechanics
2.1. There is a great variety of competitions: NLP, Time-Series, Computer Vision.

But they all share the same structure:


. Data is supplied with description
. An Evaluation function is given
. You build a model and use the Submission file
. Your submission is scored on the Leaderboard, which uses Public and Private Test sets:
the Public set is used during the competition, the Private one for the final ranking
. You can submit between 2 and 5 files per day.

Why participate in a competition?


. Great opportunity for learning and networking


. Interesting non-trivial tasks and state-of-the-art approaches


. A way to get recognition inside the Data Science community, and possible job
offers

2.2. Kaggle overview

Walk-through of a Kaggle competition (Zillow home evaluation):


. Overview with description, evaluation, prizes and timeline
. Data provided by the organizer with description
. Public kernels created by participants, can be used as a starting point, especially
the EDA.
. Discussion: the organizer can provide additional information and answer
questions
. Leaderboard: shows the best score of each participant, and number of
submissions. Calculated on Public set during competition.
. Rules
. Team: you can create a team with other participants, check the rules and beware of
the max number of submissions allowed (unique participants vs team)

2.3. Real-World Applications vs Competitions

Real world ML pipeline is a complicated process, including:


. Understanding the business problem
. Formalize the problem (what counts as spam?)
. Collect the data
. Clean and preprocess the data
. Choose a model
. Define an evaluation of the model in real life
. Inference speed
. Deploy the model to users

Competitions focus only on:


. Clean and preprocess the data
. Choose a model

ML competitions are a great way to learn but they don’t address the questions of
formalization, deployment and testing.


Don’t limit yourself: it’s OK to use Heuristics and Manual Data Analysis.
Don’t be afraid of complex solutions, advanced feature engineering, huge
calculations, ensembling.
The ultimate goal is to achieve the highest score on the evaluation metric.

3. Recap of main ML algorithms


3.1. Main ML algorithms

Linear models: try to separate data points with a plane, into 2 subspaces
ex: Logistic regression, Support Vector Machines (SVM)
Available in Scikit-Learn or Vowpal Wabbit

Tree-based: use Decision Trees (DT) like Random Forest and Gradient Boosted
Decision Trees (GBDT)
Applies a “Divide and Conquer” approach by splitting the data into sub-spaces
or boxes based on probabilities of outcome
In general, DT models are very powerful for tabular data, but rather weak at
capturing linear dependencies, as that requires a lot of splits.
Available in Scikit-Learn, XGBoost, LightGBM

kNN: K-Nearest-Neighbors, looks for nearest data points. Close objects are likely
to have the same labels.

Neural Networks: often seen as a “black-box”, can be very efficient for Images,
Sounds, Text and Sequences.
Available in TensorFlow, PyTorch, Keras

No Free Lunch Theorem: there’s not a single method that outperforms all the others
for all the tasks.
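To make the "decision boundaries of different classifiers" objective concrete, here is a minimal sketch of mine (not from the course), assuming scikit-learn; the toy dataset and hyperparameters are purely illustrative.

```python
# Fit one model from each family on a toy 2D dataset and compare test accuracy.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=500, noise=0.3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "Linear (LogReg)": LogisticRegression(),
    "Tree-based (RF)": RandomForestClassifier(n_estimators=100, random_state=0),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "Neural Net (MLP)": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")
```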

3.2. Disclaimer

If you don’t know much about basic ML algorithms, check those links before taking
the quizz.

Random Forest: http://www.datasciencecentral.com/profiles/blogs/random-forests-explained-intuitively


Gradient Boosting:
http://arogozhnikov.github.io/2016/06/24/gradient_boosting_explained.html

kNN: https://www.analyticsvidhya.com/blog/2014/10/introduction-k-neighbours-algorithm-clustering/

3.3. Additional Materials and Links

Covers Scikit-Learn library with kNN, Linear Models, Decision Trees.


Plus H2O documentation on algorithms and parameters.
. Vowpal Wabbit
. XGBoost
. LightGBM
. Neural Nets with Keras, PyTorch, TensorFlow, MXNet & Lasagne
https://www.coursera.org/learn/competitive-data-science/supplement/AgAOD/additional-materials-and-links

4. Software and Hardware requirements


4.1. Hardware
Get a PC with a recent Nvidia GPU, a 6-core CPU and 32 GB of RAM.
Fast storage is critical, especially for Computer Vision, so an SSD is a must and an
NVMe drive is even better.
Otherwise use cloud services like AWS, but beware of the operating costs vs. a
dedicated PC.

4.2. Software
Linux (Ubuntu with Anaconda) is best, some key libraries aren’t available on
Windows.
. Python is today’s favorite as it supports a massive pool of libraries for ML.
. Numpy for linear algebra, Pandas for dataframes (like SQL), Scikit-Learn for classic
ML algorithms.
. Matplotlib for plotting.
. Jupyter Notebook as an IDE (Integrated Development Environment).


. XGBoost and LightGBM for gradient-boosted decision trees.


. TensorFlow/Keras and PyTorch for Neural Networks.

4.3. Links for installation and documentations


https://www.coursera.org/learn/competitive-data-science/supplement/Djqi7/additional-material-and-links

5. Feature preprocessing and generation with respect to models


5.1. Overview with Titanic on Kaggle

Features: numeric, categorical (Red, Green, Blue), ordinal (old < renovated < new),
datetime, coordinates, interval
https://stats.idre.ucla.edu/other/mult-pkg/whatstat/what-is-the-difference-between-categorical-ordinal-and-interval-variables/

Feature preprocessing example: one-hot-encoding (like “pclass” in Titanic)


Decision Trees (DT) often do not require it, but linear models do if the
feature doesn't have a clear linear dependency (like survival rate vs pclass).
RF (Random Forests) can easily overcome this challenge.

Feature generation: in the case of sales forecasts per day (ie. strong linear
potential), it may help to add Week_number or Day_of_week.
These can help both linear and DT models.

Feature preprocessing is often necessary.


Feature generation is a powerful technique.
But they both depend on the model type (DT vs Linear vs NN)

5.2. Numeric features

5.2.1. Feature Preprocessing: Decision-Trees (DT) vs non-DT models

Scaling: DT try to find the best split for a feature, no matter the scale.
kNN, Linear or NN are very sensitive to scaling differences.

MinMaxScale
Scale to [0, 1]: sklearn.preprocessing.MinMaxScaler
X = (X - X.min) / (X.max - X.min)

StandardScale
Scale to mean=0, std=1: sklearn.preprocessing.StandardScaler
X = (X - X.mean) / X.std

In the general case, for a non-DT model, we apply a chosen transformation to ALL
numeric features.
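A minimal sketch, assuming scikit-learn and pandas, of applying both scalers to all numeric columns (the column names and values are made up for illustration):

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler, StandardScaler

df = pd.DataFrame({"age": [22, 38, 26, 35], "fare": [7.25, 71.28, 7.93, 53.10]})

minmax = MinMaxScaler().fit_transform(df)      # each column scaled to [0, 1]
standard = StandardScaler().fit_transform(df)  # each column to mean=0, std=1
print(pd.DataFrame(standard, columns=df.columns))
```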

Outliers: we can clip to the 1st and 99th percentiles, aka "winsorization" in
financial data.
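A quick sketch of such clipping with numpy (toy values, 1000.0 standing in for an outlier):

```python
import numpy as np

x = np.array([1.0, 2.0, 2.5, 3.0, 1000.0])   # 1000.0 is an outlier
lo, hi = np.percentile(x, [1, 99])            # 1st and 99th percentiles
x_clipped = np.clip(x, lo, hi)                # "winsorized" feature
```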

Rank: can be a better option than MinMaxScaler when outliers are present (and
unclipped), good for non-DT.
scipy.stats.rankdata
Important: must be applied to both Train and Test together.
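A small sketch of the rank transform applied to Train and Test together, as advised above (toy arrays):

```python
import numpy as np
from scipy.stats import rankdata

train, test = np.array([1.0, 2.0, 900.0]), np.array([1.5, 3.0])
ranks = rankdata(np.concatenate([train, test]))  # outlier 900.0 simply becomes the largest rank
train_r, test_r = ranks[:len(train)], ranks[len(train):]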

Log transform as np.log(1 + x), or raising to a power < 1 as np.sqrt(x + 2/3):
they bring overly large values closer together. Especially good for NN.

Advanced techniques for non-DT: concatenate dataframes produced by
different pre-processings, or ensemble models trained on different pre-processings.

5.2.2. Feature Generation: based on EDA and business knowledge.

Easy one: with Sqm and Price features, we can generate a new feature
"Price/Sqm".
Or generate the fractional part of a value, like 1.99€ -> 0.99; 2.49€ -> 0.49

Advanced one: generating the time interval a user takes to type a message (for spambot
detection)
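A minimal sketch of the two "easy" generated features above, assuming pandas (toy numbers only):

```python
import pandas as pd

df = pd.DataFrame({"price": [1.99, 2.49], "sqm": [0.8, 1.2]})
df["price_per_sqm"] = df["price"] / df["sqm"]   # ratio of two existing features
df["price_frac"] = df["price"] % 1              # fractional part: 1.99 -> 0.99, 2.49 -> 0.49
```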

Conclusion: DT don’t depend on scaling but non-DT hugely depend on it.


Most used preprocessings: MinMaxScaler, StandardScaler, Rank, np.log(1+x) and
np.sqrt(1+x)
Generation is powered by EDA and business knowledge.

5.3. Categorical and ordinal features

5.3.1. Feature Preprocessing:



There are three Categorical features in the Titanic dataset: Sex, Cabin, Embarked
(Port’s name)
Reminder on Ordinal classification examples:
Pclass (1,2,3) as ordered categorical feature or
Driver’s license type (A, B, C, D) or
Education level (kindergarten, school, college, bachelor, master, doctoral)

A. One technique is Label Encoding (replaces categories by numbers)


Good for DT, not so for non-DT.

For Embarked (S for Southampton, C for Cherbourg, Q for Queenstown)


- Alphabetical (sorted): [S,C,Q] -> [2,1,3] with sklearn.preprocessing.LabelEncoder
- Order of Appearance: [S,C,Q] -> [1,2,3] with Pandas.factorize

- Frequency encoding: [S,C,Q] -> [0.5, 0.3, 0.2], better for non-DT as it preserves
information about value distribution, but still great for DT.
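A small sketch of these three encodings on a toy "Embarked"-style column, assuming scikit-learn and pandas:

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

s = pd.Series(["S", "C", "S", "Q", "S", "C"])

alphabetical = LabelEncoder().fit_transform(s)    # sorted labels: C=0, Q=1, S=2
appearance, _ = pd.factorize(s)                   # order of appearance: S=0, C=1, Q=2
freq = s.map(s.value_counts(normalize=True))      # frequency encoding: S=0.5, C=0.33, Q=0.17
```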

B. Another technique is One-hot Encoding, (0,0,1) or (0,1,0) for each row


pandas.get_dummies, sklearn.preprocessing.OneHotEncoder
Great for non-DT, plus it’s scaled (min=0, max=1).

Warning: if a category has too many unique values, one-hot encoding generates too many
columns with lots of zero-values.
To save RAM, consider sparse matrices that store only non-zero elements
(tip: useful when non-zero values are far less than 50% of the total).
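A minimal sketch of one-hot encoding with pandas, including a sparse variant for high-cardinality categories (column names are illustrative):

```python
import pandas as pd

s = pd.Series(["S", "C", "Q", "S"])
dense = pd.get_dummies(s, prefix="embarked")                 # one 0/1 column per category
sparse = pd.get_dummies(s, prefix="embarked", sparse=True)   # stores only non-zero entries
```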

5.3.2. Feature Generation for categorical features:

(more in next lessons)

5.4. Datetime and Coordinates features

A. Date & Time:

‘Periodicity’ (Day number in Week, Month, Year, Season) is used to capture


repetitive patterns.

‘Time since’: e.g. since a drug was taken, since the last holidays, or the number of days left before an event.
Can be a row-independent moment (ex: since 00:00:00 UTC, 1 January 1970) or
row-dependent (since the last drug taken, since the last holidays, number of days left before an event, etc.)

‘Difference between dates’ for Churn prediction, like
Last_purchase_date - Last_call_date = Date_diff
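A small sketch of these datetime features with pandas (dates and column names are made up):

```python
import pandas as pd

df = pd.DataFrame({
    "purchase_date": pd.to_datetime(["2018-01-03", "2018-02-14"]),
    "last_call_date": pd.to_datetime(["2017-12-25", "2018-02-01"]),
})
df["day_of_week"] = df["purchase_date"].dt.dayofweek                                   # periodicity
df["days_since_epoch"] = (df["purchase_date"] - pd.Timestamp("1970-01-01")).dt.days    # row-independent "time since"
df["date_diff"] = (df["purchase_date"] - df["last_call_date"]).dt.days                 # row-dependent difference
```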

B. Coordinates:

Distance to nearest POI (subway, school, hospital, police etc)

You can also build Clusters from the data points and use "Distance to the cluster's
center coords" as a new feature.

Or create Aggregate Stats, such as "Number of Flats in Area" or "Mean Realty Price in Area"

Advanced tip: try rotations of the coordinates (rotated coords can make splits easier for tree-based models)
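A minimal numpy sketch of two of the coordinate features above, distance to the nearest POI and rotated coordinates; the POI and flat locations are purely hypothetical:

```python
import numpy as np

pois = np.array([[55.75, 37.62], [55.70, 37.50]])    # hypothetical POI coordinates
flats = np.array([[55.74, 37.60], [55.71, 37.55]])   # hypothetical flat coordinates

# distance from each flat to its nearest POI
dist_to_nearest_poi = np.min(
    np.linalg.norm(flats[:, None, :] - pois[None, :, :], axis=2), axis=1)

# 45-degree rotation of the coordinates
theta = np.pi / 4
rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
flats_rotated = flats @ rot.T
```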

5.5. Handling missing values

Types of missing values: NaN, empty string, ‘-1’ (replacing missing values in
[0,1]), very large number, ‘-99999’, ‘999’ etc.

Fillna approaches:
. -999, -1, etc.
. Mean or median
. An "isnull" binary feature can be beneficial
. Reconstruct the missing value if possible (best approach)
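A small sketch of the first three approaches with pandas (toy values):

```python
import numpy as np
import pandas as pd

s = pd.Series([3.0, np.nan, 7.0, np.nan])
s_const = s.fillna(-999)               # constant fill
s_mean = s.fillna(s.mean())            # mean fill
isnull_flag = s.isnull().astype(int)   # binary "was missing" feature
```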

Do not fill NaNs before Feature generation: this can pollute the data (ex: "Time
since" or Frequency/Label Encoding) and mislead the model.

XGBoost can handle NaN natively, worth trying.

Treating Test values not present in the Train data: Frequency encoding can help, as
frequencies can be computed on Test as well.

6. Feature extraction from text and images


6.1. Bag of Words (BOW)


Source: Coursera

For Titanic, we can extract information/patterns from the passengers’ names such
as their family members/siblings or their titles (Lord, Princess)

How-to: sklearn.feature_extraction.text.CountVectorizer
Creates 1 column per unique word, and counts its occurrences per row (phrase).
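A minimal bag-of-words sketch on a few toy "Name"-like strings, assuming a recent scikit-learn (get_feature_names_out):

```python
from sklearn.feature_extraction.text import CountVectorizer

names = ["Braund, Mr. Owen", "Cumings, Mrs. John", "Heikkinen, Miss. Laina"]
vec = CountVectorizer()
bow = vec.fit_transform(names)                 # one column per unique word, counts per row
print(vec.get_feature_names_out(), bow.toarray())
```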

A. Text preprocessing

Lowercase: Very->very

Lemmatization: democracy, democratic, democratization -> democracy
(requires a good dictionary/corpus)

Stemming: democracy, democratic, democratization -> democr

Stopwords: get rid of articles, prepositions and very common words, uses NLTK
(Natural Language ToolKit)
‘sklearn.feature_extraction.text.CountVectorizer’ with max_df


B. N-grams for sequences of words or characters, can help to use local context
‘sklearn.feature_extraction.text.CountVectorizer’ with ngram_range and analyzer

C. TFiDF for postprocessing (required to scale features for non-DT)

TF: Term Frequency (in % per row, sum = 1), followed by


iDF: Inverse Document Frequency (to boost rare words vs frequent words)
‘sklearn.feature_extraction.text.TfidfVectorizer’
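A short sketch of TF-iDF as a post-processing of bag-of-words counts, assuming scikit-learn (toy documents):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the cat sat", "the dog sat", "rare axolotl"]
tfidf = TfidfVectorizer().fit_transform(docs)   # rare words get boosted vs frequent ones
```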


6.2. Using Word Vectors and ConvNets

A. Word Vectors

Word2vec converts each word to a vector in a space with hundreds of
dimensions, creating embeddings in which words often used together in the same
context end up close to each other.
King with Man, Queen with Woman.
King - Queen = Man - Woman (in vector arithmetic)

Other Word Vectors: Glove, FastText

Sentences: Doc2vec

There are pretrained models, like on Wikipedia.


Note: preprocessing can be applied BEFORE using Word2vec
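A minimal sketch, assuming the gensim package (4.x API), of training a tiny Word2vec model and looking up a word's vector and nearest neighbours; the toy sentences are just for illustration:

```python
from gensim.models import Word2Vec

sentences = [["king", "man", "crown"], ["queen", "woman", "crown"]]
model = Word2Vec(sentences, vector_size=50, min_count=1, window=2)

vec = model.wv["king"]                   # 50-dimensional embedding for "king"
similar = model.wv.most_similar("king")  # words used in similar contexts
```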

B. Comparing BOW vs w2v (Word2vec)

BOW: very large vectors, meaning of each value in vector is known


w2v: smaller vectors, values in vector rarely interpreted, words with similar
meaning often have similar embeddings

C. Quick intro on extracting features from Images with CNNs


(covered in detail in later lessons)

Finetuning or transfer-learning

Data augmentation

Next week: Exploratory Data Analysis (EDA) and Data Leakages
