diff --git a/doc/themes/scikit-learn-modern/static/css/theme.css b/doc/themes/scikit-learn-modern/static/css/theme.css
index b143b1f8bb1e7..e651205bac18a 100644
--- a/doc/themes/scikit-learn-modern/static/css/theme.css
+++ b/doc/themes/scikit-learn-modern/static/css/theme.css
@@ -76,6 +76,7 @@ a code {
 
 img {
   max-width: 100%;
+  border-radius: 0.25rem;
 }
 
 span.highlighted {
@@ -836,10 +837,6 @@ div.highlight:hover span.copybutton:hover {
   background-color: #20252B;
 }
 
-div.body img.align-center {
-  max-width: 800px;
-}
-
 div.body img {
   max-width: 100%;
   height: unset!important; /* Needed because sphinx sets the height */
@@ -1210,16 +1207,34 @@ div.sk-sponsor-div, div.sk-testimonial-div {
   align-items: center;
 }
 
-div.sk-sponsor-div-box, div.sk-testimonial-div-box {
+div.sk-sponsor-div-box, div.sk-testimonial-div-box,
+div.sk-doc-div-box {
   width: 100%;
 }
 
+div.sk-doc-div {
+  display: flex;
+  flex-wrap: wrap;
+  justify-content: center;
+}
+
+div.sk-doc-div-box {
+  padding: 0.30rem;
+  overflow: auto;
+}
+
 @media screen and (min-width: 500px) {
   div.sk-sponsor-div-box, div.sk-testimonial-div-box {
     width: 50%;
   }
 }
 
+@media screen and (min-width: 1200px) {
+  div.sk-doc-div-box {
+    width: 50%;
+  }
+}
+
 table.sk-sponsor-table tr, table.sk-sponsor-table tr:nth-child(odd) {
   border-style: none;
   background-color: white;
diff --git a/doc/tutorial/statistical_inference/finding_help.rst b/doc/tutorial/statistical_inference/finding_help.rst
index 69026e2e5dbd2..fc41e6514a03b 100644
--- a/doc/tutorial/statistical_inference/finding_help.rst
+++ b/doc/tutorial/statistical_inference/finding_help.rst
@@ -27,6 +27,6 @@ Q&A communities with Machine Learning practitioners
 
 .. _`multiple subdomains for Machine Learning questions`: https://meta.stackexchange.com/q/130524
 
--- _'An excellent free online course for Machine Learning taught by Professor Andrew Ng of Stanford': https://www.coursera.org/learn/machine-learning
+- `An excellent free online course for Machine Learning taught by Professor Andrew Ng of Stanford <https://www.coursera.org/learn/machine-learning>`_
 
--- _'Another excellent free online course that takes a more general approach to Artificial Intelligence': https://www.udacity.com/course/intro-to-artificial-intelligence--cs271
+- `Another excellent free online course that takes a more general approach to Artificial Intelligence <https://www.udacity.com/course/intro-to-artificial-intelligence--cs271>`_
diff --git a/doc/tutorial/statistical_inference/model_selection.rst b/doc/tutorial/statistical_inference/model_selection.rst
index fd8caaf370a8f..0668b83167198 100644
--- a/doc/tutorial/statistical_inference/model_selection.rst
+++ b/doc/tutorial/statistical_inference/model_selection.rst
@@ -180,23 +180,35 @@ scoring method.
 
 .. currentmodule:: sklearn.svm
 
 .. topic:: **Exercise**
-   :class: green
 
-    .. image:: /auto_examples/exercises/images/sphx_glr_plot_cv_digits_001.png
-       :target: ../../auto_examples/exercises/plot_cv_digits.html
-       :align: right
-       :scale: 90
+    .. raw :: html
+
+       <div class="sk-doc-div">
+
+       <div class="sk-doc-div-box">
+
+    On the digits dataset, plot the cross-validation score of a :class:`SVC`
+    estimator with a linear kernel as a function of parameter ``C`` (use a
+    logarithmic grid of points, from 1 to 10).
 
-    On the digits dataset, plot the cross-validation score of a :class:`SVC`
-    estimator with an linear kernel as a function of parameter ``C`` (use a
-    logarithmic grid of points, from 1 to 10).
+    .. literalinclude:: ../../auto_examples/exercises/plot_cv_digits.py
+       :lines: 13-23
 
-    .. literalinclude:: ../../auto_examples/exercises/plot_cv_digits.py
-       :lines: 13-23
+    .. raw :: html
+
+       </div>
+
+       <div class="sk-doc-div-box">
+ + .. image:: /auto_examples/exercises/images/sphx_glr_plot_cv_digits_001.png + :target: ../../auto_examples/exercises/plot_cv_digits.html + :align: center + :scale: 90 - **Solution:** :ref:`sphx_glr_auto_examples_exercises_plot_cv_digits.py` + .. raw :: html +
+       </div>
+
+       </div>
+ **Solution:** :ref:`sphx_glr_auto_examples_exercises_plot_cv_digits.py` Grid-search and cross-validated estimators ============================================ @@ -273,7 +285,6 @@ These estimators are called similarly to their counterparts, with 'CV' appended to their name. .. topic:: **Exercise** - :class: green On the diabetes dataset, find the optimal regularization parameter alpha. diff --git a/doc/tutorial/statistical_inference/putting_together.rst b/doc/tutorial/statistical_inference/putting_together.rst index 5106958d77e96..6b251317ff4bd 100644 --- a/doc/tutorial/statistical_inference/putting_together.rst +++ b/doc/tutorial/statistical_inference/putting_together.rst @@ -11,16 +11,13 @@ Pipelining We have seen that some estimators can transform data and that some estimators can predict variables. We can also create combined estimators: -.. image:: ../../auto_examples/compose/images/sphx_glr_plot_digits_pipe_001.png - :target: ../../auto_examples/compose/plot_digits_pipe.html - :scale: 65 - :align: right - .. literalinclude:: ../../auto_examples/compose/plot_digits_pipe.py :lines: 23-63 - - +.. image:: ../../auto_examples/compose/images/sphx_glr_plot_digits_pipe_001.png + :target: ../../auto_examples/compose/plot_digits_pipe.html + :scale: 65 + :align: center Face recognition with eigenfaces ================================= @@ -40,20 +37,28 @@ The dataset used in this example is a preprocessed excerpt of the .. |eigenfaces| image:: ../../images/plot_face_recognition_2.png :scale: 50 -.. list-table:: - :class: centered +.. raw :: html + +
+   <div class="sk-doc-div">
+
+   <div class="sk-doc-div-box">
+      <p style="text-align: center">
+         <strong>Prediction</strong>
+      </p>
+ +|prediction| - * +.. raw :: html - - |prediction| +
+   </div>
+
 
-      - |eigenfaces|
+   <div class="sk-doc-div-box">
+      <p style="text-align: center">
+         <strong>Eigenfaces</strong>
+      </p>

- * +|eigenfaces| - - **Prediction** +.. raw :: html - - **Eigenfaces** +
+   </div>
+
+   </div>
Expected results for the top 5 most represented people in the dataset:: diff --git a/doc/tutorial/statistical_inference/settings.rst b/doc/tutorial/statistical_inference/settings.rst index 0ca4c69f48f2e..9ae3222131bee 100644 --- a/doc/tutorial/statistical_inference/settings.rst +++ b/doc/tutorial/statistical_inference/settings.rst @@ -31,6 +31,11 @@ needs to be preprocessed in order to be used by scikit-learn. .. topic:: An example of reshaping data would be the digits dataset + .. raw :: html + +
+       <div class="sk-doc-div">
+
+       <div class="sk-doc-div-box">
+ The digits dataset is made of 1797 8x8 images of hand-written digits :: @@ -41,16 +46,25 @@ needs to be preprocessed in order to be used by scikit-learn. >>> plt.imshow(digits.images[-1], cmap=plt.cm.gray_r) #doctest: +SKIP - .. image:: /auto_examples/datasets/images/sphx_glr_plot_digits_last_image_001.png - :target: ../../auto_examples/datasets/plot_digits_last_image.html - :align: left - :scale: 60 - To use this dataset with scikit-learn, we transform each 8x8 image into a feature vector of length 64 :: >>> data = digits.images.reshape((digits.images.shape[0], -1)) + .. raw :: html + +
+       </div>
+
+       <div class="sk-doc-div-box">
+ + .. image:: /auto_examples/datasets/images/sphx_glr_plot_digits_last_image_001.png + :target: ../../auto_examples/datasets/plot_digits_last_image.html + :align: center + + .. raw :: html + +
+       </div>
+
+       </div>
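+    As a quick sanity check, the reshaped array now has one row per image and
+    64 feature columns::
+
+        >>> data.shape
+        (1797, 64)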
+ Estimators objects =================== diff --git a/doc/tutorial/statistical_inference/supervised_learning.rst b/doc/tutorial/statistical_inference/supervised_learning.rst index 9913829f8f054..8d2e16e513612 100644 --- a/doc/tutorial/statistical_inference/supervised_learning.rst +++ b/doc/tutorial/statistical_inference/supervised_learning.rst @@ -38,10 +38,10 @@ Nearest neighbor and the curse of dimensionality .. topic:: Classifying irises: - .. image:: /auto_examples/datasets/images/sphx_glr_plot_iris_dataset_001.png - :target: ../../auto_examples/datasets/plot_iris_dataset.html - :align: right - :scale: 65 + .. raw :: html + +
+       <div class="sk-doc-div">
+
+       <div class="sk-doc-div-box">
The iris dataset is a classification task consisting in identifying 3 different types of irises (Setosa, Versicolour, and Virginica) from @@ -53,6 +53,22 @@ Nearest neighbor and the curse of dimensionality >>> np.unique(iris_y) array([0, 1, 2]) + .. raw :: html + +
+       </div>
+
+       <div class="sk-doc-div-box">
+ + .. image:: /auto_examples/datasets/images/sphx_glr_plot_iris_dataset_001.png + :target: ../../auto_examples/datasets/plot_iris_dataset.html + :align: center + :scale: 50 + + .. raw :: html + +
+       </div>
+
+       </div>
+ + k-Nearest neighbors classifier ------------------------------- @@ -155,10 +171,10 @@ in its simplest form, fits a linear model to the data set by adjusting a set of parameters in order to make the sum of the squared residuals of the model as small as possible. -.. image:: /auto_examples/linear_model/images/sphx_glr_plot_ols_001.png - :target: ../../auto_examples/linear_model/plot_ols.html - :scale: 40 - :align: right +.. raw :: html + +
+   <div class="sk-doc-div">
+
+   <div class="sk-doc-div-box">
Linear models: :math:`y = X\beta + \epsilon` @@ -167,6 +183,21 @@ Linear models: :math:`y = X\beta + \epsilon` * :math:`\beta`: Coefficients * :math:`\epsilon`: Observation noise +.. raw :: html + +
+   </div>
+
+   <div class="sk-doc-div-box">
+ +.. image:: /auto_examples/linear_model/images/sphx_glr_plot_ols_001.png + :target: ../../auto_examples/linear_model/plot_ols.html + :scale: 50 + :align: center + +.. raw :: html + +
+   </div>
+
+   </div>
+ :: >>> from sklearn import linear_model @@ -197,10 +228,10 @@ Shrinkage If there are few data points per dimension, noise in the observations induces high variance: -.. image:: /auto_examples/linear_model/images/sphx_glr_plot_ols_ridge_variance_001.png - :target: ../../auto_examples/linear_model/plot_ols_ridge_variance.html - :scale: 70 - :align: right +.. raw :: html + +
+   <div class="sk-doc-div">
+
+   <div class="sk-doc-div-box">
:: @@ -219,6 +250,19 @@ induces high variance: ... plt.plot(test, regr.predict(test)) # doctest: +SKIP ... plt.scatter(this_X, y, s=3) # doctest: +SKIP +.. raw :: html + +
+   </div>
+
+   <div class="sk-doc-div-box">
+ +.. image:: /auto_examples/linear_model/images/sphx_glr_plot_ols_ridge_variance_001.png + :target: ../../auto_examples/linear_model/plot_ols_ridge_variance.html + :align: center + +.. raw :: html + +
+   </div>
+
+   </div>
A solution in high-dimensional statistical learning is to *shrink* the @@ -226,10 +270,11 @@ regression coefficients to zero: any two randomly chosen set of observations are likely to be uncorrelated. This is called :class:`Ridge` regression: -.. image:: /auto_examples/linear_model/images/sphx_glr_plot_ols_ridge_variance_002.png - :target: ../../auto_examples/linear_model/plot_ols_ridge_variance.html - :scale: 70 - :align: right +.. raw :: html + +
+   <div class="sk-doc-div">
+
+   <div class="sk-doc-div-box">
+ :: @@ -244,6 +289,21 @@ regression: ... plt.plot(test, regr.predict(test)) # doctest: +SKIP ... plt.scatter(this_X, y, s=3) # doctest: +SKIP +.. raw :: html + +
+   </div>
+
+   <div class="sk-doc-div-box">
+ +.. image:: /auto_examples/linear_model/images/sphx_glr_plot_ols_ridge_variance_002.png + :target: ../../auto_examples/linear_model/plot_ols_ridge_variance.html + :align: center + +.. raw :: html + +
+   </div>
+
+   </div>
+ + This is an example of **bias/variance tradeoff**: the larger the ridge ``alpha`` parameter, the higher the bias and the lower the variance. @@ -346,10 +406,10 @@ application of Occam's razor: *prefer simpler models*. Classification --------------- -.. image:: /auto_examples/linear_model/images/sphx_glr_plot_logistic_001.png - :target: ../../auto_examples/linear_model/plot_logistic.html - :scale: 65 - :align: right +.. raw :: html + +
+   <div class="sk-doc-div">
+
+   <div class="sk-doc-div-box">
For classification, as in the labeling `iris `_ task, linear @@ -357,6 +417,21 @@ regression is not the right approach as it will give too much weight to data far from the decision frontier. A linear approach is to fit a sigmoid function or **logistic** function: +.. raw :: html + +
+   </div>
+
+   <div class="sk-doc-div-box">
+ +.. image:: /auto_examples/linear_model/images/sphx_glr_plot_logistic_001.png + :target: ../../auto_examples/linear_model/plot_logistic.html + :scale: 70 + :align: center + +.. raw :: html + +
+   </div>
+
+   </div>
+ .. math:: y = \textrm{sigmoid}(X\beta - \textrm{offset}) + \epsilon = @@ -373,6 +448,7 @@ This is known as :class:`LogisticRegression`. .. image:: /auto_examples/linear_model/images/sphx_glr_plot_iris_logistic_001.png :target: ../../auto_examples/linear_model/plot_iris_logistic.html :scale: 83 + :align: center .. topic:: Multiclass classification @@ -420,19 +496,31 @@ the separating line (less regularization). .. |svm_margin_unreg| image:: /auto_examples/svm/images/sphx_glr_plot_svm_margin_001.png :target: ../../auto_examples/svm/plot_svm_margin.html - :scale: 70 .. |svm_margin_reg| image:: /auto_examples/svm/images/sphx_glr_plot_svm_margin_002.png :target: ../../auto_examples/svm/plot_svm_margin.html - :scale: 70 -.. rst-class:: centered +.. raw :: html + +
+   <div class="sk-doc-div">
+
+   <div class="sk-doc-div-box">
+      <p style="text-align: center">
+         <strong>Unregularized SVM</strong>
+      </p>

+ +|svm_margin_unreg| + +.. raw :: html + +
+   </div>
+
+   <div class="sk-doc-div-box">
+      <p style="text-align: center">
+         <strong>Regularized SVM (default)</strong>
+      </p>

+ +|svm_margin_reg| + +.. raw :: html + +
+   </div>
+
+   </div>
- ============================= ============================== - **Unregularized SVM** **Regularized SVM (default)** - ============================= ============================== - |svm_margin_unreg| |svm_margin_reg| - ============================= ============================== .. topic:: Example: @@ -459,7 +547,7 @@ classification --:class:`SVC` (Support Vector Classification). .. _using_kernels_tut: Using kernels --------------- +------------- Classes are not always linearly separable in feature space. The solution is to build a decision function that is not linear but may be polynomial instead. @@ -468,72 +556,58 @@ creating a decision energy by positioning *kernels* on observations: .. |svm_kernel_linear| image:: /auto_examples/svm/images/sphx_glr_plot_svm_kernels_001.png :target: ../../auto_examples/svm/plot_svm_kernels.html - :scale: 65 .. |svm_kernel_poly| image:: /auto_examples/svm/images/sphx_glr_plot_svm_kernels_002.png :target: ../../auto_examples/svm/plot_svm_kernels.html - :scale: 65 -.. rst-class:: centered - - .. list-table:: - - * - - - **Linear kernel** - - - **Polynomial kernel** - - - - * - - - |svm_kernel_linear| - - - |svm_kernel_poly| - - - - * - - - :: +.. |svm_kernel_rbf| image:: /auto_examples/svm/images/sphx_glr_plot_svm_kernels_003.png + :target: ../../auto_examples/svm/plot_svm_kernels.html - >>> svc = svm.SVC(kernel='linear') +.. raw :: html - - :: +
+   <div class="sk-doc-div">
+
+   <div class="sk-doc-div-box">
+      <p style="text-align: center">
+         <strong>Linear kernel</strong>
+      </p>

- >>> svc = svm.SVC(kernel='poly', - ... degree=3) - >>> # degree: polynomial degree +|svm_kernel_linear| +:: + >>> svc = svm.SVC(kernel='linear') -.. |svm_kernel_rbf| image:: /auto_examples/svm/images/sphx_glr_plot_svm_kernels_003.png - :target: ../../auto_examples/svm/plot_svm_kernels.html - :scale: 65 -.. rst-class:: centered +.. raw :: html - .. list-table:: +
+   </div>
+
+   <div class="sk-doc-div-box">
+      <p style="text-align: center">
+         <strong>Polynomial kernel</strong>
+      </p>

- * +|svm_kernel_poly| - - **RBF kernel (Radial Basis Function)** +:: + >>> svc = svm.SVC(kernel='poly', + ... degree=3) + >>> # degree: polynomial degree - * +.. raw :: html - - |svm_kernel_rbf| +
+   </div>
+
+   <div class="sk-doc-div-box">
+      <p style="text-align: center">
+         <strong>RBF kernel (Radial Basis Function)</strong>
+      </p>

- * +|svm_kernel_rbf| - - :: +:: - >>> svc = svm.SVC(kernel='rbf') - >>> # gamma: inverse of size of - >>> # radial kernel + >>> svc = svm.SVC(kernel='rbf') + >>> # gamma: inverse of size of + >>> # radial kernel +.. raw :: html +
+   </div>
+
+   </div>
.. topic:: **Interactive example** @@ -543,7 +617,7 @@ creating a decision energy by positioning *kernels* on observations: .. image:: /auto_examples/datasets/images/sphx_glr_plot_iris_dataset_001.png :target: ../../auto_examples/datasets/plot_iris_dataset.html - :align: right + :align: center :scale: 70 .. topic:: **Exercise** diff --git a/doc/tutorial/statistical_inference/unsupervised_learning.rst b/doc/tutorial/statistical_inference/unsupervised_learning.rst index b87fb64ec8d9b..faa83fb3f3dff 100644 --- a/doc/tutorial/statistical_inference/unsupervised_learning.rst +++ b/doc/tutorial/statistical_inference/unsupervised_learning.rst @@ -60,42 +60,49 @@ algorithms. The simplest clustering algorithm is :ref:`k_means`. is sensitive to initialization, and can fall into local minima, although scikit-learn employs several tricks to mitigate this issue. - .. list-table:: - :class: centered + .. raw :: html - * +
+       <div class="sk-doc-div">
+
 
-        - |k_means_iris_bad_init|
+       <div class="sk-doc-div-box">
+          <p style="text-align: center">
+             <strong>Bad initialization</strong>
+          </p>

- - |k_means_iris_8| + |k_means_iris_bad_init| - - |cluster_iris_truth| + .. raw :: html - * +
+       </div>
+
+       <div class="sk-doc-div-box">
+          <p style="text-align: center">
+             <strong>8 clusters</strong>
+          </p>

- - **Bad initialization** + |k_means_iris_8| - - **8 clusters** + .. raw :: html - - **Ground truth** +
+       </div>
+
+       <div class="sk-doc-div-box">
+          <p style="text-align: center">
+             <strong>Ground truth</strong>
+          </p>

+ + |cluster_iris_truth| + + .. raw :: html + +
+       </div>
+
+       </div>
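+    A minimal sketch of this sensitivity (assuming the ``X_iris`` array used
+    earlier in this section): a single random initialization can be compared
+    with several restarts, from which the run with the lowest inertia is
+    kept::
+
+        >>> from sklearn import cluster
+        >>> single_init = cluster.KMeans(n_clusters=3, init='random', n_init=1)
+        >>> single_init.fit(X_iris)  # doctest: +SKIP
+        >>> best_of_ten = cluster.KMeans(n_clusters=3, n_init=10)
+        >>> best_of_ten.fit(X_iris)  # doctest: +SKIP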
**Don't over-interpret clustering results** .. |face| image:: /auto_examples/cluster/images/sphx_glr_plot_face_compress_001.png :target: ../../auto_examples/cluster/plot_face_compress.html - :scale: 60 .. |face_regular| image:: /auto_examples/cluster/images/sphx_glr_plot_face_compress_002.png :target: ../../auto_examples/cluster/plot_face_compress.html - :scale: 60 .. |face_compressed| image:: /auto_examples/cluster/images/sphx_glr_plot_face_compress_003.png :target: ../../auto_examples/cluster/plot_face_compress.html - :scale: 60 .. |face_histogram| image:: /auto_examples/cluster/images/sphx_glr_plot_face_compress_004.png :target: ../../auto_examples/cluster/plot_face_compress.html - :scale: 60 .. topic:: **Application example: vector quantization** @@ -120,28 +127,43 @@ algorithms. The simplest clustering algorithm is :ref:`k_means`. >>> face_compressed = np.choose(labels, values) >>> face_compressed.shape = face.shape - .. list-table:: - :class: centered + .. raw :: html + +
+       <div class="sk-doc-div">
+
+       <div class="sk-doc-div-box">
+          <p style="text-align: center">
+             <strong>Raw image</strong>
+          </p>

+ + |face| + + .. raw :: html - * - - |face| +
+       </div>
+
+       <div class="sk-doc-div-box">
+          <p style="text-align: center">
+             <strong>K-means quantization</strong>
+          </p>

- - |face_compressed| + |face_compressed| - - |face_regular| + .. raw :: html - - |face_histogram| +
+       </div>
+
+       <div class="sk-doc-div-box">
+          <p style="text-align: center">
+             <strong>Equal bins</strong>
+          </p>

- * + |face_regular| - - Raw image + .. raw :: html - - K-means quantization +
+       </div>
+
+       <div class="sk-doc-div-box">
+          <p style="text-align: center">
+             <strong>Image histogram</strong>
+          </p>

- - Equal bins + |face_histogram| - - Image histogram + .. raw :: html +
+       </div>
+
+       </div>
Hierarchical agglomerative clustering: Ward ---------------------------------------------