
Commit 083b5f5

Merge pull request scikit-learn#3231 from bwignall/quickfix-cap
CLN: Capitalize "Dirichlet" and "Mexican" in example docstrings
2 parents 45e2a44 + 9bee5b4

3 files changed: 7 additions & 7 deletions

examples/decomposition/plot_sparse_coding.py

Lines changed: 3 additions & 3 deletions

@@ -6,7 +6,7 @@
 Transform a signal as a sparse combination of Ricker wavelets. This example
 visually compares different sparse coding methods using the
 :class:`sklearn.decomposition.SparseCoder` estimator. The Ricker (also known
-as mexican hat or the second derivative of a Gaussian) is not a particularly
+as Mexican hat or the second derivative of a Gaussian) is not a particularly
 good kernel to represent piecewise constant signals like this one. It can
 therefore be seen how much adding different widths of atoms matters and it
 therefore motivates learning the dictionary to best fit your type of signals.
@@ -23,7 +23,7 @@
 
 
 def ricker_function(resolution, center, width):
-    """Discrete sub-sampled Ricker (mexican hat) wavelet"""
+    """Discrete sub-sampled Ricker (Mexican hat) wavelet"""
     x = np.linspace(0, resolution - 1, resolution)
     x = ((2 / ((np.sqrt(3 * width) * np.pi ** 1 / 4)))
          * (1 - ((x - center) ** 2 / width ** 2))
@@ -32,7 +32,7 @@ def ricker_function(resolution, center, width):
 
 
 def ricker_matrix(width, resolution, n_components):
-    """Dictionary of Ricker (mexican hat) wavelets"""
+    """Dictionary of Ricker (Mexican hat) wavelets"""
     centers = np.linspace(0, resolution - 1, n_components)
     D = np.empty((n_components, resolution))
     for i, center in enumerate(centers):
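
For context, the Ricker (Mexican hat) wavelet being capitalized here is the negative, normalized second derivative of a Gaussian. A minimal standalone sketch of the idea, assuming NumPy and scikit-learn's SparseCoder (this is not the example's exact code; note that the textbook normalization uses pi ** 0.25, whereas np.pi ** 1 / 4 in the snippet above parses as pi / 4 under Python's operator precedence):

    import numpy as np
    from sklearn.decomposition import SparseCoder

    def ricker(resolution, center, width):
        # Ricker (Mexican hat) wavelet on a discrete grid: the negative,
        # normalized second derivative of a Gaussian of scale `width`.
        x = np.linspace(0, resolution - 1, resolution)
        t = (x - center) / width
        return (2 / (np.sqrt(3 * width) * np.pi ** 0.25)
                * (1 - t ** 2) * np.exp(-t ** 2 / 2))

    # Stack one wavelet per row and L2-normalize, as ricker_matrix does.
    resolution, n_components = 1024, 30
    D = np.vstack([ricker(resolution, c, width=100)
                   for c in np.linspace(0, resolution - 1, n_components)])
    D /= np.sqrt(np.sum(D ** 2, axis=1))[:, np.newaxis]

    # Encode a signal as a sparse combination of the dictionary atoms.
    coder = SparseCoder(dictionary=D, transform_algorithm='omp',
                        transform_n_nonzero_coefs=5)
    code = coder.transform(np.random.randn(1, resolution))  # shape (1, 30)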

examples/mixture/plot_gmm.py

Lines changed: 3 additions & 3 deletions

@@ -4,7 +4,7 @@
 =================================
 
 Plot the confidence ellipsoids of a mixture of two Gaussians with EM
-and variational dirichlet process.
+and variational Dirichlet process.
 
 Both models have access to five components with which to fit the
 data. Note that the EM model will necessarily use all five components
@@ -15,7 +15,7 @@
 adapts it number of state automatically.
 
 This example doesn't show it, as we're in a low-dimensional space, but
-another advantage of the dirichlet process model is that it can fit
+another advantage of the Dirichlet process model is that it can fit
 full covariance matrices effectively even when there are less examples
 per cluster than there are dimensions in the data, due to
 regularization properties of the inference algorithm.
@@ -42,7 +42,7 @@
 gmm = mixture.GMM(n_components=5, covariance_type='full')
 gmm.fit(X)
 
-# Fit a dirichlet process mixture of Gaussians using five components
+# Fit a Dirichlet process mixture of Gaussians using five components
 dpgmm = mixture.DPGMM(n_components=5, covariance_type='full')
 dpgmm.fit(X)
 
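The mixture.GMM and mixture.DPGMM classes touched here are the scikit-learn API of this era; later releases replaced them with GaussianMixture and BayesianGaussianMixture. A rough sketch of the same comparison on toy data (an assumed setup, not the example's exact generator), using the modern names:

    import numpy as np
    from sklearn import mixture

    rng = np.random.RandomState(0)
    # Two Gaussian blobs; both models get five components to spend.
    X = np.vstack([rng.randn(300, 2),
                   rng.randn(300, 2) + np.array([-6.0, 3.0])])

    # EM fit: all five components end up carrying weight.
    gmm = mixture.GaussianMixture(n_components=5,
                                  covariance_type='full').fit(X)

    # Dirichlet process fit: weights of unneeded components shrink to ~0.
    dpgmm = mixture.BayesianGaussianMixture(
        n_components=5, covariance_type='full',
        weight_concentration_prior_type='dirichlet_process').fit(X)

    print(gmm.weights_.round(2))    # five non-negligible weights
    print(dpgmm.weights_.round(2))  # mass concentrated on ~2 components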
examples/mixture/plot_gmm_sin.py

Lines changed: 1 addition & 1 deletion

@@ -8,7 +8,7 @@
 by 100 points loosely spaced following a noisy sine curve. The fit by
 the GMM class, using the expectation-maximization algorithm to fit a
 mixture of 10 Gaussian components, finds too-small components and very
-little structure. The fits by the dirichlet process, however, show
+little structure. The fits by the Dirichlet process, however, show
 that the model can either learn a global structure for the data (small
 alpha) or easily interpolate to finding relevant local structure
 (large alpha), never falling into the problems shown by the GMM class.
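
In the modern API, the alpha mentioned above corresponds roughly to BayesianGaussianMixture's weight_concentration_prior. A hedged sketch of the small-alpha/large-alpha contrast on synthetic sine data (an assumed setup, not the example's code):

    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.RandomState(0)
    # 100 points loosely following a noisy sine curve.
    t = 4 * np.pi * rng.rand(100)
    X = np.column_stack([t, np.sin(t) + 0.1 * rng.randn(100)])

    # Small alpha favors a few global components; large alpha lets the
    # model spend many components on local structure.
    for alpha in (0.01, 100.0):
        dp = BayesianGaussianMixture(
            n_components=10, covariance_type='full',
            weight_concentration_prior=alpha,
            max_iter=500, random_state=0).fit(X)
        print("alpha=%g: %d active components"
              % (alpha, np.sum(dp.weights_ > 0.01)))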
