

DOC Linked examples for clustering algorithms in their docstrings (#26927) #30127


Merged: 15 commits, Feb 16, 2025
Changes from all commits
9 changes: 6 additions & 3 deletions sklearn/cluster/_affinity_propagation.py
@@ -398,9 +398,6 @@ class AffinityPropagation(ClusterMixin, BaseEstimator):

Notes
-----
For an example usage,
see :ref:`sphx_glr_auto_examples_cluster_plot_affinity_propagation.py`.

The algorithmic complexity of affinity propagation is quadratic
in the number of points.

@@ -442,6 +439,12 @@ class AffinityPropagation(ClusterMixin, BaseEstimator):
>>> clustering.cluster_centers_
array([[1, 2],
[4, 2]])

For an example usage,
see :ref:`sphx_glr_auto_examples_cluster_plot_affinity_propagation.py`.

For a comparison of Affinity Propagation with other clustering algorithms, see
:ref:`sphx_glr_auto_examples_cluster_plot_cluster_comparison.py`
"""

_parameter_constraints: dict = {
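For reference, a minimal runnable sketch of the AffinityPropagation example whose output (cluster_centers_ of [[1, 2], [4, 2]]) is visible in the hunk above; the input array and random_state=5 are assumptions here, since they sit outside the visible context.

import numpy as np
from sklearn.cluster import AffinityPropagation

# Toy data assumed to match the docstring example above.
X = np.array([[1, 2], [1, 4], [1, 0], [4, 2], [4, 4], [4, 0]])
clustering = AffinityPropagation(random_state=5).fit(X)
print(clustering.labels_)           # e.g. [0 0 0 1 1 1]
print(clustering.cluster_centers_)  # e.g. [[1 2] [4 2]]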
3 changes: 3 additions & 0 deletions sklearn/cluster/_agglomerative.py
@@ -925,6 +925,9 @@ class AgglomerativeClustering(ClusterMixin, BaseEstimator):
AgglomerativeClustering()
>>> clustering.labels_
array([1, 1, 1, 0, 0, 0])

For a comparison of Agglomerative clustering with other clustering algorithms, see
:ref:`sphx_glr_auto_examples_cluster_plot_cluster_comparison.py`
"""

_parameter_constraints: dict = {
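A minimal sketch of the AgglomerativeClustering usage shown above; the input array is assumed, as it falls outside the visible hunk.

import numpy as np
from sklearn.cluster import AgglomerativeClustering

X = np.array([[1, 2], [1, 4], [1, 0], [4, 2], [4, 4], [4, 0]])  # assumed toy data
clustering = AgglomerativeClustering().fit(X)  # defaults: n_clusters=2, ward linkage
print(clustering.labels_)  # e.g. [1 1 1 0 0 0]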
3 changes: 3 additions & 0 deletions sklearn/cluster/_birch.py
@@ -483,6 +483,9 @@ class Birch(
Birch(n_clusters=None)
>>> brc.predict(X)
array([0, 0, 0, 1, 1, 1])

For a comparison of the BIRCH clustering algorithm with other clustering algorithms,
see :ref:`sphx_glr_auto_examples_cluster_plot_cluster_comparison.py`
"""

_parameter_constraints: dict = {
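A minimal sketch of the Birch usage shown above; with n_clusters=None the subclusters found by the CF tree are used as the final clusters. The input data is an assumption.

from sklearn.cluster import Birch

X = [[0, 1], [0.3, 1], [-0.3, 1], [0, -1], [0.3, -1], [-0.3, -1]]  # assumed toy data
brc = Birch(n_clusters=None).fit(X)
print(brc.predict(X))           # e.g. [0 0 0 1 1 1]
print(brc.subcluster_centers_)  # centroids of the leaf subclusters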
9 changes: 6 additions & 3 deletions sklearn/cluster/_dbscan.py
@@ -277,9 +277,6 @@ class DBSCAN(ClusterMixin, BaseEstimator):

Notes
-----
For an example, see
:ref:`sphx_glr_auto_examples_cluster_plot_dbscan.py`.

This implementation bulk-computes all neighborhood queries, which increases
the memory complexity to O(n.d) where d is the average number of neighbors,
while original DBSCAN had memory complexity O(n). It may attract a higher
@@ -322,6 +319,12 @@ class DBSCAN(ClusterMixin, BaseEstimator):
array([ 0, 0, 0, 1, 1, -1])
>>> clustering
DBSCAN(eps=3, min_samples=2)

For an example, see
:ref:`sphx_glr_auto_examples_cluster_plot_dbscan.py`.

For a comparison of DBSCAN with other clustering algorithms, see
:ref:`sphx_glr_auto_examples_cluster_plot_cluster_comparison.py`
"""

_parameter_constraints: dict = {
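A minimal sketch of the DBSCAN usage shown above, where label -1 marks noise; the input array is assumed, since it sits outside the visible hunk.

import numpy as np
from sklearn.cluster import DBSCAN

X = np.array([[1, 2], [2, 2], [2, 3], [8, 7], [8, 8], [25, 80]])  # assumed toy data
clustering = DBSCAN(eps=3, min_samples=2).fit(X)
print(clustering.labels_)  # e.g. [ 0  0  0  1  1 -1]; -1 marks noise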
4 changes: 0 additions & 4 deletions sklearn/cluster/_hdbscan/hdbscan.py
@@ -427,10 +427,6 @@ class HDBSCAN(ClusterMixin, BaseEstimator):
:class:`~sklearn.cluster.DBSCAN`), and be more robust to parameter selection.
Read more in the :ref:`User Guide <hdbscan>`.

For an example of how to use HDBSCAN, as well as a comparison to
:class:`~sklearn.cluster.DBSCAN`, please see the :ref:`plotting demo
<sphx_glr_auto_examples_cluster_plot_hdbscan.py>`.

.. versionadded:: 1.3

Parameters
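Since this hunk only removes the usage pointer, here is a minimal illustrative HDBSCAN sketch (requires scikit-learn >= 1.3); the data and the min_cluster_size value are assumptions, not taken from the docstring.

from sklearn.cluster import HDBSCAN
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=100, centers=3, random_state=0)  # illustrative data
hdb = HDBSCAN(min_cluster_size=5).fit(X)
print(hdb.labels_)  # cluster label per sample; -1 marks points left as noise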
3 changes: 3 additions & 0 deletions sklearn/cluster/_kmeans.py
@@ -1873,6 +1873,9 @@ class MiniBatchKMeans(_BaseKMeans):
[1.06896552, 1. ]])
>>> kmeans.predict([[0, 0], [4, 4]])
array([1, 0], dtype=int32)

For a comparison of Mini-Batch K-Means clustering with other clustering algorithms,
see :ref:`sphx_glr_auto_examples_cluster_plot_cluster_comparison.py`
"""

_parameter_constraints: dict = {
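A minimal sketch of the MiniBatchKMeans usage shown above; the input array and batch_size are assumptions, since only part of the example is visible in the hunk.

import numpy as np
from sklearn.cluster import MiniBatchKMeans

X = np.array([[1, 2], [1, 4], [1, 0], [4, 2], [4, 0], [4, 4]])  # assumed toy data
kmeans = MiniBatchKMeans(n_clusters=2, batch_size=6, random_state=0).fit(X)
print(kmeans.cluster_centers_)           # one centroid per cluster
print(kmeans.predict([[0, 0], [4, 4]]))  # e.g. [1 0]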
3 changes: 3 additions & 0 deletions sklearn/cluster/_mean_shift.py
@@ -432,6 +432,9 @@ class MeanShift(ClusterMixin, BaseEstimator):
array([1, 0])
>>> clustering
MeanShift(bandwidth=2)

For a comparison of Mean Shift clustering with other clustering algorithms, see
:ref:`sphx_glr_auto_examples_cluster_plot_cluster_comparison.py`
"""

_parameter_constraints: dict = {
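A minimal sketch of the MeanShift usage shown above with bandwidth=2; the input array is assumed, since it sits outside the visible hunk.

import numpy as np
from sklearn.cluster import MeanShift

X = np.array([[1, 1], [2, 1], [1, 0], [4, 7], [3, 5], [3, 6]])  # assumed toy data
clustering = MeanShift(bandwidth=2).fit(X)
print(clustering.labels_)                    # e.g. [1 1 1 0 0 0]
print(clustering.predict([[0, 0], [5, 5]]))  # e.g. [1 0]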
3 changes: 3 additions & 0 deletions sklearn/cluster/_optics.py
@@ -234,6 +234,9 @@ class OPTICS(ClusterMixin, BaseEstimator):

For a more detailed example see
:ref:`sphx_glr_auto_examples_cluster_plot_optics.py`.

For a comparison of OPTICS with other clustering algorithms, see
:ref:`sphx_glr_auto_examples_cluster_plot_cluster_comparison.py`
"""

_parameter_constraints: dict = {
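Since this hunk only touches the example references, here is a minimal illustrative OPTICS sketch; the data and the min_samples value are assumptions.

import numpy as np
from sklearn.cluster import OPTICS

X = np.array([[1, 2], [2, 5], [3, 6], [8, 7], [8, 8], [7, 3]])  # illustrative data
clustering = OPTICS(min_samples=2).fit(X)
print(clustering.labels_)        # e.g. [0 0 0 1 1 1]
print(clustering.reachability_)  # reachability distances behind the cluster ordering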
3 changes: 3 additions & 0 deletions sklearn/cluster/_spectral.py
@@ -601,6 +601,9 @@ class SpectralClustering(ClusterMixin, BaseEstimator):
>>> clustering
SpectralClustering(assign_labels='discretize', n_clusters=2,
random_state=0)

For a comparison of Spectral clustering with other clustering algorithms, see
:ref:`sphx_glr_auto_examples_cluster_plot_cluster_comparison.py`
"""

_parameter_constraints: dict = {
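A minimal sketch matching the SpectralClustering parameters visible above (n_clusters=2, assign_labels='discretize', random_state=0); the input array is assumed, since it sits outside the visible hunk.

import numpy as np
from sklearn.cluster import SpectralClustering

X = np.array([[1, 1], [2, 1], [1, 0], [4, 7], [3, 5], [3, 6]])  # assumed toy data
clustering = SpectralClustering(
    n_clusters=2, assign_labels="discretize", random_state=0
).fit(X)
print(clustering.labels_)  # e.g. [1 1 1 0 0 0]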
3 changes: 3 additions & 0 deletions sklearn/mixture/_gaussian_mixture.py
@@ -693,6 +693,9 @@ class GaussianMixture(BaseMixture):
[ 1., 2.]])
>>> gm.predict([[0, 0], [12, 3]])
array([1, 0])

For a comparison of Gaussian Mixture with other clustering algorithms, see
:ref:`sphx_glr_auto_examples_cluster_plot_cluster_comparison.py`
"""

_parameter_constraints: dict = {
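A minimal sketch of the GaussianMixture usage shown above, whose means_ and predict output are visible in the hunk; the input array is assumed, since it sits outside the visible context.

import numpy as np
from sklearn.mixture import GaussianMixture

X = np.array([[1, 2], [1, 4], [1, 0], [10, 2], [10, 4], [10, 0]])  # assumed toy data
gm = GaussianMixture(n_components=2, random_state=0).fit(X)
print(gm.means_)                      # e.g. [[10.  2.] [ 1.  2.]]
print(gm.predict([[0, 0], [12, 3]]))  # e.g. [1 0]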