2 changes: 1 addition & 1 deletion sklearn/kernel_ridge.py
@@ -52,7 +52,7 @@ class KernelRidge(MultiOutputMixin, RegressorMixin, BaseEstimator):
Kernel mapping used internally. This parameter is directly passed to
:func:`~sklearn.metrics.pairwise.pairwise_kernels`.
If `kernel` is a string, it must be one of the metrics
in `pairwise.PAIRWISE_KERNEL_FUNCTIONS`.
in `pairwise.PAIRWISE_KERNEL_FUNCTIONS` or "precomputed".
If `kernel` is "precomputed", X is assumed to be a kernel matrix.
Alternatively, if `kernel` is a callable function, it is called on
each pair of instances (rows) and the resulting value recorded. The
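To illustrate the "precomputed" option documented above, here is a minimal sketch (toy data and an arbitrary `gamma`, not taken from this PR) showing that passing a kernel matrix computed up front is equivalent to letting `KernelRidge` compute it internally:

```python
# Minimal sketch of the "precomputed" kernel option described above.
# The toy data and gamma value are arbitrary, chosen only for illustration.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.RandomState(0)
X = rng.rand(20, 3)
y = rng.rand(20)

# Let KernelRidge compute the RBF kernel internally ...
model = KernelRidge(kernel="rbf", gamma=0.5, alpha=1.0).fit(X, y)

# ... or pass a precomputed kernel matrix directly.
K = rbf_kernel(X, X, gamma=0.5)
model_pre = KernelRidge(kernel="precomputed", alpha=1.0).fit(K, y)

# Predictions agree; with "precomputed", predict also expects a kernel matrix.
print(np.allclose(model.predict(X), model_pre.predict(K)))  # True
```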
12 changes: 4 additions & 8 deletions sklearn/mixture/_gaussian_mixture.py
@@ -473,14 +473,10 @@ class GaussianMixture(BaseMixture):
String describing the type of covariance parameters to use.
Must be one of:

'full'
each component has its own general covariance matrix
'tied'
all components share the same general covariance matrix
'diag'
each component has its own diagonal covariance matrix
'spherical'
each component has its own single variance
- 'full': each component has its own general covariance matrix.
- 'tied': all components share the same general covariance matrix.
- 'diag': each component has its own diagonal covariance matrix.
- 'spherical': each component has its own single variance.

tol : float, default=1e-3
The convergence threshold. EM iterations will stop when the
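As a quick sanity check of the four `covariance_type` values listed above, the following sketch (on invented two-blob data) prints the shape of the fitted `covariances_` attribute for each option:

```python
# Sketch of how the covariance_type options affect the fitted covariances_
# attribute; the blob data here is invented for illustration.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(100, 2), rng.randn(100, 2) + [5, 5]])

for cov_type in ["full", "tied", "diag", "spherical"]:
    gm = GaussianMixture(n_components=2, covariance_type=cov_type,
                         random_state=0).fit(X)
    print(cov_type, gm.covariances_.shape)
# full      -> (2, 2, 2)   one full matrix per component
# tied      -> (2, 2)      a single shared matrix
# diag      -> (2, 2)      one diagonal per component
# spherical -> (2,)        one variance per component
```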
2 changes: 1 addition & 1 deletion sklearn/multiclass.py
@@ -932,7 +932,7 @@ class OutputCodeClassifier(MetaEstimatorMixin, ClassifierMixin, BaseEstimator):
An estimator object implementing :term:`fit` and one of
:term:`decision_function` or :term:`predict_proba`.

code_size : float
code_size : float, default=1.5
Percentage of the number of classes to be used to create the code book.
A number between 0 and 1 will require fewer classifiers than
one-vs-the-rest. A number greater than 1 will require more classifiers
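The newly documented `code_size=1.5` default controls how many binary classifiers get trained: roughly `int(n_classes * code_size)`. A small sketch, using iris and a logistic-regression base estimator chosen only for illustration:

```python
# Rough sketch of the code_size parameter documented above; the dataset and
# base estimator are arbitrary choices for the example.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OutputCodeClassifier

X, y = load_iris(return_X_y=True)  # 3 classes

for code_size in (1.0, 1.5, 3.0):
    ecoc = OutputCodeClassifier(
        LogisticRegression(max_iter=1000), code_size=code_size, random_state=0
    ).fit(X, y)
    # Number of underlying binary classifiers = int(n_classes * code_size).
    print(code_size, len(ecoc.estimators_))
# 1.0 -> 3, 1.5 -> 4, 3.0 -> 9
```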
2 changes: 1 addition & 1 deletion sklearn/neighbors/_nearest_centroid.py
@@ -30,7 +30,7 @@ class NearestCentroid(ClassifierMixin, BaseEstimator):

Parameters
----------
metric : str or callable
metric : str or callable, default="euclidean"
The metric to use when calculating distance between instances in a
feature array. If metric is a string or callable, it must be one of
the options allowed by
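A minimal sketch of the now-documented default (`metric="euclidean"`), contrasted with `"manhattan"`, which uses per-class medians rather than means as centroids; the toy points are made up for the example:

```python
# Small sketch of the metric parameter above; the toy points are invented.
import numpy as np
from sklearn.neighbors import NearestCentroid

X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
y = np.array([1, 1, 1, 2, 2, 2])

# Default metric="euclidean": centroids are the per-class means.
clf = NearestCentroid().fit(X, y)
print(clf.predict([[-0.8, -1.0]]))  # [1]

# With metric="manhattan" the centroids are the per-class medians instead.
clf_l1 = NearestCentroid(metric="manhattan").fit(X, y)
print(clf_l1.predict([[-0.8, -1.0]]))  # [1]
```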
28 changes: 11 additions & 17 deletions sklearn/preprocessing/_discretization.py
@@ -37,27 +37,21 @@ class KBinsDiscretizer(TransformerMixin, BaseEstimator):
encode : {'onehot', 'onehot-dense', 'ordinal'}, default='onehot'
Method used to encode the transformed result.

onehot
Encode the transformed result with one-hot encoding
and return a sparse matrix. Ignored features are always
stacked to the right.
onehot-dense
Encode the transformed result with one-hot encoding
and return a dense array. Ignored features are always
stacked to the right.
ordinal
Return the bin identifier encoded as an integer value.
- 'onehot': Encode the transformed result with one-hot encoding
and return a sparse matrix. Ignored features are always
stacked to the right.
- 'onehot-dense': Encode the transformed result with one-hot encoding
and return a dense array. Ignored features are always
stacked to the right.
- 'ordinal': Return the bin identifier encoded as an integer value.

strategy : {'uniform', 'quantile', 'kmeans'}, default='quantile'
Strategy used to define the widths of the bins.

uniform
All bins in each feature have identical widths.
quantile
All bins in each feature have the same number of points.
kmeans
Values in each bin have the same nearest center of a 1D k-means
cluster.
- 'uniform': All bins in each feature have identical widths.
- 'quantile': All bins in each feature have the same number of points.
- 'kmeans': Values in each bin have the same nearest center of a 1D
k-means cluster.

dtype : {np.float32, np.float64}, default=None
The desired data-type for the output. If None, output dtype is
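To make the `encode` and `strategy` options above concrete, here is a small sketch on invented data; the bin edges depend entirely on the toy values and are only meant to show the output formats:

```python
# Sketch of the encode and strategy options described above; the toy data is
# arbitrary and chosen only to make the bin edges easy to follow.
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer

X = np.array([[-2.0, 1.0], [-1.0, 2.0], [0.0, 3.0], [1.0, 4.0]])

# encode='ordinal' returns the bin index for each feature;
# strategy='uniform' makes all bins in a feature equally wide.
est = KBinsDiscretizer(n_bins=2, encode="ordinal", strategy="uniform")
print(est.fit_transform(X))
# [[0. 0.]
#  [0. 0.]
#  [1. 1.]
#  [1. 1.]]

# encode='onehot' (the default) returns a sparse one-hot matrix instead, and
# strategy='quantile' (the default) gives each bin the same number of points.
est_oh = KBinsDiscretizer(n_bins=2, encode="onehot", strategy="quantile")
Xt = est_oh.fit_transform(X)
print(Xt.shape)  # (4, 4): two one-hot columns per original feature
```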