36 changes: 18 additions & 18 deletions sklearn/gaussian_process/_gpr.py
```diff
@@ -30,9 +30,9 @@ class GaussianProcessRegressor(MultiOutputMixin,
     GaussianProcessRegressor:
 
        * allows prediction without prior fitting (based on the GP prior)
-       * provides an additional method sample_y(X), which evaluates samples
+       * provides an additional method `sample_y(X)`, which evaluates samples
          drawn from the GPR (prior or posterior) at given inputs
-       * exposes a method log_marginal_likelihood(theta), which can be used
+       * exposes a method `log_marginal_likelihood(theta)`, which can be used
          externally for other ways of selecting hyperparameters, e.g., via
          Markov chain Monte Carlo.
```
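These three extras can be exercised directly; here is a minimal sketch against the public scikit-learn API (the toy data is made up):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

X = np.linspace(0, 5, 10).reshape(-1, 1)  # toy inputs
y = np.sin(X).ravel()                     # toy targets

gpr = GaussianProcessRegressor(kernel=RBF())

# Before fit(): predictions and samples come from the GP prior.
prior_mean = gpr.predict(X)
prior_samples = gpr.sample_y(X, n_samples=3)

gpr.fit(X, y)

# log_marginal_likelihood(theta) can drive external hyperparameter
# selection, e.g. as the target density of an MCMC sampler.
lml = gpr.log_marginal_likelihood(gpr.kernel_.theta)
```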

```diff
@@ -68,8 +68,8 @@ class GaussianProcessRegressor(MultiOutputMixin,
         must have the signature::
 
             def optimizer(obj_func, initial_theta, bounds):
-                # * 'obj_func' is the objective function to be minimized, which
-                #   takes the hyperparameters theta as parameter and an
+                # * 'obj_func': the objective function to be minimized, which
+                #   takes the hyperparameters theta as a parameter and an
                 #   optional flag eval_gradient, which determines if the
                 #   gradient is returned additionally to the function value
                 # * 'initial_theta': the initial value for theta, which can be
```
```diff
@@ -80,7 +80,7 @@ def optimizer(obj_func, initial_theta, bounds):
                 #   the corresponding value of the target function.
                 return theta_opt, func_min
 
-        Per default, the 'L-BGFS-B' algorithm from scipy.optimize.minimize
+        Per default, the 'L-BFGS-B' algorithm from scipy.optimize.minimize
         is used. If None is passed, the kernel's parameters are kept fixed.
         Available internal optimizers are::
```
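For illustration, a callable matching this documented signature could be as simple as the sketch below, which delegates to `scipy.optimize.minimize` (the name `custom_optimizer` is our own):

```python
from scipy.optimize import minimize

def custom_optimizer(obj_func, initial_theta, bounds):
    # obj_func(theta, eval_gradient=True) returns the objective value
    # together with its gradient, so jac=True lets scipy use both.
    result = minimize(
        lambda theta: obj_func(theta, eval_gradient=True),
        initial_theta,
        method="L-BFGS-B",
        jac=True,
        bounds=bounds,
    )
    return result.x, result.fun  # theta_opt, func_min
```

It would then be passed as `GaussianProcessRegressor(optimizer=custom_optimizer)`.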

```diff
@@ -113,7 +113,7 @@ def optimizer(obj_func, initial_theta, bounds):
     random_state : int, RandomState instance or None, default=None
         Determines random number generation used to initialize the centers.
         Pass an int for reproducible results across multiple function calls.
-        See :term: `Glossary <random_state>`.
+        See :term:`Glossary <random_state>`.
 
     Attributes
     ----------
```
```diff
@@ -211,8 +211,8 @@ def fit(self, X, y):
             if self.alpha.shape[0] == 1:
                 self.alpha = self.alpha[0]
             else:
-                raise ValueError("alpha must be a scalar or an array"
-                                 " with same number of entries as y.(%d != %d)"
+                raise ValueError("alpha must be a scalar or an array "
+                                 "with same number of entries as y. (%d != %d)"
                                  % (self.alpha.shape[0], y.shape[0]))
 
         self.X_train_ = np.copy(X) if self.copy_X_train else X
```

> **Member**, commenting on lines +214 to +215: This is a code change. Given the title of the PR, I would revert.
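The constraint behind the reworded message: `alpha` is either a scalar or one noise level per training target. A small sketch with made-up data:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

X = np.linspace(0, 10, 20).reshape(-1, 1)
y = np.sin(X).ravel()

# alpha.shape[0] must equal y.shape[0]; otherwise fit() raises the
# ValueError whose message is reformatted in this hunk.
gpr = GaussianProcessRegressor(alpha=np.full(y.shape[0], 1e-2)).fit(X, y)
```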
```diff
@@ -283,9 +283,9 @@ def predict(self, X, return_std=False, return_cov=False):
         """Predict using the Gaussian process regression model
 
         We can also predict based on an unfitted model by using the GP prior.
-        In addition to the mean of the predictive distribution, also its
-        standard deviation (return_std=True) or covariance (return_cov=True).
-        Note that at most one of the two can be requested.
+        In addition to the mean of the predictive distribution, optionally also
+        returns its standard deviation (`return_std=True`) or covariance
+        (`return_cov=True`). Note that at most one of the two can be requested.
 
         Parameters
         ----------
```
```diff
@@ -302,7 +302,7 @@ def predict(self, X, return_std=False, return_cov=False):
 
         Returns
         -------
-        y_mean : ndarray of shape (n_samples, [n_output_dims])
+        y_mean : ndarray of shape (n_samples,) or (n_samples, n_targets)
             Mean of predictive distribution a query points.
 
         y_std : ndarray of shape (n_samples,), optional
```
Expand All @@ -315,8 +315,7 @@ def predict(self, X, return_std=False, return_cov=False):
"""
if return_std and return_cov:
raise RuntimeError(
"Not returning standard deviation of predictions when "
"returning full covariance.")
"At most one of return_std or return_cov can be requested.")
Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This is a code change. Given the title of the PR, I would revert.


if self.kernel is None or self.kernel.requires_vector_input:
X = self._validate_data(X, ensure_2d=True, dtype="numeric",
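The behavior documented and enforced above, sketched with made-up data:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

X = np.linspace(0, 10, 20).reshape(-1, 1)
y = np.sin(X).ravel()
gpr = GaussianProcessRegressor().fit(X, y)

y_mean, y_std = gpr.predict(X, return_std=True)  # mean and std. deviation
y_mean, y_cov = gpr.predict(X, return_cov=True)  # mean and full covariance

# Requesting both at once raises the RuntimeError above:
# gpr.predict(X, return_std=True, return_cov=True)
```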
```diff
@@ -389,21 +388,22 @@ def sample_y(self, X, n_samples=1, random_state=0):
 
         Parameters
         ----------
-        X : array-like of shape (n_samples, n_features) or list of object
+        X : array-like of shape (n_samples_X, n_features) or list of object
             Query points where the GP is evaluated.
 
         n_samples : int, default=1
-            The number of samples drawn from the Gaussian process
+            Number of samples drawn from the Gaussian process per query point
 
         random_state : int, RandomState instance or None, default=0
             Determines random number generation to randomly draw samples.
             Pass an int for reproducible results across multiple function
             calls.
-            See :term: `Glossary <random_state>`.
+            See :term:`Glossary <random_state>`.
 
         Returns
         -------
-        y_samples : ndarray of shape (n_samples_X, [n_output_dims], n_samples)
+        y_samples : ndarray of shape (n_samples_X, n_samples), or \
+            (n_samples_X, n_targets, n_samples)
             Values of n_samples samples drawn from Gaussian process and
             evaluated at query points.
         """
```