Conversation

@st-- (Member) commented Apr 15, 2020

jahall and others added 30 commits November 16, 2019 11:28
Turns the Periodic kernel into a wrapper-kernel that takes any Stationary kernel as an argument `base` and turns it into a periodic version. Backwards-incompatible but more flexible and less code!
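The wrapper-kernel idea can be illustrated with a simplified one-dimensional sketch (illustrative classes and names, not the actual GPflow implementation): the periodic kernel warps the distance through a sine before handing it to any stationary base kernel.

```python
import math

# Simplified 1-D sketch of the wrapper pattern (not the real GPflow code):
class Stationary:
    def k(self, r):
        """Covariance as a function of the distance r."""
        raise NotImplementedError

class SquaredExponential(Stationary):
    def __init__(self, lengthscale=1.0):
        self.lengthscale = lengthscale

    def k(self, r):
        return math.exp(-0.5 * (r / self.lengthscale) ** 2)

class Periodic(Stationary):
    """Wraps a `base` stationary kernel and makes it periodic in r."""
    def __init__(self, base, period=1.0):
        self.base = base
        self.period = period

    def k(self, r):
        # warp the distance through a sine, then apply the base kernel
        return self.base.k(math.sin(math.pi * r / self.period))

kernel = Periodic(base=SquaredExponential())  # any Stationary works as base
```

Any stationary kernel can be swapped in as `base`, which is what makes the wrapper both more flexible and less code than a hard-coded periodic squared-exponential.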
* quick fix of documentation mathematics

* Apply suggestions from code review

much clearer

Co-Authored-By: st-- <[email protected]>
cleanup, typos, revert to US spelling etc.
* refactor tabulate_module_summary
* add representation of prior to tabulate_module_summary (fixes #1117)
* make kernel interface consistent (uses X, X2 everywhere now)
* add ABC metaclass to Kernel baseclass
* fix bug in Kernel.on_separate_dims
* regression test
* Add jit flag to gpflow.optimizers.Scipy that wraps objective and gradient evaluation in tf.function()
Allows GPflow 2 to work better with tf.function() when static shapes are unknown (e.g. when minibatching). Closes #1179

Co-authored-by: marcoadurno <[email protected]>
…r() to ease debugging (#1201)

* Adds is_tensor_like property (returns True) to gpflow.Parameter to be compatible with TensorFlow 2.1.

* Changes gpflow.Parameter.__repr__ to identify itself as a gpflow object, not a tf.Tensor, to ease debugging and interactive use.
st-- and others added 23 commits March 31, 2020 13:17
This gives all `BayesianModel` subclasses a consistent interface both for optimization (MLE/MAP) and MCMC. Models are required to implement `maximum_log_likelihood_objective`, which is to be maximized for model training.

Optimization: The internal `_training_loss` method is defined as `- (maximum_log_likelihood_objective + log_prior_density)`; it is exposed as `training_loss` by the InternalDataTrainingLossMixin and ExternalDataTrainingLossMixin classes.
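The sign convention can be sketched in plain Python (illustrative function, not GPflow internals):

```python
# Illustrative sketch (not GPflow internals): optimizers minimize, so the
# training loss is the negated MAP objective.
def training_loss(maximum_log_likelihood_objective, log_prior_density):
    return -(maximum_log_likelihood_objective + log_prior_density)
```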

For models that keep hold of the data internally, `training_loss` can directly be passed as a closure to an optimizer's `minimize`, for example:
```python
model = gpflow.models.GPR(data, ...)
gpflow.optimizers.Scipy().minimize(model.training_loss, model.trainable_variables)
```

If the model objective requires data to be passed in, a closure can be constructed on the fly using `model.training_loss_closure(data)`, which returns a no-argument closure:
```python
model = gpflow.models.SVGP(...)
gpflow.optimizers.Scipy().minimize(
    model.training_loss_closure(data), model.trainable_variables, ...
)
```

The `training_loss_closure()` method provided by both InternalDataTrainingLossMixin and ExternalDataTrainingLossMixin takes a boolean `compile` argument (default: True) that wraps the returned closure in tf.function(). Note that if the minimize() step is run several times, the returned closure should be cached in a variable to avoid re-compilation at each step.
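The caching advice can be illustrated with a plain-Python stand-in (hypothetical names; the counter stands in for the one-off tf.function tracing cost):

```python
# Illustrative stand-in for training_loss_closure(..., compile=True): the
# first call pays a one-off "compilation" cost, so the closure should be
# built once and reused across minimize() steps.
compile_count = 0

def training_loss_closure(data):
    global compile_count
    compile_count += 1  # stands in for the tf.function tracing cost
    total = sum(data)
    return lambda: -total  # no-argument closure, as in the mixins' API

loss = training_loss_closure([1.0, 2.0, 3.0])  # cache once...
values = [loss() for _ in range(5)]            # ...reuse many times
```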

MCMC: The `log_posterior_density` method can be directly passed to the `SamplingHelper`. By default, `log_posterior_density` is implemented as `maximum_log_likelihood_objective + log_prior_density`. Models can override this if needed. Example:
```python
model = gpflow.models.GPMC(...)
hmc_helper = gpflow.optimizers.SamplingHelper(
    model.log_posterior_density, model.trainable_parameters
)
hmc = tfp.mcmc.HamiltonianMonteCarlo(
    target_log_prob_fn=hmc_helper.target_log_prob_fn, ...
)
```
In this case, the function that runs the MCMC chain should be wrapped in tf.function() (see MCMC notebook).
* Increase MCMC sampling

* Fix section links
Best practice demonstrated in our notebooks
…k. (#1382)

* Match original experiment setup from paper.

Replicate the original setup published in Fortuin and Rätsch (2019) and heavily comment the code with references to the paper.

* Run `make format` and reduce hyperparameters to ease computation.

* Format MSE and std to 2 decimals and sort randomly permuted indices.

* Describe experimental modifications made in the notebook.

* Plot target task training points.

* Plot prediction variance/uncertainty.

* Go with N=500, use more explicit name.

* Address comments from @st--
Addresses #1405 

* New structure underneath gpflow/likelihoods/:
  * base.py: all base classes (Likelihood, MonteCarloLikelihood, ScalarLikelihood) and SwitchedLikelihood
  * multiclass.py: multi-class classification (Softmax, MultiClass + RobustMax)
  * scalar_continuous.py: continuous-Y subclasses of ScalarLikelihood (Gaussian, StudentT, Exponential, Beta, Gamma)
  * scalar_discrete.py: discrete-Y subclasses of ScalarLikelihood (Bernoulli, Poisson, Ordinal)
  * utils.py: the `inv_probit` link function used by Bernoulli and Beta likelihoods
  * misc.py: GaussianMC - used for demonstration/tests only.
  (Note that usage, i.e. accessing gpflow.likelihoods.<LikelihoodClass>, has not changed.)

* Tests for multi-class classification likelihoods moved out into their own test module (including stubs for the missing MultiClass quadrature tests of #1091)

* Re-activates the quadrature tests for ScalarLikelihood subclasses with analytic variational_expectations/predict_log_density/predict_mean_and_var that inadvertently got disabled by #1334 

* Fixes a bug in Bernoulli._predict_log_density that was uncovered by these tests

* Fixes random seed for mock data generation in test_natural_gradient to make svgp_vs_gpr test pass again
…ers (#1408)

Addresses #1407.

* Improve the error message raised by the gpflow.Parameter check when assigning a new value that is incompatible with the parameter's transform (e.g. a non-positive value to a parameter with a positive() transform)

* Gaussian likelihood: add explicit `__init__`-time check that variance > variance_lower_bound, add `__init__` docstring, move "default variance lower bound" magic number into class-level constant

* change gpflow.config's positive_minimum to always be a float (initialized to 0.0 by default)
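The transform-compatibility check above can be sketched in plain Python (hypothetical helper and message, not the actual GPflow internals):

```python
import math

# Hypothetical sketch (illustrative names, not GPflow internals): assigning
# a value at or below the transform's lower bound should raise a clear error
# instead of silently producing NaNs downstream.
def validate_positive(value, lower_bound=0.0, name="parameter"):
    """Mimic assigning to a Parameter with a positive() transform:
    the unconstrained value log(value - lower_bound) must be finite."""
    if value <= lower_bound:
        raise ValueError(
            f"Cannot assign {value} to {name}: incompatible with positive() "
            f"transform (value must be strictly greater than {lower_bound})."
        )
    return math.log(value - lower_bound)
```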
Cache summary file writers and re-use them in ToTensorBoard monitor task. Fixes #1385
* move matplotlib import inside ImageToTensorBoard class so that it is not a hard dependency for GPflow (`import gpflow` does not require matplotlib, only instantiating `ImageToTensorBoard` does)
* split up monitor into base.py and tensorboard.py
* add matplotlib to extras_require
This gives the GPflow repository four issue templates:
* bugs (including performance and build issues)
* feature requests
* documentation issues
* other issues (pointing to the stackoverflow gpflow tag)

This will hopefully make new issues more easily addressable. :)

Co-authored-by: joelberkeley-pio <[email protected]>
* fix pyplot import for matplotlib 3.1.3 (closes #1423)
* apply same fix to other notebooks
The "other issue" template wasn't being pulled in by GitHub's "new issue chooser" (https://github.com/GPflow/GPflow/issues/new/choose) because of a parsing failure caused by the quotes in the name: field. This PR fixes that (the corrected version was obtained by pasting the quoted text into GitHub's "create an issue template" interface) and includes a few minor copyedits.
* fix kernel construction in multioutput notebook
* fix one more kernel in changepoints notebook
codecov bot commented Apr 15, 2020

Codecov Report

Merging #1436 into master will not change coverage.
The diff coverage is n/a.


```
@@           Coverage Diff           @@
##           master    #1436   +/-   ##
=======================================
  Coverage   95.39%   95.39%
=======================================
  Files          82       82
  Lines        3732     3732
=======================================
  Hits         3560     3560
  Misses        172      172
```


@st-- st-- requested a review from awav April 15, 2020 10:54
@st-- st-- merged commit 2358539 into master Apr 15, 2020