Releases: mlr-org/mlr3torch

v0.3.2

06 Nov 08:13

Bug Fixes

  • t_opt("adamw") now actually uses AdamW and not Adam (see the sketch after this list).
  • Caching: the cache directory is now created even if its parent
    directory does not exist.
  • mlr3torch is now added to mlr_reflections$loaded_packages, fixing errors when using mlr3torch in parallel.
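
A minimal sketch of how the fixed optimizer is typically used. It assumes the classif.mlp learner and that optimizer can be passed as a construction argument via lrn(), as in the package's examples; the task and values are illustrative:

    library(mlr3)
    library(mlr3torch)

    # t_opt("adamw") now resolves to torch's AdamW implementation, not Adam
    learner = lrn("classif.mlp",
      optimizer  = t_opt("adamw"),
      epochs     = 10,
      batch_size = 32
    )
    learner$train(tsk("sonar"))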

mlr3torch 0.3.1

26 Aug 15:08

Bug Fixes

  • The FT Transformer can now be (un-)marshaled after being trained on categorical data (#412).
  • The sampler and batch_sampler parameters now work (#420, thanks @tdhock).

Features

  • Better error messages.

0.3.0

07 Jul 12:49

Breaking Changes:

  • The output dimension of neural networks for binary classification tasks is now
    expected to be 1 instead of 2. The behavior of nn("head") was changed to match this.
    Consequently, for binary classification tasks, t_loss("cross_entropy") now generates
    nn_bce_with_logits_loss instead of nn_cross_entropy_loss, which also required a
    reparametrization of the t_loss("cross_entropy") loss (thanks to @tdhock, #374).
    See the sketch after this list.
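
A hedged sketch of what the new behavior means in practice, assuming the classif.mlp learner and that loss can be passed as a construction argument; on a binary task the head now produces a single logit per observation:

    library(mlr3)
    library(mlr3torch)

    task = tsk("sonar")  # binary classification task from mlr3

    learner = lrn("classif.mlp",
      loss       = t_loss("cross_entropy"),  # resolves to nn_bce_with_logits_loss on binary tasks
      epochs     = 5,
      batch_size = 32
    )
    learner$train(task)
    # The network's final layer now has output dimension 1 instead of 2.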

New Features:

PipeOps & Learners:

  • Added po("nn_identity")
  • Added po("nn_fn") for calling custom functions in a network.
  • Added the FT Transformer model for tabular data.
  • Added encoders for numerical and categorical features.
  • nn("block") (which allows to repeat the same network segment multiple
    times) now has an extra argument trafo, which allows to modify the
    parameter values per layer.
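
A rough sketch of dropping the new po("nn_identity") into a network graph; the pipeop ids and argument names follow the package's documented graph examples, but treat the details as illustrative rather than definitive:

    library(mlr3)
    library(mlr3pipelines)
    library(mlr3torch)

    graph = po("torch_ingress_num") %>>%
      po("nn_linear", out_features = 16) %>>%
      po("nn_relu") %>>%
      po("nn_identity") %>>%   # passes its input through unchanged
      po("nn_head") %>>%
      po("torch_loss", loss = t_loss("cross_entropy")) %>>%
      po("torch_optimizer", optimizer = t_opt("adamw")) %>>%
      po("torch_model_classif", epochs = 5, batch_size = 32)

    learner = as_learner(graph)
    learner$train(tsk("sonar"))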

Callbacks:

  • The context for callbacks now includes the network prediction (y_hat).
  • The lr_one_cycle callback now infers the total number of steps.
  • The progress callback now has a digits argument for controlling the precision
    with which validation/training scores are logged (see the sketch after this list).
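
A hedged sketch of attaching these callbacks to a torch learner; it assumes callbacks can be passed at construction via lrn() and that t_clbk() accepts parameter values such as digits, so treat the exact calls as illustrative:

    library(mlr3)
    library(mlr3torch)

    learner = lrn("classif.mlp",
      epochs     = 10,
      batch_size = 32,
      callbacks  = list(
        t_clbk("lr_one_cycle"),          # total number of steps is now inferred
        t_clbk("progress", digits = 3)   # precision of logged validation/training scores
      )
    )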

Other:

  • TorchIngressToken can now also take a Selector as its features argument.
  • Added function lazy_shape() to get the shape of a lazy tensor (see the sketch after this list).
  • Better error messages for MLP and TabResNet learners.
  • TabResNet learner now supports lazy tensors.
  • The LearnerTorch base class now supports the private method $.ingress_tokens(task, param_vals)
    for generating the torch::dataset.
  • Shapes can now have multiple NAs, i.e. not only the batch dimension can be missing.
    However, most nn() operators still expect only one missing value and will throw an
    error if multiple dimensions are unknown.
  • Training no longer fails when a missing value is encountered during validation;
    NA is used instead.
  • It is now possible to specify parameter groups for optimizers via the
    param_groups parameter.
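
An illustrative sketch of lazy_shape(), assuming the lazy_iris example task and its lazy-tensor column named x:

    library(mlr3)
    library(mlr3torch)

    task = tsk("lazy_iris")
    lt = task$data(cols = "x")$x   # a lazy_tensor column

    lazy_shape(lt)        # e.g. c(NA, 4): unknown batch dimension, 4 features
    materialize(lt[1:2])  # materializing individual elements still works as before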

0.2.1

13 Feb 17:44

See NEWS.md

0.2.0

07 Feb 09:09

Breaking Changes

  • Removed some optimizers for which no fast ('ignite') variant exists.
  • The default optimizer is now AdamW instead of Adam.
  • The private LearnerTorch$.dataloader() method no longer operates on the task but
    on the dataset generated by the private LearnerTorch$.dataset() method.
  • The shuffle parameter during model training is now initialized to TRUE to avoid
    issues when the data is sorted.

Performance Improvements

  • Optimizers now use the faster ('ignite') implementations, which leads to
    considerable speed improvements.
  • The jit_trace parameter was added to LearnerTorch; when set to TRUE, it can lead
    to significant speedups. This should only be enabled for 'static' models; see the
    torch tutorial for more information.
  • Added parameter num_interop_threads to LearnerTorch.
  • The tensor_dataset parameter was added, which allows stacking all batches at the
    beginning of training to make subsequent batch loading faster (see the sketch
    after this list).
  • Use a faster default image loader.
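
A hedged sketch combining the speed-related settings above; the parameter names follow the notes in this list, the concrete values are illustrative, and jit_trace should only be enabled for 'static' models:

    library(mlr3)
    library(mlr3torch)

    learner = lrn("classif.mlp",
      epochs              = 20,
      batch_size          = 64,
      jit_trace           = TRUE,  # trace the network once for faster execution
      num_interop_threads = 2,     # torch inter-op parallelism
      tensor_dataset      = TRUE   # stack all batches once at the start of training
    )
    learner$train(tsk("sonar"))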

Features

  • Added PipeOp for adaptive average pooling.
  • The n_layers parameter was added to the MLP learner.
  • Added multimodal melanoma and cifar{10, 100} example tasks.
  • Added a callback to iteratively unfreeze parameters for finetuning.
  • Added different learning rate schedulers as callbacks.

Bug Fixes

  • Torch learners can now be used with AutoTuner (see the sketch after this list).
  • Early stopping now uses epochs - patience for the internally tuned values instead
    of the trained number of epochs, which was used previously.
  • The dataset of a learner no longer needs to return tensors on the specified device,
    which allows for parallel dataloading on GPUs.
  • PipeOpBlock should no longer create ID clashes with other PipeOps in the graph (#260).
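
A sketch of tuning a torch learner with AutoTuner from mlr3tuning; the tuned parameter and budget are illustrative:

    library(mlr3)
    library(mlr3torch)
    library(mlr3tuning)

    learner = lrn("classif.mlp",
      batch_size = 32,
      epochs     = to_tune(5, 30)   # tune the number of training epochs
    )

    at = auto_tuner(
      tuner      = tnr("random_search"),
      learner    = learner,
      resampling = rsmp("cv", folds = 3),
      measure    = msr("classif.ce"),
      term_evals = 5
    )
    at$train(tsk("sonar"))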

0.1.2

15 Oct 10:31
2efffc8

release 0.1.2

0.1.1

07 Oct 11:25
05e6926

CRAN release (#287)

0.1.0

08 Jul 06:18
d37da44

Initial CRAN release