0.2.0

Released by @sebffischer on 07 Feb 09:09

Breaking Changes

  • Removed some optimizers for which no fast ('ignite') variant exists.
  • The default optimizer is now AdamW instead of Adam (see the sketch after this list).
  • The private LearnerTorch$.dataloader() method no longer operates on the task
    but on the dataset generated by the private LearnerTorch$.dataset() method.
  • The shuffle parameter is now initialized to TRUE during model training to
    sidestep issues that arise when the data is sorted.
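
    A minimal sketch of how these new defaults surface in learner construction,
    assuming the classif.mlp learner, the t_opt() optimizer constructor, and
    paradox's $set_values(); parameter values are illustrative only.

    ```r
    library(mlr3torch)

    # AdamW is now the default; pass t_opt() to pick another optimizer explicitly
    # (only optimizers with a fast 'ignite' variant remain available).
    learner = lrn("classif.mlp",
      optimizer = t_opt("adam"),   # switch back to plain Adam
      epochs = 10L,
      batch_size = 32L,
      neurons = c(64L, 64L)
    )

    # shuffle is now initialized to TRUE during training; it can still be
    # turned off explicitly if a deterministic iteration order is required.
    learner$param_set$set_values(shuffle = FALSE)
    ```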

Performance Improvements

  • Optimizers now use the faster ('ignite') implementations, which leads to
    considerable speed improvements.
  • The jit_trace parameter was added to LearnerTorch; when set to TRUE it can
    lead to significant speedups (see the sketch after this list).
    This should only be enabled for 'static' models; see the torch tutorial
    for more information.
  • Added parameter num_interop_threads to LearnerTorch.
  • The tensor_dataset parameter was added, which allows all batches to be stacked
    once at the beginning of training, making subsequent batch loading faster.
  • A faster default image loader is now used.
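
    A sketch of enabling the new speed-related settings, assuming they are exposed
    as regular parameters of a LearnerTorch such as classif.mlp; the parameter
    names come from the list above, while the values are illustrative.

    ```r
    library(mlr3torch)

    learner = lrn("classif.mlp", epochs = 10L, batch_size = 64L)

    learner$param_set$set_values(
      jit_trace = TRUE,          # only for 'static' models without data-dependent control flow
      num_interop_threads = 2L,  # inter-op parallelism used by torch
      tensor_dataset = TRUE      # stack all batches once up front for faster loading
    )
    ```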

Features

  • Added a PipeOp for adaptive average pooling.
  • The n_layers parameter was added to the MLP learner.
  • Added multimodal melanoma and cifar{10, 100} example tasks.
  • Added a callback to iteratively unfreeze parameters for finetuning.
  • Added various learning rate schedulers as callbacks (see the sketch after this list).
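
    A sketch of using the new callbacks and the n_layers parameter, assuming the
    t_clbk() constructor and the mlr3torch_callbacks dictionary; the scheduler id
    "lr_step" is hypothetical, so check the dictionary listing for the actual ids.

    ```r
    library(data.table)
    library(mlr3torch)

    # List the available callbacks, including the new learning rate schedulers
    # and the unfreezing callback.
    as.data.table(mlr3torch_callbacks)

    learner = lrn("classif.mlp",
      callbacks = t_clbk("lr_step"),  # hypothetical scheduler id, see the listing above
      n_layers = 3L,                  # new n_layers parameter of the MLP learner
      neurons = 128L,                 # assumed to be recycled across the n_layers layers
      epochs = 20L,
      batch_size = 64L
    )
    ```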

Bug Fixes

  • Torch learners can now be used with AutoTuner (see the sketch after this list).
  • Early stopping now uses epochs - patience for the internally tuned values
    instead of the total number of trained epochs, as was previously the case.
  • The dataset of a learner no longer has to return tensors on the specified device,
    which allows for parallel dataloading when training on a GPU.
  • PipeOpBlock should no longer create ID clashes with other PipeOps in the graph (#260).
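
    A sketch of tuning a torch learner with AutoTuner, assuming mlr3tuning's
    auto_tuner() helper and a tunable dropout parameter p on classif.mlp;
    the measure, resampling, and budget are illustrative.

    ```r
    library(mlr3)
    library(mlr3tuning)
    library(mlr3torch)

    # A torch learner with one tuned hyperparameter (the dropout rate p,
    # assumed here to be a tunable parameter of classif.mlp).
    learner = lrn("classif.mlp",
      epochs = 20L,
      batch_size = 64L,
      p = to_tune(0.1, 0.7)
    )

    at = auto_tuner(
      tuner = tnr("random_search"),
      learner = learner,
      resampling = rsmp("cv", folds = 3),
      measure = msr("classif.ce"),
      term_evals = 10
    )

    at$train(tsk("sonar"))
    ```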