Releases · mlr-org/mlr3torch
v0.3.2
mlr3torch 0.3.1
0.3.0
Breaking Changes:
- The output dimension of neural networks for binary classification tasks is now
  expected to be 1 and not 2 as before. The behavior of `nn("head")` was also changed to match this.
  This means that for binary classification tasks, `t_loss("cross_entropy")` now generates
  `nn_bce_with_logits_loss` instead of `nn_cross_entropy_loss` (see the sketch below).
  This also came with a reparametrization of the `t_loss("cross_entropy")` loss (thanks to @tdhock, #374).
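A minimal sketch of the new behavior, assuming the loss is passed via the `loss` construction argument and that the `lrn("classif.mlp")` parameters used here exist as named; on a binary task the configured cross-entropy loss now corresponds to `nn_bce_with_logits_loss`, so the head outputs a single logit per observation:

```r
library(mlr3torch)

# "sonar" is a binary classification task.
task = tsk("sonar")

# For binary tasks, cross-entropy is resolved to nn_bce_with_logits_loss,
# so the network head is expected to produce one output unit, not two.
learner = lrn("classif.mlp",
  loss = t_loss("cross_entropy"),
  epochs = 5, batch_size = 32, neurons = 20
)
learner$train(task)
```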
New Features:
PipeOps & Learners:
- Added `po("nn_identity")`.
- Added `po("nn_fn")` for calling custom functions in a network (see the sketch after this list).
- Added the FT Transformer model for tabular data.
- Added encoders for numerical and categorical features.
- `nn("block")` (which allows repeating the same network segment multiple
  times) now has an extra argument `trafo`, which allows modifying the
  parameter values per layer.
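A minimal sketch of `po("nn_fn")` inside a graph; the parameter name `fn` is an assumption based on the usual mlr3torch naming and is not taken from the notes above:

```r
library(mlr3torch)
library(mlr3pipelines)

# Ingest numeric features, apply a linear layer, call a custom function on the
# intermediate tensor via po("nn_fn"), and finish with a task-dependent head.
graph = po("torch_ingress_num") %>>%
  po("nn_linear", out_features = 16) %>>%
  po("nn_fn", fn = function(x) torch::torch_relu(x)) %>>%
  po("nn_head")
```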
Callbacks:
- The context for callbacks now includes the network prediction (`y_hat`).
- The `lr_one_cycle` callback now infers the total number of steps (see the sketch after this list).
- The progress callback gained a `digits` argument for controlling the precision
  with which validation/training scores are logged.
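A minimal sketch of attaching the one-cycle callback to a torch learner; the callback key `"lr_one_cycle"` is inferred from the note above and the learner parameters are assumptions:

```r
library(mlr3torch)

# The total number of steps for the schedule is now inferred from the number
# of epochs and batches, so it no longer has to be supplied manually.
learner = lrn("classif.mlp",
  epochs = 10, batch_size = 32, neurons = 20,
  callbacks = t_clbk("lr_one_cycle")
)
```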
Other:
- `TorchIngressToken` can now also take a `Selector` as argument `features`.
- Added the function `lazy_shape()` to get the shape of a lazy tensor (see the sketch after this list).
- Better error messages for the MLP and TabResNet learners.
- The TabResNet learner now supports lazy tensors.
- The `LearnerTorch` base class now supports the private method `$.ingress_tokens(task, param_vals)`
  for generating the `torch::dataset`.
- Shapes can now have multiple `NA`s, so not only the batch dimension can be missing. However, most `nn()` operators
  still expect only one missing dimension and will throw an error if multiple dimensions are unknown.
- Training no longer fails when encountering a missing value during validation; `NA` is used instead.
- It is now possible to specify parameter groups for optimizers via the `param_groups` parameter.
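A minimal sketch of `lazy_shape()`; using `as_lazy_tensor()` on a plain torch tensor is an assumption made for illustration:

```r
library(mlr3torch)

# Wrap a tensor as a lazy tensor and query its shape without materializing it.
lt = as_lazy_tensor(torch::torch_randn(10, 3))
lazy_shape(lt)
```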
0.2.1
See NEWS.md
0.2.0
Breaking Changes
- Removed some optimizers for which no fast ('ignite') variant exists.
- The default optimizer is now AdamW instead of Adam (see the sketch after this list).
- The private `LearnerTorch$.dataloader()` method no longer operates
  on the `task` but on the `dataset` generated by the private `LearnerTorch$.dataset()` method.
- The `shuffle` parameter during model training is now initialized to `TRUE` to sidestep
  issues where data is sorted.
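A minimal sketch of overriding the new default optimizer; it assumes Adam is still among the registered ('ignite') optimizers and reuses the placeholder learner parameters from the other examples:

```r
library(mlr3torch)

# AdamW is now the default; the previous default can be set back explicitly.
learner = lrn("classif.mlp",
  optimizer = t_opt("adam"),
  epochs = 5, batch_size = 32, neurons = 20
)
```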
Performance Improvements
- Optimizers now use the faster ('ignite') implementations,
  which leads to considerable speed improvements.
- The `jit_trace` parameter was added to `LearnerTorch`, which when set to
  `TRUE` can lead to significant speedups (see the sketch after this list).
  This should only be enabled for 'static' models, see the torch tutorial for more information.
- Added the parameter `num_interop_threads` to `LearnerTorch`.
- The `tensor_dataset` parameter was added, which allows stacking all batches
  at the beginning of training to make subsequent loading of batches faster.
- Use a faster default image loader.
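A minimal sketch combining the new performance-related parameters; their exact types and values here are assumptions based only on the descriptions above:

```r
library(mlr3torch)

learner = lrn("classif.mlp",
  jit_trace = TRUE,        # only for 'static' models, see the torch tutorial
  tensor_dataset = TRUE,   # stack all batches once at the beginning of training
  num_interop_threads = 2, # number of interop threads used by torch
  epochs = 5, batch_size = 32, neurons = 20
)
```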
Features
- Added a `PipeOp` for adaptive average pooling.
- The `n_layers` parameter was added to the MLP learner.
- Added multimodal melanoma and cifar{10, 100} example tasks.
- Added a callback to iteratively unfreeze parameters for finetuning.
- Added different learning rate schedulers as callbacks.
Bug Fixes:
- Torch learners can now be used with `AutoTuner` (see the sketch after this list).
- Early stopping now uses `epochs - patience` for the internally tuned
  values instead of the trained number of `epochs` as it did before.
- The `dataset` of a learner no longer has to return the tensors on the specified `device`,
  which allows for parallel dataloading on GPUs.
- `PipeOpBlock` should no longer create ID clashes with other PipeOps in the graph (#260).
0.1.2
v0.1.2 release
0.1.1
v0.1.1 CRAN release (#287)
0.1.0
Initial CRAN release