Releases: talmolab/sleap-nn
SLEAP-NN v0.0.3
Summary
This release delivers critical bug fixes for multiprocessing support, enhanced tracking capabilities, and significant improvements to the inference workflow. The v0.0.3 release resolves HDF5 pickling issues that prevented proper multiprocessing on macOS/Windows, fixes ID models, and introduces new track cleaning parameters for better tracking performance.
Major changes
Fixed Multiprocessing Bug with num_workers > 0 (#359)
Resolved HDF5 pickling issues that prevented proper multiprocessing on macOS/Windows systems. This fix enables users to utilize multiple workers for faster data loading during training and inference when caching is enabled.
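The underlying problem is a common one: open HDF5 file handles cannot be pickled, and on macOS/Windows the `spawn` start method pickles the dataset into each worker process. A standard workaround (a minimal sketch of the pattern, not sleap-nn's actual implementation; `CachedDataset` and the `cache.h5` path are hypothetical) is to drop the handle when pickling and reopen it lazily inside each worker:

```python
import pickle

class CachedDataset:
    """Sketch of the lazy-open pattern: the file handle is created on first
    access inside each worker process and is never pickled across processes."""

    def __init__(self, path):
        self.path = path
        self._file = None  # handle is opened lazily, per process

    def _ensure_open(self):
        # In real code this would be h5py.File(self.path, "r"); a plain
        # attribute stands in here to keep the sketch dependency-free.
        if self._file is None:
            self._file = open(self.path, "rb")
        return self._file

    def __getstate__(self):
        # Drop the unpicklable handle so spawn-based workers can copy us.
        state = self.__dict__.copy()
        state["_file"] = None
        return state

ds = CachedDataset("cache.h5")
ds._file = lambda: None  # simulate an open, unpicklable handle
clone = pickle.loads(pickle.dumps(ds))  # succeeds: the handle was dropped
```

Each worker then reopens the file on first access, so every process holds its own independent handle.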
Fixed ID Models (#345)
Fixed minor issues with TopDown and BottomUp ID models.
- The ID model dataset classes were re-computing the tracks from the labels file instead of reading them from the `classes` parameter in the head config.
- Fixed a shape mismatch issue in BottomUp ID models.
Added Track Cleaning Arguments (#349)
Added new parameters for better track management and cleanup:
- tracking_clean_instance_count: Target number of instances to clean after tracking
- tracking_clean_iou_threshold: IOU threshold for cleaning overlapping instances
- tracking_pre_cull_to_target: Pre-cull instances to the target count before tracking
- tracking_pre_cull_iou_threshold: IOU threshold for pre-culling
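To make the IOU-threshold parameters concrete, here is a minimal sketch of what IOU-based culling looks like conceptually (not sleap-nn's actual code; the function names and the greedy keep-best strategy are illustrative assumptions, with instances reduced to score-sorted bounding boxes):

```python
def box_iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def pre_cull(boxes, target_count, iou_threshold):
    """Greedily keep boxes (assumed sorted best-first), dropping any box
    whose IoU with an already-kept box exceeds the threshold, then trim
    down to the target instance count."""
    kept = []
    for box in boxes:
        if all(box_iou(box, k) <= iou_threshold for k in kept):
            kept.append(box)
    return kept[:target_count]

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
culled = pre_cull(boxes, target_count=2, iou_threshold=0.5)
```

Here the second box overlaps the first with IoU ≈ 0.68, so it is suppressed and the two well-separated instances survive.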
Updated Installation Documentation (#348, #351)
Added comprehensive `uv add` installation instructions for modern Python package management, replacing the `uv pip install` method. Also added a warning for Python 3.14 to prevent installation issues.
Inference workflow enhancements (#360, #361)
Enhanced bottom-up model inference with improved performance and stability. Fixed logger encoding issues on Windows and improved handling of integral refinement errors on the MPS accelerator.
Changelog
- Fix ID models by @gitttt-1234 in #345
- Fix changelog.md by @gitttt-1234 in #346
- Add warning for Python v3.14 by @gitttt-1234 in #348
- Add track cleaning args by @gitttt-1234 in #349
- Update uv add installation docs by @gitttt-1234 in #351
- Fix marimo usage docs by @gitttt-1234 in #352
- Fix target instance count parameter by @gitttt-1234 in #358
- Fix multiprocessing bug with num_workers>0 by @gitttt-1234 in #359
- Minor fixes to inference workflow by @gitttt-1234 in #360
- Update bottomup inference and add note on num_workers by @gitttt-1234 in #361
- Bump version to v0.0.3 by @gitttt-1234 in #362
SLEAP-NN v0.0.2
Summary
This release focuses on several bug fixes and improvements across the training, inference, and CLI components of sleap-nn. It includes bug fixes for model backbones and loaders, enhancements to the configuration and CLI experience, improved robustness in multi-GPU training, and new options for device selection and tracking. Documentation and installation guides have also been updated, along with internal refactors to improve code consistency.
Major changes
- Backbones & Models:
- Fixed bugs in Swin Transformer and UNet backbone filter computations.
- Corrected weight mapping for legacy TopDown ID models.
- Inference & Tracking:
- Removed unintended loading of pretrained weights during inference.
- Fixed inference with suggestion frames and improved stalling handling.
- Added option to run tracking on selected frames and video indices.
- Added thread-safe video access to prevent backend crashes.
- Added function to load metrics for better evaluation reporting.
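A common way to make shared video access thread-safe is to serialize reads through a lock. The sketch below illustrates the pattern only (the `ThreadSafeVideo` wrapper and its backend are hypothetical names, not sleap-nn's actual implementation; a dict stands in for a real video backend):

```python
import threading

class ThreadSafeVideo:
    """Serializes frame reads so multiple threads can share one backend
    handle without concurrent-access crashes."""

    def __init__(self, backend):
        self._backend = backend
        self._lock = threading.Lock()

    def get_frame(self, idx):
        # Only one thread touches the backend at a time.
        with self._lock:
            return self._backend[idx]

frames = {i: f"frame-{i}" for i in range(4)}
video = ThreadSafeVideo(frames)
results = []
threads = [threading.Thread(target=lambda i=i: results.append(video.get_frame(i)))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The lock trades some read parallelism for safety, which is usually the right call when the underlying decoder is not reentrant.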
- Training Pipeline:
- Fixed bugs in the training workflow's infinite dataloader handling.
- Improved seeding behavior for reproducible label splits in multi-GPU setups.
- Fixed experiment run name generation across multi-GPU workers.
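The reproducible-split idea boils down to deriving the same shuffle from the same seed in every worker. A dependency-free sketch of that logic (illustrative only; `split_labels` is a hypothetical name, not sleap-nn's API):

```python
import random

def split_labels(n_frames, val_fraction, seed):
    """Deterministic train/val split: every process that uses the same seed
    derives the identical shuffle, keeping multi-GPU workers in sync."""
    indices = list(range(n_frames))
    random.Random(seed).shuffle(indices)  # local RNG, no global state touched
    n_val = max(1, int(n_frames * val_fraction))
    return indices[n_val:], indices[:n_val]

train_a, val_a = split_labels(100, 0.1, seed=42)
train_b, val_b = split_labels(100, 0.1, seed=42)
```

Using a local `random.Random(seed)` instance rather than the global RNG also keeps the split independent of any other randomness in the training run.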
- CLI & Config:
- Introduced unified sleap-nn CLI with subcommands (train, track, eval) and more robust help injection.
- Removed deprecated CLI commands and cleaned up legacy imports.
- Added option to specify which devices to use, with auto-selection of GPUs based on available memory.
- Updated sample configs and sleap-io skeleton function usage.
- Minor parameter name and default updates for consistency with SLEAP.
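The memory-based auto-selection can be sketched as a simple ranking by free memory. In practice the per-device numbers would come from something like `torch.cuda.mem_get_info()`; here they are passed in directly to keep the sketch dependency-free, and `auto_select_gpus` is a hypothetical name, not sleap-nn's API:

```python
def auto_select_gpus(free_memory_by_device, n=1):
    """Pick the n device indices with the most free memory.

    free_memory_by_device: dict mapping device index -> free bytes
    (in practice queried per device, e.g. via torch.cuda.mem_get_info()).
    """
    ranked = sorted(free_memory_by_device,
                    key=free_memory_by_device.get, reverse=True)
    return ranked[:n]

# Device 1 has the most free memory, so it is selected first.
selected = auto_select_gpus({0: 2_000_000_000, 1: 7_500_000_000, 2: 500_000_000}, n=1)
```

Ranking by free memory rather than device index avoids piling jobs onto GPU 0 on shared machines.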
- Documentation & Installation:
- Fixed broken documentation pages and improved menu structure.
- Updated installation instructions with CUDA support for uv-based workflows.
What's Changed
- Fix Bug for SwinT Backbone Model by @7174Andy in #304
- More robust help injection in CLI by @tom21100227 in #303
- Remove loading pretrained weights during inference pipeline by @gitttt-1234 in #305
- Remove `back` import in lightning module by @gitttt-1234 in #312
- Fix compute filters in unet by @gitttt-1234 in #313
- Update CLI commands by @gitttt-1234 in #314
- Update sleap-io skeleton functions usage by @gitttt-1234 in #315
- Minor updates to config parameters by @gitttt-1234 in #316
- Minor bug fixes by @gitttt-1234 in #317
- Fix Inference on SuggestionFrames by @7174Andy in #318
- Add pck to voc metrics by @gitttt-1234 in #320
- Fix bugs in training pipeline by @gitttt-1234 in #322
- Add option to specify which devices to use by @gitttt-1234 in #327
- Fix bug in infinite data loader by @gitttt-1234 in #325
- Add thread-safe video access by @gitttt-1234 in #326
- Fix bugs in docs by @gitttt-1234 in #319
- Change zmq address to port arguments by @gitttt-1234 in #328
- Add option to run tracking on select frames by @gitttt-1234 in #329
- Fix seeding in training workflow by @gitttt-1234 in #330
- Fix inference stalling by @gitttt-1234 in #331
- Make wandb artifact logging optional by @gitttt-1234 in #332
- Auto-select GPUs by @gitttt-1234 in #333
- Add function to load metrics by @gitttt-1234 in #334
- Fix experiment run name in multi-gpu training by @gitttt-1234 in #336
- Add option to pass labels and video objects by @gitttt-1234 in #337
- Fix mapping for legacy topdown id models by @gitttt-1234 in #339
- Modify uv installation docs for cuda support by @gitttt-1234 in #340
- Update sample configs by @gitttt-1234 in #338
- Bump up sleap-nn version for v0.0.2 by @gitttt-1234 in #341
Full Changelog: v0.0.1...v0.0.2
SLEAP-NN v0.0.1
SLEAP-NN v0.0.1 - Initial Release
SLEAP-NN is a PyTorch-based deep learning framework for pose estimation, built on top of the SLEAP (Social LEAP Estimates Animal Poses) platform. This framework provides efficient training, inference, and evaluation tools for multi-animal pose estimation tasks.
Documentation: https://nn.sleap.ai/
Quick start
# Install with PyTorch CPU support
pip install sleap-nn[torch-cpu]
# Train a model
sleap-nn train --config-name config.yaml --config-dir configs/
# Run inference
sleap-nn track --model_paths model.ckpt --data_path video.mp4
# Evaluate predictions
sleap-nn eval --ground_truth_path gt.slp --predicted_path pred.slp
What's Changed
- Core Data Loader Implementation by @davidasamy in #4
- Add centroid finder block by @davidasamy in #7
- Add DataBlocks for rotation and scaling by @gitttt-1234 in #8
- Refactor datapipes by @talmo in #9
- Instance Cropping by @davidasamy in #13
- Add more Kornia augmentations by @alckasoc in #12
- Confidence Map Generation by @davidasamy in #11
- Peak finding by @alckasoc in #14
- UNet Implementation by @alckasoc in #15
- Top-down Centered-instance Pipeline by @alckasoc in #16
- Adding ruff to ci.yml by @alckasoc in #21
- Implement base Model and Head classes by @alckasoc in #17
- Add option to Filter to user instances by @gitttt-1234 in #20
- Add Evaluation Module by @gitttt-1234 in #22
- Add metadata to dictionary by @gitttt-1234 in #24
- Added SingleInstanceConfmapsPipeline by @alckasoc in #23
- modify keys by @gitttt-1234 in #31
- Small fix to find_global_peaks_rough by @alckasoc in #28
- Add trainer by @gitttt-1234 in #29
- PAF Grouping by @alckasoc in #33
- Add predictor class by @gitttt-1234 in #36
- Edge Maps by @alckasoc in #38
- Add ConvNext Backbone by @gitttt-1234 in #40
- Add VideoReader by @gitttt-1234 in #45
- Refactor model pipeline by @gitttt-1234 in #51
- Add BottomUp model pipeline by @gitttt-1234 in #52
- Remove Part-names and Edge dependency in config by @gitttt-1234 in #54
- Refactor model config by @gitttt-1234 in #61
- Refactor Augmentation config by @gitttt-1234 in #67
- Add minimal pretrained checkpoints for tests and fix PAF grouping interpolation by @gqcpm in #73
- Fix augmentation in TopdownConfmaps pipeline by @gitttt-1234 in #86
- Implement tracker module by @gitttt-1234 in #87
- Resume training and automatically compute crop size for TopDownConfmaps pipeline by @gitttt-1234 in #88
- LitData Refactor PR1: Get individual functions for data pipelines by @gitttt-1234 in #90
- Add function to load trained weights for backbone model by @gitttt-1234 in #95
- Remove IterDataPipe from Inference pipeline by @gitttt-1234 in #96
- Move ld.optimize to a subprocess by @gitttt-1234 in #100
- Auto compute max height and width from labels by @gitttt-1234 in #101
- Fix sizematcher in Inference data pipline by @gitttt-1234 in #102
- Convert Tensor images to PIL by @gitttt-1234 in #105
- Add threshold mode in config for learning rate scheduler by @gitttt-1234 in #106
- Add option to specify `.bin` file directory in config by @gitttt-1234 in #107
- Add StepLR scheduler by @gitttt-1234 in #109
- Add config to WandB by @gitttt-1234 in #113
- Add option to load trained weights for Head layers by @gitttt-1234 in #114
- Add option to load ckpts for backbone and head for running inference by @gitttt-1234 in #115
- Add option to reuse `.bin` files by @gitttt-1234 in #116
- Fix Normalization order in data pipelines by @gitttt-1234 in #118
- Add torch Dataset classes by @gitttt-1234 in #120
- Fix Pafs shape by @gitttt-1234 in #121
- Add caching to Torch Datasets pipeline by @gitttt-1234 in #123
- Remove `random_crop` augmentation by @gitttt-1234 in #124
- Generate np chunks for caching by @gitttt-1234 in #125
- Add `group` to wandb config by @gitttt-1234 in #126
- Fix crop size by @gitttt-1234 in #127
- Resize images before cropping in Centered-instance model by @gitttt-1234 in #129
- Check memory before caching by @gitttt-1234 in #130
- Replace `eval` with an explicit mapping dictionary by @gitttt-1234 in #131
- Add `CyclerDataLoader` to ensure minimum steps per epoch by @gitttt-1234 in #132
- Fix running inference on Bottom-up models with CUDA by @gitttt-1234 in #133
- Fix caching in datasets by @gitttt-1234 in #134
- Save `.slp` file after inference by @gitttt-1234 in #135
- Add option to reuse np chunks by @gitttt-1234 in #136
- Filter instances while generating indices by @gitttt-1234 in #138
- Fix config format while logging to wandb by @gitttt-1234 in #144
- Add multi-gpu support by @gitttt-1234 in #145
- Implement Omegaconfig PR1: basic functionality by @gqcpm in #97
- Move all params to config by @gitttt-1234 in #146
- Add output stride to backbone config by @gitttt-1234 in #147
- Change backbone config structure by @gitttt-1234 in #149
- Add an entry point train function by @gitttt-1234 in #150
- Add logger by @gqcpm in #148
- Fix preprocessing during inference by @gitttt-1234 in #156
- Add CLI for training by @gitttt-1234 in #155
- Specify custom anchor index in Inference pipeline by @gitttt-1234 in #157
- Fix lr scheduler config by @gitttt-1234 in #158
- Add max stride to Convnext and Swint backbones by @gitttt-1234 in #159
- Fix length in custom datasets by @gitttt-1234 in #160
- Add `scale` argument to custom datasets by @gitttt-1234 in #166
- Fix size matcher by @gitttt-1234 in #167
- Fix max instances in TopDown Inference by @gitttt-1234 in #168
- Move lightning modules by @gitttt-1234 in #169
- Save config with chunks by @gitttt-1234 in #174
- Add profiler and strategy parameters by @gitttt-1234 in #175
- Add docker img for remote dev by @gitttt-1234 in #176
- Save files only in rank: 0 by @gitttt-1234 in #177
- Minor changes to validate configs by @gitttt-1234 in #179
- Fix multi-gpu training by @gitttt-1234 in #184
- Cache only images by @gitttt-1234 in #186
- Add a new data pipeline strategy without caching by @gitttt-1234 in #187
- Minor fixes to lightning modules by @gitttt-1234 in #189
- Fix caching when imgs path already exist by @gitttt-1234 in #191
- Ensure caching of images to...