Two Heads Are Enough: DualU-Net, a Fast and Efficient Architecture for Cell Classification and Segmentation
DualU-Net is a multi-task deep learning model for nuclei segmentation and classification. It employs a dual-decoder design that predicts both pixel-wise segmentation maps and Gaussian-based centroid density maps, enabling fast and accurate instance segmentation in histopathological images.
Read the MIDL25 paper (Oral Presentation)
Note: This repository is under active development. Please anticipate frequent changes.
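The Gaussian-based centroid density maps mentioned above can be illustrated with a short sketch: one isotropic Gaussian bump is rendered per nucleus centroid, giving the detection head a smooth regression target. This is our own minimal illustration, not code from this repository, and the `sigma` value is an arbitrary assumption:

```python
import numpy as np

def centroid_density_map(centroids, shape, sigma=3.0):
    """Render a Gaussian-based centroid density map: one isotropic
    Gaussian bump per nucleus centroid (a common detection target).
    Note: illustrative only; not the repository's implementation."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    density = np.zeros(shape, dtype=np.float32)
    for cy, cx in centroids:
        density += np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return density

dmap = centroid_density_map([(10, 12), (30, 40)], shape=(64, 64))
print(dmap.shape, round(float(dmap.max()), 2))  # prints: (64, 64) 1.0
```

Instance segmentation then combines the pixel-wise class maps with peaks of this density map, which is what makes the two-decoder design fast at inference time.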
DualU-Net has been evaluated on several public datasets:
- PanNuke: A dataset for tissue and nuclei segmentation in histopathology. Link
- CoNSeP: A colorectal cancer nuclei segmentation dataset. Link
| Dataset | Encoder | Detection F1 | Classification F1 | Dice | Checkpoint Link |
|---|---|---|---|---|---|
| CoNSeP | ResNeXt50_32x4d | 0.72 | 0.56 | 0.76 | Download |
| CoNSeP | ConvNeXt_base | 0.72 | 0.54 | 0.80 | Download |
| PanNuke | ResNeXt50_32x4d | 0.80 | 0.54 | 0.77 | Download |
| PanNuke | ConvNeXt_base | 0.80 | 0.55 | 0.74 | Download |
The provided PanNuke checkpoints are trained on one fold of the dataset, validated on another, and tested on the third. The naming convention of the checkpoints follows this pattern:
pannuke-combined-{encoder}-{train_fold}{val_fold}.pth
- `{encoder}`: the encoder used in the model (e.g., `convnext`).
- `{train_fold}`: the fold used for training.
- `{val_fold}`: the fold used for validation.
For example, the checkpoint:
pannuke-combined-convnext-23.pth
- Trained on fold 2
- Validated on fold 3
- Tested on fold 1
- Uses ConvNeXt as the encoder
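The naming scheme above can be parsed mechanically. The helper below is our own illustration (the function name and regex are not part of this repository); it also derives the test fold, since PanNuke has exactly three folds:

```python
import re

def parse_pannuke_checkpoint(filename: str) -> dict:
    """Parse a checkpoint name of the form
    pannuke-combined-{encoder}-{train_fold}{val_fold}.pth
    (illustrative helper, not part of the DualU-Net codebase)."""
    m = re.fullmatch(r"pannuke-combined-(\w+)-(\d)(\d)\.pth", filename)
    if m is None:
        raise ValueError(f"unrecognised checkpoint name: {filename}")
    encoder = m.group(1)
    train_fold, val_fold = int(m.group(2)), int(m.group(3))
    # PanNuke has folds 1-3; the remaining fold is the test fold.
    test_fold = ({1, 2, 3} - {train_fold, val_fold}).pop()
    return {"encoder": encoder, "train_fold": train_fold,
            "val_fold": val_fold, "test_fold": test_fold}

print(parse_pannuke_checkpoint("pannuke-combined-convnext-23.pth"))
```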
For CoNSeP, the checkpoint follows a simpler naming convention:
consep-combined-{encoder}.pth
- `{encoder}`: the encoder used in the model (e.g., `convnext`).
For example:
consep-combined-convnext.pth
- Trained on the CoNSeP training set
- Uses a combined training approach
- Uses ConvNeXt as the encoder
1. Clone this repository:

   ```shell
   git clone https://github.com/davidanglada/DualU-Net.git
   ```

2. Create a virtual environment and activate it:

   ```shell
   python -m venv dualunet-env
   source dualunet-env/bin/activate
   ```

3. Install the required dependencies:

   ```shell
   pip install -r requirements.txt
   ```
DualU-Net/
├── dual_unet/ # Main module containing core functionalities
│ ├── datasets/ # Dataset building from COCO, transforms and augmentation
│   ├── eval/             # Detection and segmentation evaluation functions
│ ├── models/ # Contains model architectures and related utilities
│ ├── utils/ # Contains utility functions
│ ├── __init__.py # Initialization file for the dual_unet module
│ └── engine.py # Train and evaluation functions
├── configs/ # Configuration files
│ ├── train_config.yaml # Configuration file for training
│ ├── eval_config.yaml # Configuration file for evaluation
├── eval.py # Script for evaluating the model
├── train.py # Script for training the model
├── requirements.txt # List of required Python packages
└── README.md # Project documentation
Below are the basic commands to get you started with DualU-Net.
Train the model:

```shell
python train.py --config configs/train_config.yaml
```

Evaluate a trained model:

```shell
python eval.py --config configs/eval_config.yaml
```

Run inference:

```shell
python inference.py --config configs/inference_config.yaml
```

If you find this work helpful in your research, please consider citing us:
@inproceedings{
anglada-rotger2025two,
title={Two Heads Are Enough: DualU-Net, a Fast and Efficient Architecture for Nuclei Instance Segmentation},
author={David Anglada-Rotger and Berta Jansat and Ferran Marques and Montse Pard{\`a}s},
booktitle={Medical Imaging with Deep Learning},
year={2025},
url={https://openreview.net/forum?id=lK0CklgxQd}
}

This project (all code and non-code assets) is released under the
Creative Commons Attribution – NonCommercial 4.0 International License.