Test-Time Domain Generalization via Universe Learning: A Multi-Graph Matching Approach for Medical Image Segmentation
Xingguo Lv, Xingbo Dong, Liwen Wang, Jiewen Yang, Lei Zhao, Bin Pu, Zhe Jin, Xuejun Li
Official PyTorch implementation of the CVPR 2025 paper (reviewer scores: 4/4/4/4/4)
- 🚀 State-of-the-art performance on Retinal Fundus Segmentation and Polyp Segmentation Datasets
- ⚡ Efficient implementation built on PyTorch and Mask R-CNN
- 🔧 Easy-to-use training/evaluation scripts
- 📦 Pre-trained models available
- Python ≥ 3.6
- PyTorch ≥ 1.5 and a torchvision version that matches the PyTorch installation
- Detectron2 == 0.5
```bash
conda create -n ttdg python=3.7 -y
conda activate ttdg
pip install -r requirements.txt
```
Follow the Detectron2 INSTALL.md to install Detectron2.
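After installation, a quick import check (a minimal sketch) confirms that the core dependencies resolve as expected:

```python
# Sanity check: verify that the core dependencies import and report their versions.
import torch
import torchvision
import detectron2

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("torchvision:", torchvision.__version__)
print("detectron2:", detectron2.__version__)  # should report 0.5
```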
- The preprocessed data can be downloaded from Google Drive.
- Organize the annotations/masks in the COCO annotation format, for example (a quick loading check follows the example):
```json
{
  "images": [
    {"id": 1, "width": 640, "height": 480, "file_name": "000001.jpg"}
  ],
  "annotations": [
    {
      "id": 1,
      "image_id": 1,
      "category_id": 1,
      "segmentation": [[100, 100, 150, 100, 150, 150, 100, 150]],
      "area": 2500,
      "bbox": [100, 100, 50, 50],
      "iscrowd": 0
    }
  ],
  "categories": [
    {"id": 1, "name": "person", "supercategory": "human"}
  ]
}
```
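As a quick sanity check that converted annotations parse correctly, they can be loaded and decoded with the standard pycocotools COCO API. A minimal sketch; the JSON path below is hypothetical, so point it at one of your generated files:

```python
from pycocotools.coco import COCO

# Hypothetical path -- use any of your generated annotation files.
coco = COCO("datasets/Fundus/Drishti_GS_train.json")

img_ids = coco.getImgIds()
print("images:", len(img_ids))
print("categories:", coco.loadCats(coco.getCatIds()))

# Decode the first image's annotations into a binary mask.
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_ids[:1]))
mask = coco.annToMask(anns[0])  # HxW uint8 array, 1 inside the polygon
print("mask shape:", mask.shape, "| foreground pixels:", int(mask.sum()))
```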
- Organize the dataset directory as follows (a dataset registration sketch follows the tree):
```text
datasets/
└── Fundus/
    ├── Drishti_GS/
    │   ├── test/
    │   │   └── image/
    │   └── train/
    │       └── image/
    ├── ORIGA/
    ├── RIM_ONE_r3/
    ├── REFUGE/
    ├── REFUGE_Valid/
    │   └── image/
    ├── Drishti_GS_test.json
    ├── Drishti_GS_train.json
    ├── ORIGA_test.json
    ├── ORIGA_train.json
    └── ...
```
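Since the codebase builds on Detectron2, COCO-format datasets laid out this way can be registered with Detectron2's stock `register_coco_instances` helper. A minimal sketch with hypothetical dataset names; the repo's own registration code may differ:

```python
from detectron2.data.datasets import register_coco_instances

# Hypothetical dataset names; match them to DATASETS.TRAIN / DATASETS.TEST in your config.
for split in ["train", "test"]:
    register_coco_instances(
        f"fundus_drishti_gs_{split}",                  # dataset name
        {},                                            # extra metadata (none needed here)
        f"datasets/Fundus/Drishti_GS_{split}.json",    # COCO-format annotation file
        f"datasets/Fundus/Drishti_GS/{split}/image",   # image root
    )
```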
Evaluate a trained checkpoint:

```bash
python train_net.py --eval-only --config configs/test_segment.yaml \
    MODEL.WEIGHTS <your weight>.pth
```
Pre-trained models can be downloaded from Google Drive.
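For example, assuming a downloaded checkpoint is saved as `pretrained/model_final.pth` (a hypothetical path; adjust to your download location):

```bash
python train_net.py --eval-only --config configs/test_segment.yaml \
    MODEL.WEIGHTS pretrained/model_final.pth
```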
Train on the source domain:

```bash
python train_net.py \
    --num-gpus 1 \
    --config configs/seg_res50fpn_source.yaml \
    OUTPUT_DIR output/<name>
```
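Detectron2's default trainer writes TensorBoard event files into `OUTPUT_DIR`, so training can usually be monitored with:

```bash
tensorboard --logdir output/<name>
```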
If you use this work in your research or wish to refer to the results published in the paper, please use the following BibTeX entry.
```bibtex
@article{lv2025test,
  title={Test-Time Domain Generalization via Universe Learning: A Multi-Graph Matching Approach for Medical Image Segmentation},
  author={Lv, Xingguo and Dong, Xingbo and Wang, Liwen and Yang, Jiewen and Zhao, Lei and Pu, Bin and Jin, Zhe and Li, Xuejun},
  journal={arXiv preprint arXiv:2503.13012},
  year={2025}
}
```
We gratefully acknowledge the following open-source projects that inspired or contributed to our implementation: