
Fit Pixels, Get Labels: Meta-learned Implicit Networks for Image Segmentation (MetaSeg)

Repository for Fit Pixels, Get Labels: Meta-Learned Implicit Networks for Medical Image Segmentation (MICCAI'25 Best Paper Award recipient, oral presentation).

Project page | Paper | OpenReview | Demo (coming soon!)

[Figure: MetaSeg cover figure]

Installation instructions.

Please install the following packages to run MetaSeg. You can find them in requirements.txt. Feel free to create a separate venv/conda environment and then install the required Python packages.

pip install -r requirements.txt

Our codebase (MetaSeg) also depends on the Alpine INR library. Please install that as well.

git clone https://github.com/kushalvyas/alpine/
cd alpine
pip install .
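
To sanity-check the installation (assuming the package installs as a top-level alpine module):

```python
import alpine  # should import cleanly after `pip install .`
```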

Instructions on how to prepare the dataset.

We provide the train/val/test splits in config/oasis_splits.json. Please prepend the correct directory paths for your system configuration in the JSON file; a sketch of this step follows.
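
For example, a minimal sketch of prepending a local data root to the split entries (the JSON keys and the relative-path layout here are assumptions; check config/oasis_splits.json for the actual structure):

```python
import json
import os

DATA_ROOT = "/path/to/oasis-data"  # hypothetical local dataset root

with open("config/oasis_splits.json") as f:
    splits = json.load(f)

# Assumed structure: {"train": [...], "val": [...], "test": [...]},
# with each entry a file path relative to the dataset root.
for split in ("train", "val", "test"):
    splits[split] = [os.path.join(DATA_ROOT, p) for p in splits[split]]

with open("config/oasis_splits.json", "w") as f:
    json.dump(splits, f, indent=2)
```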

Instructions to run code // FIT pixels, GET labels:

[Figure: runtime]

Each experiment is its own Jupyter notebook.

  1. 2D segmentation (5 class): run metaseg_2d_5classes.ipynb
  2. 2D segmentation (24 class): run metaseg_2d_24classes.ipynb
  3. 3D segmentation (5 class): run metaseg_3d_5classes.ipynb (we will update the repository with the 3D code soon!)

For visualization: we also provide a script to visualize the principal components of learned MetaSeg features; see metaseg_vis_pca.ipynb.
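
As a rough illustration of what that notebook does, here is a minimal PCA sketch (the feature array shape and names are assumptions; the notebook is the reference implementation):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# Assumed: `features` is an (H, W, C) array of per-pixel features
# extracted from a fitted MetaSeg INR; random data stands in here.
features = np.random.randn(192, 192, 64)

h, w, c = features.shape
pca = PCA(n_components=3)
rgb = pca.fit_transform(features.reshape(-1, c)).reshape(h, w, 3)

# Rescale each component to [0, 1] so the top-3 components display as RGB.
rgb -= rgb.min(axis=(0, 1))
rgb /= rgb.max(axis=(0, 1))

plt.imshow(rgb)
plt.axis("off")
plt.title("Top-3 principal components of MetaSeg features")
plt.show()
```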

Dataset:

We use the neurite OASIS-MRI dataset, which is derived from the larger OASIS-MRI dataset. For 2D segmentation, images are kept at their full size of 192 x 192, while for 3D we downsample the volumes to 80 x 80 x 100 for computational feasibility.
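
A minimal sketch of the 3D downsampling step, assuming volumes are loaded as NumPy arrays (the input shape below is illustrative):

```python
import numpy as np
from scipy.ndimage import zoom

def downsample_volume(vol: np.ndarray, target=(80, 80, 100)) -> np.ndarray:
    """Resize a 3D volume onto the target grid with linear interpolation."""
    factors = [t / s for t, s in zip(target, vol.shape)]
    return zoom(vol, factors, order=1)  # order=1 -> linear interpolation

vol = np.random.rand(160, 160, 200)  # illustrative full-resolution volume
small = downsample_volume(vol)
print(small.shape)  # (80, 80, 100)
```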

Baselines:

We use the U-Net proposed by Buda et al. from PyTorch Hub for 2D MRI segmentation, and the SegResNet proposed by Myronenko et al. via the MONAI package. Additionally, for 3D INR segmentation, we also compare against the NISF baseline proposed by Stolt-Ansó et al. Please refer to the respective codebases to run the baselines.
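
For convenience, here is a sketch of how the two network baselines are typically instantiated (the channel counts are illustrative, not our exact training configuration; NISF has its own codebase):

```python
import torch
from monai.networks.nets import SegResNet

# 2D U-Net (Buda et al.) from PyTorch Hub; custom channel counts
# require pretrained=False.
unet2d = torch.hub.load(
    "mateuszbuda/brain-segmentation-pytorch", "unet",
    in_channels=1, out_channels=5,  # e.g., 5-class 2D segmentation
    init_features=32, pretrained=False,
)

# 3D SegResNet (Myronenko et al.) via MONAI.
segresnet3d = SegResNet(
    spatial_dims=3, in_channels=1, out_channels=5, init_filters=8,
)
```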

Citation

If you find our code or work useful, please consider citing us!

@InProceedings{VyaKus_Fit_MICCAI2025,
    author = {Vyas, Kushal and Veeraraghavan, Ashok and Balakrishnan, Guha},
    title = {{Fit Pixels, Get Labels: Meta-Learned Implicit Networks for Image Segmentation}},
    booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2025},
    year = {2025},
    publisher = {Springer Nature Switzerland},
    volume = {LNCS 15962},
    month = {September},
    pages = {194--203}
}
