Welcome to the new and improved MOOSE (v3.0), where speed and efficiency aren't just buzzwords; they're a way of life.

3x Faster Than Before

Like a moose sprinting through the woods (okay, maybe not that fast), MOOSE 3.0 is built for speed. It's 3x faster than its older sibling, MOOSE 2.0, which was already no slouch. Blink and you'll miss it.

Memory: Light as a Feather, Strong as a Bull

Forget "Does it fit on my laptop?" The answer is YES. Thanks to Dask wizardry, all that data stays in memory. No disk writes, no fuss. Run total-body CT on that 'decent' laptop you bought three years ago and feel like you've upgraded.

Any OS, Anytime, Anywhere

Windows, Mac, Linux: we don't play favorites. Mac users, you're in luck: MOOSE runs natively on MPS, getting you GPU-like speeds without the NVIDIA guilt.

Trained to Perfection

This is our best model yet, trained on a whopping 1.7k datasets. More data, better results. Plus, you can run multiple models at the same time; you'll be slicing through images like a knife through warm butter. (Or tofu, if you prefer.)

The 'Herd' Mode

Got a powerhouse server just sitting around? Time to let the herd loose! Flip the Herd Mode switch and watch MOOSE multiply across your compute like... well, like a herd of moose! The more hardware you have, the faster your inference gets done. Scale up, speed up, and make every bit of your server earn its oats.

MOOSE 3.0 isn't just an upgrade; it's a lifestyle. A faster, leaner, and stronger lifestyle. Ready to join the herd?
MOOSE 3.0 offers a wide range of segmentation models catering to various clinical and preclinical needs. Here are the models currently available:
| Model Name | Intensities and Regions |
|---|---|
| clin_ct_body | 1: Legs, 2: Body, 3: Head, 4: Arms |
| clin_ct_cardiac | 1: heart_myocardium, 2: heart_atrium_left, 3: heart_atrium_right, 4: heart_ventricle_left, 5: heart_ventricle_right, 6: aorta, 7: iliac_artery_left, 8: iliac_artery_right, 9: iliac_vena_left, 10: iliac_vena_right, 11: inferior_vena_cava, 12: portal_splenic_vein, 13: pulmonary_artery |
| clin_ct_digestive | 1: colon, 2: duodenum, 3: esophagus, 4: small_bowel |
| clin_ct_lungs | 1: lung_upper_lobe_left, 2: lung_lower_lobe_left, 3: lung_upper_lobe_right, 4: lung_middle_lobe_right, 5: lung_lower_lobe_right |
| clin_ct_muscles | 1: autochthon_left, 2: autochthon_right, 3: gluteus_maximus_left, 4: gluteus_maximus_right, 5: gluteus_medius_left, 6: gluteus_medius_right, 7: gluteus_minimus_left, 8: gluteus_minimus_right, 9: iliopsoas_left, 10: iliopsoas_right |
| clin_ct_organs | 1: adrenal_gland_left, 2: adrenal_gland_right, 3: bladder, 4: brain, 5: gallbladder, 6: kidney_left, 7: kidney_right, 8: liver, 9: lung_lower_lobe_left, 10: lung_lower_lobe_right, 11: lung_middle_lobe_right, 12: lung_upper_lobe_left, 13: lung_upper_lobe_right, 14: pancreas, 15: spleen, 16: stomach, 17: thyroid_left, 18: thyroid_right, 19: trachea |
| clin_ct_peripheral_bones | 1: carpal_left, 2: carpal_right, 3: clavicle_left, 4: clavicle_right, 5: femur_left, 6: femur_right, 7: fibula_left, 8: fibula_right, 9: fingers_left, 10: fingers_right, 11: humerus_left, 12: humerus_right, 13: metacarpal_left, 14: metacarpal_right, 15: metatarsal_left, 16: metatarsal_right, 17: patella_left, 18: patella_right, 19: radius_left, 20: radius_right, 21: scapula_left, 22: scapula_right, 23: skull, 24: tarsal_left, 25: tarsal_right, 26: tibia_left, 27: tibia_right, 28: toes_left, 29: toes_right, 30: ulna_left, 31: ulna_right |
| clin_ct_ribs | 1: rib_left_1, 2: rib_left_2, 3: rib_left_3, 4: rib_left_4, 5: rib_left_5, 6: rib_left_6, 7: rib_left_7, 8: rib_left_8, 9: rib_left_9, 10: rib_left_10, 11: rib_left_11, 12: rib_left_12, 13: rib_left_13, 14: rib_right_1, 15: rib_right_2, 16: rib_right_3, 17: rib_right_4, 18: rib_right_5, 19: rib_right_6, 20: rib_right_7, 21: rib_right_8, 22: rib_right_9, 23: rib_right_10, 24: rib_right_11, 25: rib_right_12, 26: rib_right_13, 27: sternum |
| clin_ct_vertebrae | 1: vertebra_C1, 2: vertebra_C2, 3: vertebra_C3, 4: vertebra_C4, 5: vertebra_C5, 6: vertebra_C6, 7: vertebra_C7, 8: vertebra_T1, 9: vertebra_T2, 10: vertebra_T3, 11: vertebra_T4, 12: vertebra_T5, 13: vertebra_T6, 14: vertebra_T7, 15: vertebra_T8, 16: vertebra_T9, 17: vertebra_T10, 18: vertebra_T11, 19: vertebra_T12, 20: vertebra_L1, 21: vertebra_L2, 22: vertebra_L3, 23: vertebra_L4, 24: vertebra_L5, 25: vertebra_L6, 26: hip_left, 27: hip_right, 28: sacrum |
| clin_ct_body_composition | 1: skeletal_muscle, 2: subcutaneous_fat, 3: visceral_fat |
| Model Name | Intensities and Regions |
|---|---|
| preclin_ct_legs | 1: right_leg_muscle, 2: left_leg_muscle |
| preclin_mr_all | 1: Brain, 2: Liver, 3: Intestines, 4: Pancreas, 5: Thyroid, 6: Spleen, 7: Bladder, 8: OuterKidney, 9: InnerKidney, 10: HeartInside, 11: HeartOutside, 12: WAT Subcutaneous, 13: WAT Visceral, 14: BAT, 15: Muscle TF, 16: Muscle TB, 17: Muscle BB, 18: Muscle BF, 19: Aorta, 20: Lung, 21: Stomach |
Each model is designed to provide high-quality segmentation with MOOSE 3.0's optimized algorithms and data-centric AI principles.
- Ferrara, D., Pires, M., Gutschmayer, S., et al. (2025). Sharing a whole-/total-body [18F]FDG-PET/CT dataset with CT-derived segmentations: An ENHANCE.PET initiative. PREPRINT (Version 1). Research Square. https://doi.org/10.21203/rs.3.rs-7169062/v1
- Shiyam Sundar, L. K., Yu, J., Muzik, O., Kulterer, O., Fueger, B. J., Kifjak, D., Nakuz, T., Shin, H. M., Sima, A. K., Kitzmantl, D., Badawi, R. D., Nardo, L., Cherry, S. R., Spencer, B. A., Hacker, M., & Beyer, T. (2022). Fully-automated, semantic segmentation of whole-body 18F-FDG PET/CT images based on data-centric artificial intelligence. Journal of Nuclear Medicine. https://doi.org/10.2967/jnumed.122.264063
- Isensee, F., Jaeger, P. F., Kohl, S. A. A., et al. (2021). nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature Methods, 18, 203–211. https://doi.org/10.1038/s41592-020-01008-z
Before you dive into the incredible world of MOOSE 3.0, here are a few things you need to ensure for an optimal experience:
- Operating System: We've got you covered whether you're on Windows, Mac, or Linux. MOOSE 3.0 has been tested across these platforms to ensure seamless operation.
- Memory: MOOSE 3.0 has quite an appetite! Make sure you have at least 16GB of RAM for the smooth running of all tasks.
- GPU: If speed is your game, an NVIDIA GPU is the name! MOOSE 3.0 leverages GPU acceleration to deliver results fast. Don't worry if you don't have one, though; it will still work, just at a slower pace. (A quick check script follows this list.)
- Python: Ensure that you have Python 3.10 installed on your system. MOOSE 3.0 likes to keep up with the latest, after all!
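If you'd like that quick sanity check before running anything heavy, here is a minimal, optional sketch. It assumes PyTorch is already installed in your environment (e.g. pulled in as a moosez dependency); everything else comes from the Python standard library:

```python
import platform
import sys

import torch  # assumed to be installed already (e.g. via moosez's dependencies)

print(f"OS: {platform.system()}")                              # Windows / Darwin (Mac) / Linux
print(f"Python: {sys.version.split()[0]}")                     # ideally 3.10.x
print(f"CUDA available: {torch.cuda.is_available()}")          # True if an NVIDIA GPU can be used
print(f"MPS available: {torch.backends.mps.is_available()}")   # True on Apple Silicon with Metal support
```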
So, that's it! Make sure you're geared up with these specifications, and you're all set to explore everything MOOSE 3.0 has to offer.
Available on Windows, Linux, and MacOS, the installation is as simple as it gets. Follow our step-by-step guide below and set sail on your journey with MOOSE 3.0.
- First, create a Python environment. You can name it to your liking; for example, 'moose-env'.

  ```bash
  python3.10 -m venv moose-env
  ```

- Activate your newly created environment.

  ```bash
  source moose-env/bin/activate  # for Linux
  ```

- Install MOOSE 3.0.

  ```bash
  pip install moosez
  ```
Voila! You're all set to explore with MOOSE 3.0.
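To confirm everything is wired up, you can print the built-in help screen (the same -h flag described in the usage section further down):

```bash
moosez -h
```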
Yes, it works. But you'll need to follow these steps carefully. Grab a coffee; this may take a few minutes.

- Create and activate a virtual environment (we recommend Python 3.10 for stability).

  ```bash
  python3.10 -m venv moose-env
  source moose-env/bin/activate
  ```

- Install MOOSE and the MPS-compatible PyTorch fork. You'll need a special PyTorch build tailored for Apple's Metal backend (MPS), which doesn't use CUDA.

  ```bash
  pip install moosez
  pip uninstall torch  # ensures a clean install; avoids conflicts with the moosez-installed version
  git clone https://github.com/LalithShiyam/pytorch-mps.git
  cd pytorch-mps
  ```

- Fix your CMake version (IMPORTANT). Do not use CMake 4.x; it will break the build due to compatibility issues with protobuf. Check your version:

  ```bash
  cmake --version
  ```

  If it's 4.0 or higher, downgrade to a compatible version (e.g., 3.29.2):

  ```bash
  pip uninstall cmake -y
  pip install cmake==3.29.2
  ```

- Build the custom PyTorch fork for MPS. This will build PyTorch without CUDA (which Apple Silicon doesn't support anyway):

  ```bash
  USE_CUDA=0 python setup.py develop --verbose 2>&1 | tee build.log
  ```

  This may take some time. If it completes without errors, you're good to go.

- Patch nnUNetTrainer.py (one-time fix). Due to differences in PyTorch exports, nnUNet may crash with:

  ```
  ImportError: cannot import name 'GradScaler' from 'torch'
  ```

  To fix it:

  - Open the following file inside your moose-env folder: ~/moose-env/lib/python3.10/site-packages/nnunetv2/training/nnUNetTrainer/nnUNetTrainer.py
  - Replace line 43:

    ```python
    from torch import GradScaler
    ```

    with:

    ```python
    from torch.cuda.amp import GradScaler
    ```

That's it!
Now you're ready to use MOOSE on Apple Silicon with MPS acceleration. If anything crashes, blame the silicon gods... or just open an issue. We're here to help.
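One more convenience: if you ever need to reapply that GradScaler patch, you can do it from the shell. A minimal sketch, assuming the file path shown above and macOS/BSD sed syntax (back up the file before running):

```bash
# Swap the GradScaler import in nnUNetTrainer.py (macOS/BSD sed uses -i '' for in-place edits)
sed -i '' 's/from torch import GradScaler/from torch.cuda.amp import GradScaler/' \
  ~/moose-env/lib/python3.10/site-packages/nnunetv2/training/nnUNetTrainer/nnUNetTrainer.py
```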
- Create a Python environment. You could name it 'moose-env', or as you wish.

  ```powershell
  python3.10 -m venv moose-env
  ```

- Activate your newly created environment.

  ```powershell
  .\moose-env\Scripts\activate
  ```

- Go to the PyTorch website and install the appropriate PyTorch version for your system. DO NOT SKIP THIS!

- Finally, install MOOSE 3.0.

  ```powershell
  pip install moosez
  ```
There you have it! You're ready to venture into the world of 3D medical image segmentation with MOOSE 3.0.
Happy exploring!
Getting started with MOOSE 3.0 is as easy as slicing through butter. Use the command-line tool to process multiple segmentation models in sequence or in parallel, making your workflow a breeze.

You can now run a single model or several models in sequence with a single command. Just provide the path to your subject images and list the segmentation models you wish to apply:
```bash
# For single model inference
moosez -d <path_to_image_dir> -m <model_name>

# For multiple model inference
moosez -d <path_to_image_dir> \
       -m <model_name1> \
          <model_name2> \
          <model_name3>
```

For instance, to run clinical CT organ segmentation on a directory of images, you can use the following command:

```bash
moosez -d <path_to_image_dir> -m clin_ct_organs
```

Likewise, to run multiple models, e.g. organs, ribs, and vertebrae, you can use the following command:

```bash
moosez -d <path_to_image_dir> \
       -m clin_ct_organs \
          clin_ct_ribs \
          clin_ct_vertebrae
```

MOOSE 3.0 will handle each model one after the other; no fuss, no hassle.
Got a powerful server or HPC? Let the herd roam! Use Herd Mode to run multiple MOOSE instances in parallel. Just add the -herd flag with the number of instances you wish to run simultaneously:
```bash
moosez -d <path_to_image_dir> \
       -m clin_ct_organs \
          clin_ct_ribs \
          clin_ct_vertebrae \
       -herd 2
```

MOOSE will run two instances at the same time, utilizing your compute power like a true multitasking pro.
And that's it! MOOSE 3.0 lets you process your images with ease and speed.

We finally got permission from our partners to open-source our training data, so you can download it (~250 GB) directly via moosez! Please cite our preprint if you use the dataset; we went through a lot of pain to make this happen.
```bash
moosez -dtd -dd <path_to_dir_where_you_want_to_store_the_data>
```

Need assistance along the way? Don't worry, we've got you covered. Simply type:

```bash
moosez -h
```

This command will provide you with all the help and information about the available models and the regions they segment.
MOOSE 3.0 isn't just a command-line powerhouse; it's also a flexible library for Python projects. Here's how to make the most of it:
First, import the moose function from the moosez package in your Python script:

```python
from moosez import moose
```

The moose function is versatile and accepts various input types. It takes four main arguments:

- input: The data to process, which can be:
  - A path to an input file or directory (NIfTI, either .nii or .nii.gz).
  - A tuple containing a NumPy array and its spacing, e.g. (numpy_array, (spacing_x, spacing_y, spacing_z)).
  - A SimpleITK image object.
- model_names: A single model name or a list of model names for segmentation.
- output_dir: The directory where the results will be saved.
- accelerator: The type of accelerator to use ("cpu", "cuda", or "mps" for Mac).
Here are some examples to illustrate different ways to use the moose function:
- Using a file path and multiple models:

  ```python
  moose('/path/to/input/file', ['clin_ct_organs', 'clin_ct_ribs'], '/path/to/save/output', 'cuda')
  ```

- Using a NumPy array with spacing:

  ```python
  moose((numpy_array, (1.5, 1.5, 1.5)), 'clin_ct_organs', '/path/to/save/output', 'cuda')
  ```

- Using a SimpleITK image:

  ```python
  moose(simple_itk_image, 'clin_ct_organs', '/path/to/save/output', 'cuda')
  ```
To use the moose() function, ensure that you wrap the function call within a main guard to prevent recursive process creation errors:
```python
from moosez import moose

if __name__ == '__main__':
    input_file = '/path/to/input/file'
    models = ['clin_ct_organs', 'clin_ct_ribs']
    output_directory = '/path/to/save/output'
    accelerator = 'cuda'
    moose(input_file, models, output_directory, accelerator)
```

That's it! With these flexible inputs, you can use MOOSE 3.0 to fit your workflow perfectly, whether you're processing a single image, a stack of files, or leveraging different data formats.
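If you have several subject folders, the same four-argument call works inside a simple loop. A minimal sketch, assuming a hypothetical parent directory with one sub-folder per subject; the folder names and output location below are placeholders:

```python
from pathlib import Path

from moosez import moose

if __name__ == '__main__':
    parent_dir = Path('/path/to/subjects')  # hypothetical folder containing one sub-folder per subject
    models = ['clin_ct_organs']

    for subject_dir in sorted(p for p in parent_dir.iterdir() if p.is_dir()):
        output_dir = subject_dir / 'moosez_output'  # hypothetical per-subject output folder
        output_dir.mkdir(exist_ok=True)
        moose(str(subject_dir), models, str(output_dir), 'cuda')
```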
Happy segmenting with MOOSE 3.0!
Using MOOSE 3.0 optimally requires your data to be structured according to specific conventions. MOOSE 3.0 supports both DICOM and NIfTI formats. For DICOM files, MOOSE infers the modality from the DICOM tags and checks whether the given modality is suitable for the chosen segmentation model. For NIfTI files, however, users need to ensure that the files are named with the correct modality as a prefix.
Please structure your dataset as follows:
```
MOOSEv2_data/
├── S1
│   ├── AC-CT
│   │   ├── WBACCTiDose2_2001_CT001.dcm
│   │   ├── WBACCTiDose2_2001_CT002.dcm
│   │   ├── ...
│   │   └── WBACCTiDose2_2001_CT532.dcm
│   └── AC-PT
│       ├── DetailWB_CTACWBPT001_PT001.dcm
│       ├── DetailWB_CTACWBPT001_PT002.dcm
│       ├── ...
│       └── DetailWB_CTACWBPT001_PT532.dcm
├── S2
│   └── CT_S2.nii
├── S3
│   └── CT_S3.nii
├── S4
│   └── S4_ULD_FDG_60m_Dynamic_Patlak_HeadNeckThoAbd_20211025075852_2.nii
└── S5
    └── CT_S5.nii
```
Note: If the necessary naming conventions are not followed, MOOSE 3.0 will skip the subjects.
When using NIfTI files, you should name each file with the appropriate modality as a prefix.
For instance, if you have chosen clin_ct_organs as the model_name, the CT scan for subject 'S2' in NIfTI format should carry the modality tag 'CT_' at the start of the file name, e.g. CT_S2.nii. In the directory shown above, every subject will be processed by moosez except S4.
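To catch naming problems before a long run, you could scan your data directory with a small script first. A minimal sketch, assuming the layout above and a CT-based model; the helper name and the 'CT_' prefix check are illustrative, not part of the moosez API:

```python
from pathlib import Path

def find_unprefixed_subjects(data_dir: str, prefix: str = 'CT_') -> list[str]:
    """List subject folders whose NIfTI files lack the expected modality prefix (illustrative helper)."""
    flagged = []
    for subject in sorted(Path(data_dir).iterdir()):
        if not subject.is_dir():
            continue
        nifti_files = list(subject.glob('*.nii')) + list(subject.glob('*.nii.gz'))
        dicom_files = list(subject.rglob('*.dcm'))  # DICOM subjects are identified via their tags instead
        if nifti_files and not dicom_files and not any(f.name.startswith(prefix) for f in nifti_files):
            flagged.append(subject.name)
    return flagged

if __name__ == '__main__':
    print(find_unprefixed_subjects('MOOSEv2_data'))  # for the example tree above, this reports ['S4']
```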
Remember: Adhering to these file naming and directory structure conventions ensures smooth and efficient processing with MOOSE 3.0. Happy segmenting!
All of our Python packages here at QIMP carry a special signature: a distinctive 'Z' at the end of their names. The 'Z' is more than just a letter to us; it's a symbol of our forward-thinking approach and commitment to continuous innovation.

Our MOOSE package, for example, is named 'moosez', pronounced "moose-see". So, why 'Z'?
Well, in the world of mathematics and science, 'Z' often represents the unknown, the variable that's yet to be discovered, or the final destination in a series. We at QIMP believe in always pushing boundaries, venturing into uncharted territories, and staying on the cutting edge of technology. The 'Z' embodies this philosophy. It represents our constant quest to uncover what lies beyond the known, to explore the undiscovered, and to bring you the future of medical imaging.
Each time you see a 'Z' in one of our package names, be reminded of the spirit of exploration and discovery that drives our work. With QIMP, you're not just installing a package; you're joining us on a journey to the frontiers of medical image processing. Here's to exploring the 'Z' dimension together!