Commit 0a78858: Merge branch 'dev'

2 parents: 9310b9a + 2eb6ef9

229 files changed: +40221 additions, -2392 deletions. Only a subset of the changed files is shown below.


.gitignore

Lines changed: 12 additions & 5 deletions
@@ -1,5 +1,12 @@
-PyMIC_data/ACDC/
-PyMIC_data/Fetal_HC
-PyMIC_data/JSRT
-PyMIC_data/MyoPS
-PyMIC_data/Promise12
+PyMIC_data/
+*/*/model/
+*/*/result/
+seg_semi_sup/BraTS19_WT/
+seg_semi_sup/FLARE/
+seg_semi_sup/AtriaSeg_dev/
+*.pptx
+*.xlsx
+*.zip
+*.pyc
+*~*
+*_dev.sh

README.md

Lines changed: 46 additions & 19 deletions
@@ -1,5 +1,12 @@
 # PyMIC_examples
-[PyMIC][PyMIC_link] is a PyTorch-based toolkit for medical image computing with annotation-efficient deep learning. Here we provide a set of examples to show how it can be used for image classification and segmentation tasks. For annotation efficient learning, we show examples of Semi-Supervised Learning (SSL), Weakly Supervised Learning (WSL) and Noisy Label Learning (NLL), respectively. For beginners, you can follow the examples by just editting the configuration files for model training, testing and evaluation. For advanced users, you can easily develop your own modules, such as customized networks and loss functions.
+[PyMIC][PyMIC_link] is a PyTorch-based toolkit for medical image computing with annotation-efficient deep learning. Here we provide a set of examples to show how it can be used for image classification and segmentation tasks. For annotation efficient learning, we show examples of Semi-Supervised Learning (SSL), Self-Supervised Learning (Self-SL), Weakly Supervised Learning (WSL) and Noisy Label Learning (NLL), respectively. For beginners, you can follow the examples by just editting the configuration files for model training, testing and evaluation. For advanced users, you can easily develop your own modules, such as customized networks and loss functions.
+
+## News
+2024.01 Examples of Self-Supervised Learning have been added.
+
+2024.01 More 2D segmentation networks including SwinUNet and TransUNet have been added.
+
+2023.12 Semi-Supervised Method MCNet has been added to [seg_semi_sup/ACDC][ssl_acdc_link].
 
 ## Install PyMIC
 The released version of PyMIC (v0.4.0) is required for these examples, and it can be installed by:
@@ -24,30 +31,50 @@ Currently we provide the following examples in this repository:
 |Catetory|Example|Remarks|
 |---|---|---|
 |Classification|[AntBee][AntBee_link]|Finetuning a resnet18 for Ant and Bee classification|
-|Classification|[CHNCXR][CHNCXR_link]|Finetuning restnet18 and vgg16 for normal/tuberculosis X-ray image classification|
-|Fully supervised segmentation|[JSRT][JSRT_link]|Using a 2D UNet for lung segmentation from chest X-ray images|
-|Fully supervised segmentation|[JSRT2][JSRT2_link]|Using a customized network and loss function for the JSRT dataset|
-|Fully supervised segmentation|[Fetal_HC][fetal_hc_link]|Using a 2D UNet for fetal head segmentation from 2D ultrasound images|
-|Fully supervised segmentation|[Prostate][prostate_link]|Using a 3D UNet for prostate segmentation from 3D MRI|
-|Semi-supervised segmentation|[seg_ssl/ACDC][ssl_acdc_link]|Semi-supervised methods for heart structure segmentation using 2D CNNs|
-|Semi-supervised segmentation|[seg_ssl/AtriaSeg][ssl_atrial_link]|Semi-supervised methods for left atrial segmentation using 3D CNNs|
-|Weakly-supervised segmentation|[seg_wsl/ACDC][wsl_acdc_link]|Segmentation of heart structure with scrible annotations|
-|Noisy label learning|[seg_nll/JSRT][nll_jsrt_link]|Comparing different NLL methods for learning from noisy labels|
+| |[CHNCXR][CHNCXR_link]|Finetuning restnet18 and vgg16 for normal/tuberculosis X-ray image classification|
+|Fully supervised segmentation|[JSRT][JSRT_link]|Using five 2D Networks for lung segmentation from chest X-ray images|
+| |[JSRT2][JSRT2_link]|Using a customized network and loss function for the JSRT dataset|
+| |[Fetal_HC][fetal_hc_link]|Using a 2D UNet for fetal head segmentation from 2D ultrasound images|
+| |[Prostate][prostate_link]|Using a 3D UNet for prostate segmentation from 3D MRI|
+|Semi-supervised segmentation|[seg_semi_sup/ACDC][ssl_acdc_link]|Semi-supervised methods for heart structure segmentation using 2D CNNs|
+| |[seg_semi_sup/AtriaSeg][ssl_atrial_link]|Semi-supervised methods for left atrial segmentation using 3D CNNs|
+|Weakly-supervised segmentation|[seg_weak_sup/ACDC][wsl_acdc_link]|Segmentation of heart structure with scrible annotations|
+|Noisy label learning|[seg_noisy_label/JSRT][nll_jsrt_link]|Comparing different NLL methods for learning from noisy labels|
+|Self-Supervised learning|[seg_self_sup/lung][self_lung_link]|Self-Supervised learning methods for pretraining a segmentation model|
 
 [PyMIC_link]: https://github.com/HiLab-git/PyMIC
 [google_link]:https://drive.google.com/file/d/1eZakSEBr_zfIHFTAc96OFJix8cUBf-KR/view?usp=sharing
 [baidu_link]:https://pan.baidu.com/s/1tN0inIrVYtSxTVRfErD9Bw
 [AntBee_link]:classification/AntBee
 [CHNCXR_link]:classification/CHNCXR
-[JSRT_link]:segmentation/JSRT
-[JSRT2_link]:segmentation/JSRT2
-[fetal_hc_link]:segmentation/fetal_hc
-[prostate_link]:segmentation/prostate
-[ssl_acdc_link]:seg_ssl/ACDC
-[ssl_atrial_link]:seg_ssl/AtriaSeg/
-[wsl_acdc_link]:seg_wsl/ACDC
-[nll_jsrt_link]:seg_nll/JSRT
+[JSRT_link]:seg_full_sup/JSRT
+[JSRT2_link]:seg_full_sup/JSRT2
+[fetal_hc_link]:seg_full_sup/fetal_hc
+[prostate_link]:seg_full_sup/prostate
+[ssl_acdc_link]:seg_semi_sup/ACDC
+[ssl_atrial_link]:seg_semi_sup/AtriaSeg/
+[wsl_acdc_link]:seg_weak_sup/ACDC
+[nll_jsrt_link]:seg_noisy_label/JSRT
+[self_lung_link]:seg_self_sup/lung
 
 ## Useful links
 * PyMIC on Github: https://github.com/HiLab-git/PyMIC
-* Usage of PyMIC: https://pymic.readthedocs.io/en/latest/usage.html
+* Usage of PyMIC: https://pymic.readthedocs.io/en/latest/usage.html
+
+## Citation
+* G. Wang, X. Luo, R. Gu, S. Yang, Y. Qu, S. Zhai, Q. Zhao, K. Li, S. Zhang. (2023).
+[PyMIC: A deep learning toolkit for annotation-efficient medical image segmentation.][arxiv2022] Computer Methods and Programs in Biomedicine (CMPB). February 2023, 107398.
+
+[arxiv2022]:http://arxiv.org/abs/2208.09350
+
+BibTeX entry:
+
+@article{Wang2022pymic,
+author = {Guotai Wang and Xiangde Luo and Ran Gu and Shuojue Yang and Yijie Qu and Shuwei Zhai and Qianfei Zhao and Kang Li and Shaoting Zhang},
+title = {{PyMIC: A deep learning toolkit for annotation-efficient medical image segmentation}},
+year = {2023},
+url = {https://doi.org/10.1016/j.cmpb.2023.107398},
+journal = {Computer Methods and Programs in Biomedicine},
+volume = {231},
+pages = {107398},
+}
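
The README states that the released PyMIC v0.4.0 is required and can be installed with a single command that falls outside the hunks shown above. A minimal sketch, assuming the toolkit is published on PyPI under the package name PYMIC:

```bash
# Hedged sketch only: the PyPI package name (PYMIC) and the pinned version are
# assumptions, since the README's actual install command lies outside this hunk.
pip install PYMIC==0.4.0
```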

classification/AntBee/README.md

Lines changed: 1 addition & 1 deletion
@@ -45,7 +45,7 @@ pymic_test config/train_test_ce1.cfg
 2. Then run the following command to obtain quantitative evaluation results in terms of accuracy.
 
 ```bash
-pymic_eval_cls config/evaluation.cfg
+pymic_eval_cls -cfg config/evaluation.cfg
 ```
 
 The obtained accuracy by default setting should be around 0.9477, and the AUC will be around 0.9745.
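
Pieced together from the hunk context above and the commented lines in `classification/AntBee/bash.sh` below, the AntBee workflow around the renamed `-cfg` flag looks roughly like this sketch:

```bash
# Hedged sketch of the AntBee workflow, assembled from commands that appear
# elsewhere in this commit; run from classification/AntBee.
pymic_train config/train_test_ce1.cfg       # finetune resnet18 with the ce1 config
pymic_test config/train_test_ce1.cfg        # run inference with the trained model
pymic_eval_cls -cfg config/evaluation.cfg   # report accuracy and AUC
```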

classification/AntBee/bash.sh

Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
+# PyMICPATH=/home/disk4t/projects/PyMIC_project/PyMIC
+PyMICPATH=/home/disk4t/projects/PyMIC_project/Pypi_test/PyMIC-dev
+export PYTHONPATH=$PYTHONPATH:$PyMICPATH
+
+#python $PyMICPATH/pymic/net_run/train.py config/train_test_ce1.cfg \
+#-ckpt_dir model_test/resnet18_ce1 -iter_max 500
+# python $PyMICPATH/pymic/net_run/predict.py config/train_test_ce2.cfg
+python $PyMICPATH/pymic/util/evaluation_cls.py -cfg config/evaluation.cfg
+
+#pymic_train config/train_test_ce1.cfg \
+#-ckpt_dir model_test/resnet18_ce1 -iter_max 500
+#pymic_test config/train_test_ce1.cfg
+#pymic_eval_cls -cfg config/evaluation.cfg
classification/AntBee/config/evaluation.cfg

Lines changed: 3 additions & 4 deletions

@@ -1,5 +1,4 @@
 [evaluation]
-metric_list = [accuracy, auc]
-ground_truth_csv = config/valid_data.csv
-predict_csv = result/resnet18_ce1.csv
-predict_prob_csv = result/resnet18_ce1_prob.csv
+metric = [accuracy, auc]
+gt_csv = config/valid_data.csv
+pred_prob_csv = result/resnet18_ce1_prob.csv
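
Taken together, the renamed keys leave the AntBee evaluation config with the content below; the here-doc form is only an illustration of the resulting file, which the repository already contains:

```bash
# Illustration only: the post-commit content of the AntBee evaluation config,
# reconstructed from the hunk above (the repository already ships this file).
cat > config/evaluation.cfg << 'EOF'
[evaluation]
metric = [accuracy, auc]
gt_csv = config/valid_data.csv
pred_prob_csv = result/resnet18_ce1_prob.csv
EOF
```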

classification/AntBee/config/train_test_ce1.cfg

Lines changed: 2 additions & 3 deletions
@@ -61,11 +61,10 @@ lr_scheduler = StepLR
 lr_gamma = 0.5
 lr_step = 500
 
-ckpt_save_dir = model/resnet18_ce1
-ckpt_prefix = resnet18
+ckpt_dir = model/resnet18_ce1
+ckpt_prefix = resnet18
 
 # iteration
-iter_start = 0
 iter_max = 2000
 iter_valid = 100
 iter_save = 2000
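
The commented lines in `classification/AntBee/bash.sh` above suggest the renamed `ckpt_dir` setting, along with `iter_max`, can also be overridden on the command line instead of editing this file; a sketch using the same paths:

```bash
# Hedged sketch mirroring the commented-out lines in bash.sh: override the
# checkpoint directory and iteration limit without editing train_test_ce1.cfg.
pymic_train config/train_test_ce1.cfg \
    -ckpt_dir model_test/resnet18_ce1 -iter_max 500
```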

classification/AntBee/config/train_test_ce2.cfg

Lines changed: 1 addition & 2 deletions
@@ -61,11 +61,10 @@ lr_scheduler = StepLR
 lr_gamma = 0.5
 lr_step = 500
 
-ckpt_save_dir = model/resnet18_ce2
+ckpt_dir = model/resnet18_ce2
 ckpt_prefix = resnet18
 
 # iteration
-iter_start = 0
 iter_max = 2000
 iter_valid = 100
 iter_save = 2000

classification/CHNCXR/README.md

Lines changed: 8 additions & 4 deletions
@@ -3,7 +3,7 @@
 ![normal_example](./picture/CHNCXR_0053_0.png)
 ![tuberc_example](./picture/CHNCXR_0327_1.png)
 
-In this example, we finetune a pretrained resnet18 and vgg16 for classification of X-Ray images with two categries: normal and tuberculosis.
+In this example, we finetune pretrained resnet18, vgg16 and vitb16 for classification of X-Ray images with two categries: normal and tuberculosis.
 
 ## Data and preprocessing
 1. We use the Shenzhen Hospital X-ray Set for this experiment. This [dataset] contains images in JPEG format. There are 326 normal x-rays and 336 abnormal x-rays showing various manifestations of tuberculosis. The images are available in `PyMIC_data/CHNCXR`.
@@ -45,7 +45,7 @@ pymic_test config/net_resnet18.cfg
 2. Then run the following command to obtain quantitative evaluation results in terms of accuracy.
 
 ```bash
-pymic_eval_cls config/evaluation.cfg
+pymic_eval_cls -cfg config/evaluation.cfg
 ```
 
 The obtained accuracy by default setting should be around 0.8271, and the AUC is 0.9343.
@@ -55,5 +55,9 @@ The obtained accuracy by default setting should be around 0.8271, and the AUC is
 ![roc](./picture/roc.png)
 
 
-## Finetuning vgg16
-Similarly to the above example, we further try to finetune vgg16 for the same classification task. Use a different configure file `config/net_vg16.cfg` for training and testing. Edit `config/evaluation.cfg` accordinly for evaluation. The accuracy and AUC would be around 0.8571 and 0.9271, respectively.
+## Finetuning VGG16
+Similarly to the above example, we further try to finetune vgg16 for the same classification task. Use a different configure file `config/net_vgg16.cfg` for training and testing. Edit `config/evaluation.cfg` accordinly for evaluation. The accuracy and AUC would be around 0.8571 and 0.9271, respectively.
+
+## Finetuning ViTB16
+Just follow the above steps with the configuration file `config/net_vitb16.cfg` for training and testing.
+
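
For the new ViTB16 section, the README hunk above and the commented lines in `classification/CHNCXR/bash.sh` below combine into roughly the following sketch:

```bash
# Hedged sketch of the CHNCXR ViTB16 run, assembled from commands that appear
# in this commit; run from classification/CHNCXR.
pymic_train config/net_vitb16.cfg           # finetune the pretrained vitb16 model
pymic_test config/net_vitb16.cfg            # predict on the test split
pymic_eval_cls -cfg config/evaluation.cfg   # compute accuracy and AUC
```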

classification/CHNCXR/bash.sh

Lines changed: 25 additions & 0 deletions
@@ -0,0 +1,25 @@
+# PyMICPATH=/home/disk4t/projects/PyMIC_project/PyMIC
+PyMICPATH=/home/disk4t/projects/PyMIC_project/Pypi_test/PyMIC-dev
+export PYTHONPATH=$PYTHONPATH:$PyMICPATH
+
+# python $PyMICPATH/pymic/net_run/train.py config/net_vitb16.cfg
+# python $PyMICPATH/pymic/net_run/predict.py config/net_vitb16.cfg
+# python $PyMICPATH/pymic/net_run/predict.py config/net_resnet18.cfg
+# python $PyMICPATH/pymic/net_run/predict.py config/net_vgg16.cfg
+#python ../../../PyMIC/pymic/net_run/train.py test config/net_resnet18.cfg
+#python ../../../PyMIC/pymic/net_run/net_run.py train config/net_vgg16.cfg
+#python ../../../PyMIC/pymic/net_run/net_run.py test config/net_vgg16.cfg
+python $PyMICPATH/pymic/util/evaluation_cls.py -cfg config/evaluation.cfg
+
+# python $PyMICPATH/pymic/net_run/train.py config/net_vitb16.cfg -ckpt_dir model_test/vitb16 -iter_max 200
+#python $PyMICPATH/pymic/net_run/train.py config/net_resnet18.cfg \
+#-ckpt_dir model_test/resnet18 -iter_max 200
+#python $PyMICPATH/pymic/net_run/train.py config/net_vgg16.cfg \
+#-ckpt_dir model_test/vgg16 -iter_max 200
+# python $PyMICPATH/pymic/net_run/predict.py config/net_vitb16.cfg
+
+#pymic_train config/net_vitb16.cfg \
+#-ckpt_dir model_test/vitb16 -iter_max 200
+#pymic_test config/net_vgg16.cfg
+#pymic_eval_cls -cfg config/evaluation.cfg
+
classification/CHNCXR/config/evaluation.cfg

Lines changed: 3 additions & 4 deletions

@@ -1,5 +1,4 @@
 [evaluation]
-metric_list = [accuracy, auc]
-ground_truth_csv = config/cxr_test.csv
-predict_csv = result/resnet18.csv
-predict_prob_csv = result/resnet18_prob.csv
+metric = [accuracy, auc]
+gt_csv = config/cxr_test.csv
+pred_prob_csv = result/resnet18_prob.csv
