[PyMIC][PyMIC_link] is a PyTorch-based toolkit for medical image computing with annotation-efficient deep learning. Here we provide a set of examples to show how it can be used for image classification and segmentation tasks. For annotation-efficient learning, we show examples of Semi-Supervised Learning (SSL), Self-Supervised Learning (Self-SL), Weakly Supervised Learning (WSL) and Noisy Label Learning (NLL), respectively. Beginners can follow the examples by simply editing the configuration files for model training, testing and evaluation. Advanced users can easily develop their own modules, such as customized networks and loss functions.

## News

* 2024.01: Examples of Self-Supervised Learning have been added.
* 2024.01: More 2D segmentation networks, including SwinUNet and TransUNet, have been added.
* 2023.12: The semi-supervised method MCNet has been added to [seg_semi_sup/ACDC][ssl_acdc_link].

## Install PyMIC
The released version of PyMIC (v0.4.0) is required for these examples, and it can be installed by:
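The install command itself is not shown here; a typical pip-based installation would look like the following (the PyPI package name `PYMIC` and the pinned version are assumptions, so check the PyMIC repository for the exact command):

```shell
# install the released PyMIC version required by these examples
# (package name on PyPI is assumed to be PYMIC)
pip install PYMIC==0.4.0
```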

Currently we provide the following examples in this repository:

|Category|Example|Remarks|
|---|---|---|
|Classification|[AntBee][AntBee_link]|Finetuning a resnet18 for Ant and Bee classification|
||[CHNCXR][CHNCXR_link]|Finetuning resnet18 and vgg16 for normal/tuberculosis X-ray image classification|
|Fully supervised segmentation|[JSRT][JSRT_link]|Using five 2D networks for lung segmentation from chest X-ray images|
||[JSRT2][JSRT2_link]|Using a customized network and loss function for the JSRT dataset|
||[Fetal_HC][fetal_hc_link]|Using a 2D UNet for fetal head segmentation from 2D ultrasound images|
||[Prostate][prostate_link]|Using a 3D UNet for prostate segmentation from 3D MRI|
|Semi-supervised segmentation|[seg_semi_sup/ACDC][ssl_acdc_link]|Semi-supervised methods for heart structure segmentation using 2D CNNs|
||[seg_semi_sup/AtriaSeg][ssl_atrial_link]|Semi-supervised methods for left atrial segmentation using 3D CNNs|
|Weakly-supervised segmentation|[seg_weak_sup/ACDC][wsl_acdc_link]|Segmentation of heart structures with scribble annotations|
|Noisy label learning|[seg_noisy_label/JSRT][nll_jsrt_link]|Comparing different NLL methods for learning from noisy labels|
|Self-supervised learning|[seg_self_sup/lung][self_lung_link]|Self-supervised learning methods for pretraining a segmentation model|

* PyMIC on Github: https://github.com/HiLab-git/PyMIC
* Usage of PyMIC: https://pymic.readthedocs.io/en/latest/usage.html
## Citation

* G. Wang, X. Luo, R. Gu, S. Yang, Y. Qu, S. Zhai, Q. Zhao, K. Li, S. Zhang. (2023).
[PyMIC: A deep learning toolkit for annotation-efficient medical image segmentation.][arxiv2022] Computer Methods and Programs in Biomedicine (CMPB). February 2023, 107398.

[arxiv2022]:http://arxiv.org/abs/2208.09350

BibTeX entry:

    @article{Wang2022pymic,
      author  = {Guotai Wang and Xiangde Luo and Ran Gu and Shuojue Yang and Yijie Qu and Shuwei Zhai and Qianfei Zhao and Kang Li and Shaoting Zhang},
      title   = {{PyMIC: A deep learning toolkit for annotation-efficient medical image segmentation}},
      journal = {Computer Methods and Programs in Biomedicine},
      year    = {2023},
      pages   = {107398}
    }

**classification/CHNCXR/README.md**

In this example, we finetune pretrained resnet18, vgg16 and vitb16 networks for classification of X-ray images with two categories: normal and tuberculosis.
## Data and preprocessing
1. We use the Shenzhen Hospital X-ray Set for this experiment. This [dataset] contains images in JPEG format. There are 326 normal x-rays and 336 abnormal x-rays showing various manifestations of tuberculosis. The images are available in `PyMIC_data/CHNCXR`.
2. Then run the following command to obtain quantitative evaluation results in terms of accuracy.
```bash
pymic_eval_cls -cfg config/evaluation.cfg
```
The obtained accuracy by default setting should be around 0.8271, and the AUC is 0.9343.
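Evaluation is controlled by `config/evaluation.cfg`, an INI-style file that points the chosen metrics at the ground-truth and prediction files. The sketch below only illustrates the general shape; the key names and paths are assumptions, so consult the PyMIC documentation and the file shipped with this example for the authoritative contents:

```ini
[evaluation]
# hypothetical sketch -- key names and paths are assumptions, not verified PyMIC keys
metric_list      = [accuracy, auc]
ground_truth_csv = config/data/cxr_test.csv
predict_csv      = result/test_prob.csv
```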

## Finetuning VGG16
Similar to the above example, we further finetune vgg16 for the same classification task. Use a different configuration file, `config/net_vgg16.cfg`, for training and testing, and edit `config/evaluation.cfg` accordingly for evaluation. The accuracy and AUC would be around 0.8571 and 0.9271, respectively.
## Finetuning ViTB16
Just follow the above steps with the configuration file `config/net_vitb16.cfg` for training and testing.
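Assuming the PyMIC v0.4 command-line entry points `pymic_train` and `pymic_test` (verify against your installed version; only `pymic_eval_cls -cfg` is confirmed above), the full train/test/evaluate cycle for the ViT-B/16 run would look like:

```shell
pymic_train config/net_vitb16.cfg          # finetune ViT-B/16 on the training split
pymic_test config/net_vitb16.cfg           # run inference on the test split
pymic_eval_cls -cfg config/evaluation.cfg  # compute accuracy and AUC
```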