- Supports multiple backbones (ResNet, Xception, DRN, MobileNet)
- Supports the DarwinLungs, JSTR, MC, SH, and NIH datasets
- Multi-GPU training
| Backbone | Train/Eval output stride | mIoU (val) | Pretrained Model |
|---|---|---|---|
| ResNet101 | 16/16 | 78.43% | google drive |
| MobileNet | 16/16 | 70.81% | google drive |
| DRN | 16/16 | 78.87% | google drive |
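The mIoU column above is the mean intersection-over-union over classes on the validation set. A minimal NumPy sketch of the metric (the `miou` helper below is illustrative, not the repo's exact evaluator):

```python
import numpy as np

def miou(pred, target, num_classes):
    """Mean IoU over classes, computed from two integer label maps."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:               # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# toy 2x2 example: class 0 = background, class 1 = lung
pred   = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
print(miou(pred, target, num_classes=2))   # (0.5 + 2/3) / 2 ≈ 0.583
```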
This is a PyTorch (1.11.0) implementation of DeepLabV3+. It supports ResNet, Modified Aligned Xception, DRN, and MobileNet as backbones. Currently, we train DeepLabV3+ on the DarwinLungs, JSTR, MC, SH, and NIH datasets.
The code was tested with Python 3.9.7. After setting up the virtual environment:
- Clone the repo:

  ```shell
  git clone https://github.com/UkeshThapa/Lung-Segmentation-Using-DeeplabV3.git
  cd Lung-Segmentation-Using-DeeplabV3
  ```
- Install dependencies. For the PyTorch dependency, see pytorch.org for details. For custom dependencies:

  ```shell
  pip install matplotlib pillow tensorboardX tqdm
  ```
Follow the steps below to train your model:

- Configure your dataset path in `mypath.py`.
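`mypath.py` maps each dataset name to its root directory on disk. A sketch of the expected shape, assuming the conventional `Path.db_root_dir` layout used by DeepLab repos (the directory strings are placeholders, not the repo's actual values):

```python
class Path:
    @staticmethod
    def db_root_dir(dataset):
        # placeholder paths -- point these at your local dataset copies
        roots = {
            'darwinlungs': '/path/to/DarwinLungs',
            'jstr': '/path/to/JSTR',
            'mc': '/path/to/MC',
            'sh': '/path/to/SH',
            'nih': '/path/to/NIH',
        }
        if dataset not in roots:
            raise NotImplementedError('Dataset {} not configured'.format(dataset))
        return roots[dataset]
```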
- Input arguments (see the full list via `python train.py --help`):

  ```
  usage: train.py [-h] [--backbone {resnet,xception,drn,mobilenet}]
                  [--out-stride OUT_STRIDE]
                  [--dataset {pascal,coco,cityscapes,darwinlungs}] [--use-sbd]
                  [--workers N] [--base-size BASE_SIZE] [--crop-size CROP_SIZE]
                  [--sync-bn SYNC_BN] [--freeze-bn FREEZE_BN]
                  [--loss-type {ce,focal}] [--epochs N] [--start_epoch N]
                  [--batch-size N] [--test-batch-size N]
                  [--use-balanced-weights] [--lr LR]
                  [--lr-scheduler {poly,step,cos}] [--momentum M]
                  [--weight-decay M] [--nesterov] [--no-cuda]
                  [--gpu-ids GPU_IDS] [--seed S] [--resume RESUME]
                  [--checkname CHECKNAME] [--ft]
                  [--eval-interval EVAL_INTERVAL] [--no-val]
  ```
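Among these options, `--lr-scheduler poly` is the schedule commonly used for DeepLab training. Assuming the standard formulation with power 0.9 (the exponent this repo actually uses is not shown here), it decays the learning rate toward zero over training:

```python
def poly_lr(base_lr, cur_iter, max_iter, power=0.9):
    """Polynomial decay: lr = base_lr * (1 - cur_iter / max_iter) ** power."""
    return base_lr * (1 - cur_iter / max_iter) ** power

print(poly_lr(0.007, 0, 1000))     # start of training: full base_lr (0.007)
print(poly_lr(0.007, 500, 1000))   # halfway: 0.007 * 0.5 ** 0.9
```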
- To train DeepLabV3+ on the Pascal VOC dataset with a ResNet-101 backbone:

  ```shell
  bash train_voc.sh
  ```
- To train DeepLabV3+:

  ```shell
  python train.py --backbone resnet --dataset darwinlungs --batch-size 4
  ```
- To train FCN:

  ```shell
  python fcn_train.py --backbone resnet --dataset darwinlungs --batch-size 4
  ```
- To test FCN:

  ```shell
  python fcn_inference.py
  ```
- To test DeepLabV3+:

  ```shell
  python inference.py
  ```
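When checking inference outputs for lung segmentation, the Dice coefficient is a common companion metric to IoU for binary masks. A hedged NumPy sketch (not part of the repo's scripts; the `dice` helper is illustrative):

```python
import numpy as np

def dice(pred_mask, gt_mask, eps=1e-7):
    """Dice coefficient between two binary masks: 2*|A∩B| / (|A| + |B|)."""
    pred_mask = pred_mask.astype(bool)
    gt_mask = gt_mask.astype(bool)
    inter = np.logical_and(pred_mask, gt_mask).sum()
    return (2.0 * inter + eps) / (pred_mask.sum() + gt_mask.sum() + eps)

pred = np.array([[0, 1], [1, 1]])
gt   = np.array([[0, 1], [0, 1]])
print(dice(pred, gt))   # 2*2 / (3 + 2) = 0.8
```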