The Segment Anything Model (SAM) has revolutionized computer vision, and fine-tuning SAM can solve a wide range of basic vision tasks. Finetune-Anything is a class-aware, one-stage tool for training fine-tuned models based on SAM. Supply a dataset for your task and the name of a supported task, and the tool will produce a fine-tuned model for that task.
Finetune-Anything further encapsulates the three parts of the original SAM. For example, the MaskDecoder is encapsulated as a MaskDecoderAdapter, whose structure users can customize to extend SAM. The current MaskDecoderAdapter contains two parts: a DecoderNeck and a DecoderHead.
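As a rough illustration of that adapter structure, the sketch below composes a neck and a head into one decoder adapter. All class internals here (layer shapes, the 1x1 convolutions, the default channel widths) are assumptions for the example, not the actual finetune-anything implementation; only the names MaskDecoderAdapter, DecoderNeck, and DecoderHead come from the text above.

```python
import torch
import torch.nn as nn

class DecoderNeck(nn.Module):
    """Projects decoder features to a task-specific width (illustrative only)."""
    def __init__(self, in_dim=256, hidden_dim=128):
        super().__init__()
        self.proj = nn.Conv2d(in_dim, hidden_dim, kernel_size=1)

    def forward(self, x):
        return torch.relu(self.proj(x))

class DecoderHead(nn.Module):
    """Maps neck features to per-pixel class logits (class-aware output)."""
    def __init__(self, hidden_dim=128, num_classes=21):
        super().__init__()
        self.classifier = nn.Conv2d(hidden_dim, num_classes, kernel_size=1)

    def forward(self, x):
        return self.classifier(x)

class MaskDecoderAdapter(nn.Module):
    """Wraps neck + head; either part can be swapped to customize extended SAM."""
    def __init__(self, num_classes=21):
        super().__init__()
        self.neck = DecoderNeck()
        self.head = DecoderHead(num_classes=num_classes)

    def forward(self, features):
        return self.head(self.neck(features))

adapter = MaskDecoderAdapter(num_classes=21)
logits = adapter(torch.randn(1, 256, 64, 64))
print(logits.shape)  # one logit map per class: (1, 21, 64, 64)
```

Splitting the decoder this way lets you reuse a frozen SAM backbone while only training the small neck/head for a new task.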
Supported tasks:

- Semantic Segmentation
  - train
  - eval
  - test
- Matting
- Instance Segmentation
- Detection
Supported datasets:

- TorchVOCSegmentation
- BaseSemantic
- BaseInstance
- BaseMatting
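For a sense of the interface such datasets expose, here is a minimal semantic-segmentation dataset yielding (image, mask) pairs, in the style of a torch Dataset. The class name, constructor arguments, and tensor shapes are all hypothetical; they are not the actual BaseSemantic API.

```python
import torch
from torch.utils.data import Dataset

class ToySemanticDataset(Dataset):
    """Hypothetical stand-in for a semantic-segmentation dataset:
    returns an RGB image tensor and a per-pixel class-label mask."""
    def __init__(self, num_samples=4, size=64, num_classes=21):
        self.num_samples = num_samples
        self.size = size
        self.num_classes = num_classes

    def __len__(self):
        return self.num_samples

    def __getitem__(self, idx):
        image = torch.rand(3, self.size, self.size)  # C x H x W image
        # H x W mask of integer class ids in [0, num_classes)
        mask = torch.randint(0, self.num_classes, (self.size, self.size))
        return image, mask

ds = ToySemanticDataset()
image, mask = ds[0]
print(image.shape, mask.shape)
```

A real dataset would load images and annotation masks from disk instead of generating random tensors, but the (image, mask) contract is the part a training loop relies on.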
```
git clone https://github.com/ziqi-jin/finetune-anything.git
cd finetune-anything
pip install -r requirements.txt
```
Download the SAM weights from the SAM repository.
Modify the contents of the YAML file for your task in /config, e.g., ckpt_path, model_type, etc.
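As an illustration, the relevant entries might look like the fragment below. Only ckpt_path and model_type are named in the text above; the surrounding key layout is an assumption, so check the actual file in /config for the real structure.

```yaml
model:
  # vit_b / vit_l / vit_h are the SAM backbone variants
  model_type: vit_b
  # path to the SAM weights downloaded in the previous step
  ckpt_path: ./sam_vit_b.pth
```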
```
CUDA_VISIBLE_DEVICES=${your GPU number} python train.py --task_name semantic_seg
```
- ONNX export