
Fine-tune SAM(Segment Anything Model) for class-aware computer vision tasks in specific scenarios


guanshanjushi/finetune-anything

 
 


finetune-anything

Introduction

The Segment Anything Model (SAM) has revolutionized computer vision. Fine-tuning SAM can solve a wide range of basic computer vision tasks. We are designing a class-aware, one-stage tool for training fine-tuned models based on SAM.

Supply the dataset for your task and the name of a supported task, and this tool will help you get a fine-tuned model for that task.

Design

Finetune-Anything further encapsulates the three parts of the original SAM. For example, MaskDecoder is encapsulated as MaskDecoderAdapter, and users can customize the structure of the extended SAM inside it. The current MaskDecoderAdapter contains two parts: DecoderNeck and DecoderHead.
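The encapsulation described above can be sketched as plain Python classes. This is an illustrative outline of the adapter pattern, not the actual finetune-anything code; only the names MaskDecoderAdapter, DecoderNeck, and DecoderHead come from the text, and all method bodies here are placeholder assumptions.

```python
# Sketch of the neck/head split inside MaskDecoderAdapter (illustrative
# only -- the real classes are torch modules with different signatures).

class DecoderNeck:
    """Transforms encoder features before the head (hypothetical body)."""
    def forward(self, features):
        # e.g. project features into the decoder's working space
        return [f * 2 for f in features]

class DecoderHead:
    """Produces one mask score per class from the neck output (hypothetical)."""
    def __init__(self, num_classes):
        self.num_classes = num_classes

    def forward(self, features):
        # dummy per-class scores derived from the transformed features
        return [sum(features)] * self.num_classes

class MaskDecoderAdapter:
    """Wraps SAM's mask decoder as neck + head, as the README describes,
    so each part can be swapped or customized independently."""
    def __init__(self, num_classes):
        self.neck = DecoderNeck()
        self.head = DecoderHead(num_classes)

    def forward(self, features):
        return self.head.forward(self.neck.forward(features))

adapter = MaskDecoderAdapter(num_classes=3)
print(adapter.forward([1, 2, 3]))  # [12, 12, 12]
```

The point of the split is that a class-aware head (one output per class) can replace SAM's original class-agnostic head without touching the neck.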

Supported Tasks

  • Semantic Segmentation
    • train
    • eval
    • test
  • Matting
  • Instance Segmentation
  • Detection

Supported Datasets

  • TorchVOCSegmentation
  • BaseSemantic
  • BaseInstance
  • BaseMatting

Install

Step 1

git clone https://github.com/ziqi-jin/finetune-anything.git
cd finetune-anything
pip install -r requirements.txt

Step 2

Download the SAM weights from the SAM repository.

Step 3

Modify the YAML file for your specific task in /config, e.g., ckpt_path, model_type ...
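A config edit for this step might look like the fragment below. Only `ckpt_path` and `model_type` are named in the step above; the file name, the key layout, and the checkpoint/variant values are illustrative assumptions — check the actual files in /config.

```yaml
# config/semantic_seg.yaml (illustrative layout, not the repo's actual schema)
model:
  ckpt_path: ./sam_vit_b_01ec64.pth   # path to the SAM weights downloaded in Step 2
  model_type: vit_b                   # SAM backbone variant (vit_b / vit_l / vit_h)
```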

Train

CUDA_VISIBLE_DEVICES=${your GPU number} python train.py --task_name semantic_seg

Test

Deploy

  • ONNX export
