
WZH0120/SAM3-UNet

Introduction

In this paper, we introduce SAM3-UNet, a simplified variant of Segment Anything Model 3 (SAM3), designed to adapt SAM3 for downstream tasks at a low cost. Our SAM3-UNet consists of three components: a SAM3 image encoder, a simple adapter for parameter-efficient fine-tuning, and a lightweight U-Net-style decoder. Preliminary experiments on multiple tasks, such as mirror detection and salient object detection, demonstrate that the proposed SAM3-UNet outperforms the prior SAM2-UNet and other state-of-the-art methods, while requiring less than 6 GB of GPU memory during training with a batch size of 12.
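
For readers who want a quick mental model of the three components above, the following is a minimal, illustrative PyTorch sketch. The encoder here is a stand-in convolutional backbone rather than the actual SAM3 image encoder, and all module names, channel widths, and design details are assumptions for illustration only, not the repository's API.

import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Small residual bottleneck used for parameter-efficient fine-tuning."""
    def __init__(self, dim, reduction=4):
        super().__init__()
        self.down = nn.Linear(dim, dim // reduction)
        self.up = nn.Linear(dim // reduction, dim)
        self.act = nn.GELU()

    def forward(self, x):                        # x: (B, C, H, W)
        y = x.permute(0, 2, 3, 1)                # channels-last for the linear layers
        y = self.up(self.act(self.down(y)))
        return x + y.permute(0, 3, 1, 2)         # residual connection

class DecoderBlock(nn.Module):
    """U-Net-style block: upsample, fuse with a skip feature, refine."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch + skip_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        return self.conv(torch.cat([self.up(x), skip], dim=1))

class SAM3UNetSketch(nn.Module):
    """Frozen stand-in encoder + trainable adapters + U-Net-style decoder."""
    def __init__(self, dims=(64, 128, 256)):
        super().__init__()
        # Stand-in for the SAM3 image encoder: three downsampling conv stages.
        self.stages = nn.ModuleList(
            nn.Conv2d(c_in, c_out, 3, stride=2, padding=1)
            for c_in, c_out in zip((3,) + dims[:-1], dims)
        )
        for p in self.stages.parameters():       # the backbone stays frozen
            p.requires_grad_(False)
        self.adapters = nn.ModuleList(Adapter(c) for c in dims)  # trainable
        self.dec2 = DecoderBlock(dims[2], dims[1], dims[1])
        self.dec1 = DecoderBlock(dims[1], dims[0], dims[0])
        self.head = nn.Conv2d(dims[0], 1, 1)     # single-channel mask logits

    def forward(self, x):
        feats = []
        for stage, adapter in zip(self.stages, self.adapters):
            x = adapter(stage(x))
            feats.append(x)
        f1, f2, f3 = feats                       # strides 2, 4, 8
        y = self.dec2(f3, f2)
        y = self.dec1(y, f1)
        return self.head(y)                      # logits at 1/2 input resolution

Passing a dummy batch such as torch.randn(2, 3, 352, 352) through this sketch yields a (2, 1, 176, 176) logit map; in the actual repository the multi-scale features come from the pre-trained SAM3 backbone, and the adapter and decoder designs may differ.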

WeChat Discussion Group

Clone Repository

git clone https://github.com/WZH0120/SAM3-UNet.git
cd SAM3-UNet/

Prepare Datasets

You can refer to the following repositories and their papers for the detailed configurations of the corresponding datasets.

  • Salient Object Detection. Please refer to SALOD.
  • Mirror Detection. Please refer to HetNet.

Requirements

Please refer to SAM 3.

Training

If you want to train your own model, please download the pre-trained sam3.pt checkpoint by following the official guidelines. After the above preparations, you can run train.sh to start training.
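
The low memory footprint reported above comes from parameter-efficient fine-tuning: the adapters and the lightweight decoder are the trainable parts, while the SAM3 encoder provides frozen pre-trained features. The snippet below is a minimal sketch of that setup, reusing the hypothetical SAM3UNetSketch class from the architecture sketch above; the optimizer choice and learning rate are assumptions, not the settings used by train.sh.

import torch

model = SAM3UNetSketch()                              # stand-in model; encoder weights stay frozen
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)     # hypothetical hyperparameters
print(f"trainable parameters: {sum(p.numel() for p in trainable):,}")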

Testing

Our pre-trained models and prediction maps are available on Google Drive. Alternatively, you can run test.sh to generate your own predictions.

Evaluation

After obtaining the prediction maps, you can run eval.sh to get the quantitative results. For the evaluation of mirror detection, please refer to eval.py in HetNet instead.
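
As a point of reference, the snippet below shows how one common salient object detection metric, mean absolute error (MAE), can be computed from a prediction map and its ground-truth mask. It is only a minimal sketch with an assumed file-based interface, not the implementation behind eval.sh, which may report additional metrics.

import numpy as np
from PIL import Image

def mae(pred_path, gt_path):
    """Mean absolute error between a prediction map and its ground-truth mask."""
    # Both images are read as grayscale and scaled to [0, 1]; they are assumed
    # to share the same resolution.
    pred = np.asarray(Image.open(pred_path).convert("L"), dtype=np.float64) / 255.0
    gt = np.asarray(Image.open(gt_path).convert("L"), dtype=np.float64) / 255.0
    return float(np.abs(pred - gt).mean())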

Citation and Star

Please cite the following paper and star this project if you use this repository in your research. Thank you!

Acknowledgement

  • SAM 3
  • SAM2-UNet
