
SSP-SAM: SAM with Semantic-Spatial Prompt for Referring Expression Segmentation

¹Nanjing University of Science and Technology; ²NExT++ Lab, School of Computing, National University of Singapore; ³Institute of Computing Technology, Chinese Academy of Sciences
Corresponding Author

Updates

  • 20 June, 2025: 💥💥 Our paper "SSP-SAM: SAM with Semantic-Spatial Prompt for Referring Expression Segmentation" has been submitted to IEEE Transactions on Circuits and Systems for Video Technology (TCSVT).

This repository contains the official implementation and checkpoints of the following paper:

SSP-SAM: SAM with Semantic-Spatial Prompt for Referring Expression Segmentation

Abstract: The Segment Anything Model (SAM) excels at general image segmentation but has limited ability to understand natural language, which restricts its direct application in Referring Expression Segmentation (RES). To address this, we propose SSP-SAM, a framework that fully utilizes SAM’s segmentation capabilities by integrating a Semantic-Spatial Prompt (SSP) encoder. Specifically, we incorporate both visual and linguistic attention adapters into the SSP encoder, which highlight salient objects within the visual features and discriminative phrases within the linguistic features. This design enhances the referent representation for the prompt generator, resulting in high-quality SSPs that enable SAM to generate precise masks guided by language. Although not specifically designed for Generalized RES (GRES), where the referent may correspond to zero, one, or multiple objects, SSP-SAM naturally supports this more flexible setting without additional modifications. Extensive experiments on widely used RES and GRES benchmarks confirm the superiority of our method. Notably, our approach generates segmentation masks of high quality, achieving strong precision even at strict IoU thresholds such as Pr@0.9. Further evaluation on the PhraseCut dataset demonstrates improved performance in open-vocabulary scenarios compared to existing state-of-the-art RES methods. The code and models will be available at https://github.com/WayneTomas/SSP-SAM once the manuscript is accepted.
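
Since the official code is not yet released, the following is only a minimal PyTorch sketch of the data flow the abstract describes: a visual attention adapter re-weights image features using the language features, a linguistic attention adapter re-weights language features using the image features, and a prompt generator pools the fused features into a small set of prompt embeddings for SAM. All module and parameter names here (CrossAttentionAdapter, SSPEncoder, prompt_queries, num_prompts) are our own illustrative assumptions, not the paper's actual API.

```python
import torch
import torch.nn as nn


class CrossAttentionAdapter(nn.Module):
    """Hypothetical lightweight adapter: one modality attends to the other,
    with a residual connection, to highlight referent-relevant features."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # x attends to context; residual keeps the original features intact.
        attended, _ = self.attn(query=x, key=context, value=context)
        return self.norm(x + attended)


class SSPEncoder(nn.Module):
    """Hypothetical Semantic-Spatial Prompt encoder sketch: a visual adapter
    highlights salient regions conditioned on language, a linguistic adapter
    highlights discriminative phrases conditioned on vision, and learned
    queries pool the fused features into SAM-style prompt embeddings."""

    def __init__(self, dim: int = 256, num_prompts: int = 4):
        super().__init__()
        self.visual_adapter = CrossAttentionAdapter(dim)
        self.linguistic_adapter = CrossAttentionAdapter(dim)
        self.prompt_queries = nn.Parameter(torch.randn(num_prompts, dim))
        self.prompt_generator = nn.MultiheadAttention(dim, 8, batch_first=True)

    def forward(self, visual_feats: torch.Tensor, text_feats: torch.Tensor) -> torch.Tensor:
        # visual_feats: (B, N_patches, C); text_feats: (B, N_tokens, C)
        v = self.visual_adapter(visual_feats, text_feats)      # highlight referent regions
        t = self.linguistic_adapter(text_feats, visual_feats)  # highlight key phrases
        fused = torch.cat([v, t], dim=1)
        queries = self.prompt_queries.unsqueeze(0).expand(v.size(0), -1, -1)
        prompts, _ = self.prompt_generator(queries, fused, fused)
        return prompts  # (B, num_prompts, C): language-guided prompt embeddings


if __name__ == "__main__":
    enc = SSPEncoder(dim=256, num_prompts=4)
    vis = torch.randn(2, 64 * 64, 256)  # e.g. image patch features projected to 256-d
    txt = torch.randn(2, 20, 256)       # e.g. text token features projected to 256-d
    print(enc(vis, txt).shape)          # torch.Size([2, 4, 256])
```

In a full pipeline, these prompt embeddings would play the role of SAM's sparse prompt embeddings, fed to its mask decoder alongside the frozen image encoder's features; the paper's exact adapter placement, fusion strategy, and prompt format may well differ from this sketch.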
