✉ Corresponding Author
- 20 June, 2025: 💥💥 Our paper "SSP-SAM: SAM with Semantic-Spatial Prompt for Referring Image Segmentation" has been submitted to IEEE Transactions on Circuits and Systems for Video Technology (TCSVT).
This repository contains the official implementation and checkpoints of the following paper:
SSP-SAM: SAM with Semantic-Spatial Prompt for Referring Image Segmentation
Abstract: The Segment Anything Model (SAM) excels at general image segmentation but has limited ability to understand natural language, which restricts its direct application to Referring Expression Segmentation (RES). To address this, we propose SSP-SAM, a framework that fully exploits SAM’s segmentation capabilities by integrating a Semantic-Spatial Prompt (SSP) encoder. Specifically, we incorporate both visual and linguistic attention adapters into the SSP encoder, which highlight salient objects within the visual features and discriminative phrases within the linguistic features. This design enhances the referent representation fed to the prompt generator, yielding high-quality SSPs that enable SAM to produce precise, language-guided masks. Although not specifically designed for Generalized RES (GRES), where an expression may refer to zero, one, or multiple objects, SSP-SAM naturally supports this more flexible setting without modification. Extensive experiments on widely used RES and GRES benchmarks confirm the superiority of our method. Notably, our approach generates high-quality segmentation masks, maintaining strong precision even at strict IoU thresholds such as Pr@0.9. Further evaluation on the PhraseCut dataset demonstrates improved performance in open-vocabulary scenarios compared with existing state-of-the-art RES methods. The code and models will be available at https://github.com/WayneTomas/SSP-SAM once the manuscript is accepted.
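
Until the official code is released, the following minimal PyTorch sketch illustrates how an SSP-style prompt encoder could plug into SAM. It is not the paper's implementation: the module names, the cross-attention adapter layout, the number of prompt tokens, and the feature dimensions are all illustrative assumptions. Only the overall flow follows the abstract: attention adapters highlight salient objects and discriminative phrases in each modality, and a prompt generator emits prompt embeddings that SAM's mask decoder could consume in place of point/box prompts.

```python
import torch
import torch.nn as nn


class AttentionAdapter(nn.Module):
    """Cross-attention adapter (hypothetical layout): queries one modality
    with the other so salient objects / discriminative phrases are
    emphasized, with a residual connection preserving the original features."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        attended, _ = self.attn(query=x, key=context, value=context)
        return self.norm(x + attended)


class SSPEncoder(nn.Module):
    """Sketch of a Semantic-Spatial Prompt encoder: fuses visual and
    linguistic features into a small set of prompt embeddings shaped like
    SAM's sparse prompt tokens. Dimensions and token counts are assumptions."""

    def __init__(self, dim: int = 256, num_prompts: int = 4, num_heads: int = 8):
        super().__init__()
        self.visual_adapter = AttentionAdapter(dim, num_heads)      # highlights salient objects
        self.linguistic_adapter = AttentionAdapter(dim, num_heads)  # highlights discriminative phrases
        self.prompt_queries = nn.Parameter(torch.randn(num_prompts, dim))
        self.prompt_generator = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, vis_feats: torch.Tensor, txt_feats: torch.Tensor) -> torch.Tensor:
        # vis_feats: (B, N_patches, D), e.g. a flattened SAM image-encoder grid
        # txt_feats: (B, N_tokens, D), projected text-encoder token features
        vis = self.visual_adapter(vis_feats, txt_feats)
        txt = self.linguistic_adapter(txt_feats, vis_feats)
        fused = torch.cat([vis, txt], dim=1)

        # Learnable queries attend over the fused referent representation
        # to produce prompt embeddings for SAM's mask decoder.
        queries = self.prompt_queries.unsqueeze(0).expand(vis.size(0), -1, -1)
        prompts, _ = self.prompt_generator(queries, fused, fused)
        return prompts  # (B, num_prompts, D)


if __name__ == "__main__":
    # Smoke test with dummy features (shapes are illustrative only).
    encoder = SSPEncoder()
    v = torch.randn(2, 64 * 64, 256)  # stand-in for SAM ViT features
    t = torch.randn(2, 20, 256)       # stand-in for text token features
    print(encoder(v, t).shape)        # torch.Size([2, 4, 256])
```

In this sketch the output tokens would be passed to SAM's mask decoder as sparse prompt embeddings, letting the frozen segmentation backbone produce language-guided masks; the real SSP-SAM design should be taken from the paper and the released code once available.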