
Universal Scene Graph Generation


Abstract

Scene graph (SG) representations can neatly and efficiently describe scene semantics, which has driven sustained, intensive research in SG generation. In the real world, multiple modalities often coexist, with different types, such as images, text, video, and 3D data, each expressing distinct characteristics. Unfortunately, current SG research is largely confined to single-modality scene modeling, preventing full utilization of the complementary strengths of different modalities' SG representations in depicting holistic scene semantics. To this end, we introduce the Universal SG (USG), a novel representation capable of fully characterizing comprehensive semantic scenes from any given combination of modality inputs, encompassing both modality-invariant and modality-specific scenes. Further, we tailor a niche-targeting USG parser, USG-Par, which effectively addresses two key bottlenecks: cross-modal object alignment and out-of-domain challenges. We design USG-Par with a modular architecture for end-to-end USG generation, in which an object associator relieves the modality gap for cross-modal object alignment. Further, we propose a text-centric scene contrastive learning mechanism to mitigate domain imbalances by aligning multimodal objects and relations with textual SGs. Through extensive experiments, we demonstrate that USG offers a stronger capability for expressing scene semantics than standalone SGs, and that USG-Par achieves higher efficacy and performance.
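As a concrete illustration of the text-centric scene contrastive learning mentioned above, the sketch below pairs each modality's pooled scene embedding with its corresponding textual SG embedding under a symmetric InfoNCE-style loss. The function name, shapes, and temperature here are assumptions for illustration only, not the paper's actual objective (the official code is not yet released).

```python
import torch
import torch.nn.functional as F

def text_centric_contrastive_loss(modal_emb: torch.Tensor,
                                  text_emb: torch.Tensor,
                                  temperature: float = 0.07) -> torch.Tensor:
    """Hypothetical loss: modal_emb and text_emb are paired (B, d) scene
    embeddings, e.g. pooled object/relation queries vs. textual SG features."""
    modal_emb = F.normalize(modal_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = modal_emb @ text_emb.t() / temperature  # (B, B) cosine similarities
    targets = torch.arange(modal_emb.size(0), device=modal_emb.device)
    # Symmetric InfoNCE: each modality scene should match its own textual SG.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```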

(Figure: framework)

Method

Our model consists of five main modules:

1. A modality-specific backbone extracts features for each input modality.
2. A shared mask decoder extracts object queries for the various modalities; these queries are then fed into a modality-specific object detection head to obtain the category label and tracked position of each corresponding object.
3. An object associator determines the association relationships between objects across modalities.
4. A relation proposal constructor retrieves the most confident subject-object pairs.
5. A relation decoder decodes the final predicate prediction between the subjects and objects.
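To make the module flow concrete, here is a minimal PyTorch-style sketch of the five-step pipeline above. All class names, dimensions, and signatures are illustrative assumptions (the official code is not yet released), and the relation proposal step is simplified to enumerating all pairs rather than a learned constructor.

```python
import torch
import torch.nn as nn

class USGParSketch(nn.Module):
    """Illustrative skeleton of the five-module pipeline; not the authors' code."""

    def __init__(self, backbones: dict, d_model: int = 256, num_queries: int = 100,
                 num_classes: int = 100, num_predicates: int = 50):
        super().__init__()
        # (1) Modality-specific backbones, e.g. image/video/3D/text encoders,
        #     each assumed to return features of shape (B, N, d_model).
        self.backbones = nn.ModuleDict(backbones)
        # (2) Shared mask decoder producing object queries for every modality.
        self.queries = nn.Embedding(num_queries, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.mask_decoder = nn.TransformerDecoder(layer, num_layers=6)
        # Modality-specific detection heads: category label + (normalized) position.
        self.cls_heads = nn.ModuleDict({m: nn.Linear(d_model, num_classes) for m in backbones})
        self.box_heads = nn.ModuleDict({m: nn.Linear(d_model, 4) for m in backbones})
        # (3) Object associator: projects queries for cross-modal similarity scoring.
        self.assoc_proj = nn.Linear(d_model, d_model)
        # (5) Relation decoder over concatenated subject-object query features.
        self.rel_decoder = nn.Sequential(nn.Linear(2 * d_model, d_model), nn.ReLU(),
                                         nn.Linear(d_model, num_predicates))

    def forward(self, inputs: dict):
        queries, outputs = {}, {}
        for modality, x in inputs.items():
            feats = self.backbones[modality](x)                # (B, N, d_model)
            q = self.queries.weight.unsqueeze(0).expand(feats.size(0), -1, -1)
            q = self.mask_decoder(q, feats)                    # shared decoder
            queries[modality] = q
            outputs[modality] = {"labels": self.cls_heads[modality](q),
                                 "boxes": self.box_heads[modality](q).sigmoid()}
        # (3) Cross-modal association: similarity between projected query sets.
        mods = list(queries)
        if len(mods) == 2:
            a = self.assoc_proj(queries[mods[0]])
            b = self.assoc_proj(queries[mods[1]])
            outputs["association"] = torch.einsum("bnd,bmd->bnm", a, b)
        # (4)+(5) Relation proposals and predicate decoding. Here all pairs are
        # enumerated; a learned constructor would keep only the most confident ones.
        for modality, q in queries.items():
            B, N, D = q.shape
            subj = q.unsqueeze(2).expand(B, N, N, D)
            obj = q.unsqueeze(1).expand(B, N, N, D)
            pair = torch.cat([subj, obj], dim=-1)              # (B, N, N, 2D)
            outputs[modality]["predicates"] = self.rel_decoder(pair)
        return outputs
```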

(Figure: framework)

Data

To evaluate the efficacy of USG-Par, which supports both single-modality and multi-modality scene parsing, we utilize existing single-modality datasets and a manually constructed multimodal dataset.

Code

Coming soon.

Citation

If you use USG in your project, please kindly cite:

@inproceedings{wu2025usg,
    title={Universal Scene Graph Generation},
    author={Wu, Shengqiong and Fei, Hao and Chua, Tat-Seng},
    booktitle={CVPR},
    year={2025}
}

Website License

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
