Scene graph (SG) representations can neatly and efficiently describe scene semantics, which has driven sustained, intensive research on SG generation. In the real world, multiple modalities often coexist, and different data types, such as images, text, video, and 3D data, express distinct characteristics of a scene. Unfortunately, current SG research is largely confined to single-modality scene modeling, which prevents the complementary strengths of the different modalities' SG representations from being fully exploited in depicting holistic scene semantics. To this end, we introduce the Universal SG (USG), a novel representation capable of fully characterizing comprehensive semantic scenes from any given combination of modality inputs, encompassing both modality-invariant and modality-specific scenes. Further, we tailor a niche-targeting USG parser, USG-Par, which effectively addresses two key bottlenecks: cross-modal object alignment and out-of-domain generalization. USG-Par adopts a modular architecture for end-to-end USG generation, in which an object associator relieves the modality gap for cross-modal object alignment. Further, we propose a text-centric scene contrastive learning mechanism that mitigates domain imbalance by aligning multimodal objects and relations with textual SGs. Through extensive experiments, we demonstrate that USG offers a stronger capability for expressing scene semantics than standalone SGs, and that USG-Par achieves higher efficacy and performance.
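For intuition only, the snippet below sketches one way the text-centric contrastive alignment could be realized: an InfoNCE-style loss that pulls object/relation embeddings from a non-text modality toward the embeddings of their paired textual SG elements. The function name, tensor shapes, and temperature value are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn.functional as F

def text_centric_contrastive_loss(modal_emb: torch.Tensor,
                                  text_emb: torch.Tensor,
                                  temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style sketch: row i of `modal_emb` (an image/video/3D object or
    relation embedding) is paired with row i of `text_emb` (its textual SG
    counterpart). Names and the temperature are assumptions for illustration."""
    modal_emb = F.normalize(modal_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = modal_emb @ text_emb.t() / temperature          # (N, N) cosine similarities
    targets = torch.arange(modal_emb.size(0), device=modal_emb.device)
    # symmetric loss: modality-to-text and text-to-modality directions
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```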
Our model consists of five main modules. First, we extract modality-specific features with a modality-specific backbone. Second, a shared mask decoder extracts object queries for the various modalities; these object queries are then fed into modality-specific object detection heads to obtain the category labels and tracked positions of the corresponding objects. Third, the object queries are passed to the object associator, which determines the association relationships between objects across modalities. Fourth, a relation proposal constructor retrieves the most confident subject-object pairs. Finally, a relation decoder decodes the final predicate predictions between subjects and objects.
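To make the data flow concrete, here is a minimal sketch of how the five stages could be composed; all module names and interfaces are placeholders assumed for illustration rather than the actual USG-Par code.

```python
import torch.nn as nn

class USGParSketch(nn.Module):
    """Illustrative composition of the five stages; every submodule passed in
    is a placeholder with an assumed interface, not the released implementation."""
    def __init__(self, backbones, mask_decoder, det_heads,
                 object_associator, relation_proposer, relation_decoder):
        super().__init__()
        self.backbones = nn.ModuleDict(backbones)      # modality-specific feature extractors
        self.mask_decoder = mask_decoder               # shared across modalities
        self.det_heads = nn.ModuleDict(det_heads)      # modality-specific detection heads
        self.object_associator = object_associator     # cross-modal object alignment
        self.relation_proposer = relation_proposer     # picks confident subject-object pairs
        self.relation_decoder = relation_decoder       # predicts predicates for the pairs

    def forward(self, inputs):
        # inputs: dict mapping modality name (e.g. "image", "video", "text", "3d") to raw data
        feats = {m: self.backbones[m](x) for m, x in inputs.items()}
        queries = {m: self.mask_decoder(f) for m, f in feats.items()}       # object queries
        detections = {m: self.det_heads[m](q) for m, q in queries.items()}  # labels + positions
        assoc = self.object_associator(queries)        # cross-modal association scores
        pairs = self.relation_proposer(queries, assoc) # candidate subject-object pairs
        predicates = self.relation_decoder(queries, pairs)
        return detections, assoc, predicates
```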
To evaluate the efficacy of USG-Par, which supports both single-modality and multi-modality scene parsing, we utilize existing single-modality datasets and a manually constructed multimodal dataset.
- Single-modal Dataset
  - Image:
  - Video:
  - Text:
  - 3DSG:

  Please refer to the corresponding instructions for dataset preparation.
- Multi-modal Dataset
  - Text-Image: Inspired by LLM4SGG, we leverage three image caption datasets, COCO Caption, Conceptual Captions (CC), and VG Caption, to build the text-image pairwise SGs.
  - Text-Video: To construct the text-video pairwise USG dataset, we select 400 videos from ActivityNet that come with dense caption annotations.
  - Text-3D: To construct the text-3D pairwise USG dataset, we use the ScanRefer dataset, which contains 46,173 descriptions of 724 object types across 800 ScanNet scenes.
  - Image-Video: To construct the image-video pairwise USG dataset, we utilize the existing PVSG video dataset.
  - Image-3D: To construct the image-3D pairwise USG dataset, we leverage the existing 3DSG dataset. (A sketch of how a pairwise record might be organized is given below.)
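Purely for orientation, one pairwise USG record could be organized roughly as follows; every field name here is an assumed placeholder rather than the project's actual annotation format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PairwiseUSGSample:
    """Hypothetical layout of one pairwise USG record (e.g. text-image);
    all field names are assumptions for illustration, not the released schema."""
    modalities: Tuple[str, str]                           # e.g. ("text", "image")
    objects_a: List[dict] = field(default_factory=list)   # nodes from modality A (label, span/box/mask, ...)
    objects_b: List[dict] = field(default_factory=list)   # nodes from modality B
    cross_links: List[Tuple[int, int]] = field(default_factory=list)    # aligns A-node i with B-node j
    relations: List[Tuple[int, str, int]] = field(default_factory=list) # (subject_idx, predicate, object_idx)
```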
Coming soon.
If you use USG in your project, please kindly cite:
@inproceedings{wu2025usg,
  title={Universal Scene Graph Generation},
  author={Wu, Shengqiong and Fei, Hao and Chua, Tat-Seng},
  booktitle={CVPR},
  year={2025}
}
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.