

NRSeg

NRSeg: Noise-Resilient Learning for BEV Semantic Segmentation via Driving World Models [PDF]

Siyu Li*, Fei Teng*, Yihong Cao, Kailun Yang†, Zhiyong Li†, Yaonan Wang.

Motivation

Framework

Abstract

Bird's Eye View (BEV) semantic segmentation is an indispensable perception task in end-to-end autonomous driving systems. Unsupervised and semi-supervised learning for BEV tasks, though pivotal for real-world applications, underperform due to the homogeneous distribution of the labeled data. In this work, we explore the potential of synthetic data from driving world models to enhance the diversity of labeled data for robustifying BEV segmentation. Yet, our preliminary findings reveal that generation noise in synthetic data compromises efficient BEV model learning. To fully harness the potential of synthetic data from world models, this paper proposes NRSeg, a noise-resilient learning framework for BEV semantic segmentation. Specifically, a Perspective-Geometry Consistency Metric (PGCM) is proposed to quantitatively evaluate the guidance capability of generated data for model learning. This metric originates from the alignment measure between the perspective road mask of generated data and the mask projected from the BEV labels. Moreover, a Bi-Distribution Parallel Prediction (BiDPP) is designed to enhance the inherent robustness of the model, where the learning process is constrained through parallel prediction of multinomial and Dirichlet distributions. The former efficiently predicts semantic probabilities, whereas the latter adopts evidential deep learning to realize uncertainty quantification. Furthermore, a Hierarchical Local Semantic Exclusion (HLSE) module is designed to address the non-mutual exclusivity inherent in BEV semantic segmentation tasks. The proposed framework is evaluated on BEV semantic segmentation using data generated by multiple world models, with comprehensive testing conducted on the public nuScenes dataset under unsupervised and semi-supervised settings. Experimental results demonstrate that NRSeg achieves state-of-the-art performance, yielding the highest improvements in mIoU of 13.8% and 11.4% in unsupervised and semi-supervised BEV segmentation tasks, respectively.
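As the abstract describes, PGCM scores a generated sample by how well its perspective road mask agrees with the road mask projected from the BEV ground-truth labels. A minimal illustrative sketch of such an alignment measure (an IoU between the two binary masks; this is not the repository's implementation, and the function name is hypothetical):

```python
import numpy as np

def mask_alignment_score(perspective_mask: np.ndarray,
                         projected_bev_mask: np.ndarray) -> float:
    """IoU between two binary road masks of the same H x W shape."""
    a = perspective_mask.astype(bool)
    b = projected_bev_mask.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:  # both masks empty: treat as perfect agreement
        return 1.0
    inter = np.logical_and(a, b).sum()
    return float(inter) / float(union)

# Toy example: two 4x4 masks that partially overlap on the road region.
gen = np.array([[1, 1, 0, 0]] * 4)   # 8 road pixels
proj = np.array([[1, 0, 0, 0]] * 4)  # 4 road pixels, all shared
print(mask_alignment_score(gen, proj))  # 4 / 8 -> 0.5
```

A higher score indicates that the generated image's road geometry is consistent with its BEV label, i.e. the sample offers cleaner guidance for training.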

Update

2026.2: Updated code.
2025.7: Initialized repository.

Data

In addition to the nuScenes dataset, this work uses synthetic data generated by three driving world models: PerLDiff, MagicDrive, and BEVControl.

Back-projection GT perspective mask

Projection masks for the nuScenes dataset are generated in two parts: vehicle masks and road masks.

python get_nusc_road_mask.py
python get_nux_car_mask.py
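The idea behind these scripts is to back-project BEV ground-truth labels into the camera view. A hedged sketch of that geometry (the function, camera parameters, and grid convention here are all hypothetical, not the repository's code): BEV label cells are lifted to 3D ground-plane points and projected through the camera intrinsics and extrinsics, and points landing in front of the camera and inside the image rasterize a perspective mask.

```python
import numpy as np

def project_bev_to_image(bev_mask, bev_range, K, T_cam_from_world, hw):
    """bev_mask: (N, N) binary grid; bev_range: half-extent in meters;
    K: 3x3 intrinsics; T_cam_from_world: 4x4 extrinsics; hw: (H, W)."""
    n = bev_mask.shape[0]
    ys, xs = np.nonzero(bev_mask)
    # Grid cell centers -> metric ground-plane coordinates (z = 0).
    wx = (xs + 0.5) / n * 2 * bev_range - bev_range
    wy = (ys + 0.5) / n * 2 * bev_range - bev_range
    pts = np.stack([wx, wy, np.zeros_like(wx), np.ones_like(wx)])
    cam = (T_cam_from_world @ pts)[:3]  # world -> camera frame
    valid = cam[2] > 1e-3               # keep points in front of camera
    uvw = K @ cam[:, valid]             # pinhole projection
    u = (uvw[0] / uvw[2]).astype(int)
    v = (uvw[1] / uvw[2]).astype(int)
    H, W = hw
    mask = np.zeros((H, W), dtype=np.uint8)
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    mask[v[inside], u[inside]] = 1
    return mask
```

In practice one would rasterize cell polygons rather than single points to get a dense mask, and take the calibration from the nuScenes annotations; the sketch only shows the projection step.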

Perspective Masks of Synthetic Data

Perspective masks for the synthetic data are generated with pre-trained SAN and Mask2Former models.

To generate masks with SAN, run:

python predict_sanmask.py 

To generate masks with Mask2Former, run:

python mask2former_demo/demo_pre.py

UDA Training

python train_UDA.py
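During training, the BiDPP branch described in the abstract predicts a Dirichlet distribution alongside the usual class probabilities to quantify uncertainty. A minimal numpy sketch of the standard evidential-deep-learning readout (the function name is hypothetical and this is not the repository's code):

```python
import numpy as np

def dirichlet_readout(logits: np.ndarray):
    """logits: (..., K). Returns (expected class probs, vacuity uncertainty)."""
    evidence = np.log1p(np.exp(logits))            # softplus -> non-negative evidence
    alpha = evidence + 1.0                         # Dirichlet concentration parameters
    strength = alpha.sum(axis=-1, keepdims=True)   # total evidence S = sum(alpha)
    probs = alpha / strength                       # expected class probabilities
    K = logits.shape[-1]
    uncertainty = K / strength[..., 0]             # vacuity u = K / S
    return probs, uncertainty

# With near-zero evidence, uncertainty approaches 1 (uniform Dirichlet);
# strong evidence for one class drives it toward 0.
p, u = dirichlet_readout(np.array([-20.0, -20.0, -20.0]))
```

Pixels with high vacuity can then be down-weighted in the loss, which is one way noisy synthetic labels are prevented from dominating learning.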

🤝 Publication:

Please consider citing this paper if you use the code from our work. Thanks a lot :)

@article{li2025nrseg,
  title={NRSeg: Noise-Resilient Learning for BEV Semantic Segmentation via Driving World Models},
  author={Siyu Li and Fei Teng and Yihong Cao and Kailun Yang and Zhiyong Li and Yaonan Wang},
  journal={arXiv preprint arXiv:2507.04002},
  year={2025}
}
