[CVPR2025] MoME

Resilient Sensor Fusion under Adverse Sensor Failures via Multi-Modal Expert Fusion

Konyul Park1 *, Yecheol Kim2,3 *, Daehun Kim2, Jun Won Choi1 **

1 Seoul National University, Korea; 2 Hanyang University, Korea; 3 LG AI Research, Korea

(*) equal contribution, (**) corresponding author.

arXiv Preprint (arXiv:2503.19776)

(Figure: overall architecture of MoME)

Introduction

In this study, we introduce MoME, an efficient and robust LiDAR-camera 3D object detector that maintains strong performance through a mixture-of-experts approach. MoME fully decouples modality dependencies using three parallel expert decoders, which decode object queries from camera features, LiDAR features, or a combination of both, respectively. We propose the Multi-Expert Decoding (MED) framework, in which each query is selectively decoded by one of the three expert decoders. An Adaptive Query Router (AQR) selects the most appropriate expert decoder for each query based on the quality of the camera and LiDAR features, ensuring that every query is processed by the best-suited expert and yielding robust performance across diverse sensor failure scenarios. We evaluated MoME on the nuScenes-R benchmark, where it achieves state-of-the-art performance under extreme weather and sensor failure conditions, significantly outperforming existing models.
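
The routing idea can be sketched in PyTorch as follows. This is a minimal illustration, not the released implementation: the class names mirror the paper's terminology, but the two-layer gating MLP, the hard argmax routing, and the single-layer experts are assumptions made for brevity.

```python
import torch
import torch.nn as nn


class AdaptiveQueryRouter(nn.Module):
    """Scores the three experts (camera, LiDAR, fused) for each query."""

    def __init__(self, embed_dim: int, num_experts: int = 3):
        super().__init__()
        # Hypothetical two-layer gating MLP; the released model may differ.
        self.gate = nn.Sequential(
            nn.Linear(embed_dim, embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, num_experts),
        )

    def forward(self, queries: torch.Tensor) -> torch.Tensor:
        # queries: (num_queries, embed_dim) -> one expert index per query.
        # Hard argmax shown for clarity; training would need a
        # differentiable or supervised gate.
        return self.gate(queries).argmax(dim=-1)


class MultiExpertDecoding(nn.Module):
    """Routes each object query to one of three parallel expert decoders:
    0 = camera-only, 1 = LiDAR-only, 2 = camera+LiDAR (fused)."""

    def __init__(self, embed_dim: int = 256, nhead: int = 8):
        super().__init__()
        self.router = AdaptiveQueryRouter(embed_dim)
        self.experts = nn.ModuleList(
            [nn.TransformerDecoderLayer(embed_dim, nhead) for _ in range(3)]
        )

    def forward(self, queries, cam_feats, lidar_feats):
        # queries: (num_queries, embed_dim); features: (num_tokens, embed_dim)
        memories = [cam_feats, lidar_feats,
                    torch.cat([cam_feats, lidar_feats], dim=0)]
        choice = self.router(queries)
        decoded = torch.zeros_like(queries)
        for k, expert in enumerate(self.experts):
            mask = choice == k
            if mask.any():
                # TransformerDecoderLayer expects (seq, batch, dim); batch = 1.
                out = expert(queries[mask].unsqueeze(1),
                             memories[k].unsqueeze(1))
                decoded[mask] = out.squeeze(1)
        return decoded
```

Decoding each query with exactly one expert (rather than a weighted mixture) is what keeps the modality paths fully decoupled: a query routed to the LiDAR-only expert never attends to corrupted camera features, and vice versa.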

Quantitative results (NDS) on the nuScenes and nuScenes-R datasets

| Method | Training Schedule | Clean | Beam Reduction (4 beams) | LiDAR Drop (all) | Limited FOV (±60°) | Object Failure (0.5) | View Drop (all) | Occlusion | Config | Weight |
|--------|-------------------|-------|--------------------------|------------------|--------------------|----------------------|-----------------|-----------|--------|--------|
| MoME   | 2 epochs          | 73.6  | 63.0                     | 48.2             | 58.3               | 71.0                 | 69.5            | 70.5      | config | weight |

Notes

We evaluate MoME on nuScenes-R and nuScenes-C; a sketch of one such corruption is shown below.
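
For context, benchmarks like nuScenes-R simulate sensor failures by corrupting the inputs. The helper below sketches one example (beam reduction) under the assumption that the last LiDAR point attribute is the beam/ring index, as in raw nuScenes sweeps; the function name and uniform beam selection are illustrative stand-ins, not the benchmark's official corruption code.

```python
import numpy as np


def reduce_beams(points: np.ndarray, keep: int = 4,
                 total: int = 32) -> np.ndarray:
    """Keep a uniform subset of LiDAR beams (e.g. 32 -> 4) to mimic
    the 'Beam Reduction' failure case. Assumes points[:, -1] stores
    the beam (ring) index, as in raw nuScenes point clouds."""
    kept = np.linspace(0, total - 1, keep).round().astype(int)
    mask = np.isin(points[:, -1].astype(int), kept)
    return points[mask]
```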

Getting Started

Acknowledgements

MoME is based on MEFormer. It is also greatly inspired by the following outstanding contributions to the open-source community: mmdetection3d, CMT.

Citation

If you find MoME useful in your research or applications, please consider giving us a star 🌟 and citing it with the following BibTeX entry.

@article{MoME,
  title={Resilient Sensor Fusion under Adverse Sensor Failures via Multi-Modal Expert Fusion},
  author={Park, Konyul and Kim, Yecheol and Kim, Daehun and Choi, Jun Won},
  journal={arXiv preprint arXiv:2503.19776},
  year={2025}
}
