
[Under Review] MetaDriver: Driver Saliency Prediction in Unseen Accident Scenarios via One-Shot Meta-Learning


Anonymous Author(s)
Affiliation, Address, email

📚Table of Contents

  • 🔥Update
  • ⚡Proposed Model
  • 📖Datasets
  • 🛠️Deployment
  • 🚀Demo
  • ⭐️Cite

🔥Update

  • 2025/07/16: We provide two handy scripts.

    • modify_yaml.py: One-click to modify dataset path. script
    • collect_metrics.py: One-click to collect metrics data. script
  • 2025/06/20: All code and models are complete.

  • 2025/04/19: We collect existing meta-learning methods for driver saliency prediction.

    • Environment configuration
      • metadriver_ori: PFENet (TPAMI'22), BAM (CVPR'22). Details
      • metadriver_mmcv: HDMNet (CVPR'23), AMNet (NeurIPS'23), AENet (ECCV'24). Details
    • How to use: command & script
  • 2025/02/15: We collect driving accident datasets with category labels, each divided into 4 folds by category.

    • DADA: 52 categories, repartitioned into DADA-52i.
    • PSAD: 4 categories, repartitioned into PSAD-4i.
  • 2025/01/11: We propose a model that learns to learn driver attention in driving accident scenarios.

⚡Proposed Model 🔁

We propose MetaDriver, a framework that combines a one-shot learning strategy with scene-level semantic alignment, enabling the model to achieve human-like attentional perception in previously unseen accident scenarios.

  1. A Novel One-Shot Learning Framework for Driver Saliency Prediction. We propose MetaDriver, inspired by how drivers generalize from limited examples. It filters salient regions from support masks to guide attention learning and uses semantic alignment to improve cross-scene generalization.
  2. Two Effective Modules: SMF and SSA. To address noisy attention labels and limited semantic overlap, SMF removes irrelevant background while SSA aligns salient features between support and query pairs, significantly boosting our model's performance (see the sketch after this list).
  3. Extensive Evaluation on DADA-52i and PSAD-4i with Strong Generalization and SOTA Results. MetaDriver exceeds 10 baselines on both datasets and across backbones, achieving up to +34.3% CC, +45.2% SIM and –31.2% KLD gains under oracle-support. These results highlight its robustness and backbone-agnostic design.
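
For intuition, here is a minimal PyTorch-style sketch of the two ideas named above (SMF and SSA). The function names, tensor shapes, threshold, and prototype pooling are our illustrative assumptions, not the actual MetaDriver implementation.

```python
import torch
import torch.nn.functional as F

def salient_mask_filter(support_feat, support_map, thresh=0.5):
    """SMF idea (illustrative): keep support features only inside salient regions.

    support_feat: (B, C, H, W) backbone features of the support image
    support_map:  (B, 1, H, W) continuous attention map in [0, 1]
    """
    mask = (support_map >= thresh).float()  # binarize the noisy attention label
    return support_feat * mask              # suppress irrelevant background

def scene_semantic_alignment(filtered_support_feat, query_feat):
    """SSA idea (illustrative): align salient support features with the query scene.

    Returns a per-location similarity prior over the query image.
    """
    proto = filtered_support_feat.flatten(2).mean(dim=2)  # (B, C) salient prototype
    q = F.normalize(query_feat, dim=1)                    # (B, C, H, W)
    p = F.normalize(proto, dim=1)[..., None, None]        # (B, C, 1, 1)
    prior = (q * p).sum(dim=1, keepdim=True)              # cosine similarity (B, 1, H, W)
    return prior.clamp(min=0)
```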

📖Datasets 🔁

| Dataset | Accident categories | Fold-0 | Fold-1 | Fold-2 | Fold-3 |
| --- | --- | --- | --- | --- | --- |
| DADA-52i | 52 | [1] a pedestrian crosses the road. ... | [14] there is an object crash. ... | [27] a motorbike is out of control. ... | [40] vehicle changes lane in the same direction as the ego-car. ... |
| PSAD-4i | 4 | moving ahead or waiting | lateral | oncoming | turning |
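
The bracketed indices above suggest that DADA-52i splits its 52 accident categories into four contiguous blocks of 13 (fold-0: 1-13, fold-1: 14-26, fold-2: 27-39, fold-3: 40-52). The helper below sketches that reading; it is our inference from the table, not code from the repository.

```python
def dada52i_fold(category_id: int) -> int:
    """Map a DADA category id (1..52) to its fold (0..3).

    Assumes four contiguous blocks of 13 categories, as the
    table indices [1], [14], [27], [40] suggest.
    """
    assert 1 <= category_id <= 52
    return (category_id - 1) // 13

print(dada52i_fold(14))  # -> 1, i.e. "there is an object crash" sits in fold-1
```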

【note】For each dataset we provide both our download link and the official link. Please choose according to your needs.

DADA-52i: We will upload this dataset to BaiduYun (please wait). Official website: link.

PSAD-4i: We will upload this dataset to BaiduYun (please wait). Official website: link.

DADA-52i:

```
MetaDADA/
├── 1/
│   ├── 001/
│   │   ├── images/
│   │   └── maps/
│   ├── 002/
│   └── ...
├── 2/
│   ├── 001/
│   │   ├── images/
│   │   └── maps/
│   ├── 002/
│   └── ...
└── ...
```

PSAD-4i:

```
MetaPSAD/
├── images/
│   ├── 2/
│   │   ├── 0004/
│   │   └── ...
│   ├── 3/
│   └── ...
└── maps/
    ├── 2/
    │   ├── 0004/
    │   └── ...
    ├── 3/
    └── ...
```
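
Given the MetaDADA layout above, a one-shot episode pairs a support frame (with its attention map) and a query frame from the same held-out category. The loader below is only a sketch of that idea; it assumes maps share filenames with images, and the repository's own dataset classes handle resizing, augmentation, and fold filtering.

```python
import os
import random

def sample_episode(root, category):
    """Sample a 1-shot episode from <root>/<category>/<clip>/{images,maps}.

    Returns file paths (support_img, support_map, query_img, query_map).
    Illustrative only; not the repository's loader.
    """
    clips = sorted(os.listdir(os.path.join(root, category)))
    support_clip, query_clip = random.sample(clips, 2)

    def pick(clip):
        img_dir = os.path.join(root, category, clip, "images")
        frame = random.choice(sorted(os.listdir(img_dir)))
        # assumes the map file shares the image's filename
        return (os.path.join(img_dir, frame),
                os.path.join(root, category, clip, "maps", frame))

    (s_img, s_map), (q_img, q_map) = pick(support_clip), pick(query_clip)
    return s_img, s_map, q_img, q_map
```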

🛠️Deployment 🔁

[1] Environment

  • Git clone this repository
git clone https://github.com/zhao-chunyu/MetaDriver.git
cd MetaDriver
  • Create conda environment
conda create -n MetaDriver python=3.11
conda activate MetaDriver
  • Install PyTorch 2.5.1+cu121
pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu121
  • Install requirements
pip install -r utils/requirements.txt
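
After installation, a quick sanity check (generic PyTorch, not a repository script) confirms that the CUDA build is active:

```python
import torch

print(torch.__version__)          # expect 2.5.1+cu121
print(torch.cuda.is_available())  # expect True on a CUDA-capable machine
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```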

[2] Run train

Before training, a script you might use: { modify_yaml.py: one-click modification of the dataset path. Details }
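
We do not reproduce modify_yaml.py here; conceptually it rewrites the dataset-path field in the training configs. The sketch below shows that idea under the assumption that the configs are YAML files with a data_root key (both the glob pattern and the key name are hypothetical):

```python
import glob
import yaml  # pip install pyyaml

def set_data_root(new_root, pattern="config/**/*.yaml"):
    """Point every matching config at a new dataset location (illustrative)."""
    for path in glob.glob(pattern, recursive=True):
        with open(path) as f:
            cfg = yaml.safe_load(f)
        if isinstance(cfg, dict) and "data_root" in cfg:  # assumed key name
            cfg["data_root"] = new_root
            with open(path, "w") as f:
                yaml.safe_dump(cfg, f)

set_data_root("/path/to/MetaDADA")
```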

If you wish to train with our model, please use the command below.

sh scripts/train_MetaDriver.sh [dataset] [split#] [backbone] [gpu]

  • dataset: metadada, metapsad.
  • split#: 0, 1, 2, 3.
  • backbone: resnet50, vgg.
  • gpu: 0 or another GPU id.

Example:

sh scripts/train_MetaDriver.sh metadada split0 resnet50 0

[3] Run test

Testing first computes the predicted saliency maps and then evaluates them in Python.

sh scripts/test_MetaDriver.sh [dataset] [split#] [backbone] [gpu]

Example:

sh scripts/test_MetaDriver.sh metadada split0 resnet50 0

After testing, a script you might use: { collect_metrics.py: one-click collection of metrics data. Details }
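
CC, SIM, and KLD (the metrics reported above) are the standard saliency-evaluation measures: Pearson correlation, histogram intersection of the normalized maps, and KL divergence of the ground truth from the prediction. A minimal NumPy reference, not the repository's implementation:

```python
import numpy as np

EPS = 1e-7

def _dist(x):
    """Shift to non-negative and normalize a saliency map to sum to 1."""
    x = x.astype(np.float64) - x.min()
    return x / (x.sum() + EPS)

def cc(pred, gt):
    return np.corrcoef(pred.flatten(), gt.flatten())[0, 1]  # Pearson correlation

def sim(pred, gt):
    return np.minimum(_dist(pred), _dist(gt)).sum()          # histogram intersection

def kld(pred, gt):
    p, g = _dist(pred), _dist(gt)
    return (g * np.log(g / (p + EPS) + EPS)).sum()           # KL(gt || pred)
```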

[4] Run visualization

If you want to visualize all the data of a given dataset at once, you can use the following command.

sh scripts/visual_MetaDriver.sh [dataset] [split#] [backbone] [gpu]

Example:

sh scripts/visual_MetaDriver.sh metadada split0 resnet50 0
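
If you only need to inspect a single prediction outside the provided script, blending the predicted map over the frame is enough; a generic OpenCV sketch (file names are placeholders):

```python
import cv2

def overlay(frame_path, saliency_path, alpha=0.5):
    """Blend a predicted saliency map over a driving frame (generic sketch)."""
    frame = cv2.imread(frame_path)
    sal = cv2.imread(saliency_path, cv2.IMREAD_GRAYSCALE)
    sal = cv2.resize(sal, (frame.shape[1], frame.shape[0]))
    heat = cv2.applyColorMap(sal, cv2.COLORMAP_JET)
    return cv2.addWeighted(frame, 1 - alpha, heat, alpha, 0)

cv2.imwrite("overlay.png", overlay("frame.png", "pred.png"))
```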

[5] SOTA One/Few-shot

If you want to run other comparison models, we support 5 SOTA models. Details

Supported models: PFENet (TPAMI'22), BAM (CVPR'22), HDMNet (CVPR'23), AMNet (NeurIPS'23), AENet (ECCV'24).

Then, you can run the following command.

sh scripts/[train_*.sh] [dataset] [split#] [backbone] [gpu]

  • *: PFENet, BAM, HDMNet, AMNet, AENet.
  • dataset: metadada, metapsad.
  • split#: 0, 1, 2, 3.
  • backbone: resnet50, vgg.
  • gpu: 0 or another GPU id.

Example:

sh scripts/train_PFENet.sh metadada split0 resnet50 0

Testing and visualization for these models follow the same commands as our MetaDriver model.

🚀Demo 🔁

  • Compare with SOTA Models
  • Case of Unseen Accident Scenario

⭐️Cite 🔁

If you find this repository useful, please use the following BibTeX entry for citation and give us a star⭐.

Waiting for acceptance; the BibTeX entry will be added here.
