🚗💥 From Words to Collisions: LLM-Guided Evaluation and Adversarial Generation of Safety-Critical Driving Scenarios
We propose a novel framework that leverages Large Language Models (LLMs) for:
- 🧠 Evaluation: Assessing the safety-criticality of driving scenarios, with two use cases: scenario evaluation and safety inference (an illustrative sketch follows this list).
- 🛠️ Generation: Adversarially generating safety-critical scenarios with controllable agent trajectories.
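As a rough illustration of the evaluation use case, the hypothetical snippet below prompts an LLM to judge whether a textual scenario description is safety-critical. The model name, prompt wording, and helper function are illustrative assumptions, not the framework's actual prompts or interfaces.

```python
# Hypothetical sketch: ask an LLM whether a driving scenario is safety-critical.
# The prompt, model name, and output format are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def evaluate_scenario(description: str) -> str:
    """Return the LLM's safety-criticality judgment for a scenario description."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are a driving-scenario safety evaluator. "
                        "Classify the scenario as SAFE or SAFETY-CRITICAL and explain briefly."},
            {"role": "user", "content": description},
        ],
    )
    return response.choices[0].message.content


print(evaluate_scenario(
    "The ego vehicle approaches an intersection at 15 m/s while a pedestrian "
    "steps onto the crosswalk 20 m ahead."
))
```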
- [July 2025] Paper accepted at IEEE ITSC 2025
- [May 2025] Project repository initialized
- agent/: Analyze agent-based scenarios
- agent_normal/: Analyze normal agent behaviors
- scenario/: Analyze collision scenarios
- scenario_normal/: Analyze normal driving scenarios
- BEL_Antwerp-1_14_T-1/: Original normal scenarios
- BEL_Antwerp-1_14_T-1n/: Generated adversarial scenarios
- Metrices/: Safety metrics for the two scenarios above
- Trajectory_collection/: Collect vehicle trajectories
- Riskscore_calculation/: Compute risk scores
- Safety_metrics_collection/: Extract safety metrics (see the illustrative metric sketch after this structure overview)
- CloesdID_identification/: Identify nearby agents
- generate_timestep_report.py: Generate reports for each timestep
- normal_scenarios/: 100 normal scenarios (Frenetix planner)
- collision_scenarios/: 100 collision scenarios (Frenetix planner)
- LLM/: Results from LLM evaluations
- output_validation/: Validation for collision scenarios
- output_validation_normal/: Validation for normal scenarios
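For intuition on the kind of quantity the risk-score and safety-metric scripts compute, here is a small, self-contained sketch of time-to-collision (TTC), one common safety metric. It is an assumption for illustration; the repository's actual metrics, formulas, and function names may differ.

```python
# Illustrative sketch of time-to-collision (TTC), a common safety metric.
# This is NOT the repository's implementation; names and formula choice are assumptions.
def time_to_collision(gap_m: float, ego_speed_mps: float, lead_speed_mps: float) -> float:
    """TTC = gap / closing speed; returns infinity if the vehicles are not closing."""
    closing_speed = ego_speed_mps - lead_speed_mps
    if closing_speed <= 0.0:
        return float("inf")  # no collision course
    return gap_m / closing_speed


# Example: 20 m gap, ego at 15 m/s, lead vehicle at 10 m/s -> TTC = 4 s
print(time_to_collision(20.0, 15.0, 10.0))
```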
- Python 3.8+
- CommonRoad
- Frenetix Motion Planner
Create a .env file in the root directory with your API keys (a minimal loading sketch follows the key list below):
OPENAI_API_KEY=your_openai_key
GEMINI_API_KEY=your_gemini_key
DEEPSEEK_API_KEY=your_deepseek_key
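A minimal sketch of reading these keys at runtime, assuming the python-dotenv package is available (the package choice and error handling here are illustrative; the repository may load the keys differently):

```python
# Minimal sketch: load API keys from the .env file (assumes python-dotenv is installed).
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory

openai_key = os.getenv("OPENAI_API_KEY")
gemini_key = os.getenv("GEMINI_API_KEY")
deepseek_key = os.getenv("DEEPSEEK_API_KEY")

if not all([openai_key, gemini_key, deepseek_key]):
    raise RuntimeError("One or more API keys are missing from the .env file.")
```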
If you find this work helpful in your research, please consider citing us:
@article{gao2025words,
  title={From Words to Collisions: LLM-Guided Evaluation and Adversarial Generation of Safety-Critical Driving Scenarios},
  author={Gao, Yuan and Piccinini, Mattia and Moller, Korbinian and Alanwar, Amr and Betz, Johannes},
  journal={arXiv preprint arXiv:2502.02145},
  year={2025}
}