Carlos Gómez-Huélamo, Marcos V. Conde
RobeSafe group, University of Alcalá, UAH
Computer Vision Lab, CAIDAS, University of Würzburg
News 🚀🚀
- [10/2022] A GNN attention-based method will be released. We will extend the baselines :)
- [09/2022] Ongoing repository update. Our most recent work was presented at IEEE ITSC 2022.
This is the official repository and PyTorch implementations of different works presented at:
- Fresh Perspectives on the Future of Autonomous Driving Workshop at ICRA 2022
- Multi-Agent Behavior: Representation, Modeling, Measurement, and Applications Workshop at CVPR 2022
- 25th IEEE International Conference on Intelligent Transportation Systems (ITSC 2022)
Our papers:
- Exploring Map-based Features for Efficient Attention-based Vehicle Motion Prediction (CVPR 2022 and ICRA 2022 Workshops)
- Exploring Attention GAN for Vehicle Motion Prediction (IEEE International Conference on Intelligent Transportation Systems, ITSC 2022)
Motion prediction (MP) of multiple agents is a crucial task in arbitrarily complex environments, from social robots to self-driving cars. Current approaches tackle this problem using end-to-end networks, where the input data is usually a rendered top-view of the scene and the past trajectories of all the agents; leveraging this information is a must to obtain optimal performance. In that sense, a reliable Autonomous Driving (AD) system must produce reasonable predictions on time. However, although many of these approaches use simple ConvNets and LSTMs, the models may not be efficient enough for real-time applications when both sources of information (map and trajectory history) are used. Moreover, the performance of these models highly depends on the amount of training data, which can be expensive (particularly the annotated HD maps). In this work, we explore how to achieve competitive performance on the Argoverse 1.0 Benchmark using efficient attention-based models, which take as input the past trajectories and map-based features extracted from minimal map information to ensure efficient and reliable MP. These features represent interpretable information, such as the drivable area and plausible goal points, in contrast to black-box CNN-based methods for map processing.
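The core operation behind the attention-based models described above is attending from an agent's trajectory encoding to map-based features such as candidate goal points. As a rough, self-contained illustration of that mechanism only (not the papers' actual architecture), here is scaled dot-product attention in plain Python; the vectors are hypothetical placeholders for learned encodings:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, keys, values):
    """Scaled dot-product attention for a single query vector.

    query:  list of d floats (e.g. an encoding of the agent's past trajectory).
    keys:   list of d-dimensional vectors (e.g. encodings of map-based goal points).
    values: list of vectors associated with each key.
    Returns (context vector, attention weights).
    """
    d = len(query)
    # Similarity between the query and each key, scaled by sqrt(d).
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    # Weighted sum of the values, one coordinate at a time.
    dim = len(values[0])
    context = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
    return context, weights
```

For example, a query aligned with the first key receives a larger attention weight, so the context vector is pulled toward the first value; in the real models, query/key/value projections are learned end-to-end.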
Tested on Ubuntu 16.04.
If available, check requirements.txt.
```bash
conda create --name mapfe4mp python=3.8
conda activate mapfe4mp
conda install -n mapfe4mp ipykernel --update-deps --force-reinstall
python3 -m pip install --upgrade pip
python3 -m pip install --upgrade Pillow
pip install \
    prodict \
    torch \
    pyyaml \
    torchvision \
    tensorboard \
    glob2 \
    matplotlib \
    scikit-learn \
    gitpython \
    torchstat \
    torch_sparse \
    torch_geometric
```
Download argoverse-api (v1.0) in another folder (outside this repository).
Then, from the argoverse-api folder, run:

```bash
pip install -e .
```

N.B. The conda environment must be activated so that argoverse is installed as a Python package of your environment.
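After installation, a quick way to confirm the environment has the required packages is a small stdlib-only helper (this script is an illustrative aid, not part of the repository):

```python
import importlib.util

def missing_packages(names):
    """Return the subset of `names` that cannot be imported in the
    current environment, using importlib to look up each top-level
    package without actually importing it."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# A few of the packages this README installs; with the conda
# environment active, this list should come back empty.
required = ["torch", "torchvision", "tensorboard", "matplotlib", "argoverse"]
print("Missing:", missing_packages(required))
```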
Please check our papers for further details.
If you would like to access all the qualitative samples for the Argoverse 1.0 validation set, please contact us.
We are preparing a tutorial notebook for generating these visualizations.
Please cite this work if you use our code or ideas.
Work done with Miguel Ortiz, Santiago Montiel, Rafael Barea, Luis M. Bergasa - RobeSafe group
```bibtex
@article{gomez2022exploring,
    title={Exploring Map-based Features for Efficient Attention-based Vehicle Motion Prediction},
    author={G{\'o}mez-Hu{\'e}lamo, Carlos and Conde, Marcos V. and Ortiz, Miguel},
    journal={arXiv preprint arXiv:2205.13071},
    year={2022}
}
```

```bibtex
@article{gomez2022gan,
    title={Exploring Attention GAN for Vehicle Motion Prediction},
    author={G{\'o}mez-Hu{\'e}lamo, Carlos and Conde, Marcos V. and Ortiz, Miguel and Montiel, Santiago and Barea, Rafael and Bergasa, Luis M.},
    journal={arXiv preprint arXiv:2209.12674},
    year={2022}
}
```
Please add "mapfe4mp" or "exploring map features paper" to the email subject.

Carlos Gómez-Huélamo: [email protected]
Marcos V. Conde: [email protected]