This repository contains the code for a Vehicle Motion Forecasting Competition in Spring 2025 for UCSD's 251B: Deep Learning course.
- Public Score Ranking: 13th out of 68 teams
- Private Score Ranking: 14th out of 68 teams
The competition involved forecasting the future positions of agents based on their past positions and velocities. The dataset is based on the Argoverse 2 Motion Forecasting Dataset. Download the train and test data from here: https://drive.google.com/drive/folders/1HvjwglXTPKEtWlpFFtbiPctbRfHmcKNp
The training set has shape (10000, 50, 110, 6): there are 10,000 training scenes, each scene contains 50 agents' trajectories over 110 time steps, and each time step has 6 dimensions. The 6 dimensions are:
- position_x
- position_y
- velocity_x
- velocity_y
- heading
- object_type
The test set input has shape (2100, 50, 50, 6).
The task is to predict the trajectory of the ego vehicle (agent index 0): given the first 50 time steps (5 seconds), predict the next 60 time steps (6 seconds).
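The shapes above imply a simple slicing scheme: the first 50 time steps of a scene are the model input, and the ego agent's (x, y) positions over the remaining 60 steps are the target. A minimal NumPy sketch of that split is below; the commented `np.load` line and file name are hypothetical stand-ins for however the downloaded data is actually stored, and a small random array with the documented shapes is used for illustration.

```python
import numpy as np

# Hypothetical loading line -- replace with the actual downloaded file:
# train = np.load("train.npz")["data"]      # shape (10000, 50, 110, 6)

# For illustration, a small random array with the documented layout:
# (scenes, agents, time steps, features).
rng = np.random.default_rng(0)
train = rng.normal(size=(8, 50, 110, 6))

# First 50 time steps of every agent are the observed history (model input).
history = train[:, :, :50, :]        # (scenes, 50 agents, 50 steps, 6 features)

# The target is the ego agent's (agent index 0) future (x, y) positions.
future = train[:, 0, 50:, :2]        # (scenes, 60 steps, 2)

print(history.shape)  # (8, 50, 50, 6)
print(future.shape)   # (8, 60, 2)
```

The same slicing applies to the test set, except its arrays stop at 50 time steps, so only `history` is available at inference time.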
(Figure: visualization of one example scene.)
We trained three families of models: Linear Regression, Convolutional Neural Networks, and LSTMs. See the model architectures in the /models directory and the experiments in the /experiments directory.
The LSTM + Attention model performed the best. The validation loss is computed on the last 5% of the training data. Shown below are the plotted validation loss curves for the models we tried. Some curves cover fewer epochs because training stopped early under our early-stopping criterion.
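The validation split described above can be reproduced by holding out the last 5% of training scenes. A minimal sketch, assuming the training array has the documented shape (the random array here is only a stand-in for the real data):

```python
import numpy as np

# Stand-in for the real training array of shape (10000, 50, 110, 6).
rng = np.random.default_rng(0)
data = rng.normal(size=(10000, 50, 110, 6))

# Hold out the last 5% of scenes for validation, as described above.
n_val = int(0.05 * len(data))
train_split, val_split = data[:-n_val], data[-n_val:]

print(train_split.shape[0], val_split.shape[0])  # 9500 500
```

Splitting by scene index (rather than shuffling) keeps the validation set fixed across runs, which makes the loss curves of different models directly comparable.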