
The DRIFT Open Dataset

Overview

DRIFT (Drone-derived Intelligence for Traffic analysis) is an open-source dataset designed to support advanced traffic behavior research using high-resolution drone imagery. It enables accurate vehicle detection, trajectory tracking, and traffic flow analysis across complex urban intersections.

Objectives

  • Provide a large-scale, annotated drone dataset optimized for traffic analysis
  • Support urban mobility research with pre-trained models and analytical tools
  • Enable multi-scale traffic analysis (microscopic, mesoscopic, and macroscopic)

Contributions

  • To detect and track vehicle instances in high resolution using polygon-based oriented bounding boxes (OBB), captured at an altitude of 250 meters
  • To provide stabilized video data and trajectories mapped onto real-world orthophotos
  • To provide 81,699 annotated vehicle trajectories collected across 2.6 km of urban roadways
  • To offer customizable object detection/tracking models and built-in tools for lane-change analysis, time-to-collision (TTC) estimation, congestion detection, flow-density analysis, and more

Dataset Specifications

  • Site coverage: 9 interconnected urban intersections in Daejeon, South Korea
    • Target site: from 99 Daehak-ro to 291 Daehak-ro, Daejeon, South Korea
  • Imagery: 4K drone footage with frame-level annotations
  • Trajectory format: Real-world coordinates with speed, acceleration, and heading
  • Model: YOLOv11m + ByteTrack with polygon-based OBB detection

Structure of the extracted traffic trajectory data

Column              Description
track_id            Unique identifier assigned to each vehicle throughout its trajectory
frame               Frame index in the video sequence (30 fps)
center_x, center_y  Horizontal and vertical positions of the vehicle center, respectively
width               Width of the detected vehicle
height              Height of the detected vehicle
angle               Orientation angle of the vehicle in radians
x1, y1              Coordinates of the front-left corner of the vehicle
x2, y2              Coordinates of the front-right corner of the vehicle
x3, y3              Coordinates of the rear-right corner of the vehicle
x4, y4              Coordinates of the rear-left corner of the vehicle
confidence          Confidence score of the detection result (range: 0 to 1)
class_id            Object class label (0: bus, 1: car, 2: truck)
site                Identifier of the observation site
lane                Lane index where the vehicle is currently located
preceding_id        Identifier of the vehicle directly ahead
following_id        Identifier of the vehicle directly behind

Note: Position- and size-related values (e.g., coordinates, width, height) are expressed in pixels.
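
As a quick illustration of how these columns can be used, here is a minimal sketch that loads one trajectory CSV with pandas and derives a per-vehicle speed from consecutive center positions. The file name is hypothetical, and the resulting speed is in pixels per second, since positions are pixel-based as noted above.

# Minimal sketch: load one trajectory CSV and derive per-vehicle speed.
# The file name below is hypothetical; adjust it to an actual file in data/csv/.
import pandas as pd

FPS = 30  # frame rate of the video sequence, as stated above

df = pd.read_csv("data/csv/site_01.csv")  # hypothetical file name
df = df.sort_values(["track_id", "frame"])

# Displacement of each vehicle's center between consecutive frames
dx = df.groupby("track_id")["center_x"].diff()
dy = df.groupby("track_id")["center_y"].diff()

# Positions are in pixels, so this speed is px/s until the orthophoto
# mapping is applied to convert to real-world units.
df["speed_px_per_s"] = (dx**2 + dy**2) ** 0.5 * FPS

# Example filter: confident car detections only (class_id 1 = car)
cars = df[(df["class_id"] == 1) & (df["confidence"] >= 0.5)]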

Demos

  • Frame stabilization (stabilization.mp4)
  • Result of object detection using the YOLOv11m model, with approximately 300K manually annotated vehicle instances (od_small.mp4)
  • Visualized trajectories in the DRIFT open dataset

Download Dataset (Hugging Face)

https://huggingface.co/datasets/Hj-Lee/The-DRIFT

Download Dataset (in Python)

# Load the DRIFT dataset
from datasets import load_dataset

dataset = load_dataset("Hj-Lee/The-DRIFT")
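
The loaded object is a DatasetDict; the exact split and feature names depend on how the dataset is published on the Hub, so it is worth inspecting it before relying on any particular structure:

# Inspect what was downloaded; split and feature names depend on how the
# dataset is published on the Hub, so verify them first.
print(dataset)                   # available splits and their features
first_split = list(dataset.keys())[0]
print(dataset[first_split][0])   # first record of that split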

Model Customization

# Clone the repository
git clone https://github.com/AIxMobility/The-DRIFT

# Create conda env
conda create -n DRIFT python=3.11 -y
conda activate DRIFT

# Install dependencies
cd The-DRIFT
pip install -r requirements.txt

# Stabilize drone video (via the shell script or the Python script directly)
sh preprocessing/stabilization.sh
python preprocessing/stabilization.py

# Preprocess the dataset (via the shell script or the Python script directly)
sh preprocessing/extraction.sh
python preprocessing/extraction.py

# Train detection model
python model/train.py
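
Once trained, the weights can be used for detection and tracking. Assuming model/best.pt is an Ultralytics-format checkpoint (the repository trains YOLOv11m and tracks with ByteTrack), a run could look roughly like the sketch below; the video path is a placeholder:

# Sketch: run the trained detector with ByteTrack tracking on a drone video.
# Assumes model/best.pt is an Ultralytics checkpoint; the video path is a
# placeholder, not a file guaranteed to ship with the repository.
from ultralytics import YOLO

model = YOLO("model/best.pt")
results = model.track(
    source="data/sample_video/sample.mp4",  # placeholder path
    tracker="bytetrack.yaml",  # Ultralytics' built-in ByteTrack configuration
    persist=True,              # keep track IDs consistent across frames
    save=True,                 # write the annotated video to runs/
)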

Repository Structure

│
├── data/                      # Raw and processed drone data
│   ├── csv/                   # Frame-level trajectory metadata
│   ├── sample_video/          # Sample drone videos
│   ├── site_images/           # Reference frames in each site
│
├── preprocessing/                # Data extraction and stabilization
│   ├── detect_and_track.py
│   ├── json_to_csv.py             
│   ├── lane.py
│   ├── RoI.json
│   ├── extraction.sh
│   ├── extraction.py
│   ├── stabilo.py            # Stabilization scripts (Ack.: https://github.com/rfonod/stabilo)
│   ├── default.yaml
│   ├── stabilze_video.py
│   ├── stabilo_utils.py
│   ├── script_utils.py   
│   ├── stabilization.sh
│   ├── stabilization.py
│   ├── geoalign_roi.json
│   ├── geoalign_transformation.ipynb
│
├── model/                     # Annotation data and model training
│   ├── test/                   
│   ├── train/           
│   ├── valid/
│   ├── data.yaml                   
│   ├── drone_data.yaml           
│   ├── train.py
│   ├── best.pt
│
├── utils/                     # Utility scripts for data handling
│   ├── convert.py
│   ├── video_to_frame.py
│
├── vis/                       # Visualization scripts and tools
│
├── notebooks/
│   ├── data_exploration.ipynb
│   ├── performance_analysis.ipynb
│
├── requirements.txt
├── README.md
└── LICENSE

Visualizations of Traffic Analysis Tools

Scale        Description
Microscopic  Lane Change (LC)
Microscopic  Time-to-Collision (TTC)
Mesoscopic   Flow-Density Diagram
Mesoscopic   Time-Space Diagram
Macroscopic  Speed Heatmap
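
As an example of how such tools can be built on the trajectory columns, the sketch below estimates a simple constant-speed time-to-collision between each vehicle and its leader via preceding_id. It uses center-to-center pixel distances and is only illustrative; the repository's built-in TTC tool may use a different formulation.

# Sketch: constant-speed TTC between each vehicle and the vehicle ahead of it
# (preceding_id), per frame. Illustrative only; not the built-in implementation.
import numpy as np
import pandas as pd

def add_ttc(df: pd.DataFrame, fps: int = 30) -> pd.DataFrame:
    df = df.sort_values(["track_id", "frame"]).copy()

    # Per-vehicle speed from consecutive center positions (pixels per second)
    g = df.groupby("track_id")
    df["speed"] = np.hypot(g["center_x"].diff(), g["center_y"].diff()) * fps

    # Attach each leader's same-frame position and speed via preceding_id
    leader = df[["frame", "track_id", "center_x", "center_y", "speed"]].rename(
        columns={"track_id": "preceding_id", "center_x": "lead_x",
                 "center_y": "lead_y", "speed": "lead_speed"})
    df = df.merge(leader, on=["frame", "preceding_id"], how="left")

    # Center-to-center gap (pixels); a refined version would subtract
    # vehicle extents using the OBB corners.
    gap = np.hypot(df["lead_x"] - df["center_x"], df["lead_y"] - df["center_y"])
    closing_speed = df["speed"] - df["lead_speed"]

    # TTC is defined only while the follower is closing in on the leader
    df["ttc_s"] = np.where(closing_speed > 0, gap / closing_speed, np.inf)
    return df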

Acknowledgement

This project is based on the Stabilo repository by Robert Fonod, licensed under the MIT License.
Certain parts have been adapted and modified to better suit the needs of this project.

@software{fonod2025stabilo,
  author = {Fonod, Robert},
  license = {MIT},
  month = apr,
  title = {Stabilo: A Comprehensive Python Library for Video and Trajectory Stabilization with User-Defined Masks},
  url = {https://github.com/rfonod/stabilo},
  doi = {10.5281/zenodo.12117092},
  version = {1.0.1},
  year = {2025}
}
@misc{fonod2025advanced,
  title={Advanced computer vision for extracting georeferenced vehicle trajectories from drone imagery}, 
  author={Robert Fonod and Haechan Cho and Hwasoo Yeo and Nikolas Geroliminis},
  year={2025},
  eprint={2411.02136},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2411.02136},
  doi={10.48550/arXiv.2411.02136}
}

Citation

If you use this project in your academic research, commercial products, or any published material, please acknowledge its use by citing it.

@misc{lee2025driftopendatasetdronederived,
      title={DRIFT open dataset: A drone-derived intelligence for traffic analysis in urban environment}, 
      author={Hyejin Lee and Seokjun Hong and Jeonghoon Song and Haechan Cho and Zhixiong Jin and Byeonghun Kim and Joobin Jin and Jaegyun Im and Byeongjoon Noh and Hwasoo Yeo},
      year={2025},
      eprint={2504.11019},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2504.11019}, 
}
