VHRV (Very High Resolution Vessels) Dataset
Welcome to the official repository for the VHRV dataset, associated with our research article "VHRV: Very High-Resolution Benchmark Dataset for Vessel Detection", published in the Remote Sensing Applications: Society and Environment (RSASE) journal. To access our paper ---> VHRV Paper

The VHRV dataset, short for Very High-Resolution Vessels, is a contribution to the field of computer vision, specifically addressing vessel detection in remote sensing imagery. It has been built to support research and development of object detection algorithms, particularly in the maritime context. Its purpose is to provide a versatile alternative with consistent and rich content, covering many vessel types at different scales, so that deep learning models can detect a wide variety of ships under a single vessel class in high-resolution remote sensing images.
- Number of Total Images: 1,502
- Number of Total Vessel Instances: 10,158
- Spatial Resolution: Ranges from 0.1 m to 0.25 m
- Image Resolution: 4800×2886 pixels
- Annotation Format: YOLO
- Annotation Style: HBB (Horizontal Bounding Box)
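Since the annotations follow the YOLO HBB convention (one `class cx cy w h` line per object, with coordinates normalized to the image width and height), a small helper can convert a label line back to pixel coordinates. This is an illustrative sketch, not code shipped with the dataset; the example label line is hypothetical.

```python
def yolo_to_pixels(line, img_w=4800, img_h=2886):
    """Convert one YOLO HBB label line to (class, x_min, y_min, x_max, y_max) in pixels."""
    cls, cx, cy, w, h = line.split()
    cx, cy, w, h = (float(v) for v in (cx, cy, w, h))
    # cx, cy are the normalized box center; w, h are the normalized box size
    x_min = (cx - w / 2) * img_w
    y_min = (cy - h / 2) * img_h
    x_max = (cx + w / 2) * img_w
    y_max = (cy + h / 2) * img_h
    return int(cls), x_min, y_min, x_max, y_max

# Hypothetical annotation line for the single "vessel" class (class id 0)
print(yolo_to_pixels("0 0.5 0.5 0.1 0.1"))
```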
Use of the Google Earth images must respect the "Google Earth" terms of use. All images and their associated annotations in the VHRV dataset may be used for academic purposes only; any commercial use is prohibited.

The VHRV dataset can be downloaded here ---> Download VHRV.
To evaluate the effectiveness of the VHRV dataset, we conducted comprehensive experiments using both two-stage (R-CNN-based) and one-stage (YOLO-based) deep learning models. The results, as presented in our RSASE journal article, demonstrate the robustness of the dataset across various architectures. For reproducibility and further exploration, we provide the trained model weights used in these experiments.
| Model | Backbone Type/Depth | Size (pixels) | mAP<sup>test</sup> 0.50 | mAP<sup>test</sup> 0.50:0.95 | mAP<sup>val</sup> 0.50 | mAP<sup>val</sup> 0.50:0.95 |
|---|---|---|---|---|---|---|
| Faster R-CNN | ResNet-50 | 1333×800 | 0.921 | 0.631 | 0.924 | 0.653 |
| Faster R-CNN | ResNet-101 | 1333×800 | 0.933 | 0.631 | 0.925 | 0.648 |
| Libra R-CNN | ResNet-50 | 1333×800 | 0.928 | 0.643 | 0.919 | 0.659 |
| Libra R-CNN | ResNet-101 | 1333×800 | 0.929 | 0.634 | 0.930 | 0.661 |
| Cascade R-CNN | ResNet-50 | 1333×800 | 0.931 | 0.668 | 0.926 | 0.683 |
| Cascade R-CNN | ResNet-101 | 1333×800 | 0.925 | 0.657 | 0.925 | 0.677 |
The R-CNN-based algorithms were implemented and evaluated in MMDetection, a unified code library.
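As a rough illustration of how VHRV could be plugged into MMDetection, the fragment below sketches a config override in the style of MMDetection 2.x. The base config path, annotation file names, and directory layout are assumptions, not taken from the paper, and the YOLO-format labels would first need to be converted to COCO JSON for this route.

```python
# Hedged sketch of an MMDetection (2.x-style) config for VHRV.
# All file names and paths below are assumptions, not from the paper.
_base_ = 'configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'

# VHRV annotates everything under a single "vessel" class
classes = ('vessel',)
model = dict(roi_head=dict(bbox_head=dict(num_classes=1)))

data = dict(
    train=dict(classes=classes, img_prefix='VHRV/images/train/',
               ann_file='VHRV/annotations/train.json'),
    val=dict(classes=classes, img_prefix='VHRV/images/val/',
             ann_file='VHRV/annotations/val.json'),
    test=dict(classes=classes, img_prefix='VHRV/images/test/',
              ann_file='VHRV/annotations/test.json'))
```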
| Model | Size (pixels) | mAP<sup>test</sup> 0.50 | mAP<sup>test</sup> 0.50:0.95 | mAP<sup>val</sup> 0.50 | mAP<sup>val</sup> 0.50:0.95 | Params (M) |
|---|---|---|---|---|---|---|
| YOLOv5x | 1024 | 0.985 | 0.835 | 0.971 | 0.848 | 56.9 |
| YOLOv6l | 1024 | 0.982 | 0.812 | 0.975 | 0.823 | 56.9 |
| YOLOv7x | 1024 | 0.988 | 0.832 | 0.979 | 0.846 | 56.9 |
| YOLOv8x | 1024 | 0.978 | 0.828 | 0.975 | 0.844 | 56.9 |
| YOLOv9c | 1024 | 0.981 | 0.845 | 0.973 | 0.856 | 56.9 |
| YOLOv10x | 1024 | 0.978 | 0.817 | 0.967 | 0.824 | 56.9 |
| YOLO11x | 1024 | 0.981 | 0.835 | 0.972 | 0.852 | 56.9 |
| YOLOv12x | 1024 | 0.984 | 0.844 | 0.974 | 0.854 | 56.9 |
The YOLO models were trained and evaluated with their original source code libraries, with the exception of YOLOv12, which was run via its Ultralytics adaptation.
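For the Ultralytics-style YOLO trainings, a dataset description file is typically required. The fragment below is a hypothetical `vhrv.yaml`; the paths are assumptions about how one might lay out the downloaded data, with the single vessel class at index 0.

```yaml
# Hypothetical Ultralytics data config for VHRV (paths are assumptions)
path: datasets/VHRV
train: images/train
val: images/val
test: images/test
names:
  0: vessel
```

Training could then be launched with something like `yolo detect train data=vhrv.yaml model=yolov8x.pt imgsz=1024` (command shape per the Ultralytics CLI; exact arguments may differ between versions).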
If you make use of the VHRV dataset, please cite the following paper: https://doi.org/10.1016/j.rsase.2025.101641

We make this dataset available for academic purposes only. You may not use or distribute this dataset for commercial purposes.
```bibtex
@article{BUYUKKANBER2025101641,
  title = {VHRV: Very High-Resolution Benchmark Dataset for Vessel Detection},
  journal = {Remote Sensing Applications: Society and Environment},
  pages = {101641},
  year = {2025},
  issn = {2352-9385},
  doi = {https://doi.org/10.1016/j.rsase.2025.101641},
  url = {https://www.sciencedirect.com/science/article/pii/S2352938525001946},
  author = {Furkan Büyükkanber and Mustafa Yanalak and Nebiye Musaoğlu},
  keywords = {Vessel detection, Ship dataset, Remote sensing images, Deep learning, Convolutional neural networks}
}
```

For further information or any questions, you can use the issues tab (https://github.com/buyukkanber/vhrv/issues).