
COTR: Compact Occupancy TRansformer for Vision-based 3D Occupancy Prediction

Qihang Ma1* · Xin Tan1,2* · Yanyun Qu3 · Lizhuang Ma1 · Zhizhong Zhang1+ · Yuan Xie1,2

1East China Normal University · 2Chongqing Institute of ECNU · 3Xiamen University

*equal contribution, +corresponding authors

CVPR 2024

Paper PDF

[demo animation and semantic class legend figures]

🚀 News

  • 2024.04.01 Code released.
  • 2024.02.27 🌟 COTR is accepted by CVPR 2024.
  • 2023.12.04 arXiv preprint released.

📝 Introduction

The autonomous driving community has shown significant interest in 3D occupancy prediction, driven by its exceptional geometric perception and general object recognition capabilities. To achieve this, current works construct a Tri-Perspective View (TPV) or Occupancy (OCC) representation extending from Bird's-Eye-View (BEV) perception. However, compressed views such as TPV lose 3D geometric information, while the raw, sparse OCC representation incurs heavy yet largely redundant computational cost. To address these limitations, we propose the Compact Occupancy TRansformer (COTR), with a geometry-aware occupancy encoder and a semantic-aware group decoder that reconstruct a compact 3D OCC representation. The occupancy encoder first generates a compact geometric OCC feature through an efficient explicit-implicit view transformation. The occupancy decoder then enhances the semantic discriminability of the compact OCC representation via a coarse-to-fine semantic grouping strategy. Empirical experiments show evident performance gains across multiple baselines: COTR outperforms them with a relative improvement of 8%-15%, demonstrating the superiority of our method.

💡 Method

Overview figure: The overall architecture of COTR. T-frame surround-view images are first fed into the image featurizers to obtain image features and depth distributions. Taking the image features and depth estimates as input, the geometry-aware occupancy encoder constructs a compact occupancy representation through an efficient explicit-implicit view transformation. The semantic-aware group decoder then applies a coarse-to-fine semantic grouping strategy, in cooperation with Transformer-based mask classification, to substantially strengthen the semantic discriminability of the compact occupancy representation.
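The pipeline described above can be summarized with a minimal PyTorch-style sketch. All module names and call signatures below are hypothetical placeholders for illustration, not the repository's actual API:

import torch.nn as nn

class COTRSketch(nn.Module):
    """Hypothetical sketch of the COTR pipeline (not the repo's API)."""

    def __init__(self, image_featurizer, view_transform, occ_encoder, group_decoder):
        super().__init__()
        self.image_featurizer = image_featurizer  # backbone + depth head
        self.view_transform = view_transform      # explicit-implicit 2D -> 3D lifting
        self.occ_encoder = occ_encoder            # geometry-aware occupancy encoder
        self.group_decoder = group_decoder        # semantic-aware group decoder

    def forward(self, imgs):
        # imgs: (B, T, N_cam, 3, H, W) surround-view images over T frames
        feats, depth = self.image_featurizer(imgs)        # image features + depth distributions
        compact_occ = self.view_transform(feats, depth)   # compact geometric OCC feature
        compact_occ = self.occ_encoder(compact_occ)       # refine the compact representation
        masks, classes = self.group_decoder(compact_occ)  # coarse-to-fine semantic grouping
        return masks, classes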

🔧 Get Started

Installation and Data Preparation

Step 1. Prepare the environment as described in Install.

Step 2. Prepare the nuScenes dataset as introduced in nuscenes_det.md and create the pkl files for BEVDet by running:

python tools/create_data_bevdet.py

Step 3. For the occupancy prediction task, download (only) the 'gts' from CVPR2023-3D-Occupancy-Prediction and arrange the folders as follows (a sanity-check sketch follows the tree):

└── nuscenes
    ├── maps  (existing)
    ├── v1.0-trainval (existing)
    ├── sweeps  (existing)
    ├── samples (existing)
    ├── lidarseg (existing)
    ├── panoptic (existing)
    └── gts (new)
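Before training, the layout can be sanity-checked with a short script. This is a minimal sketch assuming the dataset root is data/nuscenes and that the gts follow the CVPR2023-3D-Occupancy-Prediction release format (gts/<scene>/<sample_token>/labels.npz); verify both against your download:

import glob
import os

import numpy as np

root = "data/nuscenes"  # assumed dataset root; adjust to yours

# Check that every folder from the tree above is present.
for sub in ["maps", "v1.0-trainval", "sweeps", "samples", "lidarseg", "panoptic", "gts"]:
    assert os.path.isdir(os.path.join(root, sub)), f"missing {sub}/"

# Peek at one occupancy ground-truth file. In the CVPR2023 release each
# sample directory holds a labels.npz with a 'semantics' voxel grid plus
# lidar/camera visibility masks; the expected grid shape is (200, 200, 16).
labels = sorted(glob.glob(os.path.join(root, "gts", "*", "*", "labels.npz")))
gt = np.load(labels[0])
print(list(gt.keys()))        # e.g. ['semantics', 'mask_lidar', 'mask_camera']
print(gt["semantics"].shape)  # expected (200, 200, 16)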

Train model

  • single gpu (note: the multi-GPU distributed training setup currently has an issue and fails with a DDP initialization error)
CUDA_VISIBLE_DEVICES=0 python tools/train_occ.py $config
  • multiple gpu
CUDA_VISIBLE_DEVICES=0,1 ./tools/dist_train_occ.sh $config $num_gpu
# SurroundOcc + COTR
CUDA_VISIBLE_DEVICES=2,3 ./tools/dist_train_occ.sh configs/cotr/cotr-surroundocc-r50-4d-stereo-24e.py 2 --auto-resume
# BEVDet + COTR
CUDA_VISIBLE_DEVICES=2,3 ./tools/dist_train_occ.sh configs/cotr/cotr-bevdetocc-r50-4d-stereo-24e.py 2 --auto-resume

Test model

  • single gpu
CUDA_VISIBLE_DEVICES=0 python tools/test_occ.py $config $checkpoint --eval mIoU
  • multiple gpu
CUDA_VISIBLE_DEVICES=0,1 ./tools/dist_test_occ.sh $config $checkpoint $num_gpu --eval mIoU
# SurroundOcc + COTR
CUDA_VISIBLE_DEVICES=1 ./tools/dist_test_occ.sh configs/cotr/cotr-surroundocc-r50-4d-stereo-24e.py work_dirs/cotr-surroundocc-r50-4d-stereo-24e/epoch_12_ema.pth 1 --eval mIoU
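For reference, the mIoU reported by --eval mIoU averages per-class intersection-over-union over the semantic classes. A minimal sketch of the metric itself (the repository's evaluator may additionally apply visibility masks):

import numpy as np

def miou(confusion: np.ndarray) -> float:
    """Mean IoU from a (num_classes, num_classes) confusion matrix
    with rows = ground truth, columns = prediction."""
    tp = np.diag(confusion).astype(np.float64)
    fp = confusion.sum(axis=0) - tp         # predicted as class c, but wrong
    fn = confusion.sum(axis=1) - tp         # true class c, but missed
    iou = tp / np.maximum(tp + fp + fn, 1)  # guard against empty classes
    return float(iou.mean())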

Train & Test model

# multiple gpu
./train_eval_occ.sh $config $num_gpu

Visualize the predicted results.

Option 1: export per-frame npz files, then visualize with Open3D (FlashOcc approach; recommended)
  • Dependencies
pip install open3d==0.15.2 setuptools==59.5.0 protobuf==3.20.0 tensorboard==2.12.0
  • Export results in npz format (see the loading sketch after this option's commands)
CUDA_VISIBLE_DEVICES=0 ./tools/dist_test.sh $config $checkpoint $num_gpu --eval mIoU --eval-options show_dir=$resultdir
# SurroundOcc + COTR
CUDA_VISIBLE_DEVICES=1 ./tools/dist_test.sh configs/cotr/cotr-surroundocc-r50-4d-stereo-24e.py work_dirs/cotr-surroundocc-r50-4d-stereo-24e/epoch_12_ema.pth 1 --eval mIoU --eval-options show_dir=work_dirs/cotr-surroundocc-r50-4d-stereo-24e/results/epoch_12_ema
  • Render visualizations and save them as images
CUDA_VISIBLE_DEVICES=0 python tools/analysis_tools/vis_occ.py $resultdir --save_path $visdir --draw-gt
# SurroundOcc + COTR
CUDA_VISIBLE_DEVICES=0 python tools/analysis_tools/vis_occ.py work_dirs/cotr-surroundocc-r50-4d-stereo-24e/results/epoch_12_ema --save_path work_dirs/cotr-surroundocc-r50-4d-stereo-24e/vis/epoch_12_ema --draw-gt
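The exported files can be inspected directly with NumPy. A minimal sketch; the array key stored in each npz is an assumption and should be checked against tools/analysis_tools/vis_occ.py:

import glob

import numpy as np

result_dir = "work_dirs/cotr-surroundocc-r50-4d-stereo-24e/results/epoch_12_ema"
files = sorted(glob.glob(f"{result_dir}/**/*.npz", recursive=True))

pred = np.load(files[0])
print(list(pred.keys()))          # actual key name(s) depend on the dump format
arr = pred[list(pred.keys())[0]]
print(arr.shape, arr.dtype)       # expected: a (200, 200, 16) grid of class ids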
Option 2: export a single aggregated pkl file, then visualize with Mayavi (COTR approach)
  • Dependencies
# The Qt system libraries are not installed on the 3090 server, so this cannot run there
pip install vtk==9.0.1 configobj
pip install mayavi==4.7.3 PyQt5
  • Export results as a pkl file (see the loading sketch at the end of this section)
CUDA_VISIBLE_DEVICES=0 ./tools/dist_test.sh $config $checkpoint $num_gpu --out $pklpath
# SurroundOcc + COTR
CUDA_VISIBLE_DEVICES=1 ./tools/dist_test.sh configs/cotr/cotr-surroundocc-r50-4d-stereo-24e.py work_dirs/cotr-surroundocc-r50-4d-stereo-24e/epoch_12_ema.pth 1 --out work_dirs/cotr-surroundocc-r50-4d-stereo-24e/results/epoch_12_ema_results.pkl
  • Render visualizations and save them as images
python tools/analysis_tools/vis_frame.py $pklpath $config --save-path $scenedir --scene-idx $sceneidx --vis-gt
# SurroundOcc + COTR
python tools/analysis_tools/vis_frame.py work_dirs/cotr-surroundocc-r50-4d-stereo-24e/results/epoch_12_ema_results.pkl configs/cotr/cotr-surroundocc-r50-4d-stereo-24e.py --save-path work_dirs/cotr-surroundocc-r50-4d-stereo-24e/vis --scene-idx 3 --vis-gt
  • Create GIFs from the visualizations
python tools/analysis_tools/generate_gifs.py --scene-dir $scenedir
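The aggregated pkl can be inspected the same way with pickle. A minimal sketch; that the file holds one prediction entry per evaluated frame is an assumption to verify against tools/analysis_tools/vis_frame.py:

import pickle

pkl_path = "work_dirs/cotr-surroundocc-r50-4d-stereo-24e/results/epoch_12_ema_results.pkl"
with open(pkl_path, "rb") as f:
    results = pickle.load(f)

print(type(results), len(results))  # assumed: a list with one entry per frame
print(type(results[0]))             # inspect the first frame's prediction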

🙏 Acknowledgement

This project would not be possible without multiple great open-source codebases, notably BEVDet, SurroundOcc, and FlashOcc, on which the configs and visualization tools above build.

📃 Bibtex

If this work is helpful for your research, please consider citing the following BibTeX entry.

@article{ma2023cotr,
  title={COTR: Compact Occupancy TRansformer for Vision-based 3D Occupancy Prediction},
  author={Ma, Qihang and Tan, Xin and Qu, Yanyun and Ma, Lizhuang and Zhang, Zhizhong and Xie, Yuan},
  journal={arXiv preprint arXiv:2312.01919},
  year={2023}
}
