
GBOT: Graph-Based 3D Object Tracking for Augmented Reality-Assisted Assembly Guidance

Publication

Official code of the paper "GBOT: Graph-Based 3D Object Tracking for Augmented Reality-Assisted Assembly Guidance".

Introduction

Guidance for assembling parts is a promising field for the use of augmented reality. Augmented reality assembly guidance requires the 6D object poses of the target objects in real time. Especially in time-critical medical or industrial settings, continuous and markerless tracking of the individual parts is essential to visualize instructions superimposed on or next to the target object parts. In this regard, occlusions by the user's hands or other objects, as well as the complexity of different assembly states, complicate robust real-time markerless multi-object tracking.


GBOT presentation

Model download links

To make the results reproducible, all models used in the GBOT dataset can be downloaded and 3D printed.

LiftPod - Multipurpose Foldable Stand: https://www.thingiverse.com/thing:4614448

Nano Chuck by PRIma: https://www.thingiverse.com/thing:5178901, imported into Blender and scaled up by a factor of 2

Hand-Screw Clamp: https://www.thingiverse.com/thing:2403756, imported into Blender and scaled up by a factor of 2

Geared Caliper: https://www.thingiverse.com/thing:3006884/files

Hobby Corner Clamp / Angle Presser Vice Fully 3D Printable: https://www.thingiverse.com/thing:1024366, imported into Blender and scaled up by a factor of 1.2

GBOT dataset

Training data - Part 1

Training data - Part 2

Training data - Part 3

Test data

Object Detection and 6D Pose Estimation

1. Install PyTorch from https://pytorch.org/ using the command provided there. It is important to install PyTorch before Ultralytics!
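
For reference, the CUDA 11.8 pip command looked like the following at the time of writing; always prefer the current command generated on the PyTorch website for your platform:

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118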

2. Install YOLOv8 by using:

pip install ultralytics

For further instructions see: https://github.com/ultralytics/ultralytics

Additional packages may also be necessary; install them as indicated by the respective error messages.

3. You probably have to change the YOLO settings. Run the following on the command line:

yolo settings

to get the path to the settings file, then adjust "datasets_dir", "weights_dir", and "runs_dir" to your needs. Recommended values, where PATH_TO_PROJECT is the path to the project on your machine:

datasets_dir: PATH_TO_GBOTdatasets
weights_dir: PATH_TO_PROJECT\yolov8\weights  
runs_dir: PATH_TO_PROJECT\yolov8\runs 
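
Alternatively, newer Ultralytics versions let you update individual settings directly on the command line; the paths here are placeholders:

yolo settings datasets_dir='PATH_TO_GBOTdatasets'
yolo settings weights_dir='PATH_TO_PROJECT/yolov8/weights'
yolo settings runs_dir='PATH_TO_PROJECT/yolov8/runs'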

4. Start training with the script (you can skip this step if you want to use our pretrained models):

Train 6D object pose estimator with YOLOv8pose:

python yolov8/yolov8_pose_training.py
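
In case you want to adapt the training, a minimal sketch of such a training script with the Ultralytics API looks like the following; the dataset YAML name and hyperparameters here are assumptions, see yolov8/yolov8_pose_training.py for the actual configuration:

from ultralytics import YOLO

# Start from the official YOLOv8 pose weights and fine-tune on the GBOT training data
model = YOLO('yolov8n-pose.pt')
model.train(data='gbot_pose.yaml', epochs=100, imgsz=640)  # 'gbot_pose.yaml' is a hypothetical dataset config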

After training, export the model to ONNX format:

from ultralytics import YOLO

# Load a model (pick one of the two)
model = YOLO('yolov8n-pose.pt')  # load an official model
model = YOLO('path/to/best.pt')  # or load a custom trained model

model.export(format='onnx')

5. Predict 6D object pose with YOLOv8

Download our pretrained models in ONNX format and save them in the folder yolov8pose/pretrained.

You can predict 6D object poses with the script:

python yolov8pose/yolo_to_pose_prediction_cv2.py
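
As the script name suggests, inference runs on the exported ONNX model via OpenCV's DNN module. A minimal sketch of this pattern, with hypothetical file names and without the pose-specific post-processing (see yolov8pose/yolo_to_pose_prediction_cv2.py for the full pipeline):

import cv2

# Load the exported YOLOv8-pose network (path is a placeholder)
net = cv2.dnn.readNetFromONNX('yolov8pose/pretrained/model.onnx')

# Preprocess a frame to the network input size and run a forward pass
image = cv2.imread('frame.png')  # placeholder input image
blob = cv2.dnn.blobFromImage(image, scalefactor=1.0 / 255.0, size=(640, 640), swapRB=True)
net.setInput(blob)
output = net.forward()  # raw predictions: boxes, class scores, and keypoints
print(output.shape)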

Graph-based Object Tracking

Build

Use CMake to build the library from source. The following dependencies are required: Eigen 3, GLEW, GLFW 3, and OpenCV 4. In addition, unit tests are implemented using gtest, while images from an Azure Kinect or RealSense camera can be streamed using the K4A and realsense2 libraries. All three libraries are optional and can be disabled using the CMake flags USE_GTEST, USE_AZURE_KINECT, and USE_REALSENSE. If OpenCV 4 is installed with CUDA, the feature detectors used in the texture modality are able to utilize the GPU. If CMake finds OpenMP, the code is compiled with multithreading and vectorization for some functions. Finally, the documentation is built if Doxygen with dot is detected. Note that links to pages or classes that are embedded in this readme only work in the generated documentation. After a correct build, it should be possible to successfully execute all tests in ./gtest_run. For maximum performance, ensure that the library is built in Release mode, for example using -DCMAKE_BUILD_TYPE=Release. The server connected to the HoloLens is implemented via a RESTful API.
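
A typical out-of-source build could look like the following; the chosen flag values are assumptions and depend on your camera and test setup:

mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -DUSE_GTEST=ON -DUSE_AZURE_KINECT=OFF -DUSE_REALSENSE=ON
cmake --build . --config Release
./gtest_run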

Evaluation

The file gbot_tracking/examples/evaluate_gbot_dataset.cpp contains the code for model inference and tracking; the resulting 6D poses are saved to result_pose_path according to your settings.

We use the BOP toolkit to evaluate the final results; please install it from https://github.com/thodan/bop_toolkit.
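
The BOP toolkit expects pose estimates in the BOP results format: a CSV file with one line per estimated pose, where R is the row-wise flattened 3x3 rotation matrix and t is the translation in millimeters. An illustrative line (all values made up):

scene_id,im_id,obj_id,score,R,t,time
1,0,2,1.0,0.99 -0.05 0.10 0.05 0.99 -0.01 -0.10 0.02 0.99,10.0 -25.0 600.0,0.04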

To evaluate the accuracy of the 6D poses, run:

python evaluation/eval_calc_errors.py

To visualize the estimated poses, run:

python evaluation/vis_est_poses.py

Demo

The file gbot_tracking/examples/run_assembly_demo.cpp contains the code for the real-time demo.

Acknowledgement

We sincerely thank the authors of YOLOv8 and m3t for providing their wonderful code to the community!

Citations

If you find GBOT useful in your research or applications, please consider giving us a star 🌟 and citing it.

@inproceedings{li2024gbot,
  title={GBOT: graph-based 3D object tracking for augmented reality-assisted assembly guidance},
  author={Li, Shiyu and Schieber, Hannah and Corell, Niklas and Egger, Bernhard and Kreimeier, Julian and Roth, Daniel},
  booktitle={2024 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
  pages={513--523},
  year={2024},
  organization={IEEE}
}

License

GBOT is released under the MIT License, which permits commercial usage. If you need a commercial license for GBOT, please feel free to contact us.
