PnP (Perspective-n-Point)

This program estimates the pose difference between the current camera image and the image the camera should be seeing in order to grasp an object with a robot gripper. It combines OpenCV's solvePnP algorithm with a YOLOv8 machine-learning detector.
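As a rough illustration of the solvePnP step (not the repository's actual code), the sketch below recovers a pose from four known 3D object points and their detected pixel locations; all point values and the .npy file names are placeholders:

```python
import numpy as np
import cv2

# 3D reference points of the object in its own frame, e.g. the four
# corners of a 50 mm cube face (illustrative values).
object_points = np.array([[0.0, 0.0, 0.0],
                          [50.0, 0.0, 0.0],
                          [50.0, 50.0, 0.0],
                          [0.0, 50.0, 0.0]], dtype=np.float64)

# Matching 2D pixel coordinates in the current image, e.g. taken from
# a YOLOv8 detection (placeholder values).
image_points = np.array([[320.0, 240.0],
                         [420.0, 238.0],
                         [424.0, 342.0],
                         [318.0, 344.0]], dtype=np.float64)

# Intrinsics produced by the calibration step described below
# (file names are illustrative).
camera_matrix = np.load("camera_matrix.npy")
dist_coeffs = np.load("dist_coeffs.npy")

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
if ok:
    print("rotation (Rodrigues vector):", rvec.ravel())
    print("translation:", tvec.ravel())
```

The pose difference between the current view and the desired grasping view can then be derived by comparing the two rvec/tvec pairs.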

Installation

To install the program, follow these steps:

  1. Clone the repository: git clone https://github.com/alantorresve/PnP
  2. Install the required dependencies: pip install -r requirements.txt

Camera Calibration

Before using the program, the camera must be calibrated. The solvePnP function requires the camera's intrinsic parameters: the camera matrix and the distortion coefficients. These parameters are stored as NumPy arrays and then loaded by the main code, as sketched below.
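A minimal sketch of that hand-off, assuming the intrinsics are saved with NumPy (the file names and values here are placeholders, not the repository's actual ones):

```python
import numpy as np

# Example intrinsics of the shape produced by cv2.calibrateCamera
# (the numbers are placeholders; use the ones from your calibration run).
camera_matrix = np.array([[800.0,   0.0, 320.0],
                          [  0.0, 800.0, 240.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)  # k1, k2, p1, p2, k3

# Persist them for the main code (file names are illustrative).
np.save("camera_matrix.npy", camera_matrix)
np.save("dist_coeffs.npy", dist_coeffs)

# Later, in the main code, load them back for solvePnP:
camera_matrix = np.load("camera_matrix.npy")
dist_coeffs = np.load("dist_coeffs.npy")
```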

To perform camera calibration, follow these steps:

  1. Install the required dependencies mentioned in the Installation section.
  2. Select the type of calibration to perform based on the chosen pattern.
  3. Print the pattern and stick it onto a planar surface or object.
  4. Run the calibration code that corresponds to the chosen pattern (a minimal sketch follows this list).
  5. Perform real-time calibration by selecting the desired camera (change the video-capture index from 0 to that of the desired camera).
  6. Once calibration is complete, a calibration 'score' is reported so you can check that it was done properly.
  7. Use the calibrationcheck.py script to visually verify that images are undistorted correctly using the calibration results.
  8. After confirming that the calibration was successful, proceed with the remaining steps.
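The calibration scripts in this repository are pattern-specific; as a hedged example of the workflow above, a chessboard-based calibration might look roughly like this (pattern size, square size, view count, and file names are all assumptions):

```python
import numpy as np
import cv2

# Chessboard geometry -- adjust to the printed pattern
# (inner corners per row/column, square size in mm).
pattern_size = (9, 6)
square_size = 25.0

# 3D coordinates of the corners on the planar board (z = 0).
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
objp *= square_size

obj_points, img_points = [], []
cap = cv2.VideoCapture(0)  # change 0 to select another camera

while len(obj_points) < 15:  # collect a number of good views
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
        obj_points.append(objp)
        img_points.append(corners)
        cv2.drawChessboardCorners(frame, pattern_size, corners, found)
    cv2.imshow("calibration", frame)
    if cv2.waitKey(100) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()

# The RMS reprojection error doubles as a calibration 'score'
# (lower is better; values around or below 1 pixel are typical).
rms, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
np.save("camera_matrix.npy", camera_matrix)  # illustrative file names
np.save("dist_coeffs.npy", dist_coeffs)
```

For the visual check in step 7, applying cv2.undistort(frame, camera_matrix, dist_coeffs) to a test frame should render straight edges in the scene as straight lines; how calibrationcheck.py implements this may differ.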

Usage

To use the program, follow these steps:

  1. Make sure you have installed the required dependencies as mentioned in the Installation section.
  2. Execute the main code: python main.py
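main.py is the authoritative entry point; purely as an illustration, the sketch below shows how a trained YOLOv8 model could supply 2D image points for solvePnP. The weights path and the use of bounding-box corners as correspondences are assumptions, not the repository's documented behavior:

```python
import numpy as np
import cv2
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")  # illustrative weights path

cap = cv2.VideoCapture(0)  # change 0 to select another camera
ok, frame = cap.read()
cap.release()

results = model(frame)  # YOLOv8 inference on a single frame
for box in results[0].boxes:
    x1, y1, x2, y2 = box.xyxy[0].tolist()  # bounding box in pixels
    # The four box corners could serve as the 2D image points that are
    # paired with the cube's known 3D corner coordinates for solvePnP.
    image_points = np.array([[x1, y1], [x2, y1], [x2, y2], [x1, y2]],
                            dtype=np.float64)
    print("detected corners:\n", image_points)
```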

Custom Training Procedure

To perform a custom training procedure for the project, follow these steps:

  1. Modify the config.yaml file: Open the config.yaml file located in the mlcubedetection folder and adjust the configuration parameters to your requirements. Note: the path entry in config.yaml must be an absolute path; otherwise the program will not work correctly. An illustrative config is shown after this list.
  2. Prepare the dataset: Ensure that your dataset is properly organized and labeled, and place the dataset files in the directory specified in config.yaml.
  3. Execute the train.py file: Run train.py, located in the mlcubedetection folder. This script trains the model on the provided dataset and configuration (see the training sketch below).
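For reference, a dataset config in the usual Ultralytics YAML format might look like the following; every path and class name here is illustrative and must match your own setup and the repository's actual config.yaml:

```yaml
path: /home/user/PnP/mlcubedetection/dataset  # absolute path, per the note above
train: images/train   # training images, relative to 'path'
val: images/val       # validation images, relative to 'path'
names:
  0: cube             # class index -> class name
```

And the training step plausibly reduces to an Ultralytics call along these lines (base checkpoint and hyperparameters are assumptions):

```python
from ultralytics import YOLO

# Start from a pretrained YOLOv8 checkpoint and fine-tune on the cube dataset.
model = YOLO("yolov8n.pt")
model.train(data="config.yaml", epochs=100, imgsz=640)
```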

Requirements

The following dependencies are required to run the program:

  • Python 3.11.5
  • matplotlib==3.8.1
  • numpy==1.24.1
  • opencv_python==4.8.1.78
  • opencv_python_headless==4.8.0.74
  • ultralytics==8.0.222

You can install these dependencies by running pip install -r requirements.txt.
