penet_ros2 (Mask R-CNN + PENet) — ROS 2 Humble

This package wraps the modified PENet into a ROS 2 node that:

  1. Projects the incoming LiDAR PointCloud2 into a sparse depth map in the camera frame (see the projection sketch after this list).
  2. Runs Mask R-CNN (COCO-pretrained) internally on the incoming RGB image to generate target masks / score maps.
  3. Runs PENet depth completion.
  4. Publishes the completed depth plus several point cloud outputs, including a fused (N, 11) cloud that matches the expected .npy layout.
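
For reference, step 1 typically amounts to a pinhole projection of the LiDAR points onto the image plane. The sketch below is a minimal, self-contained illustration rather than the node's exact code; the T_lidar_to_cam extrinsic, the K intrinsics, and the function name are placeholders.

import numpy as np

def lidar_to_sparse_depth(points_xyz, T_lidar_to_cam, K, width=1216, height=352):
    """Project LiDAR points (N, 3) into a sparse depth image (H, W) in meters."""
    # Move points into the camera frame with the 4x4 LiDAR->camera extrinsic.
    pts_h = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])
    pts_cam = (T_lidar_to_cam @ pts_h.T).T[:, :3]

    # Drop points behind (or too close to) the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]

    # Pinhole projection with K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]].
    uv = (K @ pts_cam.T).T
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    z = pts_cam[:, 2]

    # Keep projections that land inside the image; zeros mean "no measurement".
    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    depth = np.zeros((height, width), dtype=np.float32)
    depth[v[valid], u[valid]] = z[valid]
    return depth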

Published topics

All point cloud topics use 11 float32 fields in this exact order (a packing sketch follows this list):

[x, y, z, intensity, r, g, b, label, score0, score1, score2]

  • label = 1 for pseudo points generated from the completed depth
  • label = 2 for real LiDAR points
  • real LiDAR intensity is multiplied by 10 to match the reference zip's behavior
  • pseudo-point RGB is the image value divided by 3 (as in the reference zip)
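
To publish or read these clouds from your own nodes, the sensor_msgs_py helpers are enough. A minimal sketch, assuming an (N, 11) float32 NumPy array in the field order above (the function names here are illustrative, not part of this package):

import numpy as np
from sensor_msgs.msg import PointCloud2, PointField
from sensor_msgs_py import point_cloud2

FIELD_NAMES = ['x', 'y', 'z', 'intensity', 'r', 'g', 'b',
               'label', 'score0', 'score1', 'score2']

def make_fields():
    # 11 consecutive float32 fields, 4 bytes each.
    return [PointField(name=n, offset=4 * i, datatype=PointField.FLOAT32, count=1)
            for i, n in enumerate(FIELD_NAMES)]

def pack_cloud(header, points):
    """points: (N, 11) float32 array in the field order above."""
    return point_cloud2.create_cloud(header, make_fields(), points.astype(np.float32))

def unpack_cloud(msg: PointCloud2) -> np.ndarray:
    """Read a received cloud back into an (N, 11) float32 array."""
    return np.array(point_cloud2.read_points_list(msg, field_names=FIELD_NAMES),
                    dtype=np.float32)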

1) Depth image (masked to segmented targets)

  • depth_topic (default /penet/depth_masked)
  • sensor_msgs/Image encoding 32FC1 (meters)
  • This is pred_depth * union_mask.
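
A minimal subscriber for this topic might look like the sketch below (node and variable names are illustrative); cv_bridge converts the 32FC1 message into a float32 NumPy array of depths in meters.

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

class DepthListener(Node):
    def __init__(self):
        super().__init__('depth_listener')
        self.bridge = CvBridge()
        self.sub = self.create_subscription(
            Image, '/penet/depth_masked', self.on_depth, 10)

    def on_depth(self, msg: Image):
        # 32FC1 image -> float32 array of depths in meters (0 = masked out).
        depth = self.bridge.imgmsg_to_cv2(msg, desired_encoding='32FC1')
        self.get_logger().info(f'valid pixels: {(depth > 0).sum()}')

def main():
    rclpy.init()
    rclpy.spin(DepthListener())
    rclpy.shutdown()

if __name__ == '__main__':
    main()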

2) Masked RGB image (visual)

  • masked_image_topic (default /penet/masked_image)
  • sensor_msgs/Image encoding rgb8
  • Non-target pixels are darkened.

3) Score map image

  • score_map_topic (default /penet/score_map)
  • sensor_msgs/Image encoding 32FC3
  • Channels:
    • score0: car/truck (COCO labels 3,8)
    • score1: person (COCO label 1)
    • score2: cyclist proxy (COCO bicycle=2, plus motorcycle=4 if score2_use_motorcycle:=true)
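
If you consume this topic yourself, the three channels can be split back out with cv_bridge and NumPy. A sketch, assuming the threshold equals segmented_keep_thresh (the 0.5 default below is only an illustration):

import numpy as np
from cv_bridge import CvBridge

bridge = CvBridge()

def split_scores(score_msg, keep_thresh=0.5):
    """score_msg: sensor_msgs/Image with encoding 32FC3 -> (H, W, 3) scores."""
    scores = bridge.imgmsg_to_cv2(score_msg, desired_encoding='passthrough')
    score_car, score_person, score_cyclist = [scores[:, :, i] for i in range(3)]

    # A pixel counts as a target if any class score clears the threshold,
    # mirroring the max(score0..2) >= segmented_keep_thresh filter used below.
    target_mask = scores.max(axis=2) >= keep_thresh
    return score_car, score_person, score_cyclist, target_mask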

4) Backprojected depth point cloud (camera frame)

  • depth_points_cam_topic (default /penet/depth_points_cam)
  • PointCloud2 (11 float32 fields), frame_id = incoming image frame
  • Uses completed depth backprojection.
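
Backprojection is the inverse pinhole mapping: each valid pixel (u, v) with depth z becomes x = (u - cx) * z / fx, y = (v - cy) * z / fy, z. A minimal NumPy sketch (the intrinsics matrix K is an assumed input):

import numpy as np

def backproject_depth(depth, K):
    """depth: (H, W) float32 in meters; K: 3x3 camera intrinsics.
    Returns (M, 3) camera-frame points for all pixels with depth > 0."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]

    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0

    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=1)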

5) Backprojected depth point cloud (camera frame, segmented targets only)

  • depth_points_cam_segmented_topic (default /penet/depth_points_cam_segmented)
  • PointCloud2 (11 float32 fields), filtered by max(score0..2) >= segmented_keep_thresh.

6) Fused point cloud (LiDAR + pseudo points, LiDAR frame)

  • fused_points_topic (default /penet/fused_points)
  • PointCloud2 (11 float32 fields), frame_id = incoming LiDAR frame
  • Pseudo points are transformed cam->lidar using T_lidar_to_cam inverse.
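
For reference, moving the pseudo points back into the LiDAR frame only needs the closed-form inverse of the rigid 4x4 extrinsic. A sketch (T_lidar_to_cam is the same assumed extrinsic as in the projection sketch above):

import numpy as np

def cam_to_lidar(points_cam, T_lidar_to_cam):
    """Transform (N, 3) camera-frame pseudo points into the LiDAR frame."""
    R = T_lidar_to_cam[:3, :3]
    t = T_lidar_to_cam[:3, 3]
    # Inverse of a rigid transform: R^T on the rotation, -R^T t on the translation.
    return (points_cam - t) @ R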

Build

cd ~/ros2_ws/src
# unzip this package folder here
cd ~/ros2_ws
colcon build --symlink-install
source install/setup.bash

Run

ros2 launch penet_ros2 penet_ros2.launch.py
# or
ros2 launch penet_ros2 penet_ros2.launch.py params:=/absolute/path/to/penet_ros2.yaml
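
The parameter file referenced above collects the topic names and options described in this README. A sketch of what it might contain (the node name penet_node and the 0.5 threshold are assumptions; check the YAML shipped with the package for the real defaults):

# penet_ros2.yaml -- sketch only
penet_node:
  ros__parameters:
    depth_topic: /penet/depth_masked
    masked_image_topic: /penet/masked_image
    score_map_topic: /penet/score_map
    depth_points_cam_topic: /penet/depth_points_cam
    depth_points_cam_segmented_topic: /penet/depth_points_cam_segmented
    fused_points_topic: /penet/fused_points
    segmented_keep_thresh: 0.5
    score2_use_motorcycle: true
    image_width: 1216
    image_height: 352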

Notes

  • This will try to load Mask R-CNN weights via torchvision. If your machine is offline and weights are not cached, it may fail.
  • Incoming image is resized to image_width x image_height if needed (default 1216x352).
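
If the target machine is offline, one workaround is to download the weights on a machine with internet access and copy the torch hub cache over. A sketch using the current torchvision weights API (the node itself may load the weights differently):

# Run once on a machine with internet access; the weights end up under
# ~/.cache/torch/hub/checkpoints/ and that folder can be copied to the offline machine.
from torchvision.models.detection import (
    maskrcnn_resnet50_fpn,
    MaskRCNN_ResNet50_FPN_Weights,
)

maskrcnn_resnet50_fpn(weights=MaskRCNN_ResNet50_FPN_Weights.COCO_V1)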

Sample Results

  • The top image topic shows the original RGB camera image.
  • The middle image topic shows the segmented target classes of the original RGB camera image.
  • The bottom image topic shows the segmented sparse depth image (the 2D projection of the 3D LiDAR point cloud).

Sample Results 1 and 2

  • The image topic below shows the final depth completion result from PENet.

Sample Result 3

Acknowledgement

PENet: Towards Precise and Efficient Image Guided Depth Completion (ICRA 2021)

  • Please follow the original GitHub repo to download the PENet and ENet pre-trained model checkpoints into the "penet_ros2/checkpoint/" folder.
