This package wraps the modified PENet into a ROS 2 node that:

- Projects the incoming LiDAR `PointCloud2` to a sparse depth map (camera frame).
- Runs Mask R-CNN (COCO-pretrained) internally on the incoming RGB image to generate target masks / score maps.
- Runs PENet depth completion.
- Publishes depth plus multiple pointcloud outputs, including the fused (N, 11) cloud matching your `.npy` layout.
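The LiDAR-to-sparse-depth projection in the first step can be sketched as follows. This is a minimal sketch, not this package's actual code: the extrinsic `T_lidar_to_cam` and intrinsic matrix `K` are assumed inputs, and the near-plane cutoff of 0.1 m is an illustrative choice.

```python
import numpy as np

def project_to_sparse_depth(points_xyz, T_lidar_to_cam, K, h, w):
    """Project LiDAR points into the camera to build a sparse depth map.

    points_xyz: (N, 3) LiDAR points; T_lidar_to_cam: (4, 4) extrinsic;
    K: (3, 3) camera intrinsics; h, w: image size in pixels.
    """
    # Transform points into the camera frame (homogeneous coordinates).
    pts_h = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])
    pts_cam = (T_lidar_to_cam @ pts_h.T).T[:, :3]
    # Keep points in front of the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]
    # Pinhole projection to pixel coordinates.
    uv = (K @ pts_cam.T).T
    u = (uv[:, 0] / uv[:, 2]).astype(int)
    v = (uv[:, 1] / uv[:, 2]).astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth = np.zeros((h, w), dtype=np.float32)
    # Write farthest points first so the nearest return wins per pixel.
    order = np.argsort(-pts_cam[valid, 2])
    depth[v[valid][order], u[valid][order]] = pts_cam[valid, 2][order]
    return depth
```

Pixels that receive no LiDAR return stay at 0, which is the usual "missing" convention for sparse depth inputs to completion networks.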
All pointcloud topics use 11 float32 fields in this exact order:

`[x, y, z, intensity, r, g, b, label, score0, score1, score2]`

- `label = 1` for pseudo points from completed depth
- `label = 2` for real LiDAR points
- Real LiDAR `intensity` is scaled by `*10` to match your zip behavior.
- Pseudo point RGB is `image/3` (same as your zip).
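The per-point layout above can be sketched as a NumPy structured array (44 bytes per point). This is a sketch, not the node's actual packing code; in particular, real LiDAR points keeping their sampled image RGB unscaled is an assumption, since only the pseudo-point `image/3` rule is stated.

```python
import numpy as np

# The 11 float32 fields, in the exact order used on every cloud topic.
FIELDS = ["x", "y", "z", "intensity", "r", "g", "b",
          "label", "score0", "score1", "score2"]
DTYPE = np.dtype([(name, np.float32) for name in FIELDS])

def pack_cloud(xyz, intensity, rgb, label, scores):
    """Pack per-point attributes into the (N,) 11-field float32 layout."""
    cloud = np.zeros(xyz.shape[0], dtype=DTYPE)
    cloud["x"], cloud["y"], cloud["z"] = xyz.T
    if label == 2:
        intensity = intensity * 10.0   # real LiDAR: intensity * 10
        r, g, b = rgb.T                # assumed: RGB kept as sampled
    else:
        r, g, b = (rgb / 3.0).T        # pseudo points: image RGB / 3
    cloud["intensity"] = intensity
    cloud["r"], cloud["g"], cloud["b"] = r, g, b
    cloud["label"] = label
    for i, key in enumerate(["score0", "score1", "score2"]):
        cloud[key] = scores[:, i]
    return cloud
```

A `PointCloud2` built from this layout would use a `point_step` of 44 with one `FLOAT32` `PointField` per name at offsets 0, 4, 8, ….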
- `depth_topic` (default `/penet/depth_masked`): `sensor_msgs/Image`, encoding `32FC1` (meters). This is `pred_depth * union_mask`.
- `masked_image_topic` (default `/penet/masked_image`): `sensor_msgs/Image`, encoding `rgb8`. Non-target pixels are darkened.
- `score_map_topic` (default `/penet/score_map`): `sensor_msgs/Image`, encoding `32FC3`. Channels:
  - `score0`: car/truck (COCO labels 3, 8)
  - `score1`: person (COCO label 1)
  - `score2`: cyclist proxy (COCO bicycle = 2, plus motorcycle = 4 if `score2_use_motorcycle:=true`)
- `depth_points_cam_topic` (default `/penet/depth_points_cam`): `PointCloud2` (11 float32 fields), `frame_id` = incoming image frame. Uses completed-depth backprojection.
- `depth_points_cam_segmented_topic` (default `/penet/depth_points_cam_segmented`): `PointCloud2` (11 float32 fields), filtered by `max(score0..2) >= segmented_keep_thresh`.
- `fused_points_topic` (default `/penet/fused_points`): `PointCloud2` (11 float32 fields), `frame_id` = incoming LiDAR frame. Pseudo points are transformed cam->lidar using the inverse of `T_lidar_to_cam`.
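The cam->lidar move for the fused cloud's pseudo points can be sketched as applying the inverse extrinsic to homogeneous points; a minimal sketch assuming `T_lidar_to_cam` is the usual 4x4 rigid transform mapping LiDAR-frame points into the camera frame:

```python
import numpy as np

def cam_points_to_lidar(pts_cam, T_lidar_to_cam):
    """Map camera-frame pseudo points into the LiDAR frame by applying
    the inverse of the 4x4 lidar->camera extrinsic."""
    T_cam_to_lidar = np.linalg.inv(T_lidar_to_cam)
    pts_h = np.hstack([pts_cam, np.ones((pts_cam.shape[0], 1))])
    return (T_cam_to_lidar @ pts_h.T).T[:, :3]
```

For a rigid transform `[R | t]`, the inverse is `[R^T | -R^T t]`, so `np.linalg.inv` could be replaced by that closed form if numerical cost matters.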
```shell
cd ~/ros2_ws/src
# unzip this package folder here
cd ~/ros2_ws
colcon build --symlink-install
source install/setup.bash
ros2 launch penet_ros2 penet_ros2.launch.py
# or
ros2 launch penet_ros2 penet_ros2.launch.py params:=/absolute/path/to/penet_ros2.yaml
```

- Launching will try to load Mask R-CNN weights via torchvision. If your machine is offline and the weights are not cached, this load may fail.
- The incoming image is resized to `image_width x image_height` if needed (default 1216x352).
- The top image topic shows the original RGB camera image.
- The second image topic shows the segmented class targets on the original RGB camera image.
- The third image topic shows the segmented depth image (from the 2D projection of the 3D LiDAR point cloud).
- The bottom image topic shows the final depth completion result from PENet.
- Please follow the original GitHub repo to download the PENet and ENet pre-trained model checkpoints into the `penet_ros2/checkpoint/` folder.