Zisong Xu1, Rafael Papallas1, 2, Jaina Modisett1, Markus Billeter1, Mehmet R. Dogar1
1University of Leeds, 2American University of Beirut - Mediterraneo
This is the official implementation of the paper *Tracking and Control of Multiple Objects during Non-Prehensile Manipulation in Clutter*, accepted for publication in the IEEE Transactions on Robotics (T-RO).
Abstract: We propose a method for 6D pose tracking and control of multiple objects during non-prehensile manipulation by a robot. The tracking system estimates objects' poses by integrating physics predictions, derived from robotic joint-state information, with visual input from an RGB-D camera. Specifically, the method is based on particle filtering: the robot's control input drives the motion of each particle, and real-time camera observations are fused in to track the objects' poses. Comparative analyses reveal that this physics-based approach substantially improves pose-tracking accuracy over baseline methods that rely solely on visual data, particularly during manipulation in clutter, where occlusions are frequent. The tracking system is integrated with a model predictive control approach, showing that the probabilistic nature of our tracker supports robust manipulation planning and control of multiple objects in clutter, even under heavy occlusions.
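The fusion described above can be sketched as a standard particle-filter loop: predict each particle with a physics motion model driven by the robot's control, weight particles by the camera observation, then resample. The following is a minimal, self-contained illustration with stand-in motion and observation models, not the code in this repository:

```python
import numpy as np

rng = np.random.default_rng(0)

def physics_predict(particles, control):
    """Prediction step: propagate every particle with a (stand-in) physics
    motion model driven by the robot's control input, plus process noise."""
    return particles + control + rng.normal(0.0, 0.01, particles.shape)

def observation_likelihood(particles, observation, obs_std=0.05):
    """Update step: weight each particle by how well it explains the camera
    observation (a simple Gaussian likelihood on position error)."""
    err = np.linalg.norm(particles - observation, axis=1)
    return np.exp(-0.5 * (err / obs_std) ** 2)

def resample(particles, weights):
    """Resampling step: draw particles in proportion to their weights."""
    p = weights / weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=p)
    return particles[idx]

# 100 hypotheses of a single object's 3D position
particles = rng.normal(0.0, 0.1, size=(100, 3))
control = np.array([0.02, 0.0, 0.0])        # the robot pushed the object along +x
observation = np.array([0.02, 0.0, 0.0])    # the camera sees the object here

particles = physics_predict(particles, control)
weights = observation_likelihood(particles, observation)
particles = resample(particles, weights)
estimate = particles.mean(axis=0)           # fused pose estimate
```

Because the control input moves every particle before the visual update, the filter keeps tracking even when the observation is missing (e.g., under occlusion the update step can simply be skipped).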
```bibtex
@article{xu2025tracking,
  title={Tracking and Control of Multiple Objects during Non-Prehensile Manipulation in Clutter},
  author={Xu, Zisong and Papallas, Rafael and Modisett, Jaina and Billeter, Markus and Dogar, Mehmet R},
  journal={IEEE Transactions on Robotics},
  volume={41},
  pages={3929--3947},
  year={2025},
  publisher={IEEE}
}
```

Click to watch the video.
We recommend using the Singularity container provided in our codebase (see the Singularity installation guide) to run the PBPF algorithm.
- Download Code

  ```shell
  user@pcName:~/<repo_dir>$ git clone --recurse-submodules git@github.com:ZisongXu/PBPF.git
  ```
- Build and Run Container

  ```shell
  user@pcName:~/<repo_dir>$ cd PBPF
  user@pcName:~/<repo_dir>/PBPF$ ./build.sh
  user@pcName:~/<repo_dir>/PBPF$ ./run.sh
  [PBPF] Singularity> ~/catkin_ws $ cd ~
  [PBPF] Singularity> ~ $
  ```
  Press `Ctrl+D` to exit the `[PBPF]` container.

- Download and Set Up the Rendering Code

  ```shell
  user@pcName:~/<repo_dir>/PBPF$ cd home
  user@pcName:~/<repo_dir>/PBPF/home$ git clone --recurse-submodules git@github.com:billeter/pyvkdepth.git
  user@pcName:~/<repo_dir>/PBPF/home$ cd ..
  user@pcName:~/<repo_dir>/PBPF$ ./run.sh
  [PBPF] Singularity> ~ $ cd pyvkdepth
  [PBPF] Singularity> ~/pyvkdepth $ ./premake5 gmake2
  [PBPF] Singularity> ~/pyvkdepth $ make -j8    # or: make -j8 config=release_x64
  ```
- Prepare Scripts

  Move the files from the `sh_scripts` folder in the repo's `home` directory to the `pyvkdepth` folder in the `home` directory.

  Original directory:

  ```
  ./PBPF/home
  ├── catkin_ws
  ├── project
  ├── pyvkdepth
  ├── sh_scripts
  │   ├── automated_experiments.sh
  │   ├── get_info_from_rosbag.py
  │   └── update_yaml_file_automated.py
  ├── .bashrc
  ├── .cache
  └── .local
  ```

  New directory:

  ```
  ./PBPF/home
  ├── catkin_ws
  ├── project
  ├── pyvkdepth
  │   ├── ...
  │   ├── automated_experiments.sh
  │   ├── get_info_from_rosbag.py
  │   └── update_yaml_file_automated.py
  ├── sh_scripts   # empty
  ├── .bashrc
  ├── .cache
  └── .local
  ```
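The move itself is a single `mv` run from `./PBPF/home`. In the sketch below, the `mkdir`/`touch` lines only recreate the relevant part of the layout so the snippet is self-contained; in a real checkout, run just the final `mv`:

```shell
# Illustration only: recreate the relevant part of the layout
mkdir -p home/sh_scripts home/pyvkdepth
touch home/sh_scripts/automated_experiments.sh \
      home/sh_scripts/get_info_from_rosbag.py \
      home/sh_scripts/update_yaml_file_automated.py

# The actual step: move the scripts into pyvkdepth, leaving sh_scripts empty
mv home/sh_scripts/* home/pyvkdepth/
```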
- Download Rosbags (for running demos only)

  ```shell
  [PBPF] Singularity> ~/pyvkdepth $ mkdir rosbag
  ```

  Download the rosbags (approximately 2.6 TB in total). If you cannot access the URL, please contact us ([email protected]/[email protected]). Put the rosbags into the `./PBPF/home/pyvkdepth/rosbag` folder. Using `2_scene1_crackersoup1.bag` as an example, you will end up with `./PBPF/home/pyvkdepth/rosbag/2_scene1_crackersoup1.bag`.
- Enter the Container

  ```shell
  user@pcName:~/<repo_dir>/PBPF$ ./run.sh
  [PBPF] Singularity> ~ $
  ```
- Start ROS Master

  ```shell
  [PBPF] Singularity> ~ $ roscore
  ```
- Use Simulation Time (only when running the code from rosbags)

  ```shell
  user@pcName:~/<repo_dir>/PBPF$ ./run.sh
  [PBPF] Singularity> ~ $ rosparam set use_sim_time true
  ```
- Start Running (only when running the code from rosbags)

  ```shell
  user@pcName:~/<repo_dir>/PBPF$ ./run.sh
  [PBPF] Singularity> ~ $ cd pyvkdepth
  [PBPF] Singularity> ~/pyvkdepth $ ./automated_experiments.sh
  ```
- Visualization Window

  ```shell
  user@pcName:~/<repo_dir>/PBPF$ ./run.sh
  [PBPF] Singularity> ~ $ rosrun PBPF Visualisation_World_Particle.py
  ```
The above steps cover the entire process of running the code. To make sure it runs smoothly, verify that the following files are configured correctly:

- `./PBPF/home/catkin_ws/src/PBPF/config/parameter_info.yaml`
- `./PBPF/home/project/object`
- `./PBPF/home/pyvkdepth/tests/bake.py`
- Prepare the `object.obj` file for the new object (optionally, you can also prepare `object.mtl` and `object.png` for textures).
- Put the files into the `./PBPF/home/pyvkdepth/assets-src/meshes/` folder. We provide meshes for some objects under `./PBPF/home/project/meshes_for_render/`; move them to `./PBPF/home/pyvkdepth/assets-src/meshes/`, where you can also find corresponding examples.
- Compress the `.obj` file into a `.obj.zst` file:

  ```shell
  [PBPF] Singularity> ~/pyvkdepth/assets-src/meshes $ zstd object.obj -o object.obj.zst
  ```
- Add the following code to the `./PBPF/home/pyvkdepth/tests/bake.py` script to generate files for rendering:

  ```python
  bake_obj( "assets/meshes/object.vkdepthmesh",
            "assets-src/meshes/object.obj.zst" );
  bake_obj( "assets/meshes/object-red.vkdepthmesh",
            "assets-src/meshes/object.obj.zst",
            aSimplifyTarget = 0.1, aSimplifyMaxErr = 0.01 );
  ```

  Then run:

  ```shell
  [PBPF] Singularity> ~/pyvkdepth $ ./tests/bake.py
  ```

  Afterwards you will find the `object.vkdepthmesh` and `object-red.vkdepthmesh` files under the `./PBPF/home/pyvkdepth/assets/meshes` folder.
- Create the model related to `object` based on `object.obj` and `object.mtl`, and place it in the `./PBPF/home/project/object` folder. You can find corresponding examples in this folder.
- Modify the `object_name_list` and `object_num` parameters in `./PBPF/home/catkin_ws/src/PBPF/config/parameter_info.yaml`: assign the names of the new objects to `object_name_list`, and the number of new objects to `object_num`. For example:

  ```yaml
  object_name_list:
    - cracker
    - soup
  object_num: 2
  ```
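As a quick sanity check, the two parameters should agree (one entry in `object_name_list` per object). A minimal sketch, assuming PyYAML is available in the container; the inline string stands in for `parameter_info.yaml`:

```python
import yaml  # PyYAML; assumed available

# Inline stand-in for ./PBPF/home/catkin_ws/src/PBPF/config/parameter_info.yaml
config_text = """
object_name_list:
  - cracker
  - soup
object_num: 2
"""

config = yaml.safe_load(config_text)
# object_num must match the number of names listed
assert config["object_num"] == len(config["object_name_list"])
```

In practice you would `yaml.safe_load(open(path))` on the real file before launching an experiment.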
- Make sure the new objects' names are consistent across all of the files above.
- Update the Rendered Images

  Ensure consistency between the rendered images and the real environment by modifying the `_vk_load_target_objects_meshes`, `_vk_load_robot_meshes`, `_vk_load_env_objects_meshes`, and `_vk_state_setting` functions in the `./PBPF/home/catkin_ws/src/PBPF/scripts/Physics_Based_Particle_Filtering.py` script.

  - `_vk_load_target_objects_meshes` updates the target objects in the rendered image.
  - `_vk_load_robot_meshes` updates the robot in the rendered image.
  - `_vk_load_env_objects_meshes` updates the environment in the rendered image.
  - `_vk_state_setting` sets the states (6D poses) of all objects in the rendered image.

  Each function comes with corresponding examples to help you understand how it works.
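Setting object states in the renderer amounts to turning each 6D pose (position plus orientation) into a rigid-body transform. A generic, self-contained illustration with NumPy; `pose_to_matrix` is a hypothetical helper, not this repository's API:

```python
import numpy as np

def pose_to_matrix(position, quaternion):
    """Build a 4x4 rigid-body transform from a position and a unit
    quaternion given as (x, y, z, w), which is PyBullet's convention."""
    x, y, z, w = quaternion
    # Standard quaternion-to-rotation-matrix formula
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = position
    return T

# Identity orientation: the transform reduces to a pure translation
T = pose_to_matrix([0.1, 0.2, 0.3], [0.0, 0.0, 0.0, 1.0])
```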
- Update the Physical Simulation Environment

  Modify the environment configuration (6D poses only) in `./PBPF/home/catkin_ws/src/PBPF/scripts/Create_Scene.py`, and then modify the physics simulation environment in `./PBPF/home/catkin_ws/src/PBPF/scripts/PyBulletEnv.py`.

  We provide a base-class interface in `./PBPF/home/catkin_ws/src/PBPF/scripts/PhysicsEnv.py` and a subclass implementation in `./PBPF/home/catkin_ws/src/PBPF/scripts/PyBulletEnv.py`. If you wish to use a different physics simulation engine (such as MuJoCo or Isaac Sim), create a new subclass that inherits from `PhysicsEnv.py` and implement the corresponding functions as needed.
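A new engine backend would follow the same base-class/subclass pattern. The sketch below is purely illustrative: the method names are hypothetical, not the actual `PhysicsEnv` interface, so check `PhysicsEnv.py` for the real one:

```python
from abc import ABC, abstractmethod

class PhysicsEnv(ABC):
    """Illustrative engine-agnostic interface (hypothetical method names)."""

    @abstractmethod
    def load_object(self, name, pose):
        """Add an object to the simulation at the given 6D pose."""

    @abstractmethod
    def step(self, robot_joint_states):
        """Advance the simulation one step given the robot's joint states."""

    @abstractmethod
    def get_object_pose(self, name):
        """Return the object's current 6D pose."""

class MuJoCoEnv(PhysicsEnv):
    """Hypothetical MuJoCo backend: same interface, different engine."""

    def __init__(self):
        self.objects = {}

    def load_object(self, name, pose):
        self.objects[name] = pose

    def step(self, robot_joint_states):
        pass  # a real backend would call e.g. mujoco.mj_step(...) here

    def get_object_pose(self, name):
        return self.objects[name]

env = MuJoCoEnv()
env.load_object("cracker", ([0.5, 0.0, 0.1], [0, 0, 0, 1]))
pose = env.get_object_pose("cracker")
```

Because the particle filter only talks to the base-class interface, swapping engines should not require changes elsewhere in the pipeline.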