Paper | Project Page | Dataset
Robotics: Science and Systems (RSS) 2025
We introduce PartInstruct, the first large-scale benchmark for training and evaluating fine-grained robot manipulation policies using part-level instructions.
git clone --recurse-submodules https://github.com/SCAI-JHU/PartInstruct.git
cd PartInstruct
git submodule sync --recursive
git submodule update --init --recursive
conda create -n partinstruct -c conda-forge python=3.9 cmake=3.24.3 open3d ninja gcc_linux-64=12 gxx_linux-64=12
conda activate partinstruct
pip install torch torchvision torchaudio
Install the remaining requirements and the third-party packages:
pip install -r requirements.txt
pip install -e .
pip install -e ./third_party/pybullet_planning/
pip install -e ./third_party/diffusion_policy/
pip install -e ./third_party/3D-Diffusion-Policy/
pip install -e ./third_party/gym-0.21.0/
pip install -e ./third_party/pytorch3d/
pip install -e ./third_party/sam_2/
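Optionally, sanity-check the environment before moving on. A minimal sketch, assuming the editable installs above succeeded and that pybullet is pulled in via requirements.txt:

# Minimal environment sanity check (a sketch; pybullet is assumed to come
# from requirements.txt, the rest from the install commands above).
import torch, open3d, pybullet, gym, pytorch3d
print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("gym", gym.__version__)        # expected 0.21.0 from the vendored submodule
print("open3d", open3d.__version__, "| pytorch3d", pytorch3d.__version__)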
Go to the dataset page: https://huggingface.co/datasets/SCAI-JHU/PartInstruct. Log in to your Hugging Face account and accept the conditions as prompted. Then return to the project root directory and log in from your terminal:
huggingface-cli login
Enter your Hugging Face access token when prompted. You can now download the assets. The following commands download and set up the assets under a newly created data/ directory.
cd ./PartInstruct
huggingface-cli download SCAI-JHU/PartInstruct --repo-type dataset --local-dir ./data --include "*.json" "assets.zip" "checkpoints/**"
# To download the PartInstruct dataset in HDF5 format, add "demos/**" to --include for all demos, or "demos/OBJECT_NAME.hdf5" for the demos of a specific object type
unzip ./data/assets.zip -d ./data/ && rm data/assets.zip
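If you prefer to script the download instead of using the CLI, the same filters can be passed to huggingface_hub from Python. A minimal sketch mirroring the command above (extend the patterns with "demos/**" or "demos/OBJECT_NAME.hdf5" to fetch demos as well):

# Programmatic alternative to the huggingface-cli download command above.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="SCAI-JHU/PartInstruct",
    repo_type="dataset",
    local_dir="./data",
    allow_patterns=["*.json", "assets.zip", "checkpoints/**"],
)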
Download the SAM 2 checkpoints (used in bi-level planning):
cd ./third_party/sam_2/checkpoints/
bash download_ckpts.sh
cd ../../../
This command will sample part-level manipulation tasks from the evaluation metadata and execute them using an Oracle planner:
python scripts/run_oracle_policy.py
To evaluate Code as Policies with GPT-4o, set your OpenAI API key as an environment variable:
export OPENAI_API_KEY=your_openai_api_key
Then run the following command:
python scripts/run_code_as_policies.py
# After downloading the dataset, run
bash scripts/slurm_scripts/train_DP3.sh
# to start DDP training of the DP3/DP baselines
# Set the OpenAI API key if you haven't already
export OPENAI_API_KEY=your_openai_api_key
# Run evaluation
# e.g., run evaluation with Diffusion Policy (DP-S) using the ground-truth part masks (bullet_env)
python PartInstruct/baselines/evaluation/evaluator.py \
--config-name DP-S_evaluator \
rollout_mode='specific_ckpt' \
split='test1' \
ckpt_path=PartInstruct/data/checkpoints/DP-S/latest.ckpt \
output_dir=PartInstruct/outputs/DP-S \
task.env_runner.bullet_env=PartInstruct.PartGym.env.bullet_env \
task.env_runner._target_=PartInstruct.baselines.evaluation.env_runner.dp_env_runner.DPEnvRunner \
task.env_runner.n_envs=1 \
task.env_runner.n_vis=1
# e.g., run evaluation with Diffusion Policy (DP-S) using SAM 2 for part masks (bullet_env_sam)
python PartInstruct/baselines/evaluation/evaluator.py \
--config-name DP-S_evaluator \
rollout_mode='specific_ckpt' \
split='test1' \
ckpt_path=PartInstruct/data/checkpoints/DP-S/latest.ckpt \
output_dir=PartInstruct/outputs/DP-S \
task.env_runner.bullet_env=PartInstruct.PartGym.env.bullet_env_sam \
task.env_runner._target_=PartInstruct.baselines.evaluation.env_runner.dp_env_runner.DPEnvRunner \
task.env_runner.n_envs=1 \
task.env_runner.n_vis=1
# e.g., run evaluation with Diffusion Policy (DP-S) using the bi-level planning framework (bullet_env_sam_gpt)
python PartInstruct/baselines/evaluation/evaluator.py \
--config-name DP-S_evaluator \
rollout_mode='specific_ckpt' \
split='test1' \
ckpt_path=PartInstruct/data/checkpoints/DP-S/latest.ckpt \
output_dir=PartInstruct/outputs/DP-S \
task.env_runner.bullet_env=PartInstruct.PartGym.env.bullet_env_sam_gpt \
task.env_runner._target_=PartInstruct.baselines.evaluation.env_runner.dp_gpt_env_runner.GPTEnvRunner \
task.env_runner.n_envs=1 \
task.env_runner.n_vis=1
# For DP3-S, replace DP-S with DP3-S and dp_env_runner with dp3_env_runner
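The arguments after --config-name are Hydra-style dotted overrides. A minimal sketch of how such a dotlist resolves into a nested config, using omegaconf (the library Hydra builds on); the keys mirror the commands above and the values are illustrative, not the project's evaluator code:

# Illustration of how dotted overrides become a nested config.
from omegaconf import OmegaConf

overrides = [
    "split=test1",
    "task.env_runner.n_envs=1",
    "task.env_runner.n_vis=1",
]
cfg = OmegaConf.from_dotlist(overrides)
print(OmegaConf.to_yaml(cfg))
# split: test1
# task:
#   env_runner:
#     n_envs: 1
#     n_vis: 1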
If you find our work useful in your research, please consider citing:
@inproceedings{yin2025partinstruct,
  title={PartInstruct: Part-level Instruction Following for Fine-grained Robot Manipulation},
  author={Yin, Yifan and Han, Zhengtao and Aarya, Shivam and Xu, Shuhang and Wang, Jianxin and Peng, Jiawei and Wang, Angtian and Yuille, Alan and Shu, Tianmin},
  booktitle={Proceedings of Robotics: Science and Systems (RSS)},
  year={2025}
}