This repository contains the code used to develop HOPNet, part of the publication
titled "Integrating physics and topology in neural networks for learning rigid body dynamics".
This work was published in Nature Communications 16, Article number 6867 (2025). Read the full article here: https://doi.org/10.1038/s41467-025-62250-7. BibTeX reference below.
- Operating system: Windows 11 or Ubuntu 22.04
- Python 3.10 or 3.11 (developed and tested on 3.10.12 and 3.11.12)
To train and evaluate a HOPNet model, a HOPNet ablation, or the reimplemented FIGNet baseline, you can use either the Docker image or the Manual Setup. To interactively visualize rollout trajectories with the included Jupyter notebook, you will need to use the Manual Setup.
Note: Neither setup contains all the dependencies required to generate the MOVi datasets. Dataset generation must be done with the Kubric Docker image (due to its complex dependencies on PyBullet and Blender).
Note: The Docker image can only be built on x86_64 architectures, not arm64, because the gudhi package provides pre-built wheels only for Linux x86_64 (not Linux arm64). Alternatively, you could build the package from source inside the Dockerfile for arm64 platforms.
- Build the `hopnet` Docker image included in this repository with the following command:

```bash
docker build -t hopnet:latest .
```

- Create a Docker container using the `hopnet` Docker image you just built:
```bash
# On Windows PowerShell
docker run --name hopnet -d -v ${PWD}:/workspace/hopnet/ hopnet

# On Linux
docker run --name hopnet -d -v $(pwd):/workspace/hopnet/ hopnet
```

- Start the Docker container and get an interactive terminal:
```bash
# Not needed if you just created the container
docker start hopnet

# Get an interactive terminal inside the container
docker exec -it hopnet bash
```

- Use the Docker container for training or inference:
```bash
# This hopnet repo is mounted inside the Docker container at /workspace/hopnet
cd /workspace/hopnet

# Run any python command you want (see "Getting Started" chapter 3)
...
```

- Stop the Docker container:
```bash
docker stop hopnet
```

- Delete the Docker container and `hopnet` image:

```bash
# Delete the container (the hopnet Docker image will remain)
docker rm hopnet

# Delete the hopnet Docker image
docker rmi hopnet
```

For the Manual Setup, follow the steps below.

- Create a new Python virtual environment (must be Python 3.11):
```bash
# Create the virtual environment named "hopnet_venv"
python -m venv hopnet_venv

# Activate it (Windows)
./hopnet_venv/Scripts/activate
# Activate it (Linux)
source hopnet_venv/bin/activate

# Update package installation tools
python -m pip install -U pip wheel setuptools
```

- Install TopoModelX commit 2267768:
```bash
# Clone the TopoModelX repository
git clone https://github.com/pyt-team/TopoModelX
cd TopoModelX
git checkout 226776822925b0984b1c5dbd097234d7fcbc274e

# Install the TopoModelX package
pip install -e '.[all]'
```

- Install the rest of the Python dependencies:
```bash
# First, get back to the root folder
cd ..

# Then, install the HOPNet dependencies sequentially
pip install -r requirements.txt
pip install torch-scatter torch-sparse torch-cluster -f https://data.pyg.org/whl/torch-2.4.1+cu124.html

# Finally, install the FIGNet reimplementation dependencies
sudo apt install libsm6 libxext6 libgl1-mesa-glx libosmesa6  # Linux only
export MUJOCO_GL="osmesa"          # Put this in your .bashrc or .zshrc
export PYOPENGL_PLATFORM="osmesa"  # Put this in your .bashrc or .zshrc
cd fignet
pip install 'git+https://github.com/facebookresearch/pytorch3d.git@stable'
pip install -r requirements.txt
pip install -e .
python -m robosuite.scripts.setup_macros
```

This chapter contains detailed instructions to replicate the reported results end-to-end from scratch. However, to enable quick testing, pretrained models are provided as checkpoints.
If you want to compute results and generate rollout trajectories directly, skip ahead to the section Inferring a Model and Generating Rollout Trajectories.
This repository includes 10 samples of each dataset: MOVi-spheres, MOVi-A, and MOVi-B. The complete datasets can be downloaded from the Zenodo database under accession code https://doi.org/10.5281/zenodo.15800434.
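For convenience, the record can also be fetched from the command line with the third-party zenodo_get tool (an optional helper, not a dependency of this repository):

```bash
# Optional: download the Zenodo record via its DOI (third-party tool)
pip install zenodo_get
zenodo_get 10.5281/zenodo.15800434
```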
To generate the complete datasets with 1'200 samples each from scratch, build the Kubric Docker image and generate the samples with the custom script /data/generate_movi_dataset.py.

```bash
# Generate the MOVi-spheres dataset (must be run inside the Kubric Docker image)
python data/generate_movi_dataset.py --movis --out_dir=./movis

# Generate the MOVi-A dataset (must be run inside the Kubric Docker image)
python data/generate_movi_dataset.py --movia --out_dir=./movia

# Generate the MOVi-B dataset (must be run inside the Kubric Docker image)
python data/generate_movi_dataset.py --movib --out_dir=./movib
```

Each dataset sample corresponds to one individual random seed from 1 to 1'200. Each sample generated by Kubric contains three files:
- metadata.json: contains all the necessary information
- events.json: automatically generated by Kubric; not required and not used
- rgba.mp4: video of the evolving objects, for visualization purposes only
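As an illustration, a sample's metadata can be inspected with a few lines of Python. This is a minimal sketch: the per-seed directory name and the "instances"/"positions" keys follow Kubric's standard output layout and are assumptions, not guarantees of this repository.

```python
import json
from pathlib import Path

# Minimal sketch: inspect one generated sample (here, the sample for seed 1).
sample_dir = Path("./movis/1")  # hypothetical output path from the generation step
with open(sample_dir / "metadata.json") as f:
    metadata = json.load(f)

# Top-level keys written by Kubric (scene info, per-object instances, ...)
print(sorted(metadata.keys()))

# "instances" and "positions" follow Kubric's usual layout (assumptions)
for instance in metadata.get("instances", []):
    print(len(instance.get("positions", [])), "frames of positions")
```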
Warning: Generating the complete 1'200 samples for each dataset can take a lot of time depending on your hardware (up to multiple days).
Note: The samples provided under /samples also contain collisions.json files (generated with a collision radius d_c = 0.1).
Once the dataset has been generated, combinatorial complexes (CCs) and incidence matrices must be pre-computed to train a model. This step is only required for training and inferring models, not for autoregressive rollout.

All versions of HOPNet use the same combinatorial complexes and incidence matrices. However, the ablated version of HOPNet (without object cells) requires its own combinatorial complexes, generated with the --nox4 flag.

To speed up the generation of different combinatorial complexes, the collisions pre-computed by the data/create_ccs.py script are cached in collisions.json files.
```bash
# Usage
python data/create_ccs.py <dataset_dir> [--collision_radius=<d_c>] [--nox4]

# Example: generate CCs for standard HOPNet with d_c = 0.1
python data/create_ccs.py ./samples/MOVi-A --collision_radius=0.1

# Example: generate CCs for the ablated HOPNet without object cells with d_c = 0.1
python data/create_ccs.py ./samples/MOVi-A --collision_radius=0.1 --nox4
```

Warning: Generating the combinatorial complex archives for each dataset can take a lot of time depending on your hardware (up to multiple days). This is especially true for the MOVi-B dataset, as its object meshes are very complex.
The normalization parameters are based on global dataset statistics and are required for training, inference, and rollout generation.

To avoid having to generate all datasets just to test the pretrained models, the normalization parameters are already included inside each dataset folder in the /samples/ directory.
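To sanity-check a normalization file, it can be loaded directly with NumPy. This is a minimal sketch: the internal layout is whatever data/compute_normalization.py writes, which this snippet does not assume.

```python
import numpy as np

# Minimal sketch: load a normalization file and inspect it.
# allow_pickle=True covers the case where a dict of statistics was saved
# rather than a plain array.
norm = np.load("./samples/normalization-movis.npy", allow_pickle=True)
print(norm.shape, norm.dtype)
```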
However, if you want to generate them yourself, you can use the included script /data/compute_normalization.py:
```bash
# Usage
python data/compute_normalization.py <dataset_dir> [--nox4]

# Example: compute the normalization parameters on MOVi-spheres for standard HOPNet
python data/compute_normalization.py ./samples/MOVi-spheres

# Example: compute the normalization parameters for the ablated HOPNet (no object cells)
python data/compute_normalization.py ./samples/MOVi-spheres --nox4
```

To train a HOPNet model (including ablated versions), use the scripts/main.py script.
It supports configurable learning rates and epoch counts, and includes TensorBoard and Weights & Biases monitoring.
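For example, training curves can be monitored by pointing TensorBoard at the log directory (assuming event files are written there by the training script):

```bash
# Monitor training curves in the browser (default port 6006)
tensorboard --logdir ./tmp/hopnet_movis
```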
Model checkpoints are saved at the end of each epoch in the log directory provided as an argument. The names are structured as follows:
<MODEL>_c<EMBEDDING>_l<MP_LAYERS>_mlp<MLP_LAYERS>_e<EPOCH_NUM>.pt.
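For illustration, this naming scheme can be parsed with a short regular expression (a hypothetical helper, not part of the repository):

```python
import re

# Hypothetical helper: split a checkpoint name such as
# "HOPNet_c128_l1_mlp2_e39.pt" into its components.
PATTERN = re.compile(
    r"(?P<model>.+)_c(?P<embedding>\d+)_l(?P<mp_layers>\d+)"
    r"_mlp(?P<mlp_layers>\d+)_e(?P<epoch>\d+)\.pt"
)

match = PATTERN.fullmatch("HOPNet_c128_l1_mlp2_e39.pt")
if match:
    print(match.groupdict())
    # {'model': 'HOPNet', 'embedding': '128', 'mp_layers': '1',
    #  'mlp_layers': '2', 'epoch': '39'}
```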
```bash
# Usage
python scripts/main.py <dataset_dir> --log_dir <log_dir> --model=<model> --normalization=<normalization_path>

# Example: train a HOPNet model on the included samples for 40 epochs
python scripts/main.py ./samples/MOVi-spheres --log_dir ./tmp/hopnet_movis --model=HOPNet --epochs=40 --normalization=./samples/normalization-movis.npy

# Example: train an ablated HOPNet model without object cells
python scripts/main.py ./samples/MOVi-A --log_dir ./tmp/hopnet_movianox4 --model=NoObjectCells --epochs=40 --normalization=./samples/normalization-movia-nox4.npy
```

Warning: Due to the dynamic nature of the combinatorial complexes, the main training bottleneck is CPU and disk speed rather than GPU memory size. Training a model (especially ablated versions) can take up to a few days depending on your hardware.
To compute the autoregressive rollout of a model and its RMSE errors, use the scripts/main.py script. The computed autoregressive trajectories are saved as <SAMPLE_ID>.npy files inside the log directory provided as an argument. The autoregressive rollout is only performed on the testing set.
After computing the autoregressive trajectories, a rollout.npy file containing all
RMSE errors is computed. To visualize and plot the errors' evolution over time, use the
/notebooks/visualize_errors.ipynb Jupyter notebook.
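If you prefer to inspect the errors outside the notebook, the file can also be loaded directly. This is a minimal sketch: the exact array layout is defined by scripts/main.py, and the (samples, timesteps) shape assumed below is a guess.

```python
import numpy as np
import matplotlib.pyplot as plt

# Minimal sketch: load the aggregated RMSE errors written after rollout.
# allow_pickle=True covers the case where a structured object was saved.
errors = np.load("./tmp/hopnet_movis/e39/rollout.npy", allow_pickle=True)
print(errors.shape, errors.dtype)

if errors.ndim == 2:  # assumed (samples, timesteps) layout
    plt.plot(errors.mean(axis=0))
    plt.xlabel("Rollout step")
    plt.ylabel("Mean RMSE")
    plt.show()
```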
```bash
# Usage
python scripts/main.py <dataset_dir> --log_dir <log_dir> --model=<model> --checkpoint=<pt_path> --normalization=<npy_path> --rollout

# Example: test a pretrained HOPNet model on the included samples
python scripts/main.py ./samples/MOVi-spheres --log_dir ./tmp/hopnet_movis/e39/ --model=HOPNet --checkpoint=./checkpoints/models_seed0_e39.pt --normalization=./samples/normalization-movis.npy --rollout

# Example: test a pretrained ablated HOPNet model without object cells
python scripts/main.py ./samples/MOVi-A --log_dir ./tmp/hopnet_movianox4/e39/ --model=NoObjectCells --checkpoint=./checkpoints/movia-nox4_seed0_e39.pt --normalization=./samples/normalization-movia-nox4.npy --rollout
```

To visualize autoregressive rollout trajectories of a model, use the
/notebooks/visualize_rollout.ipynb Jupyter notebook. Make sure to match the model with
its configuration.
We supply eight pretrained model checkpoints used to compute the results in the main article and supplementary information. They are available in the /checkpoints directory. These models can be tested with both the /scripts/main.py script and the /notebooks/visualize_rollout.ipynb Jupyter notebook.

Here are more details regarding their training, hyperparameters, and normalization files:
| Checkpoint | Model | Training set | Normalization | Activation | Channels | Layers | MLP layers |
|---|---|---|---|---|---|---|---|
| models_seed0_e39.pt | HOPNet | MOVi-spheres | normalization-movis.npy | ReLU | 128 | 1 | 2 |
| models_seed2_e39.pt | HOPNet | MOVi-spheres | normalization-movis.npy | ReLU | 128 | 1 | 2 |
| modela_seed2_e39.pt | HOPNet | MOVi-A | normalization-movia.npy | ReLU | 128 | 1 | 2 |
| modelb_seed0_e39.pt | HOPNet | MOVi-B | normalization-movib.npy | ReLU | 128 | 1 | 2 |
| movib-alt_seed0_e39.pt | HOPNet | MOVi-B (alt) | normalization-movib.npy | ReLU | 128 | 1 | 2 |
| movia-nox4_seed0_e39.pt | Ablation (no object cells) | MOVi-A | normalization-movia-nox4.npy | ReLU | 128 | 1 | 2 |
| movia-noseq_seed1_e39.pt | Ablation (no physics-informed MP) | MOVi-A | normalization-movia.npy | ReLU | 64 | 3 | 2 |
| gelu_seed0_e39.pt | Supplemental (activation function) | MOVi-spheres | normalization-movis.npy | GELU | 128 | 1 | 2 |
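As a quick sanity check, a checkpoint can be inspected directly. This is a minimal sketch assuming standard torch.save output; the exact contents are defined by scripts/main.py.

```python
import torch

# Minimal sketch: inspect a pretrained checkpoint on the CPU.
state = torch.load("./checkpoints/models_seed0_e39.pt", map_location="cpu")
if isinstance(state, dict):
    # Either a raw state_dict or a wrapper dict with extra metadata
    print(list(state.keys())[:10])
```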
For a detailed overview of the message-passing architecture of our HOPNet model, please refer to the figure available below.
The fignet/ folder contains an unofficial reimplementation of
FIGNet. It has been taken from
https://github.com/jongyaoY/fignet (commit
f72f693c4a30ba30bcd4fccaf8bad76d92ab2a17)
and modified to support the MOVi datasets used in HOPNet.
You can train and evaluate the FIGNet reimplementation on the MOVi datasets using the following instructions.
Note: None of the core reimplementation has been modified (model, training strategy, collision processing, etc.). All experiments from the original reimplementation are still supported, and the training configurations are untouched. The only changes add support for the MOVi datasets, enhance the validation step (to compute one-step errors on the full validation set), and add a new infer.py script that writes model rollout predictions to .npy files (for error computation).
First, you should modify the dataset locations inside the configuration files under fignet/config:

```yaml
# fignet/config/train_movis.yaml
data_path: "<YOUR_MOVI_SPHERES_ROOT_DIRECTORY>"
test_data_path: "<YOUR_MOVI_SPHERES_ROOT_DIRECTORY>"
...

# fignet/config/train_movia.yaml
data_path: "<YOUR_MOVI_A_ROOT_DIRECTORY>"
test_data_path: "<YOUR_MOVI_A_ROOT_DIRECTORY>"

# fignet/config/train_movib.yaml
data_path: "<YOUR_MOVI_B_ROOT_DIRECTORY>"
test_data_path: "<YOUR_MOVI_B_ROOT_DIRECTORY>"
```

The MOVi datasets need to be preprocessed before training (not required for inference):
```bash
cd fignet

# Example: preprocess the MOVi-spheres dataset
python scripts/preprocess_data_movi.py --config_file config/train_movis.yaml --dataset_path <YOUR_MOVI_SPHERES_ROOT_DIRECTORY> --num_workers <CPU_CORES>
```

You can train a FIGNet model from scratch using the scripts/train.py script. Change the --config_file argument if you want to train on another dataset (MOVi-A or MOVi-B).
```bash
# Example: train a model on MOVi-spheres
python scripts/train.py --config_file ./config/train_movis.yaml
```

Note: Pretrained checkpoints for each MOVi dataset are provided in the fignet/checkpoints directory.
To compute the autoregressive rollouts of a pretrained FIGNet model on a MOVi dataset, use the following command:
```bash
# Example: infer a model on MOVi-spheres
python scripts/infer.py --config_file ./config/train_movis.yaml --checkpoint ./checkpoints/fignets_seed2_1M.pt
```

A rollout/ folder will be created next to the pretrained model checkpoint.
Finally, to compute the rollout errors, use the main.py script from the HOPNet
codebase as follows:
```bash
cd ..  # Go back to the root of the repository
python scripts/main.py <YOUR_MOVI_DATASET_ROOT_DIRECTORY> --log_dir <ROLLOUT_DIRECTORY> --errors
```

```bibtex
@Article{Wei2025,
  author={Wei, Amaury and Fink, Olga},
  title={Integrating physics and topology in neural networks for learning rigid body dynamics},
  journal={Nature Communications},
  year={2025},
  month={Jul},
  day={25},
  volume={16},
  number={1},
  pages={6867},
  issn={2041-1723},
  doi={10.1038/s41467-025-62250-7},
  url={https://doi.org/10.1038/s41467-025-62250-7}
}
```