Toward Material-Agnostic System Identification from Videos
ICCV 2025
Yizhou Zhao1, Haoyu Chen1, Chunjiang Liu1, Zhenyang Li2, Charles Herrmann3, Junhwa Hur3, Yinxiao Li3, Ming‑Hsuan Yang4, Bhiksha Raj1, Min Xu1*
1Carnegie Mellon University 2University of Alabama at Birmingham 3Google 4UC Merced
MASIV determines object geometry and governing physical laws from videos in a material-agnostic manner.
- Code release
- MASIV Multi-Sequence Dataset release
We test our code on Ubuntu 22.04, CUDA Toolkit 12.4, NVIDIA A100 80G.
git clone --branch refactor/cleanup https://github.com/Skaldak/gic.git
cd masiv
conda env create --file environment.yml
conda activate masiv

We validated our algorithm on the PAC-NeRF and Spring-Gaus datasets. The structure of the dataset and model files is as follows:
├── dataset
│   ├── PAC-NeRF-Data
│   │   └── data
│   │       ├── bird
│   │       ├── cat
│   │       └── ...
│   ├── Spring-Gaus
│   │   ├── mpm_synthetic
│   │   │   └── render
│   │   │       ├── apple
│   │   │       ├── cream
│   │   │       └── ...
│   │   └── real_capture
│   │       ├── bun
│   │       ├── burger
│   │       └── ...
│   └── Genesis
│       └── multi_sequence
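Before training, it can help to confirm the data landed where the tree above expects. A minimal sanity check, assuming the `dataset/` folder sits at the repo root (the sub-paths below are read off the tree, not an official manifest):

```python
from pathlib import Path

# Expected sub-directories, transcribed from the dataset tree above.
EXPECTED = [
    "PAC-NeRF-Data/data",
    "Spring-Gaus/mpm_synthetic/render",
    "Spring-Gaus/real_capture",
    "Genesis/multi_sequence",
]

def missing_dataset_dirs(root="dataset"):
    """Return the expected sub-directories that are absent under root."""
    return [p for p in EXPECTED if not (Path(root) / p).is_dir()]

if __name__ == "__main__":
    for p in missing_dataset_dirs():
        print("missing:", p)
```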
We introduce a novel high-quality multi-view and multi-sequence dataset built on Genesis. The dataset features multi-view and multi-sequence recordings, capturing diverse material behaviors across 10 object geometries and 5 material types. Each combination is recorded from multiple synchronized camera viewpoints, providing coverage for both dynamic reconstruction and material identification learning. Our data is available on Hugging Face.
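The example training path `data/Genesis/multi_sequence/0_0` suggests scenes are indexed by geometry and material. Assuming a `{geometry}_{material}` naming scheme (an inference from that path, not documented), the full grid of combinations can be enumerated as:

```python
from itertools import product

# Hypothetical scene enumeration: the {geometry}_{material} naming is
# inferred from the example path multi_sequence/0_0, not documented.
N_GEOMETRIES = 10  # object geometries in the dataset
N_MATERIALS = 5    # material types in the dataset

scenes = [f"{g}_{m}" for g, m in product(range(N_GEOMETRIES), range(N_MATERIALS))]
print(len(scenes), scenes[0], scenes[-1])  # 50 combinations, "0_0" ... "9_4"
```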
If you are running for the first time, preprocess the dataset by segmenting the foreground from the background:
python prepare_pacnerf_data.py --data_folder=PAC-NeRF-Data/data/*
We use the same segmentation model as PAC-NeRF.
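The actual preprocessing lives in prepare_pacnerf_data.py; conceptually, the step produces a per-frame binary mask and blanks the background. A toy numpy sketch of that masking (illustrative, not the repo's implementation):

```python
import numpy as np

def apply_foreground_mask(frame, mask, background=0):
    """Blank out background pixels given a segmentation mask.

    frame: (H, W, 3) uint8 image; mask: (H, W) bool, True = foreground.
    Illustrative only; the real step is in prepare_pacnerf_data.py.
    """
    out = frame.copy()
    out[~mask] = background  # zero everything the mask rejects
    return out
```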
python train_dynamic.py --config_path ${CONFIG_FILE} --source_path ${DATA_DIR} --model_path ${MODEL_DIR}
-c, --config_path   config file path
-s, --source_path   data path
-m, --model_path    model path
Optional:
--reg_scale            enable regularization on the Gaussian scaling parameters
--reg_alpha            enable regularization on the rendered opacity parameter
env.pretrain           set the pretrained NCLaw model, e.g. env.pretrain=jelly
sim.center, sim.size   set the simulation area, e.g. sim.center=2.0 sim.size=4.0
python train_dynamic.py --config_path config/pacnerf/elastic/default.yaml --source_path data/PAC-NeRF-Data/data/elastic/0 --model_path output/pacnerf/elastic/0 --reg_scale --reg_alpha env.pretrain=jelly sim.center=2.0 sim.size=4.0
python train_dynamic.py --config_path config/spring_gaus/benchmark/apple.yaml --source_path data/Spring-Gaus/mpm_synthetic/render/apple --model_path output/spring_gaus/benchmark/apple --reg_scale --reg_alpha env.pretrain=plasticine sim.center=1.5 sim.size=4.0
python train_dynamic.py --config_path config/spring_gaus/real_capture/bun.yaml --source_path data/Spring-Gaus/real_capture/bun --model_path output/spring_gaus/real_capture/bun --reg_scale --reg_alpha --reg_init env.pretrain=plasticine meta.w_velo=0.1 meta.w_traj=1.0
python train_dynamic.py --config_path config/genesis/multi_sequence/default.yaml --source_path data/Genesis/multi_sequence/0_0 --model_path output/genesis/multi_sequence/0_0 --reg_scale --reg_alpha env.pretrain=plasticine sim.center=1.0 sim.size=2.0
Use --model_path to specify the directory where the trained model and intermediate states will be saved.
After training, you can run future-state prediction and evaluation with:
python predict.py --config_path config/pacnerf/{material}/default.yaml --source_path data/PAC-NeRF-Data/data/{material}/{num_id} --model_path output/pacnerf/{material}/{num_id} --gt_path data/PAC-NeRF-Data/simulation_data/{material}/{num_id} --reg_scale --reg_alpha env.pretrain=jelly sim.center=2.0 sim.size=4.0 --load_iter 1000 --iteration 40000
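Evaluation compares the predicted future particle states against the ground-truth simulation under --gt_path. A common point-set metric for this kind of comparison is the symmetric Chamfer distance; a minimal numpy sketch (predict.py may well use different or additional metrics):

```python
import numpy as np

def chamfer_distance(pred, gt):
    """Symmetric Chamfer distance between (N, 3) and (M, 3) point sets.

    Illustrative metric only; not necessarily what predict.py reports.
    """
    # Pairwise squared distances via broadcasting: shape (N, M).
    d2 = ((pred[:, None, :] - gt[None, :, :]) ** 2).sum(-1)
    # Nearest-neighbor average in both directions.
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())
```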
To batch-run every scene across 8 GPUs in a single launch:
torchrun --nproc-per-node=8 run.py train_dynamic --config_path config/pacnerf --source_path data/PAC-NeRF-Data/data --model_path output/PAC-NeRF-Output --gt_path data/PAC-NeRF-Data/simulation_data --reg_scale --reg_alpha env.pretrain=jelly sim.center=2.0 sim.size=4.0 --subfolder
To show that our model generalizes robustly to novel conditions, we provide a script that applies novel material properties, initial velocities, and external force fields to the trained model.
python generalize.py --config_path config/pacnerf/{material}.yaml --source_path data/PAC-NeRF-Data/data/{material} --model_path output/pacnerf/{material} --novel_material output/pacnerf/{novel_material} --novel_gravity -5.0, -2.5, 0.0 --novel_velocity 1.0, 0.5, 0.0 --reg_scale --reg_alpha env.pretrain=jelly sim.center=2.0 sim.size=4.0 --load_iter 1000
Arguments:
--novel_material    set a novel material
--novel_gravity     set the novel gravity magnitude and direction
--novel_velocity    set the novel initial velocity
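To see what the two physics flags change, consider a toy free-flight integrator: --novel_velocity resets the initial particle velocity and --novel_gravity the body force. MASIV's actual simulator is an MPM solver with a learned constitutive model, so the sketch below is only an illustration of the flags' roles:

```python
import numpy as np

def toy_rollout(x0, novel_velocity, novel_gravity, dt=1e-2, steps=10):
    """Symplectic-Euler free flight under a novel gravity and initial velocity.

    Illustrates the generalization flags only; not MASIV's MPM solver.
    """
    x = np.array(x0, dtype=float)
    v = np.broadcast_to(np.asarray(novel_velocity, dtype=float), x.shape).copy()
    g = np.asarray(novel_gravity, dtype=float)
    for _ in range(steps):
        v += dt * g  # gravity updates velocity first
        x += dt * v  # then velocity updates position
    return x
```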
This script opens an Open3D window showing the model's trajectory lines:
python visualize.py --split train --config_path config/pacnerf/{material}.yaml --source_path data/PAC-NeRF-Data/data/{material} --model_path output/pacnerf/{material} --trajectory mpm sim.center=2.0 sim.size=4.0
Arguments:
--split        train or test
--trajectory   visualization mode: deformable for Gaussian reconstruction, mpm for MPM simulation
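The trajectory lines can be thought of as segments connecting each particle's consecutive positions. A numpy sketch of turning a (T, N, 3) trajectory tensor into the points/lines arrays an Open3D LineSet expects (visualize.py may build its geometry differently):

```python
import numpy as np

def trajectory_line_segments(traj):
    """Flatten a (T, N, 3) trajectory into points plus line indices.

    Returns points of shape (T*N, 3) and lines of shape ((T-1)*N, 2)
    connecting each particle's consecutive positions, i.e. the format
    an Open3D LineSet consumes. Illustrative sketch only.
    """
    T, N, _ = traj.shape
    points = traj.reshape(T * N, 3)
    idx = np.arange(T * N).reshape(T, N)
    # Pair each frame's indices with the next frame's indices.
    lines = np.stack([idx[:-1].ravel(), idx[1:].ravel()], axis=1)
    return points, lines
```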
We sincerely thank the authors of Dynamic 3D Gaussians, PAC-NeRF, Spring-Gaus, and GIC, whose code and datasets we build on.
If you find this project helpful for your research, please consider citing the following BibTeX entry:
@article{zhao2025masiv,
title={MASIV: Toward Material-Agnostic System Identification from Videos},
author={Zhao, Yizhou and Chen, Haoyu and Liu, Chunjiang and Li, Zhenyang and Herrmann, Charles and Hur, Junhwa and Li, Yinxiao and Yang, Ming-Hsuan and Raj, Bhiksha and Xu, Min},
journal={arXiv preprint arXiv:2508.01112},
year={2025}
}