Establishing point-to-point correspondences across multiple 3D shapes is a fundamental problem in computer vision and graphics. In this paper, we introduce DcMatch, a novel unsupervised learning framework for non-rigid multi-shape matching. Unlike existing methods that learn a canonical embedding from a single shape, our approach leverages a shape graph attention network to capture the underlying manifold structure of the entire shape collection. This enables the construction of a more expressive and robust shared latent space, leading to more consistent shape-to-universe correspondences via a universe predictor. Simultaneously, we represent these correspondences in both the spatial and spectral domains and enforce their alignment in the shared universe space through a novel cycle consistency loss. This dual-level consistency fosters more accurate and coherent mappings. Extensive experiments on several challenging benchmarks demonstrate that our method consistently outperforms previous state-of-the-art approaches across diverse multi-shape matching scenarios.
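For orientation before diving into the code, the dual-level consistency idea can be sketched in a few lines of NumPy. Everything below (function names, map conventions, the exact loss form) is an illustrative assumption for readers, not the actual implementation in this repository:

```python
import numpy as np

def universe_soft_map(n_verts, n_universe, rng):
    # Hypothetical stand-in for the universe predictor: a row-stochastic
    # soft assignment of each vertex to points of a shared universe space.
    logits = rng.standard_normal((n_verts, n_universe))
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def pairwise_from_universe(Pi_x, Pi_y):
    # Composing shape-to-universe maps yields a pairwise soft
    # correspondence between x and y that is cycle-consistent across the
    # collection by construction.
    return Pi_y @ Pi_x.T  # (n_y, n_x)

def spectral_spatial_disagreement(P_yx, Phi_x, Phi_y, C_xy):
    # One plausible form of a dual-level consistency term: compare a
    # predicted functional (spectral) map C_xy with the map induced by
    # the spatial soft map P_yx through the eigenbases Phi_x, Phi_y.
    C_from_P = np.linalg.pinv(Phi_y) @ P_yx @ Phi_x  # (k, k)
    return float(np.sum((C_xy - C_from_P) ** 2))
```

When the spectral map equals the one induced by the spatial map, the term vanishes; training pushes the two branches toward this agreement in the shared universe space.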
```bash
# clone this repository
git clone https://github.com/YeTianwei/DcMatch.git
cd DcMatch

# create the conda environment
conda create -n DcMatch python=3.8
conda activate DcMatch

# install PyTorch
conda install -c pytorch -c nvidia pytorch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 pytorch-cuda=12.1 -y

# install PyG
pip install torch-geometric==2.6.1
pip install torch-scatter==2.1.2 torch-sparse==0.6.18 torch-cluster==1.6.3 torch-spline-conv==1.2.2 -f https://data.pyg.org/whl/torch-2.1.0+cu121.html

# install pytorch3d
# if installation fails, refer to https://anaconda.org/channels/pytorch3d/packages/pytorch3d/files
pip install pytorch3d==0.7.8

# install the remaining dependencies
pip install -r requirements.txt
```

In addition, this code uses Python bindings for an implementation of the Discrete Shell Energy.
Please follow the installation instructions from: Thin shell energy
For the training and testing datasets used in this paper, please refer to the ULRSSM repository by Dongliang Cao et al. and follow the instructions there to download the necessary datasets and place them under ../data/:
```text
├── data
│   ├── FAUST_r
│   ├── FAUST_a
│   ├── SCAPE_r
│   ├── SCAPE_a
│   ├── SHREC19_r
│   ├── TOPKIDS
│   ├── SMAL_r
│   └── DT4D_r
```

We thank the original dataset providers for their contributions to the shape analysis community; all credit goes to the respective authors and contributors.
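Before preprocessing, a quick layout check can save a failed run later. This small helper script is not part of the repository; adjust the root path if your data lives elsewhere:

```python
from pathlib import Path

# Expected dataset folders, as listed above.
DATASETS = ["FAUST_r", "FAUST_a", "SCAPE_r", "SCAPE_a",
            "SHREC19_r", "TOPKIDS", "SMAL_r", "DT4D_r"]

def missing_datasets(data_root="../data"):
    # Return the subset of expected dataset folders not found on disk.
    root = Path(data_root)
    return [d for d in DATASETS if not (root / d).is_dir()]

if __name__ == "__main__":
    missing = missing_datasets()
    print("all datasets present" if not missing else f"missing: {missing}")
```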
For data preprocessing, we provide preprocess.py to compute everything required. Here is an example for SMAL_r:
```bash
python preprocess.py --data_root ../data/SMAL_r/ --no_normalize --n_eig 200
```

To train a specific model on a specified dataset:

```bash
python train.py --opt options/train/smal.yaml
```

You can visualize the training process in TensorBoard or via wandb:
```bash
tensorboard --logdir experiments/
```

To test a specific model on a specified dataset:
```bash
python test.py --opt options/test/smal.yaml
```

The qualitative and quantitative results will be saved in the results folder.
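For reference, the standard quantitative metric in shape matching is the mean geodesic error of the predicted correspondence. A minimal sketch of the metric itself follows; the repository's evaluation code may differ in normalization and details:

```python
import numpy as np

def mean_geodesic_error(pred, gt, geo_dist):
    # pred, gt: predicted and ground-truth target vertex indices, one per
    # source vertex; geo_dist: (n, n) pairwise geodesic distances on the
    # target shape (often normalized, e.g. by square-root surface area).
    pred = np.asarray(pred)
    gt = np.asarray(gt)
    return float(geo_dist[pred, gt].mean())
```

A perfect matching gives an error of zero; mismatches are penalized by how far, geodesically, each predicted point lands from the true one.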
Make sure to install the latest polyscope to allow headless rendering.
```bash
pip uninstall polyscope
pip install git+https://github.com/nmwsharp/polyscope-py.git@102c57f90d8aeb73b869d4dbf2f48f9466e08c00
```

To visualize the final results:

```bash
python visualize.py --opt options/test/smal.yaml
```

The visualized images will be saved in the results folder.
You can find all pre-trained models in the checkpoints folder for reproducibility.
The framework implementation is adapted from Hybrid Functional Maps for Crease-Aware Non-Isometric Shape Matching.
The implementation of DiffusionNet is based on the official implementation.
We thank the authors for making their codes publicly available.
Feel free to send us an email ([email protected]) if you have any questions regarding the paper or find any bugs in the implementation.
Please cite our paper if you use this code. You can use the following BibTeX entry:

```bibtex
@article{ye2025dcmatch,
  title={DcMatch: Unsupervised Multi-Shape Matching with Dual-Level Consistency},
  author={Ye, Tianwei and Ma, Yong and Mei, Xiaoguang},
  journal={arXiv preprint arXiv:2509.01204},
  year={2025}
}
```