Code for "Point Cloud Completion via Skeleton-Detail Transformer", IEEE Transactions on Visualization and Computer Graphics (TVCG), 2022. See the IEEE PDF.
In this work, we present a coarse-to-fine completion framework that makes full use of both neighboring and long-distance region cues for point cloud completion. Our network leverages a Skeleton-Detail Transformer, which contains cross-attention and self-attention layers, to fully explore the correlation from local patterns to global shape and use it to refine the overall skeleton. We also propose a selective attention mechanism that reduces memory usage in the attention process without significantly affecting performance.
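As a rough illustration only (not the paper's exact architecture), the block below sketches how a cross-attention layer followed by a self-attention layer could be wired up in PyTorch: coarse skeleton tokens query local detail tokens, then exchange long-range shape cues among themselves. The dimensions, layer sizes, and token layout are assumptions.

```python
import torch
import torch.nn as nn

class SkeletonDetailBlock(nn.Module):
    """Illustrative cross-attention + self-attention block; dims and wiring are assumptions."""
    def __init__(self, dim=256, num_heads=4):
        super().__init__()
        # Cross-attention: coarse skeleton tokens query local detail tokens.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads)
        # Self-attention: refined skeleton tokens exchange long-range shape cues.
        self.self_attn = nn.MultiheadAttention(dim, num_heads)
        self.ffn = nn.Sequential(nn.Linear(dim, dim * 2), nn.ReLU(), nn.Linear(dim * 2, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.norm3 = nn.LayerNorm(dim)

    def forward(self, skeleton, detail):
        # Tensors are (sequence, batch, channels), the layout nn.MultiheadAttention expects.
        x, _ = self.cross_attn(skeleton, detail, detail)
        skeleton = self.norm1(skeleton + x)
        x, _ = self.self_attn(skeleton, skeleton, skeleton)
        skeleton = self.norm2(skeleton + x)
        return self.norm3(skeleton + self.ffn(skeleton))

# 512 skeleton tokens attend to 2048 detail tokens for a batch of 2 shapes.
block = SkeletonDetailBlock()
out = block(torch.randn(512, 2, 256), torch.randn(2048, 2, 256))
print(out.shape)  # torch.Size([512, 2, 256])
```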
- Python3
- CUDA
- pytorch
- open3d-python
This code is built with PyTorch 1.7.1 and CUDA 10.2, and tested on Ubuntu 18.04 with Python 3.6.
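Before compiling the custom ops, a quick, optional sanity check like the one below can confirm that your environment roughly matches the tested setup:

```python
import torch
import open3d as o3d

print("PyTorch:", torch.__version__)        # tested with 1.7.1
print("CUDA available:", torch.cuda.is_available())
print("CUDA build:", torch.version.cuda)    # tested with 10.2
print("Open3D:", o3d.__version__)
```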
The required libs are included under /util; you need to compile them first. Each subfolder contains its own 'Readme.md'.
Download the pre-trained models (the trained_model folder on Google Drive) and put them in the trained_model directory.
For PCN:
- Download the ShapeNet test data from Google Drive and put it in the `data/pcn` folder. We use the same testing data as the PCN project, but in `h5` format (a quick way to inspect these files is shown after this list).
- Run `sh test.sh`. You should first modify `model_path` to the folder containing your pre-trained model and `data_path` to the testing files.
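If you want to look inside the downloaded `.h5` test files before running the script, a small snippet like the following lists their contents; the file path below is only an example, and the dataset keys printed are whatever the files actually contain.

```python
import h5py

# List every dataset stored in one of the downloaded .h5 test files
# (path is an example; adjust it to your data/pcn layout).
with h5py.File("data/pcn/test.h5", "r") as f:
    f.visititems(lambda name, obj: print(name, getattr(obj, "shape", "")))
```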
For Completion3D:
- Download the test data from Google Drive or the Completion3D website and put it in the `data/completion3d` folder.
- Run `test_benchmark.sh` to generate the `submission.zip` file for the Completion3D benchmark.
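Optionally, you can list the contents of the generated archive before uploading it to the benchmark:

```python
import zipfile

# Confirm the generated archive was packed and see which files it contains.
with zipfile.ZipFile("submission.zip") as z:
    for info in z.infolist():
        print(info.filename, info.file_size)
```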
For PCN:
- The training data come from the PCN repository. You can download the training (`train.lmdb`, `train.lmdb-lock`) and validation (`valid.lmdb`, `valid.lmdb-lock`) data from the `shapenet` directory at the training set link provided in the PCN repository.
- Run `python create_pcn_h5.py` to generate the training and validation files in `.h5` format (a sketch of a loader for these files follows this list).
- Run `sh run.sh` for training.
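For reference, a PyTorch `Dataset` over the generated `.h5` files might look like the sketch below. The file path and the `partial`/`gt` key names are assumptions about what `create_pcn_h5.py` writes, so adjust them to the actual output.

```python
import h5py
import torch
from torch.utils.data import Dataset, DataLoader

class PCNH5Dataset(Dataset):
    """Sketch of a loader for the generated .h5 files; key names are assumed, not verified."""
    def __init__(self, h5_path):
        with h5py.File(h5_path, "r") as f:
            # 'partial' / 'gt' are assumed dataset names; check what create_pcn_h5.py writes.
            self.partial = f["partial"][:]
            self.gt = f["gt"][:]

    def __len__(self):
        return len(self.partial)

    def __getitem__(self, idx):
        return (torch.from_numpy(self.partial[idx]).float(),
                torch.from_numpy(self.gt[idx]).float())

# Example usage (path is illustrative):
# loader = DataLoader(PCNH5Dataset("data/pcn/train.h5"), batch_size=32, shuffle=True)
```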
For Completion3D:
You can directly download the training files from the Completion3D benchmark. Run `sh run.sh` and set the dataset to Completion3D.
Parts of our code are adapted from ECG and VRCNET. We sincerely thank the authors for their contributions.