FDE: Full-scene Depth Estimation including transparent objects

Installation

This code has been tested with Ubuntu 16.04, Python 3.6, PyTorch 1.3, and CUDA 9.0.
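As a quick sanity check (not part of the original instructions), you can confirm the Python, PyTorch, and CUDA versions in your environment with a short script:

    # Quick environment check; expected values follow the tested setup above.
    import sys
    import torch

    print("Python :", sys.version.split()[0])    # expect 3.6.x
    print("PyTorch:", torch.__version__)         # expect 1.3.x
    print("CUDA   :", torch.version.cuda)        # expect 9.0
    print("GPU OK :", torch.cuda.is_available())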

System Dependencies

sudo apt-get install libhdf5-10 libhdf5-serial-dev libhdf5-dev libhdf5-cpp-11
sudo apt install libopenexr-dev zlib1g-dev openexr
sudo apt install xorg-dev  # display windows
sudo apt install libglfw3-dev

Setup

  1. Clone the repository. A small sample dataset of 3 real and 3 synthetic images is included.

    git clone <current-branch>
  2. Install the pip dependencies by running the following in a terminal:

    pip install -r requirements.txt
  3. Download the data:
    a) Dataset storage path:

    b) Model Checkpoints (0.9GB) - contains the original models for masks, boundaries, and surface normals. The checkpoint locations are set in the config directory under each of the three model folders.

  4. Compile depth2depth (global optimization):

    depth2depth is a C++ global optimization module used for depth completion, adapted from the DeepCompletion project. It resides in the api/depth2depth/ directory. (A simplified sketch of the optimization it solves appears after these setup steps.)

    • To compile the depth2depth binary, you will first need to identify the path to libhdf5. Run the following command in terminal:

      find /usr -iname "*hdf5.h*"

      Note the location of hdf5/serial. It will look similar to: /usr/include/hdf5/serial/hdf5.h.

    • Edit BOTH lines 28-29 of the makefile at api/depth2depth/gaps/apps/depth2depth/Makefile to add the path you just found as shown below:

      USER_LIBS=-L/usr/include/hdf5/serial/ -lhdf5_serial
      USER_CFLAGS=-DRN_USE_CSPARSE "/usr/include/hdf5/serial/"
    • Compile the binary:

      cd api/depth2depth/gaps
      export CPATH="/usr/include/hdf5/serial/"  # Ensure this path matches the output of `find /usr -iname "*hdf5.h*"`
      
      make

      This should create an executable, api/depth2depth/gaps/bin/x86_64/depth2depth. The config files will need the path to this executable to run our depth estimation pipeline.

    • Check the executable by passing in the provided sample files:

      cd api/depth2depth/gaps
      bash depth2depth.sh

      This will generate gaps/sample_files/output-depth.png, which should match the expected-output-depth.png sample file. It will also generate RGB visualizations of all the intermediate files.
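For intuition, depth2depth poses depth completion as a global least-squares problem: the completed depth should stay close to the observed depth where it is valid (an inertia term) while varying smoothly elsewhere (a smoothness term), with predicted surface normals and occlusion boundaries further shaping the solution. The sketch below is a heavily simplified 1-D Python illustration of that idea; it is NOT the actual C++ solver, it omits the normal/boundary terms, and the weights are arbitrary:

    # Simplified 1-D illustration of depth completion as a global
    # least-squares problem (inertia + smoothness terms only).
    import numpy as np
    from scipy.sparse import lil_matrix
    from scipy.sparse.linalg import lsqr

    def complete_depth_1d(observed, valid, inertia_w=1000.0, smooth_w=1.0):
        """Minimize inertia_w^2*||d - observed||^2 on valid pixels
        plus smooth_w^2*||second differences of d||^2 everywhere."""
        n = len(observed)
        A = lil_matrix((2 * n - 2, n))
        b = np.zeros(2 * n - 2)
        # Inertia term: keep the completed depth near valid observations.
        for i in range(n):
            if valid[i]:
                A[i, i] = inertia_w
                b[i] = inertia_w * observed[i]
        # Smoothness term: penalize second differences of the depth.
        for i in range(1, n - 1):
            r = n + i - 1
            A[r, i - 1], A[r, i], A[r, i + 1] = smooth_w, -2 * smooth_w, smooth_w
        return lsqr(A.tocsr(), b)[0]

    # Fill a simulated hole (e.g. a transparent object) in a depth ramp.
    obs = np.linspace(1.0, 2.0, 20)
    mask = np.ones(20, dtype=bool)
    obs[8:14], mask[8:14] = 0.0, False
    print(complete_depth_1d(obs, mask).round(3))

In the hole, only the smoothness term constrains the solution, so the missing depths are interpolated from the valid neighbors; the real solver does the same in 2-D with surface normal and tangent constraints added to the system.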

Running the Code

Training Code

The folder pytorch_networks/ contains the code used to train the surface normals, occlusion boundary and semantic segmentation models.

  • Go to the respective folder (e.g., pytorch_networks/surface_normals) and create a local copy of the config file:

    cp config/config.yaml.sample config/config.yaml
  • Edit the config.yaml file to fill in the paths to the dataset, select hyperparameter values, etc. All the parameters are explained in comments within the config file.

  • Start training:

    python train.py -c config/config.yaml
  • The eval script can be run with:

    python eval.py -c config/config.yaml
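Both scripts follow the common pattern of taking a YAML config via the -c flag. Below is a minimal sketch of that pattern; the actual train.py and eval.py define their own argument names and config schema, so treat this only as an illustration:

    # Minimal sketch of a config-driven entry point; the real scripts
    # define their own arguments and config schema.
    import argparse
    import yaml

    parser = argparse.ArgumentParser(description='Train from a YAML config.')
    parser.add_argument('-c', '--configFile', required=True,
                        help='Path to the yaml config file')
    args = parser.parse_args()

    with open(args.configFile) as fd:
        config = yaml.safe_load(fd)  # dataset paths, hyperparameters, etc.
    print('Loaded config sections:', sorted(config))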
