# Multi-View Reconstruction using Signed Ray Distance Functions (SRDF)

This repository is the official PyTorch implementation of the paper *Multi-View Reconstruction using Signed Ray Distance Functions (SRDF)*.
If you find this project useful for your research, please cite:

```bibtex
@article{zins2022multi,
  title={Multi-View Reconstruction using Signed Ray Distance Functions (SRDF)},
  author={Zins, Pierre and Xu, Yuanlu and Boyer, Edmond and Wuhrer, Stefanie and Tung, Tony},
  journal={arXiv preprint arXiv:2209.00082},
  year={2022}
}
```
## Requirements

```shell
conda create -n srdf python=3.7
conda activate srdf
conda install -c conda-forge pyembree embree=2.17.7
pip install -r requirements.txt
PATH=/usr/local/cuda-11.2/bin:$PATH pip install pycuda
```
## Baseline prior

We used commercial data from Renderpeople. Here, we demonstrate the code using the free samples from Renderpeople: https://renderpeople.com/fr/free-3d-people/

Download and extract all subjects into `./data/rp_free_posed_people_OBJ/`.

These free meshes are rotated compared to our Renderpeople meshes. Remove this rotation (with Meshlab, for example) by rotating them by -90 degrees around the y-axis.
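If you prefer to remove the rotation programmatically rather than in Meshlab, the operation is a standard y-axis rotation of the vertex positions. This is an illustrative sketch (not a script from this repository); the function names are hypothetical, and you would still need to load/save the OBJ with a library such as trimesh:

```python
import numpy as np

def rotation_y(degrees):
    """4x4 homogeneous rotation matrix around the y-axis."""
    a = np.radians(degrees)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c,   0.0, s,   0.0],
                     [0.0, 1.0, 0.0, 0.0],
                     [-s,  0.0, c,   0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def rotate_vertices(vertices, degrees=-90.0):
    """Apply the y-rotation to an (N, 3) array of mesh vertices."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])
    return (homo @ rotation_y(degrees).T)[:, :3]
```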
Compute Spherical Harmonics coefficients:

```shell
python render/prt_utils.py -i ./data/rp_free_posed_people_OBJ/rp_dennis_posed_004_OBJ/ -n 40
```
Render the dataset:

```shell
python render/render_data_srdf.py -i ./data/rp_free_posed_people_OBJ/rp_dennis_posed_004_OBJ/ -o ./data/dataset_RP -p persp -e -s 2048
```
Compute the Visual Hull:

```shell
python ./src/visual_hull.py -d ./data/dataset_RP/ -s rp_dennis_posed_004 -dt RP -o mesh_vh
```
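For intuition, a visual hull keeps only the voxels whose projection falls inside the silhouette of the subject in every view. The following is a minimal carving sketch (not the repository's implementation), assuming 3x4 projection matrices and binary silhouette masks are given:

```python
import numpy as np

def carve_visual_hull(voxels, projections, silhouettes):
    """voxels: (N, 3) centres; projections: list of 3x4 P; silhouettes: list of (H, W) bool masks."""
    homo = np.hstack([voxels, np.ones((len(voxels), 1))])  # (N, 4) homogeneous coords
    keep = np.ones(len(voxels), dtype=bool)
    for P, sil in zip(projections, silhouettes):
        uvw = homo @ P.T                          # project into the image plane
        uv = uvw[:, :2] / uvw[:, 2:3]             # perspective divide
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        h, w = sil.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(voxels), dtype=bool)
        hit[inside] = sil[v[inside], u[inside]]
        keep &= hit                               # a voxel must be inside every silhouette
    return voxels[keep]
```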
Compute initial depthmaps:

```shell
python ./src/compute_depthmaps.py -d data/dataset_RP/ -s rp_dennis_posed_004 -o depthmaps_init_1 -m mesh_vh --override -dt RP
```
Run the optimization:

```shell
# Edit the scripts ./configs/conf_rp.yaml and ./src/run_optimization_rp.sh, then run:
sh ./src/run_optimization_rp.sh
```
Prepare data for TSDF fusion:

```shell
# Edit the parameters in the script ./src/reconstruct.sh, then run:
sh ./src/reconstruct.sh rp_dennis_posed_004 9999 depthmaps_optimized
```
TSDF fusion:

```shell
PATH=/usr/local/cuda-11.2/bin:$PATH python ./src/tsdf_fusion_rp.py -i data/dataset_RP/tsdf_fusion_data/ -o data/rp_dennis_posed_004.ply
```
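TSDF fusion averages truncated signed distances to the observed depth, weighted across views, and the surface is then extracted where the fused value crosses zero. The real `tsdf_fusion_rp.py` runs on the GPU via pycuda; this is only a toy sketch of the per-ray update rule, with hypothetical function and argument names:

```python
import numpy as np

def integrate_ray(sample_depths, observed_depth, trunc, tsdf, weights):
    """Fuse one depth observation into running TSDF values along a ray.

    sample_depths: (N,) depths of the samples along the ray
    observed_depth: depth of the surface seen by the camera
    trunc: truncation distance
    tsdf, weights: (N,) running fused values and weights (updated in place)
    """
    sdf = observed_depth - sample_depths           # positive in front of the surface
    valid = sdf > -trunc                           # ignore samples far behind the surface
    d = np.clip(sdf[valid] / trunc, -1.0, 1.0)     # truncate and normalise to [-1, 1]
    w_new = 1.0                                    # per-observation weight (uniform here)
    tsdf[valid] = (tsdf[valid] * weights[valid] + d * w_new) / (weights[valid] + w_new)
    weights[valid] += w_new
    return tsdf, weights
```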
Cleaning:

```shell
python ./src/clean_mesh_trimesh.py -i ./data/rp_dennis_posed_004.ply -d ./data/dataset_RP/ -s rp_dennis_posed_004 -dt RP
```

The output mesh is located in `./data/rp_dennis_posed_004_cleaned.ply`.
## Learned Prior

### DTU

Create the DTU dataset:

```shell
# Edit the parameters inside the script, then run:
python ./src/create_dtu_dataset.py
```
Download the initial depthmaps and extract them into `./data/dataset_dtu/depthmaps_init_1`.

Download the pre-trained model and place it in `./checkpoints/`.
Run the optimization:

```shell
# Edit the scripts ./configs/conf_dtu.yaml and ./src/run_optimization_dtu.sh, then run:
sh ./src/run_optimization_dtu.sh
```
Prepare data for TSDF fusion and pointcloud fusion:

```shell
# Edit the parameters inside the script ./src/reconstruct.sh, then run:
sh ./src/reconstruct.sh scan083 9999 depthmaps_optimized/1
```
Pointcloud fusion:

```shell
python ./src/pointcloud_fusion.py -d ./data/dataset_dtu/ -i ./data/dataset_dtu/depthmaps_optimized/1/scan083/9999/ -dt DTU
python ./src/clean_pcd_trimesh.py -i ./data/dataset_dtu/depthmaps_optimized/1/scan083/9999/fused.ply -d ./data/dataset_dtu/ -dt DTU -s scan083
```
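Pointcloud fusion boils down to back-projecting every valid depth pixel into world space and concatenating the points from all views. The following sketch shows that lifting step (it is not the repository's `pointcloud_fusion.py`; a pinhole camera model with 3x3 intrinsics `K` and a 4x4 camera-to-world pose is assumed):

```python
import numpy as np

def backproject(depth, K, cam_to_world):
    """depth: (H, W); K: 3x3 intrinsics; cam_to_world: 4x4 pose. Returns (N, 3) world points."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    z = depth.reshape(-1)
    valid = z > 0                                   # skip pixels with no depth
    pix = np.stack([u.reshape(-1), v.reshape(-1), np.ones(h * w)], axis=1)[valid]
    cam_pts = (np.linalg.inv(K) @ pix.T).T * z[valid, None]   # rays scaled by depth
    homo = np.hstack([cam_pts, np.ones((len(cam_pts), 1))])
    return (homo @ cam_to_world.T)[:, :3]           # transform into world coordinates
```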
Evaluation:

```shell
python ./src/eval.py --data ./data/dataset_dtu/depthmaps_optimized/1/scan083/9999/fused.ply --mode pcd --scan 83 --dataset_dir /disk/data/DTU/ --vis_out_dir ./data/eval
```
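The evaluation measures distances between the reconstructed and reference pointclouds. As a rough illustration of the metric family involved, here is a brute-force symmetric Chamfer-style distance; note that the actual `eval.py` follows the DTU protocol, which additionally masks out unobserved regions:

```python
import numpy as np

def chamfer(a, b):
    """Symmetric mean nearest-neighbour distance between two (N, 3) pointclouds."""
    d_ab = np.min(np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2), axis=1)
    d_ba = np.min(np.linalg.norm(b[:, None, :] - a[None, :, :], axis=2), axis=1)
    return 0.5 * (d_ab.mean() + d_ba.mean())
```

Real evaluations use a KD-tree for the nearest-neighbour queries; the quadratic version above is only for clarity.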
TSDF fusion:

```shell
PATH=/usr/local/cuda-11.2/bin:$PATH python ./src/tsdf_fusion.py -i data/dataset_dtu/tsdf_fusion_data/ -o data/scan083.ply
```
Cleaning:

```shell
python ./src/clean_mesh_trimesh.py -i ./data/scan083.ply -d ./data/dataset_dtu -s scan083 -dt DTU
```

The output mesh is located in `./data/scan083_cleaned.ply`.
### BlendedMVS

Download the BlendedMVS examples and extract them into `./data/dataset_blendedmvs/depthmaps_init_1`.

Download the pre-trained model and place it in `./checkpoints/`.
Run the optimization:

```shell
# Edit the scripts ./configs/conf_blendedmvs.yaml and ./src/run_optimization_blendedmvs.sh, then run:
sh ./src/run_optimization_blendedmvs.sh
```
Prepare data for pointcloud fusion:

```shell
# Edit the parameters inside the script ./src/reconstruct.sh, then run:
sh ./src/reconstruct.sh scene_51 99990 depthmaps_optimized/1
```
Pointcloud fusion:

```shell
python ./src/pointcloud_fusion.py -d ./data/dataset_blendedmvs/ -i ./data/dataset_blendedmvs/depthmaps_optimized/1/scene_51/99990/ -dt BlendedMVS --pp
```
Compute pointcloud normals:

```shell
python ./src/pcd_to_mesh_open3d.py -i ./data/dataset_blendedmvs/depthmaps_optimized/1/scene_51/99990/fused.ply
```
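Normal estimation for a pointcloud is typically done by local PCA: the normal at a point is the eigenvector of its neighbourhood's covariance matrix with the smallest eigenvalue (the direction of least variance). A minimal sketch of that idea, independent of the Open3D-based script above:

```python
import numpy as np

def estimate_normal(neighbors):
    """neighbors: (K, 3) points around the query point. Returns a unit normal (sign ambiguous)."""
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered                 # 3x3 covariance (unnormalised)
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    return eigvecs[:, 0]                        # direction of least variance
```

In practice the sign of each normal is then disambiguated, e.g. by orienting it towards the camera that observed the point.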
Poisson reconstruction using https://github.com/mkazhdan/PoissonRecon:

```shell
# Edit the script to set the path to PoissonRecon, then run:
python ./src/run_poisson.py -i data/dataset_blendedmvs/depthmaps_optimized/1/scene_51/99990/fused_nmls.ply --out data/dataset_blendedmvs/depthmaps_optimized/1/scene_51/99990/mesh.ply --threads 16 --depth 13 --trim 7
```

The output mesh is located in `./data/dataset_blendedmvs/depthmaps_optimized/1/scene_51/99990/mesh_trimmed.ply`.
Finally, clean the mesh using Taubin smoothing in Meshlab.
## Re-train the photoconsistency network

Download the DTU original dataset (Rectified, SampleSet, Points) from: DTU

Compute the Poisson reconstruction of the STL reference pointclouds, using Poisson Reconstruction:

```shell
mkdir -p /disk/data/DTU/Surfaces/spsr
# Edit the parameters inside the script, then run:
python ./src/reconstruct_dtu_meshes.py -i /disk/data/DTU/Points/stl/ -o /disk/data/DTU/Surfaces/spsr
```

Create the dataset:

```shell
# Edit the parameters, then run:
python ./src/create_dtu_dataset.py
```
Compute ground truth depthmaps (for each scan):

```shell
# Original resolution
python ./src/compute_depthmaps.py -d data/dataset_dtu_full/ -s scan001 -o depthmaps_gt_1 -m mesh_gt --override -dt DTU
# Lower resolution
python ./src/compute_depthmaps.py -d data/dataset_dtu_full/ -s scan001 -o depthmaps_gt_0.5 -m mesh_gt --override -dt DTU --rescale 0.5
```
Train:

```shell
# Original resolution
python ./src/train_photo.py -n model_dtu -d ./data/dataset_dtu_full/ --num_rays 5000 --num_workers 12 --freq_val 5 --freq_log 5 --num_epoch 15000 --num_samples 2 -dt DTU -ol 2 -oh 100 --depthmaps_gt depthmaps_gt_1
# Lower resolution
python ./src/train_photo.py -n model_dtu_low_res -d data/dataset_dtu_full --num_rays 5000 --num_workers 12 --freq_val 5 --freq_log 5 --num_epoch 15000 --num_samples 2 -dt DTU -ol 2 -oh 100 --depthmaps_gt depthmaps_gt_0.5 --rescale 0.5
# Continue from a checkpoint
python ./src/train_photo.py -n exp_dtu -d data/dataset_dtu_full/ --num_rays 5000 --num_workers 12 --freq_val 5 --freq_log 5 --num_epoch 15000 --num_samples 2 -dt DTU -ol 2 -oh 100 --depthmaps_gt depthmaps_gt_1 --load_checkpoint ./logs/model_dtu/version_0/checkpoints/last.ckpt
# View the training logs
tensorboard --logdir ./logs/
```
- `-ol` and `-oh` control the range `[low_offset, high_offset]` used to select negative samples with respect to the surface along the camera ray.
- `-ol` can be set to 2 for the full training.
- `-oh` can be set to 100 at the beginning of the training to make the classification task easier, with negative samples quite far from the surface. It can then be reduced to 50 and 20 to make the classification harder and the network better.
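To make the offset range concrete, here is an illustrative sketch (a hypothetical helper, not the repository's sampling code) of drawing negative samples whose distance to the surface along the ray lies in `[ol, oh]`, on either side of the surface:

```python
import numpy as np

def sample_negative_depths(surface_depth, ol, oh, num, rng):
    """Draw `num` depths whose offset from the surface lies in [ol, oh], on either side."""
    offsets = rng.uniform(ol, oh, size=num)        # magnitudes in [low_offset, high_offset]
    signs = rng.choice([-1.0, 1.0], size=num)      # in front of or behind the surface
    return surface_depth + signs * offsets
```

With a large `oh`, most negatives are easy (far from the surface); shrinking `oh` during training concentrates them near the surface, which makes the classification task progressively harder.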
## Acknowledgement

Some scripts in this repository are based on code from PIFu (rendering of Renderpeople subjects), Volumetric TSDF Fusion of RGB-D Images in Python (TSDF fusion code), and DTU Eval Python (evaluation). We thank the authors for sharing their code.