Hybrid Image-based Rendering for Free-view Synthesis {#hybrid_ibrPage}
Introduction
This project contains the implementation of:
[Prakash et al. 21] Hybrid Image-based Rendering for Free-view Synthesis (http://www-sop.inria.fr/reves/Basilic/2021/PLRD21/; Inria project page: https://repo-sam.inria.fr/fungraph/hybrid-ibr/)
If you use the code, we would greatly appreciate it if you could cite the corresponding papers:
@Article{PLRD21,
author = "Prakash, Siddhant and Leimk{\"u}hler, Thomas and Rodriguez, Simon and Drettakis, George",
title = "Hybrid Image-based Rendering for Free-view Synthesis",
journal = "Proceedings of the ACM on Computer Graphics and Interactive Techniques",
number = "1",
volume = "4",
month = "May",
year = "2021",
url = "http://www-sop.inria.fr/reves/Basilic/2021/PLRD21"
}
and the sibr system:
@misc{sibr2020,
author = "Bonopera, Sebastien and Hedman, Peter and Esnault, Jerome and Prakash, Siddhant and Rodriguez, Simon and Thonat, Theo and Benadel, Mehdi and Chaurasia, Gaurav and Philip, Julien and Drettakis, George",
title = "sibr: A System for Image Based Rendering",
year = "2020",
url = "https://sibr.gitlabpages.inria.fr/"
Authors
Siddhant Prakash, Thomas Leimkühler, Simon Rodriguez, and George Drettakis. The paper was presented at the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D) 2021.
How to use
The code has been developed for Windows 10, which is currently the only platform we support. A Linux version will follow shortly.
Use the binary distribution
The easiest way to use SIBR to run [Prakash 21] is to download the binary distribution. All steps described below, including all preprocessing for your own datasets, will work with this distribution. Download the distribution from the page: https://sibr.gitlabpages.inria.fr/download.html (Hybrid IBR, ???Mb); unzip the file and rename the directory "install". To run correctly, you will need CUDA 10.1 and the latest NVIDIA drivers installed.
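As a minimal sketch of these steps (the dataset path is only an example; see Datasets below):
## after unzipping, rename the extracted folder to "install", then:
cd install/bin
## run the hybrid renderer on a downloaded dataset
./hybrid_render_app_rwdi.exe --path datasets/Ponche --texture-width 1920 --rendering-size 1280 720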
Checkout the code
Check out the sibr core code and the related repositories:
- clone sibr_core, along with fribr_framework and hybrid_ibr in projects:
- this repo: hybrid_ibr https://gitlab.inria.fr/sibr/projects/hybrid_ibr
- fribr: https://gitlab.inria.fr/sibr/fribr_framework which contains the basic fribr code components
For this use the following commands:
## through HTTPS
git clone https://gitlab.inria.fr/sibr/sibr_core.git
## through SSH
git clone git@gitlab.inria.fr:sibr/sibr_core.git
Then go to src/projects and clone the other projects:
## through HTTPS
git clone https://gitlab.inria.fr/sibr/projects/hybrid_ibr.git
git clone https://gitlab.inria.fr/sibr/fribr_framework.git
## through SSH
git clone git@gitlab.inria.fr:sibr/projects/hybrid_ibr.git
git clone git@gitlab.inria.fr:sibr/fribr_framework.git
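After these clone steps the source tree should look roughly like this (directory names assume the default clone names):
sibr_core/
  src/
    projects/
      hybrid_ibr/
      fribr_framework/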
You need to install SIBR first to run this code base. See also the general installation instructions for sibr.
Build & Install
You can build and install hybrid_ibr by running ALL_BUILD and/or INSTALL in the sibr_projects.sln solution (as mentioned in Compiling), or through the hybrid_ibr*-specific targets in the same solution.
Don't forget to build INSTALL if you use ALL_BUILD.
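If you prefer the command line to the Visual Studio GUI, the equivalent steps look roughly like this (a sketch assuming the solution was generated with CMake into a build directory, as in the general sibr compilation instructions; the directory name is a placeholder):
## build and install all targets in RelWithDebInfo mode from the generated solution
cmake --build build --target ALL_BUILD --config RelWithDebInfo
cmake --build build --target INSTALL --config RelWithDebInfo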
Running the renderer & comparisons
We provide three apps for easy visualization of the algorithms described in [Prakash 21]:
- hybrid_render_app - To run the hybrid algorithm described in the paper
- pvm_render_app - To run the PVM blending algorithm described in the paper
- hybrid_compare_app - A viewer to compare the Textured Mesh, ULR, and Hybrid algorithms.
After installing the Projects above, you can run the corresponding renderer as follows:
./hybrid_render_app_rwdi.exe --path PATH_TO_DATASET --texture-width 1920 --rendering-size 1280 720
./pvm_render_app_rwdi.exe --path PATH_TO_DATASET --texture-width 1920 --rendering-size 1280 720
./hybrid_compare_app_rwdi.exe --path PATH_TO_DATASET --texture-width 1920 --rendering-size 1280 720
Our interactive viewer has a main view running the algorithm, a per-view mesh debug view for visualizing the grid and selected voxels, and a top view to visualize the positions of the calibrated cameras. By default, the debug views (top view and per-view mesh debug view) are inactive; you can activate them using the views menu. By default you are in WASD mode, and can toggle to trackball using the "y" key. Please see the Interface page for more details.
To download the datasets compatible with the viewers, please see Datasets below.
Playing paths from the command line
Paths can be played back by running the renderer in offscreen mode:
./hybrid_render_app_rwdi.exe [OTHER OPTIONS] --offscreen --pathFile path.(out|lookat|tst|path) [--outPath optionalOutputPath --noExit]
By default, the application exits once the path has been rendered (use --noExit to keep it open). This is the easiest way to compare algorithms. All camera paths used in the supplemental video for each scene are added to the datasets page. Again, please see Datasets below.
For example, the camera path for the Ponche scene can be found here: supplemental.path.
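For instance, to render that path offscreen for the Ponche dataset and write the output frames to a folder of your choice (the path file location and output folder below are only examples), you could run:
./hybrid_render_app_rwdi.exe --path datasets/Ponche --texture-width 1920 --rendering-size 1280 720 --offscreen --pathFile datasets/Ponche/supplemental.path --outPath datasets/Ponche/renders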
Bugs and Issues
We track bugs and issues through the Issues interface on GitLab. The Inria GitLab does not allow creation of external accounts, so if you have a bug or issue please email sibr@inria.fr and we will either create a guest account or create the issue on our side.
Datasets
We provide pre-processed datasets that you can run directly with the renderer on the project website. We also provide utilities to create your own dataset from images using the process described below ("Preprocessing to create a new dataset from your images").
Note: Preprocessing large datasets or datasets with large images can be slow and require a lot of memory. We recommend using images with width < 2500 pixels. For datasets with more than ~50 images, 64 GB of RAM may be required.
Example Datasets
Some example datasets can be found here: https://repo-sam.inria.fr/fungraph/hybrid-ibr/datasets/
These datasets have been generated using COLMAP and RealityCapture, and preprocessed with the current version of our tools, which you can find here. These datasets have the output_hybrid.pcloud file precomputed, so you can directly use the renderer for [Prakash 21]. You can download the Ponche dataset here:
https://repo-sam.inria.fr/fungraph/hybrid-ibr/datasets/Ponche/Ponche.zip
If you unzipped the dataset to datasets\Ponche in install\bin, you can run [Prakash 21] by going to the bin folder and running the following command:
./hybrid_render_app_rwdi.exe --path datasets/Ponche --texture-width 1920 --rendering-size 1280 720
Several other datasets are available on the same page, as well as datasets for other algorithms.
Preprocessing to create a new dataset from your images
If you already have a SIBR calibrated scene with COLMAP as described here, you can run the hybrid preprocess script to create the hybrid dataset containing the harmonized images and the output_hybrid.pcloud required for running the renderer. Use the following command to create the dataset:
python ibr_hybrid_preprocess.py -r -i PATH_TO_DATASET [ -w 1920 ]
-r: project compiled in RelWithDebInfo mode
-i: path to dataset [ROOT] folder
-w: (optional) maximum image width at which the data should be preprocessed.
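For example, assuming a RelWithDebInfo build and the Ponche dataset unzipped as described above (the relative dataset path is an assumption; adjust it to your layout), you would go to install/scripts and run:
python ibr_hybrid_preprocess.py -r -i ../bin/datasets/Ponche -w 1920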
For preprocessing from scratch, you will need COLMAP (https://colmap.github.io/, version 3.6) and Meshlab (https://www.meshlab.net/) installed (ATTN: version 2020.07 only). Please note that the method we used in the original paper (i.e., using the COLMAP calibration to create a RealityCapture MVS mesh) is no longer available in RealityCapture.
We provide a script fullColmapPreprocess.py that requires a directory images containing the multi-view dataset. The script does camera calibration and MVS using COLMAP to create a SIBR scene.
By default, the script will run COLMAP, create the Delaunay mesh, and then run the per-view mesh refinement using this mesh instead of a RealityCapture mesh. With the option --withRC it expects the RealityCapture mesh and calibration in a directory sfm_mvs_rc (see the section "Creating Dataset with RealityCapture" in the documentation), aligns the meshes, and then also runs the preprocessing.
To run the script, create your dataset directory PATH_TO_DATASET, and put your input images in PATH_TO_DATASET/images, go to install/scripts and run:
python fullColmapPreprocess.py --path PATH_TO_DATASET --colmapPath PATH_TO_COLMAP_EXECUTABLES [--meshsize (200|250|300|350|400) ]
where PATH_TO_COLMAP_EXECUTABLES is the directory containing COLMAP.bat.
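For example (all paths here are placeholders for your own setup):
python fullColmapPreprocess.py --path C:/data/MyScene --colmapPath C:/colmap --meshsize 300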
If you have followed the steps in "Creating Dataset with RealityCapture" and placed the output in a directory PATH_TO_RC, you can run the command:
python fullColmapPreprocess.py --path PATH_TO_DATASET --withRC --RCPath PATH_TO_RC --colmapPath PATH_TO_COLMAP_EXECUTABLES
Note that you must follow the instructions for saving RealityCapture data strictly: i.e., the textured mesh must be saved as textured.obj and the calibration as bundle.out.
The directory structure should look like this after the preprocess:
[ROOT]/hybrid/output_hybrid.pcloud
[ROOT]/hybrid/harmonized_images/*.jpg
[ROOT]/hybrid/harmonized_images/mesh_perVertexVariance.ply
[ROOT]/hybrid/harmonized_images/specTexture_u1_v1.png
[ROOT]/deep_blending/depthmaps/*.bin
[ROOT]/deep_blending/pvmeshes/*.ply
[ROOT]/deep_blending/nvm/depthmaps/*.bin
[ROOT]/deep_blending/nvm/images/*.jpg
[ROOT]/deep_blending/nvm/scene.nvm
[ROOT]/capreal/mesh.obj (and associated texture files)
[ROOT]/capreal/mesh.ply
[ROOT]/capreal/undistorted/*.jpg
[ROOT]/capreal/undistorted/*_P.txt
[ROOT]/colmap/database.db
[ROOT]/colmap/stereo/run-colmap-geometric.sh
[ROOT]/colmap/stereo/run-colmap-photometric.sh
[ROOT]/colmap/stereo/images/*.jpg
[ROOT]/colmap/stereo/sparse/cameras.txt
[ROOT]/colmap/stereo/sparse/images.txt
[ROOT]/colmap/stereo/sparse/points3D.txt
[ROOT]/colmap/stereo/stereo/depth_maps/*.jpg.photometric.bin
[ROOT]/colmap/stereo/stereo/depth_maps/*.jpg.geometric.bin
[ROOT]/colmap/stereo/stereo/normal_maps/*.jpg.photometric.bin
[ROOT]/colmap/stereo/stereo/normal_maps/*.jpg.geometric.bin
[ROOT]/colmap/stereo/stereo/consistency_graphs/*.jpg.photometric.bin
[ROOT]/colmap/stereo/stereo/consistency_graphs/*.jpg.geometric.bin
[ROOT]/colmap/stereo/stereo/refined_depth_maps/*.jpg.photometric.bin
[ROOT]/colmap/stereo/stereo/refined_depth_maps/*.jpg.geometric.bin
[ROOT]/images/*.jpg
Hybrid IBR preprocessing
"Under the hood" Details
Color Harmonization
The first step in ibr_hybrid_preprocess.py calls the color_harmonization app.
If you only wish to perform color harmonization, go to the install/bin folder and run the following command:
./color_harmonize_rwdi.exe --path <path/to/dataset/root>
This should be enough if you just want to use the default mesh.
The harmonized images will be saved in the <path/to/dataset/root>/hybrid/harmonized_images/ folder.
If you want to use a custom mesh, use the following command:
./color_harmonize_rwdi.exe --path <path/to/dataset/root> --mesh <path/to/custom/mesh/> --texture <path/to/corresponding/mesh/texture>
The harmonized images will be saved in the <path/to/dataset/root>/hybrid/harmonized_images/ folder.
Per-view Mesh Generation
The second step runs the per-view mesh refinement described in the paper [Hedman 18] and saves the per-view meshes in the <path/to/dataset/root>/deep_blending folder.
The script first runs recon_cmd_rwdi.exe and depthmapmesher_rwdi.exe to create the per-view meshes.
Finally, the script runs patchCloud_generator_rwdi.exe to generate the pcloud and stores it as <path/to/dataset/root>/hybrid/output_hybrid.pcloud.
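As a rough sketch, the order in which ibr_hybrid_preprocess.py chains these tools is as follows (command-line options are omitted here because they differ per tool; this is only an outline of the pipeline, not the actual script invocations):
## overall order of the hybrid preprocessing pipeline (outline only)
color_harmonize_rwdi.exe        ## step 1: harmonize the input images
recon_cmd_rwdi.exe              ## step 2a: per-view reconstruction [Hedman 18]
depthmapmesher_rwdi.exe         ## step 2b: build the per-view meshes
patchCloud_generator_rwdi.exe   ## step 3: generate hybrid/output_hybrid.pcloud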