
Point Based Neural Rendering with Per-View Optimization {#pointbasedNeuralRenderingPage}

Georgios Kopanas, Julien Philip, Thomas Leimkühler, George Drettakis
| Webpage | Full Paper | Comparisons | Video | Presentation |
Teaser image

Abstract: There has recently been great interest in neural rendering methods. Some approaches use 3D geometry reconstructed with Multi-View Stereo (MVS) but cannot recover from the errors of this process, while others directly learn a volumetric neural representation, but suffer from expensive training and inference. We introduce a general approach that is initialized with MVS, but allows further optimization of scene properties in the space of input views, including depth and reprojected features, resulting in improved novel-view synthesis. A key element of our approach is our new differentiable point-based pipeline, based on bi-directional Elliptical Weighted Average splatting, a probabilistic depth test and effective camera selection. We use these elements together in our neural renderer, which outperforms all previous methods both in quality and speed in almost all scenes we tested. Our pipeline can be applied to multi-view harmonization and stylization in addition to novel-view synthesis.
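As background for readers new to Elliptical Weighted Average (EWA) splatting, the sketch below evaluates the screen-space weight of a single elliptical Gaussian footprint. This is a generic, minimal illustration of the idea only, not the paper's bi-directional formulation or its CUDA implementation; the function name and array shapes are hypothetical.

# Illustrative sketch (not the authors' implementation): the screen-space
# weight of an elliptical Gaussian splat, the core quantity in EWA splatting.
import numpy as np

def ewa_splat_weight(pixel, center, cov2d):
    """Evaluate a 2D Gaussian reconstruction kernel at a pixel.

    pixel, center: (2,) screen-space positions
    cov2d: (2, 2) projected covariance of the point's elliptical footprint
    """
    d = pixel - center
    inv_cov = np.linalg.inv(cov2d)
    return float(np.exp(-0.5 * d @ inv_cov @ d))

# Example: an anisotropic footprint stretched along the x axis
w = ewa_splat_weight(np.array([10.5, 7.0]),
                     np.array([10.0, 7.0]),
                     np.array([[2.0, 0.0], [0.0, 0.5]]))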

Content

Our implementation consists of:

  1. A Python module used for training
  2. A plugin project on top of the SIBR platform that pre-processes the scene and interactively renders the results of the training.

Recommended Setup

  • Tested only on the Windows 10 platform
  • A GPU with at least 12 GB of memory for small scenes and 24 GB for larger scenes, NVIDIA drivers, the CUDA 10.0 Toolkit, and cuDNN 7.5
  • Python 3.8 64-bit
  • Anaconda3 virtual environment
  • CMake 3.18.0 or later
  • Visual Studio 16 2019 Community

For preparing scenes from scratch you will also need COLMAP; the path to its executables is passed to the preprocessing script below.

Compile SIBR

This is a short step-by-step guide to compiling SIBR with all the necessary projects. For the full documentation, please refer to the SIBR documentation pages.

Compiling with scene preprocessing:

cd PROJECT_DIR
git clone https://gitlab.inria.fr/sibr/sibr_core.git
cd sibr_core/src/projects/
git clone https://gitlab.inria.fr/sibr/projects/torchgl_interop.git
git clone https://gitlab.inria.fr/gkopanas/pointbased_neural_rendering.git
git clone https://gitlab.inria.fr/sibr/projects/fribr_framework.git
git clone https://gitlab.inria.fr/sibr/projects/tfgl_interop.git
git clone https://gitlab.inria.fr/sibr/projects/inside_out_deep_blending.git
cd torchgl_interop
git checkout origin/pbnr -b pbnr
cd ../../../
cmake.exe -S./ -B./build -DBUILD_DOCUMENTATION:BOOL=ON -DBUILD_IBR_DATASET_TOOLS:BOOL=ON -DBUILD_IBR_POINTBASED_NEURAL_RENDERING:BOOL=ON -DBUILD_IBR_TORCHGL_INTEROP:BOOL=ON -DBUILD_IBR_FRIBR_FRAMEWORK:BOOL=ON -DBUILD_IBR_INSIDE_OUT_DEEP_BLENDING:BOOL=ON -DBUILD_IBR_TFGL_INTEROP:BOOL=ON -G "Visual Studio 16 2019"
cmake --build ./build --target ALL_BUILD --config RelWithDebInfo
cmake --build ./build --target INSTALL --config RelWithDebInfo
cp ./extlibs/libtorch/lib/caffe2_nvrtc.dll ./install/bin/
cp ./extlibs/libtorch/lib/nvrtc* ./install/bin/

Compiling without scene preprocessing:

cd PROJECT_DIR
git clone https://gitlab.inria.fr/sibr/sibr_core.git
cd sibr_core/src/projects/
git clone https://gitlab.inria.fr/sibr/projects/torchgl_interop.git
git clone https://gitlab.inria.fr/gkopanas/pointbased_neural_rendering.git
cd torchgl_interop
git checkout origin/pbnr -b pbnr
cd ../../../
cmake.exe -S./ -B./build -DBUILD_DOCUMENTATION:BOOL=ON -DBUILD_IBR_DATASET_TOOLS:BOOL=ON -DBUILD_IBR_POINTBASED_NEURAL_RENDERING:BOOL=ON -DBUILD_IBR_TORCHGL_INTEROP:BOOL=ON -G "Visual Studio 16 2019"
cmake --build ./build --target ALL_BUILD --config RelWithDebInfo
cmake --build ./build --target INSTALL --config RelWithDebInfo
cp ./extlibs/libtorch/lib/caffe2_nvrtc.dll ./install/bin/
cp ./extlibs/libtorch/lib/nvrtc* ./install/bin/

Training a Scene

Prepare your own scene

Create your dataset directory PATH_TO_DATASET and put your input images in PATH_TO_DATASET/images, then run:

cd PROJECT_DIR/sibr_core/install/scripts
python db_dataset_create.py --path PATH_TO_DATASET --colmapPath PATH_TO_COLMAP_EXECUTABLES [--meshsize (200|250|300|350|400)]
cd ../bin/
./SIBR_pbnr_opengl_impl_rwdi.exe --path PATH_TO_DATASET --preprocess_mode
The final outcome in PATH_TO_DATASET should look like this¹:
.
+-- capreal
|   +-- undistorted
|   +-- mesh.ply
|   +-- texture.png
+-- colmap
|   +-- sparse
|   +-- stereo
|   |     +-- meshed_delaunay.ply
|   |     +-- sparse
|   |     |     +-- images.txt
|   |     |     +-- cameras.txt
|   +-- database.db
+-- deep_blending
+-- images
|   +-- [img_name#1].png
|   +-- ...
|   +-- [img_name#N].png
+-- pbnrScene
|   +-- depth_maps_type_2
|   |    +-- [img_name#1].png
|   |    +-- [img_name#1].ts
|   |    +-- ...
|   |    +-- [img_name#N].png
|   |    +-- [img_name#N].ts
|   +-- images
|   |    +-- [img_name#1].png
|   |    +-- [img_name#1].ts
|   |    +-- ...
|   |    +-- [img_name#N].png
|   |    +-- [img_name#N].ts
|   +-- normal_maps
|   |    +-- [img_name#1].png
|   |    +-- [img_name#1].ts
|   |    +-- ...
|   |    +-- [img_name#N].png
|   |    +-- [img_name#N].ts
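As a quick sanity check, a short script like the following can confirm that preprocessing produced the expected entries. This is a minimal sketch based only on the layout above; the function name is ours, not part of the official tooling.

# Minimal sketch: verify that the expected dataset layout exists after
# preprocessing. Paths are taken from the tree above; adjust as needed.
from pathlib import Path

def check_dataset(root):
    root = Path(root)
    expected = [
        "capreal/mesh.ply",
        "colmap/stereo/sparse/images.txt",
        "colmap/stereo/sparse/cameras.txt",
        "pbnrScene/depth_maps_type_2",
        "pbnrScene/images",
        "pbnrScene/normal_maps",
    ]
    missing = [p for p in expected if not (root / p).exists()]
    if missing:
        print("Missing entries:", *missing, sep="\n  ")
    else:
        print("Dataset layout looks complete.")

check_dataset("PATH_TO_DATASET")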

Create a JSON file from the following template; we will need it later as input to the training script.

SCENE_JSON_FILE.json

{
    "scenes":
    [
        {
            "name": "museum",
            "path": "F:/gkopanas/pointbasedIBR/scenes/deep_blending/museum/Museum-1_perview/",
            "type": "Colmap"
        }
    ]
}
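To catch malformed files early, you can load the JSON and check the fields shown in the template before launching training. This is a minimal sketch based only on the keys above; it is not part of the official scripts.

# Minimal sketch: load SCENE_JSON_FILE.json and check the fields the
# training script expects, based on the template above.
import json

with open("SCENE_JSON_FILE.json") as f:
    config = json.load(f)

for scene in config["scenes"]:
    # every scene entry should carry a name, a path, and a type
    assert {"name", "path", "type"} <= scene.keys(), scene
    print(f'{scene["name"]}: {scene["path"]} ({scene["type"]})')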

¹ For conciseness we mention only the files we use; more files and folders exist.

Prepared Scenes

  • Museum: Input Files | Pre-Trained Files
  • Hugo: Input Files | Pre-Trained Files
  • Ponche: Input Files | Pre-Trained Files
  • Tree: Input Files | Pre-Trained Files
  • Street: Input Files | Pre-Trained Files | Pre-Trained Files (for small GPUs)
  • Stairs: Input Files | Pre-Trained Files
  • Truck: Images¹

¹ We provide only images for Truck, since hosting the whole scene would not have been storage-efficient. To use this scene, you need to run the pre-processing step locally.

Run Training script

Prepare your Python environment by running the following in a terminal:

conda create --prefix PATH_TO_YOUR_ENV python=3.8
conda activate PATH_TO_YOUR_ENV
# Follow the instructions to install PyTorch 1.9 from https://pytorch.org/get-started/locally/
conda install numpy tensorboard
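Before continuing, it is worth verifying that PyTorch sees your GPU. The following is a small optional check (not part of the official scripts) against the requirements listed in the Recommended Setup.

# Optional sanity check: run inside the activated conda environment.
import torch

print("PyTorch:", torch.__version__)          # expect 1.9.x
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    props = torch.cuda.get_device_properties(0)
    print(f"VRAM: {props.total_memory / 1024**3:.1f} GB")  # >= 12 GB recommended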

Next, we compile our custom differentiable rasterization CUDA kernels. Open a Developer Command Prompt for Visual Studio 2019 and run:

conda activate PATH_TO_YOUR_ENV
cd PROJECT_DIR\sibr_core\src\projects\pointbased_neural_rendering\pbnr_pytorch\diff_rasterization\
set DISTUTILS_USE_SDK=1
python setup.py install
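A quick way to confirm the build succeeded is to try importing the extension from Python. Note that the module name below is an assumption for illustration; the actual name is whatever setup.py registers.

# Hypothetical smoke test: try importing the freshly built extension.
try:
    import diff_rasterization  # name assumed; check setup.py if this fails
    print("Custom rasterization kernels imported successfully.")
except ImportError as e:
    print("Extension not found; rebuild with `python setup.py install`:", e)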

Now we can run a training session for a scene:

python train_full_pipeline.py -i SCENE_JSON_FILE.json -o run_name

For more details on the JSON file, refer to the "Prepare your own scene" section.

Running the Interactive Renderer

To run the interactive renderer, either wait for the training to finish or download the pre-trained files of a scene.

PROJECT_DIR\sibr_core\install\bin\SIBR_pbnr_opengl_impl_rwdi.exe --path SCENE_PATH --ogl_data_path SCENE_TRAINED_DATA\ogl_data.dat --scene_name [name defined in the json file above] --tensorboard_path SCENE_TRAINED_DATA --iteration 100000  --colmap_fovXfovY_flag --splat_layers 10

To play back a camera path and save the renderings to disk:

PROJECT_DIR\sibr_core\install\bin\SIBR_pbnr_opengl_impl_rwdi.exe --path SCENE_PATH --pathFile [PATH_FILE.path] --outPath [PATH_TO_OUTPUT_FOLDER] --ogl_data_path SCENE_TRAINED_DATA\ogl_data.dat --scene_name [name defined in the json file above] --tensorboard_path SCENE_TRAINED_DATA --iteration 100000  --colmap_fovXfovY_flag --splat_layers 10