Neural Point Catacaustics for Novel-View Synthesis of Reflections

Georgios Kopanas, Thomas Leimkühler, Gilles Rainer, Clément Jambon, George Drettakis
| Webpage | Full Paper | Comparisons | Video |
Teaser image

Abstract: View-dependent effects such as reflections pose a substantial challenge for image-based and neural rendering algorithms. Above all, curved reflectors are particularly hard, as they lead to highly non-linear reflection flows as the camera moves. We introduce a new point-based representation to compute Neural Point Catacaustics allowing novel-view synthesis of scenes with curved reflectors, from a set of casually-captured input photos. At the core of our method is a neural warp field that models catacaustic trajectories of reflections, so complex specular effects can be rendered using efficient point splatting in conjunction with a neural renderer. One of our key contributions is the explicit representation of reflections with a reflection point cloud which is displaced by the neural warp field, and a primary point cloud which is optimized to represent the rest of the scene. After a short manual annotation step, our approach allows interactive high-quality renderings of novel views with accurate reflection flow. Additionally, the explicit representation of reflection flow supports several forms of scene manipulation in captured scenes, such as reflection editing, cloning of specular objects, reflection tracking across views, and comfortable stereo viewing.

Install

Requirements:

  • Windows and Linux
  • A GPU with at least 12 GB of memory; for training we recommend at least 24 GB.
  • CUDA 11.3 Toolkit
  • PyTorch 1.12

Open your favorite terminal and run:

git clone https://gitlab.inria.fr/gkopanas/neural-catacaustics.git $REPO_PATH
cd $REPO_PATH
conda create --name neural_catacaustics python=3.8
conda activate neural_catacaustics
conda install pytorch==1.12.0 torchvision==0.13.0 cudatoolkit=11.3 -c pytorch
pip install -r ./requirements.txt

For Windows users, the next steps should be done in a Developer Command Prompt for VS:

conda activate neural_catacaustics
cd $REPO_PATH/diff_rasterization/
python setup.py install
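
Optionally, you can sanity-check that PyTorch was installed with CUDA support and sees your GPU before training:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"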

Prepare your own scene

(Coming Soon)

Prepared Scenes

The folder format is designed to accommodate this codebase. If you want to use this dataset with other codebases (e.g., NeRF), you need to use the "cropped_train_cameras" as a starting point.


Compost: Scene Files | Trained Model
Concave Bowl: Scene Files | Trained Model
Crazy Blade: Scene Files | Trained Model
Hallway Lamp: Scene Files | Trained Model
Multi Bounce: Scene Files | Trained Model
Silver Vase: Scene Files | Trained Model
Watering Can: Scene Files | Trained Model

Train a Scene from Scratch

Download the "Scene Files" of the scens you are interested in and unzip them in $SCENES_FOLDER

python train_global_pc.py -i $SCENES_FOLDER/concave_bowl2/rcScene/ -o ./final_config/concave_bowl2/viewspacedense_
python train_global_pc.py -i $SCENES_FOLDER/wateringcan2/rcScene/ -o ./final_config/wateringcan2/viewspacedense_
python train_global_pc.py -i $SCENES_FOLDER/multibounce/rcScene/ -o ./final_config/multibounce/viewspacedense_
python train_global_pc.py -i $SCENES_FOLDER/crazy_blade2/rcScene/ -o ./final_config/crazy_blade2/viewspacedense_
python train_global_pc.py -i $SCENES_FOLDER/compost/rcScene/ --diffuse_xyz_lr 0.0005 -o ./final_config/compost/viewspacedense_xyz0.0005_
python train_global_pc.py -i $SCENES_FOLDER/hallway_lamp/rcScene --densify_grad_threshold 0.0004 -o ./final_config/hallway_lamp/viewspacedense_grad0.0004_
python train_global_pc.py -i $SCENES_FOLDER/silver_vase2/rcScene/ --densify_grad_threshold 0.0004 --lambda_tv 0.0 --lamda_specular 0.001 -o ./final_config/silver_vase2/viewspacedense_grad0.0004_
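
If you want to launch the scenes that use the default settings in one go, a simple shell loop over the same commands works (this wrapper is only a convenience, not part of the repository):

for s in concave_bowl2 wateringcan2 multibounce crazy_blade2; do
    python train_global_pc.py -i $SCENES_FOLDER/$s/rcScene/ -o ./final_config/$s/viewspacedense_
done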

Render Paths

To re-render the paths used in all the communications of the paper, you need to download both the "Scene Files" and the "Trained Model" into $SCENES_FOLDER and $TRAINED_MODELS respectively.

conda activate neural_catacaustics
python test_path.py -i $SCENES_FOLDER/compost/rcScene --scene_representation_folder $TRAINED_MODELS/compost/viewspacedense_xyz0.0005_12479408
python test_path_slow.py -i $SCENES_FOLDER/compost/rcScene --scene_representation_folder $TRAINED_MODELS/compost/viewspacedense_xyz0.0005_12479408
python test_path_synthetic.py -i $SCENES_FOLDER/compost/rcScene --scene_representation_folder $TRAINED_MODELS/compost/viewspacedense_xyz0.0005_12479408

The output renders are saved in $TRAINED_MODELS/[scene_representation_folder]/[script_name]_renders
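
To assemble a rendered path into a video, a standard ffmpeg invocation works; the frame naming pattern below is an assumption, so adjust it to the actual filenames in the renders folder:

ffmpeg -framerate 30 -i $TRAINED_MODELS/compost/viewspacedense_xyz0.0005_12479408/test_path_renders/%05d.png -c:v libx264 -pix_fmt yuv420p compost_path.mp4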

Interactive Viewer

For the viewer you need to install PyOpenGL. On Windows it can be a bit tricky; check this link: https://stackoverflow.com/questions/59725675/need-to-install-pyopengl-windows
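
On Linux, installing PyOpenGL from PyPI is usually enough:

pip install PyOpenGL PyOpenGL_accelerate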

For the best possible quality while rendering interactively, uncomment the parameter corresponding to your scene here: https://gitlab.inria.fr/gkopanas/neural-catacaustics/-/blob/main/utils/shaders/visibility_splat.vert#L108

python viewer.py -i $SCENES_FOLDER/compost/rcScene --scene_representation_folder $TRAINED_MODELS/compost/viewspacedense_xyz0.0005_12479408

You can press h to print help in the console.