Indoor Relighting Setup Instructions
Setup
The software requires Windows.
You will need to first install:
- Visual Studio 2019
- CMake GUI 3.23.1 (other versions should work, but this one has been tested recently)
- CUDA 10.2 and cuDNN v7.6.5 (both important)
- OptiX 7.0.0 (important)
- 7zip (make sure it is accessible in your PATH)
Once these are installed, make sure they are accessible in your PATH. Then clone sibr_core and check out the right version:
git clone https://gitlab.inria.fr/sibr/sibr_core.git
cd sibr_core
git checkout tags/0.9.6
You now need to install several projects on top of this repo: the optix, torchgl_interop, and indoor_relighting projects.
First, navigate to the projects folder:
cd sibr_core/src/projects
Then:
git clone https://gitlab.inria.fr/sibr/projects/torchgl_interop.git
git clone https://gitlab.inria.fr/sibr/projects/optix.git
git clone https://gitlab.inria.fr/sibr/projects/indoor_relighting.git
Then check out the correct branch or commit for the two dependent projects:
cd torchgl_interop
git checkout adobejp
cd ../optix
git checkout 8703cca1195a02e4cc44e36cd9a5919b9dd10d44
You should have all the code you need!
Generating the solution
Now open the CMake GUI.
Specify the source code path and build path; in my case
D:/Julien/test_install/sibr_core
and D:/Julien/test_install/sibr_core/build
Then press Configure, specifying Visual Studio 16 2019 when prompted.
Configure should work fine. Now you also need to select the projects before generating.
Sort variables by group and make sure Advanced is also selected in the GUI.
Under BUILD, make sure to select:
- BUILD_IBR_INDOOR_RELIGHTING
- BUILD_IBR_OPTIX
- BUILD_IBR_TORCHGL_INTEROP
Then press Configure again.
Make sure the CUDA architecture is 7.*. To check this, when you configure with CMake you should see a line that starts with: Autodetected CUDA architecture(s):
If your compute capability is higher than 7.* (for instance 8+):
- Go to: sibr_core\extlibs\libtorch\share\cmake\Caffe2\Modules_CUDA_fix\upstream\FindCUDA
- Open select_compute_arch.cmake
- Replace line 79, which is set(CUDA_LIMIT_GPU_ARCHITECTURE "9.0"), by set(CUDA_LIMIT_GPU_ARCHITECTURE "7.5")
This should allow the following flags to be passed to NVCC: -gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_75,code=compute_75
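If you are unsure of your GPU's compute capability, a quick way to check is through PyTorch (a minimal sketch, assuming any PyTorch install is available in your Python environment; this is not part of the original pipeline):

```python
import torch

# Prints e.g. (7, 5) on a Turing GPU; (8, x) or above means you need the
# select_compute_arch.cmake edit described above.
if torch.cuda.is_available():
    print(torch.cuda.get_device_capability(0))
```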
Reconfigure and generate the solution!!!
Building
Open the generated sibr_projects.sln in build/ with Visual Studio 2019.
At the top, put the solution in RelWithDebInfo mode.
In the Solution Explorer, go to projects/indoor_relighting/apps and build SIBR_indoorRelighting_app_install.
Testing on readily available data (view synthesis at this stage)
You can download datasets at https://repo-sam.inria.fr/fungraph/deep-indoor-relight/#testdata
Unzip the dataset.
You can now navigate to sibr_core\install\bin
For a quick check that things are working run:
SIBR_indoorRelighting_app_rwdi.exe --path PATH_TO_EXTRACTED_SCENE\scene_share --model PATH_TO\sibr_core\src\projects\indoor_relighting\models\trainPatchGanAdam\ --interactive --outW 128 --texture-width 128
/!\ Make sure the program is using your best GPU. This policy can be explicitly defined in Graphics Settings in Windows, where you can decide which GPU is used by which program.
Problem a) Sometimes some torch-related DLLs are not copied correctly, so you might have to manually copy the DLLs from sibr_core\extlibs\libtorch\lib to sibr_core\install\bin.
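If you end up doing this often, here is a minimal Python sketch of the copy (the paths are the ones above; adjust them to your actual install location):

```python
import glob
import os
import shutil

# Copy the libtorch DLLs next to the installed SIBR binaries.
src = r"sibr_core\extlibs\libtorch\lib"
dst = r"sibr_core\install\bin"
for dll in glob.glob(os.path.join(src, "*.dll")):
    shutil.copy2(dll, dst)
```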
The program should run.
Depending on the scene and the capture, there might be burnt (overexposed) light sources in the scene, in which case a UI will appear with a Lighting adjustment panel.
Here you need to adjust the exposure and color (possibly through temperature) of burnt light sources, as these are by definition undefined. To help you in this process (which does not have to be perfect), you can visualize the images divided by the irradiance computed from these exposures and colors.
Basically, the exposures and colors defined with the sliders are used to scale a linear irradiance for each burnt light source before linearly combining them with the irradiance of the non-burnt light sources.
Hence you want to adjust the sliders so as to minimize lighting effects and make the images look as lighting-free as possible (i.e., more like an albedo map).
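Concretely, here is a minimal numpy sketch of this combination (function and variable names are illustrative, not the actual implementation):

```python
import numpy as np

# Each burnt light source i has a linear irradiance map burnt_irr[i]
# (H x W x 3), a scalar exposure exposures[i], and an RGB color colors[i],
# all set with the sliders.
def combined_irradiance(burnt_irr, exposures, colors, nonburnt_irr):
    total = nonburnt_irr.copy()
    for irr, e, c in zip(burnt_irr, exposures, colors):
        total += e * np.asarray(c)[None, None, :] * irr  # scale, then sum
    return total

# The helper visualization: the input image divided by the combined
# irradiance; with well-adjusted sliders it should look albedo-like.
def lighting_free_preview(image, total_irr, eps=1e-6):
    return image / np.maximum(total_irr, eps)
```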
Once satisfied, press ESC.
The program will keep running until another UI appears: the rendering one.
If the program crashes just after:
Creating textures for network input:Mat [ CUDALongType{0,4} ]
Diff [ CUDALongType{0,4} ]
Vdep [ CUDALongType{0,4} ]
please see Problem a).
Two things to note:
- To compute the irradiance maps, the program does some CPU-based ray-casting which is time-consuming, especially when many burnt light sources are present (one ray-cast image per light source per input view). The resolution at which these are computed is controlled with the --texture-width option (see the rough cost estimate after these notes).
- The first time you use the program, the torch module might take quite long to load, so you might have to wait a bit for the UI to appear after the lines mentioned above.
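To get a feel for the ray-casting cost, a back-of-the-envelope estimate (all numbers are made up for illustration; the actual ray count depends on the implementation):

```python
# One ray-cast image per burnt light source per input view, rendered at
# --texture-width resolution (height assumed equal to width here).
n_views, n_lights, tex_w = 250, 4, 128
images = n_views * n_lights
rays = images * tex_w * tex_w  # at least one primary ray per texel
print(f"{images} ray-cast images, >= {rays:,} primary rays")
```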
The UI
Once the UI is up and running, you'll see the following:
At first it will be all black. To start rendering, move the Im slider in A); this will position the camera at the position of the corresponding image.
You can bump Number of inputs to 8, and in 1) you will see the reprojections and corresponding mirror reprojections. At this stage the network is not running.
Just check Run Network in A).
You can move the camera with QWE/ASD like in a first-person game.
This allows you to do view synthesis only, so far.
The rendering is visible in 2).
Let's break down each panel of the UI:
- A)
  - Im will position the camera at the corresponding image location.
  - Number of inputs controls the debug visualisation of the different inputs; it does not affect rendering.
  - Output N? is also here for debugging reasons.
  - Exposure: the exposure correction to apply to the input images; this affects the rendering.
  - Run Network: whether or not to run the network interactively.
- B) Controls the camera.
  - Speed controls the rate of displacement in the scene; it can be increased if motion is too slow.
  - FovY allows changing the rendering FoV; it is expressed in degrees for the Y axis.
  - All the following options allow you to load, play, record, and save renderings of camera paths. Saving is usually relative to where the program was started from.
- C) Post-processing of the rendering (a small sketch of this chain follows the list).
  - Exposure modifies the exposure of the linear output of the network.
  - WB temp allows adjusting the white balance of the output.
  - Gamma: the tone-mapping gamma of the output.
  - Saturation controls the output saturation.
  - The tone mapping is applied following this same order.
- D) Object handler; it allows you to control the lighting of the scene.
  - Switch Input Light allows turning off the input lighting.
  - Load new object allows loading a light XML and its associated radiance maps. The new light/view buffers are visible in 3). From top to bottom: input irradiance, removed irradiance (equals 0 or the input irradiance depending on Switch Input Light), added irradiance (when lights have been loaded), target mirror map.
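Here is a minimal sketch of the C) post-processing chain (the function and parameter names are illustrative, and white balance is simplified here to per-channel gains rather than a temperature):

```python
import numpy as np

# C) panel order: exposure -> white balance -> gamma -> saturation.
def postprocess(linear_rgb, exposure, wb_gain, gamma, saturation):
    out = linear_rgb * exposure                     # exposure on linear output
    out = out * np.asarray(wb_gain)[None, None, :]  # per-channel WB gains
    out = np.clip(out, 0.0, None) ** (1.0 / gamma)  # gamma tone mapping
    luma = out.mean(axis=-1, keepdims=True)
    return luma + saturation * (out - luma)         # saturate around luminance
```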
Preprocessing your own data
Please look at the structure of the provided scenes first; this should help you here.
You need a set of raw (CR2) images captured with fixed exposure, aperture, white balance, etc., i.e., only the viewpoint should change.
To correct for lens distortion and vignetting and obtain clean linear images, we are going to use darktable: https://www.darktable.org/install/
Load one of the CR2 images in darktable.
- Enable lens correction in "technical" on the right pane, choosing the appropriate lens.
- Under Output color profile, make sure you use a linear one, ideally the same as the input one.
- Remove the filmic RGB filter.
- Adjust the white balance.
Once done, you can close the program.
Next to the image you opened you should have an IMAGENAME.CR2.xmp file. Rename it to profile.xmp.
Add the darktable/bin folder to your PATH to be able to use darktable_cli (in my case I added C:\Program Files\darktable\bin).
Navigate to: sibr_core\src\projects\indoor_relighting\preprocess\converters
Run python CR2toEXR.py PATH_TO_FOLDER_WITH_CR2 PATH_TO/profile.xmp
You should now have linear RGB EXR images. You can move them into a new exr/ folder.
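A minimal sketch of that move, run from the folder containing the converted images (the folder layout is an assumption):

```python
import glob
import os
import shutil

# Gather the converted EXR images into a new exr/ folder.
os.makedirs("exr", exist_ok=True)
for f in glob.glob("*.exr"):
    shutil.move(f, os.path.join("exr", f))
```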
You now need a mesh, which you can obtain using RealityCapture (RC).
- Import the images in RC; you can use either HDR or tonemapped images, as we will export EXR images at the end anyway.
- Launch the Alignment.
- Launch the reconstruction.
- Go to Alignment->Export->Registration and export as bundler (negative z) with the name bundle.out.
- Make sure images are exported as EXR and half-float or float RGB under export images.
- Go to Mesh and colorize it, then export the dense mesh as recon.ply.
Organize your data as follows:
scene/
├─ meshes/
│ ├─ recon.ply
├─ cameras/
│ ├─ bundle.out
├─ images/
│ ├─ ALL_THE.exr
Now go again to sibr_core\src\projects\indoor_relighting\preprocess\converters
and run:
python .\createSceneMetadataEXR.py PATH_TO\scene
You may have to install OpenEXR, for which you can download a wheel here: https://www.lfd.uci.edu/~gohlke/pythonlibs/#openexr
You should now have:
scene/
├─ scene_metadata.txt
├─ meshes/
│ ├─ recon.ply
├─ cameras/
│ ├─ bundle.out
├─ images/
│ ├─ ALL_THE.exr
You now need to create a scale.txt next to scene_metadata.txt which contains the scale of the scene. This is a rough estimate of a 1 m size in scene space, handwritten after measuring a 1 m object in MeshLab (see the available scenes for examples).
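For instance, a hypothetical scale.txt (assuming the file holds a single number, as the description suggests; the value 2.37 is made up, use the one you measured):

```python
# Write the rough number of scene units per metre, measured in MeshLab.
with open(r"PATH_TO\scene\scale.txt", "w") as f:
    f.write("2.37\n")
```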
You should be ready!
A few additional comments:
Our scene folders usually contain 5 different elements:
- RAW folder containing the original CR2 shots and a profile.xmp file. The images here are around 6000 pixels in width.
- exr folder containing the lens-corrected linear EXRs downsampled to 3000 pixels in width. Going from the previous folder to this one was done by carefully creating a profile.xmp for the dataset using darktable, then running CR2toEXR.py (can be found in install/scripts after the CMake INSTALL or PREBUILD step). These are the images used for the RealityCapture reconstruction.
- Myscene.rcproj and a Myscene folder, which are the saved RC project. It may be missing for some scenes because of a crash...
- scene folder that contains the data to be loaded by the relighter. In this folder you will find the following elements:
  - *.exr: the EXR files exported from RC (possibly undistorted) that were downscaled by a factor of 3 (about 1000 px width now). They are of different sizes and may have a black border, but this is handled correctly by the relighter.
  - bundle.out, which is the camera calibration for our EXR files.
  - list_images.txt, which works as usual. Generated using createListImagesEXR.py (can be found in install/scripts after the CMake INSTALL or PREBUILD step).
  - recon.ply -> pmvs_recon_normals.ply, which are the output of RC, the version simplified in MeshLab, then the version smoothed and with planar normals generated by our program.
  - scale.txt, which contains the scale of the scene. This is a rough estimate of a 1 m size in scene space (handwritten, by measuring a 1 m object in MeshLab).
  - lights.ply, which contains spheres for the detected lights. These spheres are used for direct sampling of burnt light sources.
  - pc_light.ply, which is the volumetric point cloud of detected lights. This point cloud is used to compute the spheres mentioned above with a cell-merging algorithm.
  - light_colors.txt and light_exposures.txt. These are the color and exposure factors used to combine burnt light sources with the environment radiance.
  - tempRad, which contains the radiances of each image for each light source (global lighting + individually detected burnt light sources) as EXR files. If you use the exposures and colors above to combine those, you will get the final radiance. This folder is used to cache intermediate computations that are quite long.
  - 16bitData, which contains all our buffers (images, specular layers, combined radiance) in a 16-bit format (except the albedo, which is 8-bit). A tone mapping was applied before saving to keep good precision even for values above 1.
  - coloredMesh, which is the mesh colorized with the albedo images. This is the one used with Mitsuba to compute the radiance of new light sources.
Relighting and Adding your own lights
Lights are modeled in the form of an XML file such as this one:
<scene version="0.6.0">
<shape type="sphere">
<point name="center" x="-13.329103" y="21.697855" z="10.356039"/>
<float name="radius" value="1.417335"/>
<emitter type="area">
<spectrum name="radiance" value="1.000000"/>
</emitter>
</shape>
</scene>
The radiance value should be left at 1. You can edit the radius and center freely.
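If you want to edit light files programmatically, here is a hypothetical edit using Python's standard library (the file names and values are illustrative):

```python
import xml.etree.ElementTree as ET

# Move the sphere and change its radius, keeping the radiance at 1.
tree = ET.parse("outLight0.xml")
shape = tree.getroot().find("shape")
center = shape.find("point")
center.set("x", "-10.0")
center.set("y", "20.0")
center.set("z", "9.5")
shape.find("float").set("value", "0.8")
tree.write("outLight1.xml")
```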
Light files should be stored in a lightings/ folder, for instance:
scene/
├─ scene_metadata.txt
├─ meshes/
│ ├─ recon.ply
├─ cameras/
│ ├─ bundle.out
├─ images/
│ ├─ ALL_THE.exr
├─ lightings/
│ ├─ outLight0.xml
├─ coloredMesh/
│ ├─ scene.xml
│ ├─ mesh.ply
│ ├─ cameras.lookat
│ ├─ imageSizes.txt
Note the presence of the coloredMesh folder here. It was created when first preprocessing the scene. If you don't have it, you should first complete the steps above until you can run view synthesis.
Once you have created a light file, you need to render its corresponding irradiance. This is easy thanks to Mitsuba 3! In your Python environment:
pip install mitsuba
Then you can run:
python \sibr_core\src\projects\indoor_relighting\preprocess\renderNewLights.py PATH_TO_SCENE LIGHT_NAME
where LIGHT_NAME does not contain .xml, for instance outLight0.
Make sure you install all Python dependencies if you don't have them.
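A quick sanity check that Mitsuba 3 is installed correctly (scalar_rgb is the most portable variant):

```python
import mitsuba as mi

# If this runs without error, the renderNewLights.py script should be
# able to use Mitsuba as well.
mi.set_variant("scalar_rgb")
print(mi.__version__)
```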
You can now import this light in the renderer.
In part D) of the UI, enter the name of the light with the .xml extension, for instance outLight0.xml, and load it. You should now have a new UI showing up that looks like this:
You can switch off the input light, for instance, and choose the color of the light and its intensity. You can switch off the light or load another one. If you load several lights, you can modify them individually by switching the Object number.