Abstract


Figure: Neural Renderer dataflow

Neural rendering is a class of methods that use deep learning to produce novel images of scenes from more limited information than traditional rendering methods. This is useful for information-scarce applications such as mixed reality or semantic photo synthesis, but comes at the cost of control over the final appearance. We introduce the Neural Direct-illumination Renderer (NDR), a neural screen space renderer capable of rendering direct-illumination images of any geometry with opaque materials under distant illumination. The NDR uses screen space buffers describing material, geometry, and illumination as inputs, providing direct control over the output. We introduce the use of intrinsic image decomposition to allow a Convolutional Neural Network (CNN) to learn a mapping from a large number of pixel buffers to rendered images. The NDR predicts shading maps, which are subsequently combined with albedo maps to create a rendered image. We show that the NDR produces plausible images that can be edited by modifying the input maps, and that it marginally outperforms the state of the art while also providing more functionality.
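For reference, the intrinsic image formation the abstract refers to, with the diffuse/specular split used in this work, can be written roughly as follows (the exact formulation in the paper may differ):

    I(p) = A_d(p) \odot S_d(p) + A_s(p) \odot S_s(p)

where A_d and A_s are the diffuse and specular albedo maps, S_d and S_s are the predicted shading maps, and \odot denotes per-pixel multiplication.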


Overview


Figure: Neural Renderer network structure

We train two encoders, two decoders, and a U-Net-based CNN to produce diffuse and specular shading maps, which are multiplied by their respective albedos and summed to create images following the intrinsic image formation model. Our network takes as input parameter maps of roughness, normal, and depth, as well as spherical-harmonic-encoded illumination and coarse shading maps generated with traditional deferred shading. The coarse shading maps are not required to produce plausible shading, but they increase colour consistency and the accuracy of highlight placement. We train the network by supervising both the shading maps and the final image to ensure accurate reproduction of rarely visible rendering effects.
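As a rough illustration of this composition and the two-level supervision, here is a minimal PyTorch-style sketch; the module structure, channel counts, buffer layout, and L1 losses are illustrative assumptions, not the paper's implementation:

    # Sketch of the shading-map composition and two-level supervision described
    # above. Network internals and loss weights are illustrative assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ShadingUNet(nn.Module):
        """Placeholder for the U-Net-based CNN mapping screen space buffers
        (roughness, normal, depth, SH illumination, coarse shading) to
        diffuse and specular shading maps."""
        def __init__(self, in_channels: int):
            super().__init__()
            self.net = nn.Sequential(                    # stand-in for encoders/decoders + U-Net
                nn.Conv2d(in_channels, 64, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(64, 6, 3, padding=1),          # 3 diffuse + 3 specular shading channels
            )

        def forward(self, buffers: torch.Tensor):
            out = self.net(buffers)
            return out[:, :3], out[:, 3:]                # diffuse shading, specular shading

    def render_and_loss(model, buffers, albedo_d, albedo_s, gt_sd, gt_ss, gt_image):
        # Predict shading maps from the stacked screen space parameter buffers.
        s_d, s_s = model(buffers)
        # Intrinsic image formation: shading maps times their albedos, summed.
        image = albedo_d * s_d + albedo_s * s_s
        # Supervise both the shading maps and the final image.
        loss = F.l1_loss(s_d, gt_sd) + F.l1_loss(s_s, gt_ss) + F.l1_loss(image, gt_image)
        return image, loss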


Dataset


Figure: Object generation steps for the dataset

We generate a synthetic dataset of procedurally generated objects: we create geometric primitives, apply random height maps, apply textures provided by Valentin Deschaintre, and combine the results with random positions and rotations. The objects and their parameter maps are rendered using a modified version of PBRT-v3 with the GGX BRDF model. We generate 60,000 training samples.
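A minimal sketch of these object-generation steps (primitive, random height-map displacement, random pose); the choice of primitive, noise parameters, and helper names are assumptions for illustration, and texturing and PBRT-v3 export are not shown:

    # Illustrative sketch: build a primitive, displace it with a random height
    # map, and place it with a random rotation and position. Parameters are
    # assumptions; texturing and PBRT-v3 export are omitted.
    import numpy as np

    rng = np.random.default_rng(0)

    def unit_sphere(n_theta=64, n_phi=128):
        """Vertices of a unit-sphere primitive on a lat-long grid."""
        theta = np.linspace(0.0, np.pi, n_theta)
        phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
        t, p = np.meshgrid(theta, phi, indexing="ij")
        return np.stack([np.sin(t) * np.cos(p),
                         np.sin(t) * np.sin(p),
                         np.cos(t)], axis=-1)           # shape (n_theta, n_phi, 3)

    def apply_random_height_map(verts, amplitude=0.15):
        """Displace each vertex along its normal by a random height map."""
        height = rng.uniform(-amplitude, amplitude, size=verts.shape[:2] + (1,))
        normals = verts / np.linalg.norm(verts, axis=-1, keepdims=True)  # sphere: normal = position
        return verts + height * normals

    def random_pose(verts, extent=2.0):
        """Apply a random rotation (about z, for brevity) and translation."""
        angle = rng.uniform(0.0, 2.0 * np.pi)
        c, s = np.cos(angle), np.sin(angle)
        rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        offset = rng.uniform(-extent, extent, size=3)
        return verts @ rot.T + offset

    # One object: several randomly posed, displaced primitives combined.
    objects = [random_pose(apply_random_height_map(unit_sphere())) for _ in range(3)]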


Results


Figure: Neural renderer functionality, showing correct responses to changes in different material parameters
Figure: Neural renderer state-of-the-art comparison

We demonstrate our method's ability to produce plausible renders from single-viewpoint information, featuring shading, shadows, specular highlights, and Fresnel effects. Our method responds correctly to changes in all input parameters. We compare our method with the recent specialist state-of-the-art method Real Shading, as well as the foundational general-purpose image-to-image translation network pix2pix and two traditional rendering approaches. Our method produces more accurate and noise-resilient renders than the other methods while also offering a greater range of functionality than Real Shading by incorporating specular shading and variable illumination.



Presentation Video



Citation



    @article{suppan2021neural,
      title     = {Neural Screen Space Rendering of Direct Illumination},
      author    = {Suppan, Christian and Chalmers, Andrew and Zhao, Junhong and Doronin, Alex and Rhee, Taehyun},
      publisher = {The Eurographics Association},
      year      = {2021}
    }


Acknowledgement


This project was funded by the Smart Ideas Endeavour Fund from MBIE and in part by the Entrepreneurial University Programme from TEC in New Zealand.