RNG: Relightable Neural Gaussians

1Nanjing University of Science and Technology   2Adobe Research   3Nanjing University
CVPR 2025


We propose Relightable Neural Gaussians (RNG), a novel 3DGS-based framework that enables relighting of objects with either hard surfaces or soft boundaries, while avoiding assumptions on the shading model.

Abstract

3D Gaussian Splatting (3DGS) has shown impressive results for the novel view synthesis task, where lighting is assumed to be fixed. However, creating relightable 3D assets, especially for objects with ill-defined shapes (fur, fabric, etc.), remains a challenging task. The decomposition of light, geometry, and material is ambiguous, especially when smooth-surface assumptions or surface-based analytical shading models do not apply. We propose Relightable Neural Gaussians (RNG), a novel 3DGS-based framework that enables the relighting of objects with either hard surfaces or soft boundaries, while avoiding assumptions on the shading model. We condition the radiance at each point on both view and light directions. We also introduce a shadow cue, as well as a depth refinement network, to improve shadow accuracy. Finally, we propose a hybrid forward-deferred fitting strategy to balance geometry and appearance quality. Our method achieves significantly faster training (1.3 hours) and rendering (60 frames per second) than a prior method based on neural radiance fields, and produces higher-quality shadows than a concurrent 3DGS-based method.

Hybrid Forward-Deferred Pipeline

Overview of RNG. Each Gaussian point in the scene carries an extra latent vector that describes its reflectance. The latent values are interpreted by an MLP decoder, conditioned on the view and light directions. Training has two stages. In the first stage, we employ forward shading: we decode the latent vectors of all Gaussian points into colors, followed by alpha blending. In the second, deferred-shading stage, we first alpha-blend the neural Gaussian features to obtain an aggregated per-pixel feature, and then feed it to the decoder. We apply shadow mapping to obtain a shadow cue map and use the shadow cue as an extra input to the decoder in the second stage.
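To make the two stages concrete, below is a minimal PyTorch sketch of forward versus deferred shading for a single pixel, assuming per-Gaussian latent features and alpha-blending weights already produced by the 3DGS rasterizer. The decoder architecture and all names (RadianceDecoder, forward_shading, deferred_shading) are illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn

class RadianceDecoder(nn.Module):
    # Hypothetical MLP: latent feature + view/light directions (+ shadow cue) -> RGB.
    def __init__(self, feat_dim=16, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + 3 + 3 + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, feat, view_dir, light_dir, shadow_cue):
        return self.net(torch.cat([feat, view_dir, light_dir, shadow_cue], dim=-1))

def forward_shading(feats, weights, view_dir, light_dir, decoder):
    # Stage 1 (forward): decode every Gaussian's latent into a color, then alpha-blend.
    # feats: (N, F) latents of the Gaussians along a ray; weights: (N, 1) blending weights.
    n = feats.shape[0]
    zeros = torch.zeros(n, 1)  # the shadow cue is not used in stage 1
    colors = decoder(feats, view_dir.expand(n, 3), light_dir.expand(n, 3), zeros)
    return (weights * colors).sum(dim=0)

def deferred_shading(feats, weights, view_dir, light_dir, shadow_cue, decoder):
    # Stage 2 (deferred): alpha-blend the latents into one aggregated feature,
    # then decode once, with the shadow cue as an extra input.
    pixel_feat = (weights * feats).sum(dim=0, keepdim=True)
    return decoder(pixel_feat, view_dir[None], light_dir[None], shadow_cue.view(1, 1))

# Toy usage with random inputs:
decoder = RadianceDecoder()
feats = torch.randn(8, 16)
weights = torch.softmax(torch.randn(8, 1), dim=0)
view_dir, light_dir = torch.tensor([0.0, 0.0, 1.0]), torch.tensor([0.0, 1.0, 0.0])
c_forward = forward_shading(feats, weights, view_dir, light_dir, decoder)
c_deferred = deferred_shading(feats, weights, view_dir, light_dir, torch.tensor(0.2), decoder)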

Depth Refinement

The effect of the depth refinement network. The weighted sum of Gaussian depths is not accurate, resulting in mismatched shadow cues. Therefore, we propose a depth refinement network to correct the depth.
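One plausible form of such a network is sketched below: a small CNN that takes the alpha-blended (weighted-sum) depth map and predicts a per-pixel residual correction. The architecture and the name DepthRefiner are assumptions for illustration; the paper's actual network may take additional inputs.

import torch
import torch.nn as nn

class DepthRefiner(nn.Module):
    # Hypothetical CNN that corrects the weighted-sum Gaussian depth map
    # by predicting a per-pixel residual.
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 1, 3, padding=1),
        )

    def forward(self, coarse_depth):
        # coarse_depth: (B, 1, H, W) depth from alpha-blending Gaussian depths
        return coarse_depth + self.net(coarse_depth)

The refined depth is then used to place the shading points before the shadow-map lookup described in the next section.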

Shadow Mapping

Illustration of the shadow cue computation. First, we splat the Gaussians into the camera view to obtain depth values. Then, we run the depth refinement network to correct them and locate the shading points P. Finally, we splat the shading points onto the shadow camera to find the intersections Q of the shadow rays, and store the distance |PQ| as the shadow cue.
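A rough sketch of that sequence, assuming the refined camera-view depth, a depth map rendered from the light ("shadow camera"), and hypothetical unproject/project helpers standing in for the rasterizer's own routines; here |PQ| is approximated as the depth difference along the shadow ray. All names are illustrative assumptions.

import torch

def compute_shadow_cue(refined_depth, cam_to_world, world_to_light, light_depth_map):
    # refined_depth:   (H, W) refined depth in the main camera
    # cam_to_world:    helper mapping pixel coords + depth -> world points P
    # world_to_light:  helper mapping world points -> (light-space uv, light-space depth)
    # light_depth_map: (H, W) depth rendered from the light (shadow map)
    H, W = refined_depth.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")

    # 1. Locate the shading points P in world space from the refined depth.
    P = cam_to_world(xs, ys, refined_depth)            # (H, W, 3)

    # 2. Project P into the shadow camera and fetch the blocker depth there (point Q).
    uv, depth_from_light = world_to_light(P)           # (H, W, 2), (H, W)
    u = uv[..., 0].long().clamp(0, W - 1)
    v = uv[..., 1].long().clamp(0, H - 1)
    blocker_depth = light_depth_map[v, u]

    # 3. Shadow cue: |PQ|, here taken as the depth gap along the shadow ray.
    return (depth_from_light - blocker_depth).clamp(min=0.0)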

Results

BibTeX


        @inproceedings{fan2025rng,
          author={Jiahui Fan and Fujun Luan and Jian Yang and Milos Hasan and Beibei Wang},
          title={RNG: Relightable Neural Gaussians},
          booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
          year={2025},
        }