SyncLight: Controllable and Consistent Multi-View Relighting

Users edit only a single reference view; our model propagates the edit to all other views without requiring any camera poses.

Teaser: a reference view (View 1) and additional views (Views 2-5), each shown relit.

SyncLight enables consistent multi-view relighting by editing a single reference view. Users draw a lightmap on one image, and our multi-view diffusion transformer automatically propagates lighting changes to all other viewpoints with geometric and photometric consistency. The method requires no camera poses, runs in one step (10-50× faster than standard diffusion), and generalizes zero-shot from 2-view training to arbitrary camera arrays.
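As a rough illustration of the one-step inference (a minimal sketch, not the released implementation): with Latent Bridge Matching, a network predicts a single displacement from the source latents to the relit latents, so relighting takes one forward pass rather than an iterative denoising loop. `VelocityNet`, `encode`, and `decode` below are hypothetical placeholders.

```python
# Hypothetical sketch of one-step relighting in the style of Latent Bridge Matching.
# Not the authors' code; `VelocityNet` stands in for the multi-view transformer backbone.
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Placeholder network predicting a latent displacement conditioned on the lightmap."""
    def __init__(self, dim: int = 4):
        super().__init__()
        self.net = nn.Conv2d(dim * 2, dim, kernel_size=3, padding=1)

    def forward(self, z_src, z_light):
        # Condition the prediction on the user-drawn lightmap, encoded to latent space.
        return self.net(torch.cat([z_src, z_light], dim=1))

@torch.no_grad()
def relight_one_step(encode, decode, velocity_net, views, lightmap):
    """views: (N, 3, H, W) input views; lightmap: (3, H, W) edit drawn on the reference view."""
    z_src = encode(views)                              # (N, C, h, w) source latents
    z_light = encode(lightmap[None]).expand_as(z_src)  # broadcast the single edit to all views
    z_tgt = z_src + velocity_net(z_src, z_light)       # one Euler step along the latent bridge
    return decode(z_tgt)                               # relit views, no iterative sampling
```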

Relighting Results

Images from RealEstate10K [Google Research, 2018] and our SyncLight dataset.

Method

SyncLight is a generative framework that enables consistent, parametric relighting across multiple uncalibrated views. Built on a multi-view diffusion model with Latent Bridge Matching, our method allows users to specify lighting changes (such as adjusting light intensity and color) on a single reference view, which then propagates coherently across all synchronized viewpoints in a single inference step. Unlike single-view methods that process each camera independently and produce geometrically inconsistent results, SyncLight explicitly models cross-view interactions through modified self-attention, learning to maintain rigorous spatial and photometric consistency from training on stereo pairs. Remarkably, despite being trained only on image pairs (N=2), the architecture generalizes zero-shot to arbitrary view counts at inference, making it suitable for multi-camera systems in virtual production, stereoscopic cinema, and sports broadcasting. We train on a large-scale hybrid dataset combining synthetic environments (Infinigen, BlenderKit) with high-fidelity real-world OLAT captures, enabling the model to bridge the sim-to-real gap while achieving 10-50× speedup over iterative diffusion baselines.
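A minimal sketch of how cross-view self-attention can be made view-count agnostic; this is our reading of the "modified self-attention" described above, not the authors' code. Token sequences from all views are concatenated into one joint sequence before attention, so every token attends across views, and because attention is length-agnostic, weights trained on pairs (N=2) apply unchanged to any number of views at inference.

```python
# Hypothetical multi-view self-attention: views are flattened into a single token
# sequence so attention spans all views. Trained with N=2, usable with any N.
import torch
import torch.nn as nn

class MultiViewSelfAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, L, D) = batch, views, tokens per view, channels
        B, N, L, D = tokens.shape
        x = tokens.reshape(B, N * L, D)   # flatten views into one joint sequence
        x, _ = self.attn(x, x, x)         # every token attends to tokens of all views
        return x.reshape(B, N, L, D)

# The same weights work for any number of views:
layer = MultiViewSelfAttention(dim=64)
pair = torch.randn(1, 2, 256, 64)   # training setting: N = 2 views
many = torch.randn(1, 5, 256, 64)   # inference: N = 5 views, no retraining
assert layer(pair).shape == pair.shape and layer(many).shape == many.shape
```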

Method Overview

Multi-view Relighting

Training uses only two views, but the model generalizes to an arbitrary number of views, including videos and dense settings such as 3DGS.

Results for two lighting edits (Light 1, Light 2), shown on the reference view (View 1) and additional views (Views 2-5).

Color Control

Move the knob to change the light color. The input images (a reference view and a second view) are shown alongside the relit outputs for both views.

Intensity Control

Move the knob to change the light intensity. The input images (a reference view and a second view) are shown alongside the relit outputs for both views.

Video Relighting

Users specify lighting changes for a visible light source in the first frame, and the edit propagates consistently to the remaining frames.

Novel View Synthesis with Radiance Fields

Users specify lighting changes on one view of an input 3D representation. SyncLight propagates the light edit consistently across all views, yielding relit novel views.
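One plausible way to wire this into a radiance-field pipeline (a hedged sketch; the page does not spell out these steps): render posed views from the 3D representation, relight them jointly with the multi-view model, then refit the representation to the relit images to obtain relit novel views. `render_view`, `relight_multiview`, and `fit_gaussians` are hypothetical placeholders for a 3DGS renderer, the SyncLight model, and a 3DGS optimizer.

```python
# Hedged workflow sketch (not from the paper's code) for relighting a 3DGS scene.
# The three callables are hypothetical and must be supplied by the caller.
def relight_radiance_field(scene, cameras, lightmap_edit,
                           render_view, relight_multiview, fit_gaussians):
    # Render the images seen by each camera from the existing 3D representation.
    views = [render_view(scene, cam) for cam in cameras]
    # The lightmap is drawn on views[0] (the reference view); the multi-view model
    # propagates the edit to every rendered view in one joint pass.
    relit = relight_multiview(views, reference_index=0, lightmap=lightmap_edit)
    # One option for novel views: refit the 3D representation to the relit images.
    return fit_gaussians(relit, cameras)
```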