Catching Light in the Act: Neural Inverse Rendering from Propagating Light

CVPR 2025 Best Student Paper

How can neural networks understand multi-path interactions? Image created with DALL-E.

When you flip a switch in a dark room, the light seems instantaneous. In reality, photons race outward at the cosmic speed limit, reflecting, scattering, and bouncing in countless directions. Until now, this invisible ballet of light has been beyond the reach of everyday imaging systems. Cameras and even advanced laser scanners, such as lidar, usually reduce the complexity of light propagation into simple snapshots of surfaces.

At the 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), a group of researchers presented something extraordinary: the first system to faithfully capture, model, and even replay the propagation of light itself as it scatters through a scene. Their paper, Neural Inverse Rendering from Propagating Light, not only won the Best Student Paper Award, but also marks a milestone in how machines — and perhaps soon humans — can see the world.

A Leap Beyond Conventional Lidar

Lidar, familiar from self-driving cars and drones, works like a stopwatch for photons. A laser pulse is emitted, bounces off an object, and the sensor measures the return time; halving that round-trip time and multiplying by the speed of light gives the distance. Sweeping the laser across the scene produces a 3D “point cloud” of the environment. But this system only trusts the direct