SOTAVerified

Inverse Rendering

Inverse rendering is the task of recovering the properties of a scene, such as shape, material, and lighting, from an image or a video. Given an observation of a scene, the goal is to estimate these underlying properties and then use them to render new images or videos, for example under changed lighting or materials.
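As a minimal illustration of the idea (a toy sketch, not any specific paper's method), inverse rendering can be framed as optimization through a differentiable forward renderer: render a prediction from candidate scene parameters, compare it to the observation, and follow the gradient. The single-pixel Lambertian model below is a hypothetical example in which the unknown albedo is recovered from one observed intensity:

```python
def render(albedo, light_dir, normal):
    """Toy forward renderer: Lambertian shading of a single pixel."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return albedo * max(0.0, dot)

def recover_albedo(observed, light_dir, normal, lr=0.5, steps=200):
    """Inverse rendering by gradient descent on the unknown albedo.

    Minimizes loss = (render(albedo) - observed)^2; since the toy
    renderer is linear in albedo, the gradient is available in closed form.
    """
    albedo = 0.1  # initial guess
    shading = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    for _ in range(steps):
        pred = render(albedo, light_dir, normal)
        grad = 2.0 * (pred - observed) * shading  # d(loss)/d(albedo)
        albedo -= lr * grad
    return albedo

normal = (0.0, 0.0, 1.0)
light = (0.0, 0.6, 0.8)                   # unit-length light direction
obs = render(0.75, light, normal)         # "observation" from true albedo 0.75
est = recover_albedo(obs, light, normal)  # converges back toward 0.75
```

Real systems in the list below replace this one scalar with millions of parameters (geometry, spatially varying BRDFs, environment lighting) and the closed-form gradient with automatic differentiation, but the loop is the same shape.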

Papers

Showing 221–230 of 271 papers

| Title | Status | Hype |
| --- | --- | --- |
| OpenRooms: An Open Framework for Photorealistic Indoor Scene Datasets | | 0 |
| Deep Direct Volume Rendering: Learning Visual Feature Mappings From Exemplary Images | | 0 |
| NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination | Code | 1 |
| Spectral MVIR: Joint Reconstruction of 3D Shape and Spectral Reflectance | | 0 |
| PhySG: Inverse Rendering with Spherical Gaussians for Physics-based Material Editing and Relighting | | 0 |
| Neural Lumigraph Rendering | Code | 0 |
| Outdoor inverse rendering from a single image using multiview self-supervision | Code | 1 |
| Deep Learning compatible Differentiable X-ray Projections for Inverse Rendering | | 0 |
| Uncalibrated Neural Inverse Rendering for Photometric Stereo of General Surfaces | | 0 |
| NeRD: Neural Reflectance Decomposition from Image Collections | Code | 1 |
Page 23 of 28

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Neural-PBIR | HDR-PSNR | 26.01 | | Unverified |
| 2 | NVDiffRecMC | HDR-PSNR | 24.43 | | Unverified |
| 3 | InvRender | HDR-PSNR | 23.76 | | Unverified |
| 4 | NeRFactor | HDR-PSNR | 23.54 | | Unverified |
| 5 | NeRD | HDR-PSNR | 23.29 | | Unverified |
| 6 | NVDiffRec | HDR-PSNR | 22.91 | | Unverified |
| 7 | PhySG | HDR-PSNR | 21.81 | | Unverified |
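The HDR-PSNR numbers above follow the standard peak signal-to-noise ratio formula, applied to HDR relighting results. A minimal sketch of plain PSNR is below; note that for HDR images the choice of peak value and any tonemapping step vary between papers, so this is an illustration of the metric's form, not the exact evaluation protocol behind these scores:

```python
import math

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel lists.

    PSNR = 10 * log10(max_val^2 / MSE); higher is better.
    """
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(target)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Small per-pixel errors yield values in the ~20-26 dB range seen above.
score = psnr([0.5, 0.5], [0.5, 0.6])
```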