SOTAVerified

Inverse Rendering

Inverse rendering is the task of recovering the physical properties of a scene, such as shape, material, and lighting, from an image or a video. Once these properties are estimated, they can be used to re-render the scene, for example under new viewpoints or illumination.
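As a minimal illustration of the idea, the sketch below recovers a per-pixel albedo map from an observed image by gradient descent, assuming a toy Lambertian forward model with known lighting and geometry. The forward model, variable names, and optimization settings are illustrative assumptions, not taken from any paper listed on this page.

```python
import numpy as np

# Toy inverse rendering: recover per-pixel albedo from an observed
# image under an assumed Lambertian forward model:
#   observed = albedo * light * cos_theta
# (an illustrative assumption for this sketch, not a method from
# any specific paper).

rng = np.random.default_rng(0)

# Ground-truth scene properties (hidden from the solver).
true_albedo = rng.uniform(0.2, 0.9, size=(8, 8))
light = 1.5       # global light intensity, assumed known here
cos_theta = 0.8   # fixed, known geometry term for this sketch

observed = true_albedo * light * cos_theta

# Recover albedo by gradient descent on the squared reconstruction
# error, holding the known light and geometry fixed.
albedo = np.full((8, 8), 0.5)
lr = 0.5
for _ in range(500):
    rendered = albedo * light * cos_theta
    grad = 2.0 * (rendered - observed) * light * cos_theta
    albedo -= lr * grad

print(float(np.abs(albedo - true_albedo).max()))  # near zero
```

Real inverse-rendering systems replace this closed-form shading with a differentiable renderer and optimize shape, spatially varying materials, and environment lighting jointly, but the optimization loop has the same structure.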

Papers

Showing 211–220 of 271 papers

| Title | Status | Hype |
|---|---|---|
| Survey of Deep Learning Methods for Inverse Problems | — | 0 |
| ADOP: Approximate Differentiable One-Pixel Point Rendering | Code | 2 |
| Accelerating Inverse Rendering By Using a GPU and Reuse of Light Paths | — | 0 |
| Identity-Expression Ambiguity in 3D Morphable Face Models | — | 0 |
| Learning Indoor Inverse Rendering with 3D Spatially-Varying Lighting | — | 0 |
| Spatially and color consistent environment lighting estimation using deep neural networks for mixed reality | — | 0 |
| Differentiable Surface Rendering via Non-Differentiable Sampling | — | 0 |
| Modeling Clothing as a Separate Layer for an Animatable Human Avatar | — | 0 |
| Joint Learning of Portrait Intrinsic Decomposition and Relighting | — | 0 |
| NormalFusion: Real-Time Acquisition of Surface Normals for High-Resolution RGB-D Scanning | — | 0 |
Page 22 of 28

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Neural-PBIR | HDR-PSNR | 26.01 | — | Unverified |
| 2 | NVDiffRecMC | HDR-PSNR | 24.43 | — | Unverified |
| 3 | InvRender | HDR-PSNR | 23.76 | — | Unverified |
| 4 | NeRFactor | HDR-PSNR | 23.54 | — | Unverified |
| 5 | NeRD | HDR-PSNR | 23.29 | — | Unverified |
| 6 | NVDiffRec | HDR-PSNR | 22.91 | — | Unverified |
| 7 | PhySG | HDR-PSNR | 21.81 | — | Unverified |
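The benchmark above ranks methods by HDR-PSNR. PSNR is defined as 10·log10(peak² / MSE); for HDR images the peak value and any tone-mapping convention vary by benchmark, so the snippet below is a generic sketch of the metric rather than the exact evaluation protocol used for these numbers.

```python
import numpy as np

def psnr(pred, target, peak=1.0):
    # PSNR = 10 * log10(peak^2 / MSE). The peak of 1.0 and the lack
    # of tone mapping are assumptions for this sketch; HDR-PSNR
    # protocols differ between benchmarks.
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

target = np.zeros((4, 4))
pred = np.full((4, 4), 0.1)  # uniform error of 0.1 -> MSE = 0.01
print(psnr(pred, target))    # 20.0
```

Higher is better: each +3 dB corresponds to roughly halving the mean squared reconstruction error.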