SOTAVerified

Inverse Rendering

Inverse rendering is the task of recovering the properties of a scene, such as shape, material, and lighting, from an image or a video. Given an observation of a scene, the goal is to infer these underlying properties and then use them to generate new images or videos, for example under novel lighting or viewpoints.
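The recover-then-resynthesize loop above is often framed as analysis by synthesis: render a candidate scene with a differentiable forward model and minimize the photometric error against the observation. A minimal sketch, assuming a Lambertian shading model with known per-pixel normals (real systems recover full geometry, SVBRDFs, and environment lighting); all names and values here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Known unit surface normals for a handful of pixels
# (in a real inverse-rendering system these are unknown too).
normals = rng.normal(size=(100, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

def render(albedo, light):
    # Lambertian forward model: I = albedo * max(0, n . l)
    return albedo * np.clip(normals @ light, 0.0, None)

# Ground-truth scene properties used to synthesize the "observed" image.
true_albedo, true_light = 0.7, np.array([0.0, 0.0, 1.0])
observed = render(true_albedo, true_light)

# Recover the properties by gradient descent on the photometric loss.
albedo, light = 0.3, np.array([0.5, 0.5, 0.5])
lr = 0.1
for _ in range(1000):
    pred = render(albedo, light)
    resid = pred - observed                       # pointwise error
    shade = np.clip(normals @ light, 0.0, None)
    mask = (normals @ light) > 0                  # gradient of the clip
    grad_albedo = 2.0 * np.mean(resid * shade)
    grad_light = 2.0 * albedo * (resid * mask) @ normals / len(normals)
    albedo -= lr * grad_albedo
    light -= lr * grad_light

# Only the product albedo*|light| and the light direction are identifiable.
print(round(albedo * np.linalg.norm(light), 2))  # ~0.7 (scale ambiguity)
```

Note the built-in ambiguity: scaling the albedo up and the light intensity down leaves the image unchanged, so only their product is recovered. Resolving such shape/material/lighting ambiguities is a central difficulty in the papers listed below.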

Papers

Showing 231–240 of 271 papers

| Title | Status | Hype |
| --- | --- | --- |
| Joint Learning of Portrait Intrinsic Decomposition and Relighting | | 0 |
| NormalFusion: Real-Time Acquisition of Surface Normals for High-Resolution RGB-D Scanning | | 0 |
| OpenRooms: An Open Framework for Photorealistic Indoor Scene Datasets | | 0 |
| Deep Direct Volume Rendering: Learning Visual Feature Mappings From Exemplary Images | | 0 |
| Spectral MVIR: Joint Reconstruction of 3D Shape and Spectral Reflectance | | 0 |
| PhySG: Inverse Rendering with Spherical Gaussians for Physics-based Material Editing and Relighting | | 0 |
| Neural Lumigraph Rendering | Code | 0 |
| Deep Learning compatible Differentiable X-ray Projections for Inverse Rendering | | 0 |
| Uncalibrated Neural Inverse Rendering for Photometric Stereo of General Surfaces | | 0 |
| MaterialGAN: Reflectance Capture using a Generative SVBRDF Model | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Neural-PBIR | HDR-PSNR | 26.01 | | Unverified |
| 2 | NVDiffRecMC | HDR-PSNR | 24.43 | | Unverified |
| 3 | InvRender | HDR-PSNR | 23.76 | | Unverified |
| 4 | NeRFactor | HDR-PSNR | 23.54 | | Unverified |
| 5 | NeRD | HDR-PSNR | 23.29 | | Unverified |
| 6 | NVDiffRec | HDR-PSNR | 22.91 | | Unverified |
| 7 | PhySG | HDR-PSNR | 21.81 | | Unverified |
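The metric above, HDR-PSNR, is PSNR computed on high-dynamic-range renderings. Conventions vary by benchmark; a sketch assuming PSNR is taken directly on linear HDR values with the peak drawn from the reference image (both the peak convention and any tone mapping are assumptions here, not stated by the table):

```python
import numpy as np

def hdr_psnr(pred, ref):
    # PSNR = 10 * log10(peak^2 / MSE); for HDR images there is no fixed
    # 255 or 1.0 peak, so a data-dependent peak is one common choice.
    mse = np.mean((pred - ref) ** 2)
    peak = ref.max()
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.array([[0.5, 1.0], [2.0, 4.0]])  # linear radiance values > 1 allowed
pred = ref + 0.01                         # uniform 0.01 error
print(round(hdr_psnr(pred, ref), 2))      # 52.04
```

Because HDR pixel values are unbounded, the choice of peak (and whether errors are measured before or after tone mapping) materially affects the number, so scores from different evaluation scripts are not directly comparable.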