SOTAVerified

Inverse Rendering

Inverse rendering is the task of recovering the properties of a scene, such as shape, materials, and lighting, from an image or a video. Once these properties are estimated, they can be used to synthesize new images or videos of the scene.
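At its core, inverse rendering fits scene parameters so that a forward rendering model reproduces the observed image. A minimal sketch of this idea (not any specific paper's method): recovering per-pixel albedo by gradient descent through a toy Lambertian forward model, with the normals and light assumed known.

```python
import numpy as np

# Toy inverse rendering: recover per-pixel albedo from a Lambertian "image"
# with known geometry (normals) and a known directional light.
# Forward model: I = albedo * max(0, n . l) * light_intensity
normals = np.array([
    [0.0, 0.0, 1.0],
    [0.5, 0.0, 1.0],
    [0.0, 0.5, 1.0],
    [-0.3, 0.2, 1.0],
])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
light_dir = np.array([0.0, 0.0, 1.0])
light_intensity = 2.0

true_albedo = np.array([0.2, 0.5, 0.7, 0.9])
shading = np.clip(normals @ light_dir, 0.0, None) * light_intensity
observed = true_albedo * shading  # the observation we invert

# Gradient descent on the squared reconstruction error
albedo = np.full(4, 0.5)
lr = 0.05
for _ in range(200):
    residual = albedo * shading - observed
    grad = residual * shading  # d/d(albedo) of 0.5 * residual^2
    albedo -= lr * grad

print(np.max(np.abs(albedo - true_albedo)))  # prints a value near 0
```

Real methods in the list below replace this toy model with a differentiable renderer (radiance fields, meshes with PBR materials, polarization cues, etc.) and jointly optimize shape, material, and lighting, which is far more ill-posed than this single-parameter example.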

Papers

Showing 201–210 of 271 papers

| Title | Status | Hype |
| --- | --- | --- |
| PANDORA: Polarization-Aided Neural Decomposition Of Radiance | | 0 |
| Inferring Articulated Rigid Body Dynamics from RGBD Video | Code | 3 |
| Spatio-Temporal Outdoor Lighting Aggregation on Image Sequences using Transformer Networks | | 0 |
| Spelunking the Deep: Guaranteed Queries on General Neural Implicit Surfaces via Range Analysis | Code | 1 |
| NeAT: Neural Adaptive Tomography | Code | 1 |
| CLA-NeRF: Category-Level Articulated Neural Radiance Field | | 0 |
| Differentiable Neural Radiosity | | 0 |
| PhyIR: Physics-Based Inverse Rendering for Panoramic Indoor Images | Code | 1 |
| Extracting Triangular 3D Models, Materials, and Lighting From Images | Code | 2 |
| Advances in Neural Rendering | Code | 1 |
Page 21 of 28

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Neural-PBIR | HDR-PSNR | 26.01 | | Unverified |
| 2 | NVDiffRecMC | HDR-PSNR | 24.43 | | Unverified |
| 3 | InvRender | HDR-PSNR | 23.76 | | Unverified |
| 4 | NeRFactor | HDR-PSNR | 23.54 | | Unverified |
| 5 | NeRD | HDR-PSNR | 23.29 | | Unverified |
| 6 | NVDiffRec | HDR-PSNR | 22.91 | | Unverified |
| 7 | PhySG | HDR-PSNR | 21.81 | | Unverified |
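HDR-PSNR here presumably denotes peak signal-to-noise ratio computed on HDR renderings; the exact peak value and any tone mapping depend on the benchmark protocol, which the table does not specify. The underlying metric is standard PSNR, sketched below with an assumed peak of 1.0:

```python
import numpy as np

def psnr(pred, target, peak=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE).
    For HDR data the choice of `peak` (or prior tone mapping) is a
    benchmark-specific convention, assumed here rather than known."""
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

target = np.linspace(0.0, 1.0, 16)
pred = target + 0.01  # uniform error of 0.01 per element
print(round(psnr(pred, target), 2))  # → 40.0
```

Higher is better, so the table is sorted from the strongest claimed reconstruction (Neural-PBIR) downward.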