SOTAVerified

Inverse Rendering

Inverse rendering is the task of recovering the properties of a scene, such as shape, material, and lighting, from an image or a video. Once these properties are estimated, they can be used to re-render the scene, for example from new viewpoints or under new illumination.
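In its simplest form, inverse rendering can be framed as optimizing scene parameters so that a differentiable forward renderer reproduces the observation. The toy sketch below illustrates this idea with an invented one-pixel-per-channel Lambertian model (`render`, `inverse_render`, and the albedo/light setup are illustrative assumptions, not any paper's method):

```python
import numpy as np

def render(albedo, light):
    # Toy forward model: pixel intensity = albedo * light
    # (Lambertian reflection, known lighting, no geometry).
    return albedo * light

def inverse_render(observed, light, steps=200, lr=0.1):
    # Recover albedo by gradient descent on the photometric loss
    # sum((render(albedo) - observed)^2).
    albedo = np.zeros_like(observed)
    for _ in range(steps):
        pred = render(albedo, light)
        grad = 2.0 * (pred - observed) * light  # analytic gradient w.r.t. albedo
        albedo -= lr * grad
    return albedo

# Ground-truth scene and its "observation"
true_albedo = np.array([0.2, 0.5, 0.8])
light = 1.5
observed = render(true_albedo, light)

recovered = inverse_render(observed, light)
```

Real systems replace the toy forward model with a physically based differentiable renderer and optimize shape, materials, and lighting jointly, which makes the problem far more ill-posed than this sketch suggests.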

Papers

Showing 191–200 of 271 papers

| Title | Status | Hype |
|---|---|---|
| Learning Object-Centric Neural Scattering Functions for Free-Viewpoint Relighting and Scene Composition | | 0 |
| Optimization-Based Eye Tracking using Deflectometric Information | | 0 |
| Makeup Extraction of 3D Representation via Illumination-Aware Image Decomposition | | 0 |
| MEGANE: Morphable Eyeglass and Avatar Network | | 0 |
| Face Inverse Rendering via Hierarchical Decoupling | Code | 0 |
| ReNeRF: Relightable Neural Radiance Fields with Nearfield Lighting | | 0 |
| Neural Fields Meet Explicit Geometric Representations for Inverse Rendering of Urban Scenes | | 0 |
| Polarimetric Multi-View Inverse Rendering | | 0 |
| Physics-based Indirect Illumination for Inverse Rendering | | 0 |
| SupeRVol: Super-Resolution Shape and Reflectance Estimation in Inverse Volume Rendering | | 0 |
Page 20 of 28

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Neural-PBIR | HDR-PSNR | 26.01 | – | Unverified |
| 2 | NVDiffRecMC | HDR-PSNR | 24.43 | – | Unverified |
| 3 | InvRender | HDR-PSNR | 23.76 | – | Unverified |
| 4 | NeRFactor | HDR-PSNR | 23.54 | – | Unverified |
| 5 | NeRD | HDR-PSNR | 23.29 | – | Unverified |
| 6 | NVDiffRec | HDR-PSNR | 22.91 | – | Unverified |
| 7 | PhySG | HDR-PSNR | 21.81 | – | Unverified |
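The claimed numbers above are HDR-PSNR scores in dB. As a minimal sketch of how PSNR is computed (HDR evaluations typically tone-map the radiance images or use a dataset-specific peak value first; the `peak=1.0` default and the synthetic data below are assumptions for illustration):

```python
import numpy as np

def psnr(pred, target, peak=1.0):
    # Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE).
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Two small synthetic "renderings" differing by a constant 0.01 offset
target = np.linspace(0.0, 1.0, 16)
pred = target + 0.01

score = psnr(pred, target)  # MSE = 1e-4, so PSNR = 40 dB
```

Higher is better: a one-point gap in the table (e.g. 26.01 vs. 24.43) corresponds to a noticeably lower reconstruction error, since the scale is logarithmic.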