SOTAVerified

Inverse Rendering

Inverse rendering is the task of recovering the properties of a scene, such as shape, material, and lighting, from an image or a video. Once recovered, these properties can be used to synthesize new images or videos of the scene, for example under novel viewpoints or lighting.
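The recovery step is typically posed as analysis-by-synthesis: render the current scene estimate, compare it to the observation, and update the scene properties to reduce the error. A minimal, illustrative sketch (all names and the Lambertian forward model are assumptions for this toy example, not any particular paper's method):

```python
import numpy as np

# Toy inverse rendering by analysis-by-synthesis: recover a per-pixel albedo
# map and a scalar light intensity from one observed image, assuming a known
# Lambertian shading term. Purely illustrative; real systems optimize shape,
# materials, and lighting jointly through a differentiable renderer.

rng = np.random.default_rng(0)
H, W = 8, 8

# Known geometry term: cosine between surface normal and light direction.
shading = rng.uniform(0.2, 1.0, size=(H, W))

# Ground-truth scene properties we pretend not to know.
true_albedo = rng.uniform(0.0, 1.0, size=(H, W))
true_light = 1.5
observed = true_albedo * shading * true_light  # forward render

# Unknowns to recover (note: albedo and light are only recoverable up to a
# global scale ambiguity; their product is what the image constrains).
albedo = np.full((H, W), 0.5)
light = 1.0
lr = 0.2

for _ in range(2000):
    rendered = albedo * shading * light
    residual = rendered - observed
    # Gradients of the per-pixel squared error 0.5 * residual**2.
    albedo -= lr * residual * shading * light
    light -= lr * np.mean(residual * albedo * shading)

loss = float(np.mean((albedo * shading * light - observed) ** 2))
print(f"final reconstruction loss: {loss:.2e}")
```

The loop drives the rendered image toward the observation; the remaining ambiguity between albedo scale and light intensity is a standard inverse rendering issue usually resolved with priors or calibration.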

Papers

Showing 141–150 of 271 papers

| Title | Status | Hype |
|---|---|---|
| Radar Fields: An Extension of Radiance Fields to SAR |  | 0 |
| Random Weight Factorization Improves the Training of Continuous Neural Representations |  | 0 |
| Refining 6D Object Pose Predictions using Abstract Render-and-Compare |  | 0 |
| Relightable and Animatable Neural Avatar from Sparse-View Video |  | 0 |
| Relighting Humans: Occlusion-Aware Inverse Rendering for Full-Body Human Images |  | 0 |
| RelitLRM: Generative Relightable Radiance for Large Reconstruction Models |  | 0 |
| ReNeRF: Relightable Neural Radiance Fields with Nearfield Lighting |  | 0 |
| RGB↔X: Image decomposition and synthesis using material- and lighting-aware diffusion models |  | 0 |
| RGS-DR: Reflective Gaussian Surfels with Deferred Rendering for Shiny Objects |  | 0 |
| RISE-SDF: a Relightable Information-Shared Signed Distance Field for Glossy Object Inverse Rendering |  | 0 |
Page 15 of 28

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Neural-PBIR | HDR-PSNR | 26.01 |  | Unverified |
| 2 | NVDiffRecMC | HDR-PSNR | 24.43 |  | Unverified |
| 3 | InvRender | HDR-PSNR | 23.76 |  | Unverified |
| 4 | NeRFactor | HDR-PSNR | 23.54 |  | Unverified |
| 5 | NeRD | HDR-PSNR | 23.29 |  | Unverified |
| 6 | NVDiffRec | HDR-PSNR | 22.91 |  | Unverified |
| 7 | PhySG | HDR-PSNR | 21.81 |  | Unverified |