SOTAVerified

Inverse Rendering

Inverse rendering is the task of recovering the properties of a scene, such as shape, material, and lighting, from an image or a video. Once recovered, these properties can be used to re-render the scene, for example to generate new images or videos under altered conditions.
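As a toy illustration of the idea (not the method of any paper listed below), the sketch below assumes a known Lambertian forward model and known lighting, and recovers a single unknown albedo value by gradient descent on the photometric error between rendered and observed pixels:

```python
import numpy as np

# Toy forward model: Lambertian shading, pixel = albedo * max(n.l, 0).
# "Inverse rendering" here means recovering the unknown albedo from observations.
rng = np.random.default_rng(0)

normals = rng.normal(size=(100, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
light = np.array([0.0, 0.0, 1.0])            # known light direction
shading = np.clip(normals @ light, 0.0, None)

true_albedo = 0.7
observed = true_albedo * shading             # synthetic "image"

# Recover the albedo by gradient descent on the mean squared pixel error.
albedo = 0.1
lr = 0.5
for _ in range(200):
    residual = albedo * shading - observed   # per-pixel rendering error
    grad = 2.0 * np.mean(residual * shading)
    albedo -= lr * grad

print(round(albedo, 3))  # converges to the true albedo, 0.7
```

Real inverse-rendering systems differ mainly in scale, not in kind: they optimize millions of shape, material, and lighting parameters through a differentiable renderer instead of one scalar through a closed-form shading model.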

Papers

Showing 241–250 of 271 papers

| Title | Status | Hype |
|---|---|---|
| Object-based Illumination Estimation with Rendering-aware Neural Networks | — | 0 |
| Single-Shot Neural Relighting and SVBRDF Estimation | — | 0 |
| OpenRooms: An End-to-End Open Framework for Photorealistic Indoor Scene Datasets | — | 0 |
| Polarimetric Multi-View Inverse Rendering | — | 0 |
| Q-NET: A Network for Low-Dimensional Integrals of Neural Proxies | — | 0 |
| Non-Line-of-Sight Surface Reconstruction Using the Directional Light-Cone Transform | — | 0 |
| Inverse Rendering Techniques for Physically Grounded Image Editing | — | 0 |
| A Bayesian Inference Framework for Procedural Material Parameter Estimation | — | 0 |
| Refining 6D Object Pose Predictions using Abstract Render-and-Compare | — | 0 |
| Deep Single-Image Portrait Relighting | Code | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Neural-PBIR | HDR-PSNR | 26.01 | — | Unverified |
| 2 | NVDiffRecMC | HDR-PSNR | 24.43 | — | Unverified |
| 3 | InvRender | HDR-PSNR | 23.76 | — | Unverified |
| 4 | NeRFactor | HDR-PSNR | 23.54 | — | Unverified |
| 5 | NeRD | HDR-PSNR | 23.29 | — | Unverified |
| 6 | NVDiffRec | HDR-PSNR | 22.91 | — | Unverified |
| 7 | PhySG | HDR-PSNR | 21.81 | — | Unverified |