SOTAVerified

Inverse Rendering

Inverse rendering is the task of recovering the properties of a scene, such as shape, material, and lighting, from an image or a video. Once recovered, these properties can be re-rendered to synthesize new images or videos of the scene.
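At its core, inverse rendering is analysis by synthesis: guess scene parameters, render them with a differentiable forward model, and minimize the photometric error against the observation. The sketch below is a toy illustration of that loop, not any particular paper's method; the one-multiply `render` forward model and all parameter values are invented for illustration.

```python
# Toy analysis-by-synthesis sketch of inverse rendering. The forward model
# (pixel = albedo * light) is a stand-in for a real differentiable renderer.
import numpy as np

def render(albedo, light):
    # Hypothetical forward model: Lambertian-style shading, one multiply.
    return albedo * light

true_albedo, true_light = 0.7, 2.0
observed = render(true_albedo, true_light)  # the "photograph"

albedo, light = 0.5, 1.0  # initial guess
lr = 0.05
for _ in range(2000):
    err = render(albedo, light) - observed  # photometric residual
    # Analytic gradients of 0.5 * err**2 w.r.t. each parameter.
    albedo -= lr * err * light
    light  -= lr * err * albedo

assert abs(render(albedo, light) - observed) < 1e-6
```

Note the ambiguity: only the product `albedo * light` is constrained by the observation, so the recovered factors need not match the true ones individually. This ill-posedness (material vs. lighting) is exactly why real inverse-rendering systems add priors and physics-based constraints.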

Papers

Showing 81–90 of 271 papers

| Title | Status | Hype |
|---|---|---|
| Deep Face Feature for Face Alignment | | 0 |
| Dr.Hair: Reconstructing Scalp-Connected Hair Strands without Pre-training via Differentiable Rendering of Line Segments | | 0 |
| Dressi: A Hardware-Agnostic Differentiable Renderer with Reactive Shader Packing and Soft Rasterization | | 0 |
| Deep Direct Volume Rendering: Learning Visual Feature Mappings From Exemplary Images | | 0 |
| Physics-based Indirect Illumination for Inverse Rendering | | 0 |
| Digital Twin Catalog: A Large-Scale Photorealistic 3D Object Digital Twin Dataset | | 0 |
| DiffusionRenderer: Neural Inverse and Forward Rendering with Video Diffusion Models | | 0 |
| A Bayesian Inference Framework for Procedural Material Parameter Estimation | | 0 |
| Diffusion Reflectance Map: Single-Image Stochastic Inverse Rendering of Illumination and Reflectance | | 0 |
Page 9 of 28

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Neural-PBIR | HDR-PSNR | 26.01 | | Unverified |
| 2 | NVDiffRecMC | HDR-PSNR | 24.43 | | Unverified |
| 3 | InvRender | HDR-PSNR | 23.76 | | Unverified |
| 4 | NeRFactor | HDR-PSNR | 23.54 | | Unverified |
| 5 | NeRD | HDR-PSNR | 23.29 | | Unverified |
| 6 | NVDiffRec | HDR-PSNR | 22.91 | | Unverified |
| 7 | PhySG | HDR-PSNR | 21.81 | | Unverified |