SOTAVerified

Inverse Rendering

Inverse rendering is the task of recovering the properties of a scene, such as shape, material, and lighting, from an image or a video. Once these properties are recovered, they can be used to re-render the scene or to synthesize new images under novel viewpoints or lighting conditions.
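The idea can be illustrated with a minimal, self-contained sketch (not taken from any paper listed on this page): given an image produced by a simple Lambertian forward model with known surface normals, recover the per-pixel albedo (material) and the light direction by gradient descent on the photometric reconstruction error. The function names and the toy shading model are illustrative assumptions, not an API of any listed method.

```python
import numpy as np

def render(albedo, light, normals):
    """Lambertian forward model: image = albedo * max(0, normal . light)."""
    shading = np.clip(normals @ light, 0.0, None)
    return albedo * shading

def inverse_render(image, normals, steps=2000, lr=0.1):
    """Toy inverse rendering: recover per-pixel albedo and a unit light
    direction by gradient descent on the photometric error."""
    albedo = np.full(image.shape, 0.5)      # flat initial albedo guess
    light = normals.mean(axis=0)            # init toward the visible hemisphere
    light /= np.linalg.norm(light)
    for _ in range(steps):
        dots = normals @ light
        shading = np.clip(dots, 0.0, None)
        residual = albedo * shading - image  # photometric error
        # Analytic gradients of 0.5 * ||residual||^2
        grad_albedo = residual * shading
        mask = (dots > 0).astype(float)      # clip() kills gradient where dark
        grad_light = normals.T @ (residual * albedo * mask) / len(image)
        albedo -= lr * grad_albedo
        light -= lr * grad_light
        light /= np.linalg.norm(light)       # keep the light a unit vector
    return albedo, light
```

Real systems in the list below replace this toy shading model with a differentiable renderer over full BRDFs, geometry, and environment lighting, but the optimization structure (render, compare, backpropagate) is the same.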

Papers

Showing 11–20 of 271 papers

| Title | Status | Hype |
| --- | --- | --- |
| Modular Primitives for High-Performance Differentiable Rendering | Code | 2 |
| SVG-IR: Spatially-Varying Gaussian Splatting for Inverse Rendering | Code | 1 |
| TensoFlow: Tensorial Flow-based Sampler for Inverse Rendering | Code | 1 |
| Materialist: Physically Based Editing Using Single-Image Inverse Rendering | Code | 1 |
| PBR-NeRF: Inverse Rendering with Physics-Based Neural Fields | Code | 1 |
| Triplet: Triangle Patchlet for Mesh-Based Inverse Rendering and Scene Parameters Approximation | Code | 1 |
| A General Albedo Recovery Approach for Aerial Photogrammetric Images through Inverse Rendering | Code | 1 |
| MAIR++: Improving Multi-view Attention Inverse Rendering with Implicit Lighting Representation | Code | 1 |
| PIR: Photometric Inverse Rendering with Shading Cues Modeling and Surface Reflectance Regularization | Code | 1 |
| Dynamic Scene Understanding through Object-Centric Voxelization and Neural Rendering | Code | 1 |
Page 2 of 28

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Neural-PBIR | HDR-PSNR | 26.01 | | Unverified |
| 2 | NVDiffRecMC | HDR-PSNR | 24.43 | | Unverified |
| 3 | InvRender | HDR-PSNR | 23.76 | | Unverified |
| 4 | NeRFactor | HDR-PSNR | 23.54 | | Unverified |
| 5 | NeRD | HDR-PSNR | 23.29 | | Unverified |
| 6 | NVDiffRec | HDR-PSNR | 22.91 | | Unverified |
| 7 | PhySG | HDR-PSNR | 21.81 | | Unverified |