SOTAVerified

Inverse Rendering

Inverse rendering is the task of recovering the properties of a scene, such as shape, materials, and lighting, from an image or a video. Once these properties are estimated, they can be used to synthesize new images or videos of the scene, for example under novel lighting or from novel viewpoints.
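At its core, inverse rendering is an analysis-by-synthesis problem: fit scene parameters so that a forward rendering model reproduces the observation. The toy sketch below (a hypothetical setup, not any specific paper's method) recovers an unknown albedo under a Lambertian shading model by gradient descent on the photometric error.

```python
import numpy as np

# Toy analysis-by-synthesis sketch: the forward model renders per-pixel
# intensity as albedo * max(0, n . l); we recover the unknown albedo by
# gradient descent on the squared photometric error.

rng = np.random.default_rng(0)

# Known scene: unit normals for 100 "pixels" and a fixed light direction.
normals = rng.normal(size=(100, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
light = np.array([0.0, 0.0, 1.0])
shading = np.maximum(normals @ light, 0.0)  # clamped cosine term

true_albedo = 0.7
observed = true_albedo * shading  # the "image" we try to explain

# Minimize E(a) = mean((a * shading - observed)^2) over the scalar albedo a.
albedo = 0.1
lr = 0.5
for _ in range(200):
    residual = albedo * shading - observed
    grad = 2.0 * np.mean(residual * shading)
    albedo -= lr * grad

print(f"recovered albedo: {albedo:.4f}")  # converges toward 0.7
```

Real inverse renderers follow the same pattern, but optimize millions of parameters (geometry, spatially varying BRDFs, environment lighting) through a differentiable renderer instead of a closed-form shading model.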

Papers

Showing 201–210 of 271 papers

| Title | Status | Hype |
| --- | --- | --- |
| Modeling Clothing as a Separate Layer for an Animatable Human Avatar | | 0 |
| Factored-NeuS: Reconstructing Surfaces, Illumination, and Materials of Possibly Glossy Objects | | 0 |
| Flash Cache: Reducing Bias in Radiance Cache Based Inverse Rendering | | 0 |
| Flash-Splat: 3D Reflection Removal with Flash Cues and Gaussian Splats | | 0 |
| G3FA: Geometry-guided GAN for Face Animation | | 0 |
| GAN2X: Non-Lambertian Inverse Rendering of Image GANs | | 0 |
| GaNI: Global and Near Field Illumination Aware Neural Inverse Rendering | | 0 |
| Generative Detail Enhancement for Physically Based Materials | | 0 |
| GenLit: Reformulating Single-Image Relighting as Video Generation | | 0 |
| GeoSplatting: Towards Geometry Guided Gaussian Splatting for Physically-based Inverse Rendering | | 0 |
Page 21 of 28

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Neural-PBIR | HDR-PSNR | 26.01 | | Unverified |
| 2 | NVDiffRecMC | HDR-PSNR | 24.43 | | Unverified |
| 3 | InvRender | HDR-PSNR | 23.76 | | Unverified |
| 4 | NeRFactor | HDR-PSNR | 23.54 | | Unverified |
| 5 | NeRD | HDR-PSNR | 23.29 | | Unverified |
| 6 | NVDiffRec | HDR-PSNR | 22.91 | | Unverified |
| 7 | PhySG | HDR-PSNR | 21.81 | | Unverified |
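The benchmark metric above is PSNR computed on HDR outputs. The exact HDR-PSNR protocol (tone mapping, choice of peak value, exposure alignment) varies between benchmarks, so the sketch below is an illustrative assumption of the standard formula, not the definition used in this table.

```python
import numpy as np

def psnr(pred: np.ndarray, target: np.ndarray, peak: float) -> float:
    """PSNR in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(peak * peak / mse)

# Example: two linear-radiance HDR maps whose values can exceed 1.0;
# here the peak is taken from the reference image (one common convention).
rng = np.random.default_rng(0)
target = rng.uniform(0.0, 4.0, size=(64, 64, 3))
pred = target + rng.normal(scale=0.05, size=target.shape)
print(f"HDR-PSNR: {psnr(pred, target, peak=target.max()):.2f} dB")
```

Higher is better: each +3 dB corresponds to roughly halving the mean squared error at a fixed peak.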