SOTAVerified

Inverse Rendering

Inverse rendering is the task of recovering the properties of a scene, such as shape, material, and lighting, from an image or a video. Once recovered, these properties can be used to re-render the scene or to synthesize new images and videos under novel conditions.
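As a toy illustration (not drawn from any of the papers listed below), inverse rendering can be framed as gradient-based optimization through a differentiable forward model. The sketch below assumes a hypothetical Lambertian setup where the image is the product of an unknown per-pixel albedo and a known shading term, and recovers the albedo by gradient descent on the reconstruction error:

```python
import numpy as np

# Toy inverse rendering: recover per-pixel albedo from an observed image,
# assuming a known Lambertian shading term (a simplification for illustration).
rng = np.random.default_rng(0)
shading = rng.uniform(0.2, 1.0, size=(8, 8))     # known lighting/geometry term
true_albedo = rng.uniform(0.0, 1.0, size=(8, 8))
observed = true_albedo * shading                 # forward render: I = albedo * shading

# Gradient descent on the squared reconstruction error.
albedo = np.full((8, 8), 0.5)                    # initial guess
lr = 0.5
for _ in range(200):
    residual = albedo * shading - observed       # d(loss)/d(rendered image)
    albedo -= lr * residual * shading            # chain rule through the renderer
    albedo = np.clip(albedo, 0.0, 1.0)           # physical constraint on albedo

print(np.max(np.abs(albedo - true_albedo)))      # small after convergence
```

Real methods replace this toy forward model with a differentiable rasterizer or Monte Carlo ray tracer, which is exactly what several of the papers below address.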

Papers

Showing 171–180 of 271 papers

| Title | Status | Hype |
| --- | --- | --- |
| Learning to Rasterize Differentiably | Code | 0 |
| Multi-view Inverse Rendering for Large-scale Real-world Indoor Scenes | Code | 1 |
| Learning-based Inverse Rendering of Complex Indoor Scenes with Differentiable Monte Carlo Raytracing | — | 0 |
| IBL-NeRF: Image-Based Lighting Formulation of Neural Radiance Fields | Code | 1 |
| Random Weight Factorization Improves the Training of Continuous Neural Representations | — | 0 |
| IntrinsicNeRF: Learning Intrinsic Neural Radiance Fields for Editable Novel View Synthesis | Code | 1 |
| A General Scattering Phase Function for Inverse Rendering | — | 0 |
| Polarimetric Inverse Rendering for Transparent Shapes Reconstruction | Code | 1 |
| PRIF: Primary Ray-based Implicit Function | — | 0 |
| Deep Uncalibrated Photometric Stereo via Inter-Intra Image Feature Fusion | — | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Neural-PBIR | HDR-PSNR | 26.01 | — | Unverified |
| 2 | NVDiffRecMC | HDR-PSNR | 24.43 | — | Unverified |
| 3 | InvRender | HDR-PSNR | 23.76 | — | Unverified |
| 4 | NeRFactor | HDR-PSNR | 23.54 | — | Unverified |
| 5 | NeRD | HDR-PSNR | 23.29 | — | Unverified |
| 6 | NVDiffRec | HDR-PSNR | 22.91 | — | Unverified |
| 7 | PhySG | HDR-PSNR | 21.81 | — | Unverified |
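The benchmark metric above is PSNR computed on HDR renderings. As a reference for how the metric itself is defined, here is a minimal PSNR implementation; note that for HDR images the peak value is not fixed at 1.0, and the exact convention (tone-mapping, peak choice) varies by benchmark, so the `peak` parameter here is an assumption:

```python
import numpy as np

def psnr(pred, ref, peak=1.0):
    """Peak signal-to-noise ratio in dB; `peak` is the assumed signal maximum."""
    mse = np.mean((pred - ref) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# A reconstruction with a uniform 0.01 error against a [0, 1] reference.
ref = np.linspace(0.0, 1.0, 256).reshape(16, 16)
noisy = ref + 0.01
print(round(psnr(noisy, ref), 2))  # -> 40.0
```

Higher is better: each factor-of-10 reduction in MSE adds 10 dB, so the roughly 4 dB gap between the top and bottom entries above corresponds to about a 2.6x difference in mean squared error.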