SOTAVerified

Inverse Rendering

Inverse Rendering is the task of recovering the properties of a scene, such as shape, materials, and lighting, from an image or a video. Once recovered, these properties can be used to synthesize new images of the scene, for example from novel viewpoints or under novel lighting.
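In practice, many of the papers listed below cast inverse rendering as optimization through a differentiable forward renderer: render with current parameter estimates, compare to the observed image, and follow the gradient. A minimal sketch of that loop, assuming a toy single-pixel Lambertian forward model with a known light and an unknown albedo (all names and values here are illustrative, not from any specific paper):

```python
import numpy as np

# Toy differentiable-rendering loop: recover an unknown albedo by
# gradient descent on a photometric loss against the observation.
normal = np.array([0.0, 0.0, 1.0])       # known surface normal
light_dir = np.array([0.0, 0.6, 0.8])    # known unit light direction
true_albedo = 0.7                        # ground truth we hope to recover

def render(albedo):
    # Lambertian shading: intensity = albedo * max(0, n . l)
    return albedo * max(0.0, normal @ light_dir)

observed = render(true_albedo)           # the "image" we invert

albedo = 0.1                             # initial guess
lr = 0.5
for _ in range(200):
    pred = render(albedo)
    # analytic gradient of loss = (pred - observed)^2 w.r.t. albedo
    grad = 2.0 * (pred - observed) * max(0.0, normal @ light_dir)
    albedo -= lr * grad

print(round(albedo, 3))  # -> 0.7
```

Real systems replace the single pixel with a full differentiable renderer and optimize geometry, spatially varying materials, and lighting jointly, but the estimate-render-compare-update structure is the same.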

Papers

Showing 51–75 of 271 papers

Title | Status | Hype
A Morphable Face Albedo Model | Code | 1
Sobolev Training for Implicit Neural Representations with Approximated Image Derivatives | Code | 1
Dynamic Scene Understanding through Object-Centric Voxelization and Neural Rendering | Code | 1
Spelunking the Deep: Guaranteed Queries on General Neural Implicit Surfaces via Range Analysis | Code | 1
Efficient Meshy Neural Fields for Animatable Human Avatars | Code | 1
Advances in Neural Rendering | Code | 1
TensoFlow: Tensorial Flow-based Sampler for Inverse Rendering | Code | 1
IntrinsicAvatar: Physically Based Inverse Rendering of Dynamic Humans from Monocular Videos via Explicit Ray Tracing | Code | 1
Multiview Textured Mesh Recovery by Differentiable Rendering | Code | 1
WildLight: In-the-wild Inverse Rendering with a Flashlight | Code | 1
High-Quality Mesh Blendshape Generation from Face Videos via Neural Inverse Rendering | Code | 1
MAIR++: Improving Multi-view Attention Inverse Rendering with Implicit Lighting Representation | Code | 1
IBL-NeRF: Image-Based Lighting Formulation of Neural Radiance Fields | Code | 1
Learning Inverse Rendering of Faces from Real-world Videos | Code | 1
NeRD: Neural Reflectance Decomposition from Image Collections | Code | 1
Self-calibrating Photometric Stereo by Neural Inverse Rendering | Code | 1
IntrinsicNeRF: Learning Intrinsic Neural Radiance Fields for Editable Novel View Synthesis | Code | 1
Epi-NAF: Enhancing Neural Attenuation Fields for Limited-Angle CT with Epipolar Consistency Conditions | – | 0
Environment Maps Editing using Inverse Rendering and Adversarial Implicit Functions | – | 0
A Theory of Topological Derivatives for Inverse Rendering of Geometry | – | 0
End-to-end 3D shape inverse rendering of different classes of objects from a single input image | – | 0
Deep Polarization Cues for Single-shot Shape and Subsurface Scattering Estimation | – | 0
A Simple Approach to Differentiable Rendering of SDFs | – | 0
Efficient Perspective-Correct 3D Gaussian Splatting Using Hybrid Transparency | – | 0
Efficient multi-view training for 3D Gaussian Splatting | – | 0
Page 3 of 11

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Neural-PBIR | HDR-PSNR | 26.01 | – | Unverified
2 | NVDiffRecMC | HDR-PSNR | 24.43 | – | Unverified
3 | InvRender | HDR-PSNR | 23.76 | – | Unverified
4 | NeRFactor | HDR-PSNR | 23.54 | – | Unverified
5 | NeRD | HDR-PSNR | 23.29 | – | Unverified
6 | NVDiffRec | HDR-PSNR | 22.91 | – | Unverified
7 | PhySG | HDR-PSNR | 21.81 | – | Unverified
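The benchmark above reports HDR-PSNR, i.e. peak signal-to-noise ratio computed on linear high-dynamic-range radiance values rather than 8-bit pixels. A minimal sketch of the computation, assuming the peak is taken from the reference image (the exact peak/normalization convention used by this benchmark is an assumption; implementations may normalize or tone-map differently):

```python
import numpy as np

def hdr_psnr(pred, target, peak=None):
    """PSNR in dB on linear HDR values.

    HDR images have no fixed peak like 255 or 1.0, so we default the peak
    to the reference image's maximum (an assumption, not a standard).
    """
    pred = np.asarray(pred, dtype=np.float64)
    target = np.asarray(target, dtype=np.float64)
    mse = np.mean((pred - target) ** 2)
    if mse == 0.0:
        return float("inf")
    if peak is None:
        peak = target.max()
    return 10.0 * np.log10(peak ** 2 / mse)

# Illustrative synthetic HDR pair (values exceed 1.0, as HDR radiance can)
target = np.array([[0.5, 2.0], [4.0, 8.0]])
pred = target + 0.01
print(round(hdr_psnr(pred, target), 2))  # -> 58.06
```

Because the peak depends on the reference image's dynamic range, HDR-PSNR values are only comparable across methods when every method is evaluated against the same references with the same convention.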