
Neural Rendering

Given some representation of a 3D scene (point cloud, mesh, voxels, etc.), the task is to design an algorithm that produces photorealistic renderings of the scene from arbitrary viewpoints. The task is sometimes accompanied by image or scene appearance manipulation.
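As a point of reference for what the papers below improve on, here is a minimal sketch of the classical (non-neural) baseline: projecting a colored point cloud through a pinhole camera with a z-buffer, so the scene can be viewed from any camera pose. Neural rendering methods replace or augment steps of this pipeline with learned components; the function name and one-pixel splatting here are illustrative, not from any listed paper.

```python
import numpy as np

def render_point_cloud(points, colors, K, R, t, hw=(120, 160)):
    """Project an (N, 3) world-space colored point cloud into an image.

    points: (N, 3) XYZ; colors: (N, 3) RGB in [0, 1];
    K: (3, 3) camera intrinsics; R, t: world-to-camera rotation and translation.
    Naive one-pixel splats; the nearest point wins each pixel (z-buffer).
    """
    h, w = hw
    cam = points @ R.T + t                 # world frame -> camera frame
    in_front = cam[:, 2] > 1e-6            # keep only points ahead of the camera
    cam, cols = cam[in_front], colors[in_front]
    proj = cam @ K.T                       # apply pinhole intrinsics
    uv = (proj[:, :2] / proj[:, 2:3]).round().astype(int)  # perspective divide
    image = np.zeros((h, w, 3))
    zbuf = np.full((h, w), np.inf)
    for (u, v), z, c in zip(uv, cam[:, 2], cols):
        if 0 <= v < h and 0 <= u < w and z < zbuf[v, u]:
            zbuf[v, u] = z                 # depth test: nearest point wins
            image[v, u] = c
    return image
```

For example, a red point 2 m in front of an identity-pose camera with intrinsics `K = [[100, 0, 80], [0, 100, 60], [0, 0, 1]]` lands at pixel (60, 80). Point-based methods in the list (e.g. ADOP, Gaussian splatting) can be read as differentiable, learned refinements of exactly this projection-and-splat loop.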

Papers

Showing 41–50 of 514 papers

Title | Status | Hype
GSGAN: Adversarial Learning for Hierarchical Generation of 3D Gaussian Splats | Code | 2
Common Objects in 3D: Large-Scale Learning and Evaluation of Real-life 3D Category Reconstruction | Code | 2
BAD-NeRF: Bundle Adjusted Deblur Neural Radiance Fields | Code | 2
Multi-View Mesh Reconstruction with Neural Deferred Shading | Code | 2
GSplatLoc: Grounding Keypoint Descriptors into 3D Gaussian Splatting for Improved Visual Localization | Code | 2
HF-NeuS: Improved Surface Reconstruction Using High-Frequency Details | Code | 2
Compressing Volumetric Radiance Fields to 1 MB | Code | 2
ADOP: Approximate Differentiable One-Pixel Point Rendering | Code | 2
GIRAFFE: Representing Scenes as Compositional Generative Neural Feature Fields | Code | 2
DEGAS: Detailed Expressions on Full-Body Gaussian Avatars | Code | 2
Page 5 of 52

No leaderboard results yet.