
Neural Rendering

Given some representation of a 3D scene (point cloud, mesh, voxel grid, etc.), the task is to design an algorithm that produces photorealistic renderings of the scene from an arbitrary viewpoint. The task is sometimes accompanied by image or scene appearance manipulation.
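Many of the methods listed below (NeRF variants, Gaussian splatting) share one core operation: integrating per-sample densities and colors along a camera ray into a pixel color. As a minimal, hedged sketch of that idea, here is the standard volume-rendering quadrature, C = Σᵢ Tᵢ(1 − exp(−σᵢδᵢ))cᵢ with transmittance Tᵢ = exp(−Σⱼ₍ⱼ<ᵢ₎ σⱼδⱼ); the function name and interface are illustrative, not taken from any specific paper's code.

```python
import math

def composite(sigmas, colors, deltas):
    """Alpha-composite samples along one ray (NeRF-style quadrature).

    sigmas: per-sample densities, colors: per-sample RGB triples,
    deltas: distances between adjacent samples along the ray.
    """
    rgb = [0.0, 0.0, 0.0]
    transmittance = 1.0  # fraction of light surviving to the current sample
    for sigma, color, delta in zip(sigmas, colors, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)   # opacity of this segment
        weight = transmittance * alpha           # contribution of this sample
        rgb = [c + weight * ch for c, ch in zip(rgb, color)]
        transmittance *= 1.0 - alpha             # attenuate for later samples
    return rgb
```

For example, a ray whose first sample is effectively opaque returns that sample's color, while a ray through empty space (all densities zero) returns black; neural rendering methods differ mainly in how the per-sample densities and colors are predicted and parameterized.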

Papers

Showing 231–240 of 514 papers

Title | Status | Hype
MeshGS: Adaptive Mesh-Aligned Gaussian Splatting for High-Quality Rendering | — | 0
UW-SDF: Exploiting Hybrid Geometric Priors for Neural SDF Reconstruction from Underwater Multi-view Monocular Images | — | 0
CUBE360: Learning Cubic Field Representation for Monocular 360 Depth Estimation for Virtual Reality | — | 0
Mode-GS: Monocular Depth Guided Anchored 3D Gaussian Splatting for Robust Ground-View Scene Rendering | — | 0
GMT: Enhancing Generalizable Neural Rendering via Geometry-Driven Multi-Reference Texture Transfer | Code | 0
OPONeRF: One-Point-One NeRF for Robust Neural Rendering | Code | 0
G3R: Gradient Guided Generalizable Reconstruction | — | 0
UniCal: Unified Neural Sensor Calibration | — | 0
DeformStream: Deformation-based Adaptive Volumetric Video Streaming | — | 0
AIM 2024 Sparse Neural Rendering Challenge: Dataset and Benchmark | — | 0
Page 24 of 52

No leaderboard results yet.