SOTAVerified

MobileNeRF: Exploiting the Polygon Rasterization Pipeline for Efficient Neural Field Rendering on Mobile Architectures

2022-07-30 · CVPR 2023 · Code Available

Zhiqin Chen, Thomas Funkhouser, Peter Hedman, Andrea Tagliasacchi

Abstract

Neural Radiance Fields (NeRFs) have demonstrated amazing ability to synthesize images of 3D scenes from novel views. However, they rely upon specialized volumetric rendering algorithms based on ray marching that are mismatched to the capabilities of widely deployed graphics hardware. This paper introduces a new NeRF representation based on textured polygons that can synthesize novel images efficiently with standard rendering pipelines. The NeRF is represented as a set of polygons with textures representing binary opacities and feature vectors. Traditional rendering of the polygons with a z-buffer yields an image with features at every pixel, which are interpreted by a small, view-dependent MLP running in a fragment shader to produce a final pixel color. This approach enables NeRFs to be rendered with the traditional polygon rasterization pipeline, which provides massive pixel-level parallelism, achieving interactive frame rates on a wide range of compute platforms, including mobile phones.
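The deferred-shading step described above can be illustrated with a minimal NumPy sketch: rasterization produces a feature image, and a small view-dependent MLP maps each pixel's feature vector plus view direction to a color. The layer widths and random weights below are illustrative assumptions, not the paper's trained network; in the actual system this MLP runs inside a GLSL fragment shader.

```python
import numpy as np

# Illustrative sizes (assumed, not the paper's exact configuration).
FEATURE_DIM = 8   # channels in the rasterized feature image
HIDDEN_DIM = 16   # hidden width of the tiny deferred-shading MLP

rng = np.random.default_rng(0)
# Random weights stand in for the trained MLP parameters.
W1 = rng.normal(size=(FEATURE_DIM + 3, HIDDEN_DIM)) * 0.1
b1 = np.zeros(HIDDEN_DIM)
W2 = rng.normal(size=(HIDDEN_DIM, 3)) * 0.1
b2 = np.zeros(3)

def deferred_shade(features, view_dirs):
    """Map per-pixel features and view directions to RGB.

    features:  (H, W, FEATURE_DIM) image produced by z-buffered
               rasterization of the textured polygons.
    view_dirs: (H, W, 3) unit view direction per pixel.
    Returns an (H, W, 3) image with colors in [0, 1].
    """
    x = np.concatenate([features, view_dirs], axis=-1)
    h = np.maximum(x @ W1 + b1, 0.0)              # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid -> valid color

# Toy 4x4 "frame": random features, a constant forward view direction.
H, W = 4, 4
features = rng.normal(size=(H, W, FEATURE_DIM))
dirs = np.tile(np.array([0.0, 0.0, 1.0]), (H, W, 1))
rgb = deferred_shade(features, dirs)
```

Because the MLP is applied independently per pixel, it maps directly onto the fragment-shader stage of the rasterization pipeline, which is what gives the method its pixel-level parallelism.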

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| LLFF | MobileNeRF | PSNR | 25.91 | — | Unverified |
| LLFF | NeRF | PSNR | 26.50 | — | Unverified |
| LLFF | JAXNeRF | PSNR | 26.92 | — | Unverified |
| LLFF | SNeRG | PSNR | 25.63 | — | Unverified |
| Mip-NeRF 360 | NeRF++ | LPIPS | 0.43 | — | Unverified |
| Mip-NeRF 360 | MobileNeRF | LPIPS | 0.47 | — | Unverified |
| Mip-NeRF 360 | NeRF | LPIPS | 0.52 | — | Unverified |
| NeRF | SNeRG | PSNR | 30.38 | — | Unverified |
| NeRF | MobileNeRF | PSNR | 30.90 | — | Unverified |
| NeRF | NeRF | PSNR | 31.00 | — | Unverified |
| NeRF | JAXNeRF | PSNR | 31.65 | — | Unverified |
