Instant Neural Graphics Primitives with a Multiresolution Hash Encoding
Thomas Müller, Alex Evans, Christoph Schied, Alexander Keller
Code
- github.com/nvlabs/instant-ngp (official, in paper; pytorch) — ★ 17,329
- github.com/nvlabs/tiny-cuda-nn (official, in paper; pytorch) — ★ 4,445
- github.com/nerfstudio-project/nerfstudio (jax) — ★ 11,344
- github.com/ashawkey/torch-ngp (pytorch) — ★ 2,211
- github.com/nvidiagameworks/kaolin-wisp (pytorch) — ★ 1,496
- github.com/kair-bair/nerfacc (pytorch) — ★ 1,459
- github.com/kwea123/ngp_pl (pytorch) — ★ 1,290
- github.com/yashbhalgat/HashNeRF-pytorch (pytorch) — ★ 1,035
- github.com/Jittor/JNeRF — ★ 646
- github.com/bycloudai/instant-ngp-windows — ★ 502
Abstract
Neural graphics primitives, parameterized by fully connected neural networks, can be costly to train and evaluate. We reduce this cost with a versatile new input encoding that permits the use of a smaller network without sacrificing quality, thus significantly reducing the number of floating point and memory access operations: a small neural network is augmented by a multiresolution hash table of trainable feature vectors whose values are optimized through stochastic gradient descent. The multiresolution structure allows the network to disambiguate hash collisions, making for a simple architecture that is trivial to parallelize on modern GPUs. We leverage this parallelism by implementing the whole system using fully-fused CUDA kernels with a focus on minimizing wasted bandwidth and compute operations. We achieve a combined speedup of several orders of magnitude, enabling training of high-quality neural graphics primitives in a matter of seconds, and rendering in tens of milliseconds at a resolution of 1920×1080.
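To make the abstract's core idea concrete, here is a minimal NumPy sketch of a multiresolution hash encoding: each resolution level owns a small table of trainable feature vectors, grid-cell corners are mapped into the table by a spatial hash, and the interpolated per-level features are concatenated before being fed to a small MLP. Hyperparameter names (`num_levels`, `table_size`, `base_res`, `growth`) and the hash primes are illustrative conventions, not the paper's exact implementation, and the training loop (SGD on the tables) is omitted.

```python
import numpy as np

# Illustrative spatial-hash primes (one per input dimension); the actual
# choice in a real implementation may differ.
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

class MultiresHashEncoding:
    """Sketch of a multiresolution hash encoding for points in [0, 1]^dim."""

    def __init__(self, num_levels=4, features_per_entry=2,
                 table_size=2**14, base_res=16, growth=1.5, dim=3, seed=0):
        rng = np.random.default_rng(seed)
        self.dim = dim
        self.table_size = table_size
        # Grid resolution grows geometrically from coarse to fine.
        self.resolutions = [int(base_res * growth**l) for l in range(num_levels)]
        # One trainable feature table per level, initialized near zero;
        # in the full system these entries are optimized by SGD.
        self.tables = [rng.uniform(-1e-4, 1e-4, (table_size, features_per_entry))
                       for _ in range(num_levels)]

    def _hash(self, coords):
        # Spatial hash: XOR of (coordinate * prime) per axis, modulo table size.
        h = np.zeros(coords.shape[:-1], dtype=np.uint64)
        for i in range(self.dim):
            h ^= coords[..., i].astype(np.uint64) * PRIMES[i]
        return h % np.uint64(self.table_size)

    def encode(self, x):
        # x: (batch, dim) points in [0, 1]^dim.
        feats = []
        for res, table in zip(self.resolutions, self.tables):
            scaled = x * res
            lo = np.floor(scaled).astype(np.int64)   # lower grid corner
            frac = scaled - lo                       # position inside the cell
            acc = np.zeros((x.shape[0], table.shape[1]))
            # d-linear interpolation over the 2^dim corners of the cell.
            for corner in range(2 ** self.dim):
                offset = np.array([(corner >> i) & 1 for i in range(self.dim)])
                idx = self._hash(lo + offset)
                w = np.prod(np.where(offset == 1, frac, 1.0 - frac), axis=-1)
                acc += w[:, None] * table[idx]
            feats.append(acc)
        # Concatenated per-level features form the input of the small MLP.
        return np.concatenate(feats, axis=-1)
```

Note how collisions are simply tolerated: corners at different locations may share a table entry, and it is the combination of multiple resolutions (plus gradient descent on the tables) that lets the downstream network disambiguate them, as the abstract describes.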