
Comb Tensor Networks vs. Matrix Product States: Enhanced Efficiency in High-Dimensional Spaces

2024-12-08 · Code Available

Danylo Kolesnyk, Yelyzaveta Vodovozova


Abstract

Modern approaches to generative modeling of continuous data with tensor networks incorporate compression layers to capture the most meaningful features of high-dimensional inputs. These methods, however, rely on traditional Matrix Product State (MPS) architectures. Here, we demonstrate that beyond a certain threshold in data dimensionality and bond dimension, a comb-shaped tensor network architecture yields more efficient contractions than a standard MPS. This finding suggests that for continuous, high-dimensional data distributions, moving from an MPS to a comb tensor network representation can substantially reduce computational overhead while maintaining accuracy.
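To make the MPS baseline concrete, the sketch below evaluates a single amplitude of a random MPS by sweeping a boundary vector along the chain, which is the contraction whose cost the comb architecture is claimed to improve on. This is a minimal illustration with random tensors, not the authors' implementation; the comb variant would additionally attach "tooth" chains to selected backbone tensors, a detail omitted here.

```python
import numpy as np

def random_mps(n_sites, phys_dim, bond_dim):
    """Random open-boundary MPS: a list of (left, phys, right) tensors."""
    tensors = [np.random.randn(1, phys_dim, bond_dim)]
    for _ in range(n_sites - 2):
        tensors.append(np.random.randn(bond_dim, phys_dim, bond_dim))
    tensors.append(np.random.randn(bond_dim, phys_dim, 1))
    return tensors

def mps_amplitude(tensors, indices):
    """Contract the MPS against a product basis state.

    Each step is a (1, chi) @ (chi, chi) matrix-vector product, so the
    total cost scales as O(n_sites * bond_dim^2) per sample.
    """
    env = tensors[0][:, indices[0], :]          # shape (1, bond_dim)
    for t, i in zip(tensors[1:], indices[1:]):
        env = env @ t[:, i, :]                  # sweep left to right
    return env[0, 0]

mps = random_mps(n_sites=8, phys_dim=2, bond_dim=4)
amp = mps_amplitude(mps, [0, 1, 0, 1, 1, 0, 0, 1])
print(amp)
```

The left-to-right sweep is what makes an MPS sequential: the comb layout instead lets independent side chains be contracted in parallel before touching the backbone, which is where the claimed efficiency gain comes from.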
