
Reduced Density Matrices Through Machine Learning

2026-03-18

Awwab A. Azam, Lexu Zhao, Jiabin Yu


Abstract

n-particle reduced density matrices (n-RDMs) play a central role in understanding correlated phases of matter, but computing them for strongly correlated states at large system sizes is often computationally expensive. In this work, we use neural network (NN) architectures to accelerate and even predict n-RDMs for large systems. The underlying intuition is that, for gapped states, n-RDMs are often smooth functions over the Brillouin zone (BZ) and are therefore interpolable, allowing NNs trained on small systems to predict large ones. Building on this, we devise two NNs: (i) a self-attention NN that maps random RDMs to physical ones, and (ii) a Sinusoidal Representation Network (SIREN) that directly maps momentum-space coordinates to RDM values. We test the NNs on RDMs in three 2D models: the pair-pair correlation functions of the Richardson model of superconductivity, the translationally invariant Hartree-Fock (HF) 1-RDM in a four-band repulsive model, and the translation-breaking HF 1-RDM in the half-filled Hubbard model. We find that a SIREN trained on a 6×6 momentum mesh and a SIREN trained on 4 tilted meshes (each with 12 momentum points) can predict the 18×18 pair-pair correlation function with relative accuracies of 94.29% and 93.77%, respectively. NNs trained on 6×6 and 8×8 meshes provide high-quality initial guesses for 50×50 translation-invariant HF and 30×30 HF with translation breaking fully allowed, reducing the number of iterations required by up to 91.63% and 92.78%, respectively, compared to random initializations. Our results illustrate the potential of NN-based methods for interpolable n-RDMs, which may open a new avenue for research on strongly correlated phases.
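The SIREN idea in the abstract — a network with sinusoidal activations mapping a momentum-space coordinate to an RDM value — can be sketched in a few lines. This is a minimal, generic SIREN-style forward pass (sine activations with a frequency factor omega0, and the usual small-weight initialization for hidden layers); the layer widths, initialization constants, and the scalar output are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
omega0 = 30.0  # frequency scaling commonly used in SIREN-style networks

def siren_layer(x, w, b, omega0=30.0):
    # SIREN layer: sinusoidal activation applied to an affine map
    return np.sin(omega0 * (x @ w + b))

# Illustrative shapes: map a 2D momentum point (kx, ky) to one RDM entry.
# First layer uses uniform weights; hidden weights are scaled by 1/omega0
# so activations stay well-behaved (standard SIREN-style initialization).
w1 = rng.uniform(-0.5, 0.5, size=(2, 16))
b1 = rng.uniform(-1.0, 1.0, size=16)
w2 = rng.normal(0.0, np.sqrt(6.0 / 16) / omega0, size=(16, 16))
b2 = np.zeros(16)
w_out = rng.normal(0.0, np.sqrt(6.0 / 16) / omega0, size=(16, 1))

def siren(k):
    # forward pass: momentum coordinate -> scalar RDM value
    h = siren_layer(k, w1, b1)
    h = siren_layer(h, w2, b2)
    return (h @ w_out).item()

k = np.array([0.1, 0.2])   # a point in the Brillouin zone
val = siren(k)             # smooth in k, hence interpolable across mesh sizes
```

Because the output is a smooth function of k, a network like this trained on a coarse momentum mesh can be evaluated at the points of a finer mesh, which is the interpolation property the paper relies on.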
