
Harnessing the Universal Geometry of Embeddings

2025-05-18 · Code Available

Rishi Jha, Collin Zhang, Vitaly Shmatikov, John X. Morris


Abstract

We introduce the first method for translating text embeddings from one vector space to another without any paired data, encoders, or predefined sets of matches. Our unsupervised approach translates any embedding to and from a universal latent representation (i.e., a universal semantic structure conjectured by the Platonic Representation Hypothesis). Our translations achieve high cosine similarity across model pairs with different architectures, parameter counts, and training datasets. The ability to translate unknown embeddings into a different space while preserving their geometry has serious implications for the security of vector databases. An adversary with access only to embedding vectors can extract sensitive information about the underlying documents, sufficient for classification and attribute inference.
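To make the idea concrete, below is a minimal sketch (not the authors' implementation) of translating embeddings between two encoders' vector spaces through a shared latent representation, using only unpaired embeddings and a cycle-consistency objective. All names, dimensions, and the loss choice are illustrative assumptions; the full method is described in the paper and likely combines additional objectives (e.g., adversarial alignment).

```python
# Hypothetical sketch: embedding translation via a shared latent space,
# trained without any paired (A, B) examples. Architecture and hyperparameters
# are assumptions for illustration only.
import torch
import torch.nn as nn

def mlp(d_in, d_out, d_hidden=512):
    # Small adapter network mapping between an embedding space and the latent.
    return nn.Sequential(
        nn.Linear(d_in, d_hidden), nn.SiLU(),
        nn.Linear(d_hidden, d_out),
    )

class Translator(nn.Module):
    """Maps embeddings from space A or B into a shared latent and back."""
    def __init__(self, d_a, d_b, d_latent=256):
        super().__init__()
        self.enc_a, self.dec_a = mlp(d_a, d_latent), mlp(d_latent, d_a)
        self.enc_b, self.dec_b = mlp(d_b, d_latent), mlp(d_latent, d_b)

    def a_to_b(self, x_a):
        return self.dec_b(self.enc_a(x_a))

    def b_to_a(self, x_b):
        return self.dec_a(self.enc_b(x_b))

# Unpaired embeddings from two different text encoders (random stand-ins here).
d_a, d_b = 768, 1024
emb_a, emb_b = torch.randn(64, d_a), torch.randn(64, d_b)

model = Translator(d_a, d_b)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step: cycle-consistency keeps round-trip
# translations (A -> B -> A and B -> A -> B) close to the originals,
# so no paired correspondences are ever needed.
cycle_a = model.b_to_a(model.a_to_b(emb_a))
cycle_b = model.a_to_b(model.b_to_a(emb_b))
loss = (nn.functional.mse_loss(cycle_a, emb_a)
        + nn.functional.mse_loss(cycle_b, emb_b))
opt.zero_grad()
loss.backward()
opt.step()
```

In this sketch the shared latent stands in for the universal representation conjectured by the Platonic Representation Hypothesis; after training, `a_to_b` would translate an embedding from one model's space into another's while approximately preserving its geometry.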
