SOTAVerified

Nomic Embed Vision: Expanding the Latent Space

2024-06-06

Zach Nussbaum, Brandon Duderstadt, Andriy Mulyar


Abstract

This technical report describes the training of nomic-embed-vision, a highly performant, open-code, open-weights image embedding model that shares the same latent space as nomic-embed-text. Together, nomic-embed-vision and nomic-embed-text form the first unified latent space to achieve high performance across vision, language, and multimodal tasks.
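A minimal sketch of what a shared latent space enables: a text embedding and an image embedding of the same concept can be compared directly with cosine similarity. The vectors below are mock stand-ins (seeded random data), not actual nomic-embed-text or nomic-embed-vision outputs; the 768 dimension matches the nomic-embed models, but the rest is illustrative only.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
dim = 768  # nomic-embed models produce 768-dimensional embeddings

# Mock "text" embedding, a nearby "image" embedding of the same concept
# (small perturbation), and an unrelated embedding.
text_emb = rng.normal(size=dim)
image_emb = text_emb + 0.1 * rng.normal(size=dim)
unrelated_emb = rng.normal(size=dim)

# In a well-aligned shared space, matching text/image pairs score higher
# than unrelated pairs.
assert cosine_similarity(text_emb, image_emb) > cosine_similarity(text_emb, unrelated_emb)
```

In practice the two vectors would come from the two encoders (text through nomic-embed-text, image through nomic-embed-vision); because the models share one latent space, this comparison works across modalities without any projection step.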
