
Individuation in Neural Models with and without Visual Grounding

2024-09-27

Alexey Tikhonov, Lisa Bylinina, Ivan P. Yamshchikov


Abstract

We show differences between CLIP, a language-and-vision model, and two text-only models, FastText and SBERT, in how they encode individuation information. We study the latent representations that CLIP provides for substrates, granular aggregates, and various numbers of objects. We demonstrate that CLIP embeddings capture quantitative differences in individuation better than models trained on text-only data. Moreover, the individuation hierarchy we deduce from the CLIP embeddings agrees with the hierarchies proposed in linguistics and cognitive science.
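The comparison the abstract describes boils down to embedding phrases that sit at different points on the individuation scale and measuring similarities between their latent vectors. A minimal sketch of that workflow, using random placeholder vectors instead of actual CLIP, FastText, or SBERT encoders (the phrase list is illustrative, not taken from the paper):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical phrases spanning substrates, granular aggregates,
# and countable objects -- the kind of scale the paper probes.
phrases = ["water", "sand", "rice", "two apples", "five apples"]

# Placeholder embeddings; in the actual study these would come from
# a model's text encoder (e.g. CLIP vs. FastText vs. SBERT).
rng = np.random.default_rng(0)
emb = {p: rng.normal(size=16) for p in phrases}

# Pairwise similarity matrix: the raw material from which an
# individuation hierarchy could be deduced and compared across models.
sim = {(a, b): cosine(emb[a], emb[b]) for a in phrases for b in phrases}
```

Swapping the placeholder `emb` for real encoder outputs would let one check, model by model, whether similarity structure tracks the individuation hierarchy.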
