
Interpretable Text Embeddings and Text Similarity Explanation: A Primer

2025-02-20

Juri Opitz, Lucas Möller, Andrianos Michail, Simon Clematide


Abstract

Text embeddings and text embedding models are a backbone of many AI and NLP systems, particularly those involving search. However, interpretability challenges persist, especially in explaining the similarity scores these models produce, which is crucial for applications requiring transparency. In this paper, we give a structured overview of interpretability methods that specialize in explaining such similarity scores, an emerging research area. We examine each method's ideas and techniques, evaluating their potential for improving the interpretability of text embeddings and for explaining predicted similarities.
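The similarity scores the abstract refers to are typically cosine similarities between embedding vectors. A minimal sketch, using hand-written toy vectors in place of real model output, illustrates why such a score is hard to interpret on its own:

```python
import math

def cosine_similarity(u, v):
    # Cosine similarity: dot(u, v) / (||u|| * ||v||)
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy "embeddings" standing in for the output of a real embedding model.
emb_query = [0.8, 0.1, 0.3]
emb_doc_a = [0.7, 0.2, 0.4]
emb_doc_b = [0.0, 0.9, 0.1]

sim_a = cosine_similarity(emb_query, emb_doc_a)
sim_b = cosine_similarity(emb_query, emb_doc_b)

# The scalar scores rank doc_a above doc_b, but by themselves they do not
# reveal *which* input features drove the similarity -- the interpretability
# gap that the methods surveyed in this paper aim to close.
```

The vectors and names here are purely illustrative; real embedding models produce vectors with hundreds or thousands of dimensions, which makes the attribution problem correspondingly harder.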
