
German Text Embedding Clustering Benchmark

2024-01-05

Silvan Wehrli, Bert Arnrich, Christopher Irrgang


Abstract

This work introduces a benchmark assessing the performance of clustering German text embeddings in different domains. This benchmark is driven by the increasing use of clustering neural text embeddings in tasks that require the grouping of texts (such as topic modeling) and the need for German resources in existing benchmarks. We provide an initial analysis for a range of pre-trained mono- and multilingual models evaluated on the outcome of different clustering algorithms. Results include strongly performing mono- and multilingual models. Reducing the dimensions of embeddings can further improve clustering. Additionally, we conduct experiments with continued pre-training for German BERT models to estimate the benefits of this additional training. Our experiments suggest that significant performance improvements are possible for short text. All code and datasets are publicly available.
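
To make the setup concrete, the sketch below shows one way such an embedding-clustering evaluation could look with off-the-shelf tooling: embed a few German sentences with a pre-trained sentence model, optionally reduce the embedding dimensions, cluster, and compare the result against known labels. The model name, example texts, and the use of sentence-transformers, scikit-learn KMeans, and V-measure here are illustrative assumptions, not a description of the paper's exact pipeline.

```python
# Minimal sketch of an embedding-clustering evaluation in the spirit of the
# benchmark: embed German texts, cluster them, and score the clustering
# against known labels. Model name, texts, and labels are illustrative
# placeholders, not necessarily those used in the paper.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.metrics import v_measure_score

# Toy corpus with ground-truth group labels (0 = weather, 1 = sports).
texts = [
    "Morgen wird es sonnig und warm.",           # "Tomorrow will be sunny and warm."
    "Am Wochenende soll es stark regnen.",       # "It is supposed to rain heavily on the weekend."
    "Der Verein gewann das Spiel mit 3:1.",      # "The club won the match 3-1."
    "Die Mannschaft trainiert für das Finale.",  # "The team is training for the final."
]
labels_true = [0, 0, 1, 1]

# Any pre-trained mono- or multilingual sentence model can be plugged in here.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
embeddings = model.encode(texts)

# Optional dimensionality reduction; the abstract notes this can improve clustering.
embeddings = PCA(n_components=2).fit_transform(embeddings)

# Cluster and score the predicted grouping against the ground truth with
# V-measure, a standard clustering evaluation metric.
labels_pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)
print(f"V-measure: {v_measure_score(labels_true, labels_pred):.3f}")
```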
