Scaling Sentence Embeddings with Large Language Models

2023-07-31 · Code Available

Ting Jiang, Shaohan Huang, Zhongzhi Luan, Deqing Wang, Fuzhen Zhuang

Abstract

Large language models (LLMs) have recently garnered significant interest. With in-context learning, LLMs achieve impressive results on various natural language tasks. However, applying LLMs to sentence embeddings remains an area of ongoing research. In this work, we propose an in-context learning-based method to improve sentence embedding performance. Our approach involves adapting the previous prompt-based representation method to autoregressive models, constructing a demonstration set that enables LLMs to perform in-context learning, and scaling up the LLMs to different model sizes. Extensive experiments show that in-context learning enables LLMs to generate high-quality sentence embeddings without any fine-tuning, reaching performance comparable to current contrastive learning methods. When scaling model size, we find that going beyond tens of billions of parameters harms performance on semantic textual similarity (STS) tasks, although the largest model still outperforms its smaller counterparts and achieves a new state-of-the-art result on transfer tasks. We also fine-tune LLMs with the current contrastive learning approach: the 2.7B OPT model, combined with our prompt-based method, surpasses the 4.8B ST5 and achieves new state-of-the-art results on STS tasks. Our code is available at https://github.com/kongds/scaling_sentemb.
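
The prompt-based representation method referenced in the abstract (PromptEOL, as named in the results below) extracts a sentence embedding from an autoregressive LLM by prompting the model to compress the sentence "in one word" and taking the hidden state of the final prompt token. The sketch below illustrates this idea; the checkpoint name, exact prompt wording, and last-layer/last-token pooling are assumptions based on that description, not a verbatim copy of the released code.

```python
# Minimal sketch (not the authors' exact code): PromptEOL-style sentence
# embedding from an autoregressive LM. Assumptions: OPT checkpoint name,
# prompt wording, and last-layer/last-token pooling.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "facebook/opt-2.7b"  # assumed checkpoint; other OPT/LLaMA sizes work the same way

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


def embed(sentence: str) -> torch.Tensor:
    """Return the hidden state of the last prompt token as the sentence embedding."""
    prompt = f'This sentence : "{sentence}" means in one word:"'
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, output_hidden_states=True)
    # Last layer, first (only) batch element, final token position.
    return outputs.hidden_states[-1][0, -1]


if __name__ == "__main__":
    a = embed("A man is playing a guitar.")
    b = embed("Someone is playing an instrument.")
    print(float(torch.nn.functional.cosine_similarity(a, b, dim=0)))
```

Without fine-tuning, the embedding is simply this last hidden state; the "+CSE" variants in the results below additionally fine-tune the model with a contrastive objective on top of the same prompt.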

Benchmark Results

Dataset | Model | Metric | Claimed | Verified | Status
SICK | PromptEOL+CSE+OPT-2.7B | Spearman Correlation | 0.81 | - | Unverified
SICK | PromptEOL+CSE+LLaMA-30B | Spearman Correlation | 0.82 | - | Unverified
SICK | PromptEOL+CSE+OPT-13B | Spearman Correlation | 0.82 | - | Unverified
STS12 | PromptEOL+CSE+LLaMA-30B | Spearman Correlation | 0.80 | - | Unverified
STS12 | PromptEOL+CSE+OPT-2.7B | Spearman Correlation | 0.79 | - | Unverified
STS12 | PromptEOL+CSE+OPT-13B | Spearman Correlation | 0.80 | - | Unverified
STS13 | PromptEOL+CSE+LLaMA-30B | Spearman Correlation | 0.90 | - | Unverified
STS13 | PromptEOL+CSE+OPT-2.7B | Spearman Correlation | 0.90 | - | Unverified
STS13 | PromptEOL+CSE+OPT-13B | Spearman Correlation | 0.90 | - | Unverified
STS14 | PromptEOL+CSE+LLaMA-30B | Spearman Correlation | 0.86 | - | Unverified
STS14 | PromptEOL+CSE+OPT-13B | Spearman Correlation | 0.85 | - | Unverified
STS14 | PromptEOL+CSE+OPT-2.7B | Spearman Correlation | 0.85 | - | Unverified
STS15 | PromptEOL+CSE+OPT-2.7B | Spearman Correlation | 0.90 | - | Unverified
STS15 | PromptEOL+CSE+LLaMA-30B | Spearman Correlation | 0.90 | - | Unverified
STS15 | PromptEOL+CSE+OPT-13B | Spearman Correlation | 0.90 | - | Unverified
STS16 | PromptEOL+CSE+OPT-13B | Spearman Correlation | 0.86 | - | Unverified
STS16 | PromptEOL+CSE+LLaMA-30B | Spearman Correlation | 0.86 | - | Unverified
STS16 | PromptEOL+CSE+OPT-2.7B | Spearman Correlation | 0.86 | - | Unverified
STS Benchmark | PromptEOL+CSE+LLaMA-30B | Spearman Correlation | 0.89 | - | Unverified
STS Benchmark | PromptEOL+CSE+OPT-2.7B | Spearman Correlation | 0.88 | - | Unverified
STS Benchmark | PromptEOL+CSE+OPT-13B | Spearman Correlation | 0.89 | - | Unverified
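
For reference, the Spearman correlation reported above is the rank correlation between a model's predicted sentence-pair similarities (typically cosine similarity of the embeddings) and human similarity ratings for each STS dataset. A self-contained sketch of the metric with made-up numbers, not real benchmark data:

```python
# Illustrative sketch of the STS metric: Spearman rank correlation between
# model similarities and human ratings. All numbers below are invented.
from scipy.stats import spearmanr

predicted_sims = [0.92, 0.35, 0.78, 0.10, 0.66]  # hypothetical cosine similarities
gold_scores = [4.8, 1.2, 3.0, 0.4, 3.5]          # hypothetical 0-5 human ratings

correlation, _ = spearmanr(predicted_sims, gold_scores)
print(f"Spearman correlation: {correlation:.2f}")  # 0.90 for this toy example
```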
