SOTAVerified

Generating Datasets with Pretrained Language Models

2021-04-15 · EMNLP 2021 · Code Available

Timo Schick, Hinrich Schütze


Abstract

To obtain high-quality sentence embeddings from pretrained language models (PLMs), they must either be augmented with additional pretraining objectives or finetuned on a large set of labeled text pairs. While the latter approach typically outperforms the former, it requires great human effort to generate suitable datasets of sufficient size. In this paper, we show how PLMs can be leveraged to obtain high-quality sentence embeddings without the need for labeled data, finetuning or modifications to the pretraining objective: We utilize the generative abilities of large and high-performing PLMs to generate entire datasets of labeled text pairs from scratch, which we then use for finetuning much smaller and more efficient models. Our fully unsupervised approach outperforms strong baselines on several semantic textual similarity datasets.
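The generation idea described in the abstract can be sketched as follows: a generative PLM is given an instruction corresponding to a target similarity label, and its continuation supplies the second sentence of a labeled pair. This is a minimal illustrative sketch, not the paper's implementation; the `generate` stub stands in for a real PLM (e.g. sampled via a text-generation library), and the template wording and label values are assumptions for illustration.

```python
# Illustrative sketch of generating a labeled text-pair dataset with a PLM.
# Instruction templates are keyed by the similarity label they should elicit;
# the exact wording and labels here are assumed, not taken from the paper.
TEMPLATES = {
    1.0: 'Write two sentences that mean the same thing.\n'
         'Sentence 1: "{x1}"\nSentence 2: "',
    0.5: 'Write two sentences that are somewhat similar.\n'
         'Sentence 1: "{x1}"\nSentence 2: "',
    0.0: 'Write two sentences that are completely different.\n'
         'Sentence 1: "{x1}"\nSentence 2: "',
}

def generate(prompt: str) -> str:
    # Placeholder: a real implementation would sample a continuation from a
    # pretrained language model and truncate at the closing quotation mark.
    return "a generated second sentence"

def build_dataset(seed_sentences):
    # For each seed sentence and each label, prompt the model and collect
    # (sentence1, sentence2, label) triples for finetuning a smaller model.
    dataset = []
    for x1 in seed_sentences:
        for label, template in TEMPLATES.items():
            x2 = generate(template.format(x1=x1))
            dataset.append((x1, x2, label))
    return dataset
```

The resulting triples could then be used to finetune a compact sentence encoder, as the abstract describes, without any human-labeled pairs.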

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| SICK | Dino (STS/🦕) | Spearman Correlation | 0.74 | — | Unverified |
| SICK | Dino (STSb/🦕) | Spearman Correlation | 0.68 | — | Unverified |
| STS12 | Dino (STSb/🦕) | Spearman Correlation | 0.70 | — | Unverified |
| STS13 | Dino (STSb/🦕) | Spearman Correlation | 0.81 | — | Unverified |
| STS14 | Dino (STSb/🦕) | Spearman Correlation | 0.71 | — | Unverified |
| STS15 | Dino (STSb/🦕) | Spearman Correlation | 0.80 | — | Unverified |
| STS16 | Dino (STSb/🦕) | Spearman Correlation | 0.77 | — | Unverified |
| STS Benchmark | Dino (STS/🦕) | Spearman Correlation | 0.77 | — | Unverified |
| STS Benchmark | Dino (STSb/🦕) | Spearman Correlation | 0.78 | — | Unverified |
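The metric in the table above, Spearman correlation, is the Pearson correlation computed over the ranks of the two score lists (model similarity scores vs. gold similarity judgments). A minimal pure-Python version, with average ranks for ties, can be written as:

```python
def average_ranks(values):
    # Assign 1-based ranks; tied values share the average of their positions.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the rank variables.
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

In practice an off-the-shelf routine such as `scipy.stats.spearmanr` would be used; this sketch only makes the metric's definition concrete.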

Reproductions