SOTAVerified

Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders

2021-04-16 · EMNLP 2021 · Code Available

Fangyu Liu, Ivan Vulić, Anna Korhonen, Nigel Collier


Abstract

Pretrained Masked Language Models (MLMs) have revolutionised NLP in recent years. However, previous work has indicated that off-the-shelf MLMs are not effective as universal lexical or sentence encoders without further task-specific fine-tuning on NLI, sentence similarity, or paraphrasing tasks using annotated task data. In this work, we demonstrate that it is possible to turn MLMs into effective universal lexical and sentence encoders even without any additional data and without any supervision. We propose an extremely simple, fast and effective contrastive learning technique, termed Mirror-BERT, which converts MLMs (e.g., BERT and RoBERTa) into such encoders in 20-30 seconds without any additional external knowledge. Mirror-BERT relies on fully identical or slightly modified string pairs as positive (i.e., synonymous) fine-tuning examples, and aims to maximise their similarity during identity fine-tuning. We report huge gains over off-the-shelf MLMs with Mirror-BERT in both lexical-level and sentence-level tasks, across different domains and different languages. Notably, in the standard sentence semantic similarity (STS) tasks, our self-supervised Mirror-BERT model even matches the performance of the task-tuned Sentence-BERT models from prior work. Finally, we delve deeper into the inner workings of MLMs, and suggest some evidence on why this simple approach can yield effective universal lexical and sentence encoders.
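The core idea described in the abstract — treating a string and a slightly modified copy of itself as a positive pair and maximising their similarity against in-batch negatives — is an InfoNCE-style contrastive objective. A minimal NumPy sketch of such a loss is shown below; the function name, the temperature value, and the use of plain NumPy are illustrative assumptions, not the paper's actual implementation (which fine-tunes an MLM with backpropagation).

```python
import numpy as np

def mirror_infonce_loss(z1, z2, tau=0.04):
    """InfoNCE-style loss over mirrored pairs (illustrative sketch).

    z1[i] and z2[i] are embeddings of the same input string, where the
    second view may carry small perturbations (e.g. random span masking
    or dropout noise). All other rows in the batch act as negatives.
    """
    # L2-normalise so dot products become cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = (z1 @ z2.T) / tau                    # (N, N) scaled similarities
    # cross-entropy with the diagonal (the mirrored pair) as the target
    logits = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Minimising this loss pulls each string's two views together while pushing apart the other strings in the batch, which is the "identity fine-tuning" effect the abstract describes.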

Benchmark Results

Dataset | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
SICK | Mirror-RoBERTa-base (unsup.) | Spearman Correlation | 0.71 | — | Unverified
SICK | Mirror-BERT-base (unsup.) | Spearman Correlation | 0.70 | — | Unverified
STS12 | Mirror-RoBERTa-base (unsup.) | Spearman Correlation | 0.65 | — | Unverified
STS12 | Mirror-BERT-base (unsup.) | Spearman Correlation | 0.67 | — | Unverified
STS13 | Mirror-BERT-base (unsup.) | Spearman Correlation | 0.80 | — | Unverified
STS13 | Mirror-RoBERTa-base (unsup.) | Spearman Correlation | 0.82 | — | Unverified
STS14 | Mirror-BERT-base (unsup.) | Spearman Correlation | 0.71 | — | Unverified
STS14 | Mirror-RoBERTa-base (unsup.) | Spearman Correlation | 0.73 | — | Unverified
STS15 | Mirror-BERT-base (unsup.) | Spearman Correlation | 0.81 | — | Unverified
STS15 | Mirror-RoBERTa-base (unsup.) | Spearman Correlation | 0.80 | — | Unverified
STS16 | Mirror-BERT-base (unsup.) | Spearman Correlation | 0.74 | — | Unverified
STS16 | Mirror-RoBERTa-base (unsup.) | Spearman Correlation | 0.78 | — | Unverified
STS Benchmark | Mirror-RoBERTa-base (unsup.) | Spearman Correlation | 0.79 | — | Unverified
STS Benchmark | Mirror-BERT-base (unsup.) | Spearman Correlation | 0.76 | — | Unverified

Reproductions