
DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings

2022-04-21 · NAACL 2022 · Code Available

Yung-Sung Chuang, Rumen Dangovski, Hongyin Luo, Yang Zhang, Shiyu Chang, Marin Soljačić, Shang-Wen Li, Wen-tau Yih, Yoon Kim, James Glass


Abstract

We propose DiffCSE, an unsupervised contrastive learning framework for learning sentence embeddings. DiffCSE learns sentence embeddings that are sensitive to the difference between the original sentence and an edited sentence, where the edited sentence is obtained by stochastically masking out the original sentence and then sampling from a masked language model. We show that DiffCSE is an instance of equivariant contrastive learning (Dangovski et al., 2021), which generalizes contrastive learning and learns representations that are insensitive to certain types of augmentations and sensitive to other "harmful" types of augmentations. Our experiments show that DiffCSE achieves state-of-the-art results among unsupervised sentence representation learning methods, outperforming unsupervised SimCSE by 2.3 absolute points on semantic textual similarity tasks.
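The abstract describes producing the edited sentence in two steps: stochastically mask tokens in the original sentence, then let a pretrained masked language model fill in each mask. The sketch below illustrates only the first step (the masking) in plain Python; the mask ratio, the `[MASK]` token string, and the function name are illustrative assumptions, and in the actual method an MLM (e.g. via a fill-mask model) would then replace each `[MASK]`.

```python
import random

MASK = "[MASK]"  # placeholder token; the real pipeline uses the MLM's own mask token

def stochastic_mask(tokens, mask_ratio=0.15, rng=None):
    """Randomly replace a fraction of tokens with [MASK].

    This mirrors only the masking half of DiffCSE's edit procedure;
    in the paper, a pretrained masked language model then samples a
    replacement for each masked position to produce the edited sentence.
    """
    if rng is None:
        rng = random.Random()
    out = list(tokens)
    n_mask = max(1, round(len(tokens) * mask_ratio))
    for i in rng.sample(range(len(tokens)), n_mask):
        out[i] = MASK
    return out

# Demo: mask ~30% of a toy sentence with a fixed seed for reproducibility.
sentence = "the quick brown fox jumps over the lazy dog".split()
edited = stochastic_mask(sentence, mask_ratio=0.3, rng=random.Random(0))
print(edited)
```

The embedding model is then trained so that representations stay invariant to benign augmentations but change when the sentence is edited this way, which is the "difference-based" signal in the title.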


Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---------|-------|--------|---------|----------|--------|
| STS12 | DiffCSE-RoBERTa-base | Spearman Correlation | 0.7 | | Unverified |
| STS12 | DiffCSE-BERT-base | Spearman Correlation | 0.72 | | Unverified |
| STS13 | DiffCSE-BERT-base | Spearman Correlation | 0.84 | | Unverified |
| STS13 | DiffCSE-RoBERTa-base | Spearman Correlation | 0.83 | | Unverified |
| STS14 | DiffCSE-BERT-base | Spearman Correlation | 0.76 | | Unverified |
| STS14 | DiffCSE-RoBERTa-base | Spearman Correlation | 0.75 | | Unverified |
| STS15 | DiffCSE-RoBERTa-base | Spearman Correlation | 0.83 | | Unverified |
| STS15 | DiffCSE-BERT-base | Spearman Correlation | 0.84 | | Unverified |
| STS16 | DiffCSE-RoBERTa-base | Spearman Correlation | 0.82 | | Unverified |
| STS16 | DiffCSE-BERT-base | Spearman Correlation | 0.81 | | Unverified |
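All of the results above are reported as Spearman correlation between the model's sentence-similarity scores and human gold ratings. As a reference for what that metric computes, here is a minimal pure-Python sketch (tie-averaged ranks followed by Pearson correlation of the ranks); the evaluations behind tables like this one are typically run with a toolkit such as SentEval, so this is illustrative only.

```python
def tie_averaged_ranks(xs):
    """Return 1-indexed ranks of xs, averaging the rank over tied values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average position of the tie run, 1-indexed
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = tie_averaged_ranks(x), tie_averaged_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

A score of 0.84, as claimed for DiffCSE-BERT-base on STS13, means the model's ranking of sentence pairs by similarity agrees strongly (though not perfectly) with the human ranking.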
