SOTAVerified

Contrastive Learning in Distilled Models

2024-01-23 · Code Available

Valerie Lim, Kai Wen Ng, Kenneth Lim


Abstract

Natural Language Processing models like BERT provide state-of-the-art word embeddings for downstream NLP tasks. However, these models have yet to perform well on Semantic Textual Similarity (STS), and they may be too large to deploy in lightweight edge applications. We apply a contrastive learning method based on the SimCSE paper to a model architecture adapted from DistilBERT, a knowledge-distillation-based model, to address these two issues. Our final lightweight model, DistilFace, achieves an average Spearman's correlation of 72.1 on STS tasks, a 34.2 percent improvement over BERT base.
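The abstract does not spell out the training objective, but SimCSE's unsupervised setup pairs each sentence with a second dropout-augmented encoding of itself as the positive, treating all other in-batch embeddings as negatives under an InfoNCE loss. The sketch below is an illustrative NumPy implementation of that loss, not code from the paper; the function name, batch shapes, and temperature value are assumptions.

```python
import numpy as np

def cosine_sim_matrix(a, b):
    """Pairwise cosine similarity between rows of a and rows of b."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def simcse_loss(z1, z2, temperature=0.05):
    """SimCSE-style InfoNCE loss.

    z1, z2: (N, d) arrays holding two dropout-augmented embeddings
    of the same N sentences. Row i of z2 is the positive for row i
    of z1; every other row of z2 acts as an in-batch negative.
    """
    sim = cosine_sim_matrix(z1, z2) / temperature   # (N, N) logits
    sim = sim - sim.max(axis=1, keepdims=True)      # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # Cross-entropy with the target class on the diagonal.
    return -np.mean(np.diag(log_prob))
```

As a sanity check, the loss should be small when the two views of each sentence coincide, and larger when the "positives" are unrelated random vectors, since the diagonal then no longer dominates each row's softmax.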
