Representation of Ambiguity in Pre-Trained Sentence Embeddings

2021-11-16 · ACL ARR November 2021

Anonymous

Abstract

Pre-trained language models have been shown to be very effective for various NLP tasks. These models are trained on different datasets and often have different architectures. At the same time, approaches for analyzing what is encoded in the layers of these models, and how, are being actively studied. In this work we focus on ambiguous sentences and examine how they are represented at various layers of BERT and GPT-2 in comparison to unambiguous sentences. Taking ambiguity detection as a probing task, we find that layers of BERT perform better than layers of GPT-2. Using Representational Similarity Analysis, we observe that differences corresponding to varying stimuli emerge in the deeper layers, and that for ambiguous sentences the dissimilarity between BERT and GPT-2 representations decreases in deeper layers. We also find that for ambiguous sentences, representational dissimilarity across layers is greater for BERT than for GPT-2.
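The core comparison in Representational Similarity Analysis can be sketched as follows: build a representational dissimilarity matrix (RDM) of pairwise distances between the sentence embeddings produced by each model layer, then rank-correlate the two RDMs. This is a minimal illustration, not the paper's implementation; the choice of cosine dissimilarity and Spearman correlation here is an assumption, and the random arrays stand in for real BERT/GPT-2 layer activations.

```python
import numpy as np

def rdm(emb):
    # Representational dissimilarity matrix: pairwise cosine
    # dissimilarities between row vectors, upper triangle flattened.
    normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = normed @ normed.T
    iu = np.triu_indices(len(emb), k=1)
    return 1.0 - sims[iu]

def spearman(x, y):
    # Spearman correlation = Pearson correlation of the ranks
    # (double argsort yields ranks; adequate when there are no ties).
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

def rsa_score(emb_a, emb_b):
    # RSA: how similarly do the two representations arrange the
    # same set of stimuli? 1.0 means identical relational structure.
    return spearman(rdm(emb_a), rdm(emb_b))

# Toy stand-ins for one layer's embeddings of 10 sentences.
rng = np.random.default_rng(0)
layer_bert = rng.normal(size=(10, 768))
layer_gpt2 = rng.normal(size=(10, 768))
print(rsa_score(layer_bert, layer_gpt2))
```

Comparing `rsa_score` across layers, or between the ambiguous and unambiguous sentence sets, is how layer-wise dissimilarity trends like those described above can be quantified.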
