Representation of ambiguity in pretrained models and the problem of domain specificity

2021-12-17 · ACL ARR December 2022

Anonymous

Abstract

Recent developments in pretrained language models have led to many advances in NLP. These models excel at learning powerful contextual representations from very large corpora, and fine-tuning them for downstream tasks has been one of the most widely used (and successful) approaches to solving a plethora of NLP problems. But how capable are these models of capturing subtle linguistic traits such as ambiguity in their representations? We present results from a probing task designed to test the models' ability to identify ambiguous sentences under different experimental settings. The results show how different pretrained models fare against one another on the same task. We also explore how domain specificity limits the representational capabilities of the probes.
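The probing setup described above can be illustrated with a minimal sketch. The paper does not specify its probe architecture; the code below assumes a common choice in the probing literature: a linear classifier trained on frozen sentence embeddings to predict a binary ambiguous/unambiguous label. Synthetic vectors stand in for the pretrained-model representations, and all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_embeddings(n, dim=32):
    """Synthetic stand-ins for frozen encoder outputs (hypothetical data):
    'ambiguous' sentences (label 1) are shifted along one embedding
    direction so the probe has a recoverable signal."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, dim))
    X[:, 0] += 2.0 * y  # linearly separable signal for the probe
    return X, y

def train_probe(X, y, lr=0.1, epochs=200):
    """Plain logistic-regression probe trained with gradient descent;
    the encoder stays frozen, only w and b are learned."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        grad = p - y                            # dLoss/dlogit
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def accuracy(w, b, X, y):
    return float((((X @ w + b) > 0).astype(int) == y).mean())

X_train, y_train = make_embeddings(800)
X_test, y_test = make_embeddings(200)
w, b = train_probe(X_train, y_train)
print(f"probe accuracy: {accuracy(w, b, X_test, y_test):.2f}")
```

In an actual replication, `make_embeddings` would be replaced by sentence embeddings extracted from each pretrained model under comparison, and probe accuracy would indicate how much ambiguity information the frozen representations encode.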
