
Exploiting Language Model Prompts Using Similarity Measures: A Case Study on the Word-in-Context Task

2022-05-01 · ACL 2022

Mohsen Tabasi, Kiamehr Rezaee, Mohammad Taher Pilehvar


Abstract

As a recent development in few-shot learning, prompt-based techniques have demonstrated promising potential in a variety of natural language processing tasks. However, despite proving competitive on most tasks in the GLUE and SuperGLUE benchmarks, existing prompt-based techniques fail on the semantic distinction task of the Word-in-Context (WiC) dataset. Specifically, none of the existing few-shot approaches (including the in-context learning of GPT-3) can attain a performance that is meaningfully different from the random baseline. Trying to fill this gap, we propose a new prompting technique, based on similarity metrics, which boosts few-shot performance to the level of fully supervised methods. Our simple adaptation shows that the failure of existing prompt-based techniques in semantic distinction is due to their improper configuration, rather than a lack of relevant knowledge in the representations. We also show that this approach can be effectively extended to other downstream tasks for which a single prompt is sufficient.
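To make the similarity-based framing of WiC concrete, the sketch below shows the generic idea: compare contextual embeddings of the target word from its two sentences and predict "same sense" when their cosine similarity clears a threshold. This is an illustration of the general approach, not the paper's exact prompting method; the toy vectors, the `wic_predict` helper, and the threshold value are all assumptions standing in for real contextual embeddings extracted from a pretrained language model.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def wic_predict(emb_a, emb_b, threshold=0.5):
    """Predict True (same sense) if the target word's contextual
    embeddings from the two sentences are similar enough.
    The threshold is an illustrative assumption; in practice it would
    be tuned on the few available labeled examples."""
    return cosine_similarity(emb_a, emb_b) >= threshold

# Toy vectors standing in for contextual embeddings of the target word;
# a real pipeline would extract these from a pretrained LM's hidden states.
rng = np.random.default_rng(0)
sense = rng.normal(size=64)
perturbed = sense + 0.01 * rng.normal(size=64)  # near-identical context
print(wic_predict(sense, perturbed))
```

The appeal of this formulation is that it sidesteps the binary-verbalizer configuration that trips up standard prompt-based classifiers on WiC: the decision reduces to a single similarity comparison rather than a learned mapping from prompt completions to labels.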
