HellaSwag: Can a Machine Really Finish Your Sentence?
Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, Yejin Choi
Abstract
Recent work by Zellers et al. (2018) introduced a new task of commonsense natural language inference: given an event description such as "A woman sits at a piano," a machine must select the most likely follow-up: "She sets her fingers on the keys." With the introduction of BERT, near human-level performance was reached. Does this mean that machines can perform human-level commonsense inference? In this paper, we show that commonsense inference still proves difficult for even state-of-the-art models, by presenting HellaSwag, a new challenge dataset. Though its questions are trivial for humans (>95% accuracy), state-of-the-art models struggle (<48%). We achieve this via Adversarial Filtering (AF), a data collection paradigm wherein a series of discriminators iteratively select an adversarial set of machine-generated wrong answers. AF proves to be surprisingly robust. The key insight is to scale up the length and complexity of the dataset examples towards a critical 'Goldilocks' zone wherein generated text is ridiculous to humans, yet often misclassified by state-of-the-art models. Our construction of HellaSwag, and its resulting difficulty, sheds light on the inner workings of deep pretrained models. More broadly, it suggests a new path forward for NLP research, in which benchmarks co-evolve with the evolving state-of-the-art in an adversarial way, so as to present ever-harder challenges.
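The Adversarial Filtering loop described in the abstract can be made concrete with a short sketch. The code below is a minimal illustration, not the paper's actual pipeline: `generate_endings` and `score_real_probability` are hypothetical stand-ins for the paper's GPT-based ending generator and the discriminators (e.g. BERT) that are retrained each round, and the loop simply keeps whichever machine-written endings the current discriminator finds hardest to reject.

```python
# Minimal sketch of Adversarial Filtering (AF), under stated assumptions:
# generate_endings / score_real_probability are placeholder stand-ins for
# the paper's generator and retrained discriminators.
import random

def generate_endings(context: str, k: int) -> list[str]:
    # Placeholder generator: the paper uses a finetuned LM to produce
    # machine-written endings; here we just fabricate distinct strings.
    return [f"{context} ... generated ending #{i}" for i in range(k)]

def score_real_probability(context: str, ending: str) -> float:
    # Placeholder discriminator score in [0, 1]: higher means the
    # discriminator believes the ending is human-written.
    return random.random()

def adversarial_filter(context: str, pool_size: int = 50,
                       n_wrong: int = 3, rounds: int = 5) -> list[str]:
    """Iteratively keep the generated endings the discriminator finds
    hardest to reject, replacing easily-detected ones from a fresh pool."""
    wrong = generate_endings(context, n_wrong)
    for _ in range(rounds):
        # In the real pipeline a new discriminator is trained each round
        # on the current dataset; this sketch reuses one fixed scorer.
        pool = generate_endings(context, pool_size)
        candidates = sorted(
            wrong + pool,
            key=lambda e: score_real_probability(context, e),
            reverse=True,
        )
        wrong = candidates[:n_wrong]  # most "real-looking" fakes survive
    return wrong
```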
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| HellaSwag | BERT-Large 340M | Accuracy (%) | 47.3 | — | Unverified |
| HellaSwag | GPT-1 117M | Accuracy (%) | 41.7 | — | Unverified |
| HellaSwag | BERT-Base 110M | Accuracy (%) | 40.5 | — | Unverified |
| HellaSwag | LSTM + BERT-Base | Accuracy (%) | 36.2 | — | Unverified |
| HellaSwag | ESIM + ELMo | Accuracy (%) | 33.3 | — | Unverified |
| HellaSwag | LSTM + GloVe | Accuracy (%) | 31.7 | — | Unverified |
| HellaSwag | fastText | Accuracy (%) | 31.6 | — | Unverified |
| HellaSwag | LSTM + ELMo | Accuracy (%) | 31.4 | — | Unverified |
| HellaSwag | Random chance baseline | Accuracy (%) | 25.0 | — | Unverified |
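As a starting point for verifying these numbers, the sketch below scores a trivial random-guess predictor on the validation split, which reproduces the ~25% random-chance row above. It assumes the Hugging Face `datasets` release of HellaSwag; the field names (`ctx`, `endings`, `label`) come from that release, not from the paper. Replacing `predict` with a real model is the actual reproduction task.

```python
# Minimal sketch: accuracy on HellaSwag validation, assuming the
# Hugging Face `datasets` release of the corpus.
import random
from datasets import load_dataset

ds = load_dataset("hellaswag", split="validation")

def predict(context: str, endings: list[str]) -> int:
    # Placeholder model: pick uniformly among the four candidate
    # endings; a real evaluation would score each ending with a model.
    return random.randrange(len(endings))

correct = sum(
    predict(ex["ctx"], ex["endings"]) == int(ex["label"]) for ex in ds
)
print(f"accuracy = {correct / len(ds):.1%}")  # ~25% for random guessing
```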