ProSPer: Probing Human and Neural Network Language Model Understanding of Spatial Perspective
2021-11-01 · EMNLP (BlackboxNLP) 2021
Tessa Masis, Carolyn Anderson
- Code: github.com/canders1/prosper (official, in paper)
Abstract
Understanding perspectival language is important for applications such as dialogue systems and human-robot interaction. We propose a probe task that explores how well language models understand spatial perspective. We present ProSPer, a dataset for evaluating perspective inference in English, and use it to explore how humans and Transformer-based language models infer perspective. Although the best bidirectional model performs similarly to humans, humans and models display different strengths: humans outperform neural networks in conversational contexts, while RoBERTa excels at written genres.