Can Transformers Reason in Fragments of Natural Language?

2022-11-10

Viktor Schlegel, Kamen V. Pavlov, Ian Pratt-Hartmann

Abstract

State-of-the-art deep-learning-based approaches to Natural Language Processing (NLP) are credited with various capabilities that involve reasoning with natural language texts. In this paper we carry out a large-scale empirical study investigating the detection of formally valid inferences in controlled fragments of natural language for which the satisfiability problem becomes increasingly complex. We find that, while transformer-based language models perform surprisingly well in these scenarios, a deeper analysis reveals that they appear to overfit to superficial patterns in the data rather than acquiring the logical principles governing the reasoning in these fragments.
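The core task the abstract describes, deciding whether a conclusion formally follows from premises stated in a controlled fragment of natural language, can be illustrated with a toy sketch. The fragment below (premises of the form "All X are Y", checked by transitive closure) is an assumption for illustration only; it is not taken from the paper, which studies a family of fragments of varying satisfiability complexity.

```python
# Toy illustration (not the paper's setup): a minimal syllogistic-style
# fragment where every premise has the form "All X are Y", and validity
# of "All A are B" reduces to reachability in the implication graph.

def entails(premises, hypothesis):
    """premises: list of pairs (X, Y) meaning 'All X are Y'.
    hypothesis: a pair (A, B) asking whether 'All A are B' follows."""
    # Build the direct-implication graph.
    succ = {}
    for x, y in premises:
        succ.setdefault(x, set()).add(y)
    # Search the transitive closure starting from A.
    a, b = hypothesis
    seen, frontier = {a}, [a]
    while frontier:
        cur = frontier.pop()
        for nxt in succ.get(cur, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return b in seen

premises = [("artist", "beekeeper"), ("beekeeper", "carpenter")]
print(entails(premises, ("artist", "carpenter")))   # True: a valid inference
print(entails(premises, ("carpenter", "artist")))   # False: not entailed
```

A model trained on examples like these could succeed either by learning the underlying closure principle or by latching onto surface cues in how the premises are phrased; distinguishing the two is what the paper's deeper analysis probes.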
