Linguistic Frameworks Go Toe-to-Toe at Neuro-Symbolic Language Modeling

2021-12-15 · NAACL 2022 · Code Available

Jakob Prange, Nathan Schneider, Lingpeng Kong

Abstract

We examine the extent to which, in principle, linguistic graph representations can complement and improve neural language modeling. With an ensemble setup consisting of a pretrained Transformer and ground-truth graphs from one of 7 different formalisms, we find that, overall, semantic constituency structures are most useful to language modeling performance -- outpacing syntactic constituency structures as well as syntactic and semantic dependency structures. Further, effects vary greatly depending on part-of-speech class. In sum, our findings point to promising tendencies in neuro-symbolic language modeling and invite future research quantifying the design choices made by different formalisms.
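
The ensemble the abstract describes pairs a pretrained Transformer language model with ground-truth linguistic graphs. As a minimal sketch of that neuro-symbolic setup (not the paper's actual architecture), the snippet below fuses a frozen LM's hidden state with a feature vector encoding the gold graph over the prefix, then predicts the next token from the fused representation. All names and dimensions here (GraphEnsembleLM, graph_proj, the layer sizes) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GraphEnsembleLM(nn.Module):
    """Hypothetical sketch: combine a pretrained LM's hidden state with an
    embedding of the gold linguistic graph before next-token prediction."""

    def __init__(self, lm_hidden=768, graph_feat=128, vocab=50257):
        super().__init__()
        # Stand-in projection for a graph encoder over the prefix's
        # constituency/dependency structure (details vary by formalism).
        self.graph_proj = nn.Linear(graph_feat, lm_hidden)
        # Joint prediction head over the concatenated representations.
        self.head = nn.Linear(2 * lm_hidden, vocab)

    def forward(self, lm_state, graph_feats):
        # lm_state:    (batch, lm_hidden) from a frozen pretrained Transformer
        # graph_feats: (batch, graph_feat) encoding the ground-truth graph
        fused = torch.cat([lm_state, self.graph_proj(graph_feats)], dim=-1)
        return self.head(fused)  # next-token logits

# Smoke test with random stand-ins for both input streams.
model = GraphEnsembleLM()
logits = model(torch.randn(4, 768), torch.randn(4, 128))
print(logits.shape)  # torch.Size([4, 50257])
```

Comparing the per-token perplexity of such an ensemble against the LM alone, formalism by formalism, is the kind of evaluation the abstract's comparison of the 7 frameworks implies.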
