
Systematic Generalization with Edge Transformers

2021-12-01 · NeurIPS 2021 · Code Available

Leon Bergen, Timothy J. O'Donnell, Dzmitry Bahdanau

Abstract

Recent research suggests that systematic generalization in natural language understanding remains a challenge for state-of-the-art neural models such as Transformers and Graph Neural Networks. To tackle this challenge, we propose the Edge Transformer, a new model that combines inspiration from Transformers and rule-based symbolic AI. The first key idea in the Edge Transformer is to associate vector states with every edge, that is, with every pair of input nodes, rather than with just every node as in the Transformer model. The second major innovation is a triangular attention mechanism that updates edge representations in a way that is inspired by unification from logic programming. We evaluate the Edge Transformer on compositional generalization benchmarks in relational reasoning, semantic parsing, and dependency parsing. In all three settings, the Edge Transformer outperforms Relation-aware, Universal, and classical Transformer baselines.
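To make the two ideas in the abstract concrete, here is a minimal, single-head PyTorch sketch of an edge-state update with triangular attention. It assumes edge states are stored as a (batch, n, n, d) tensor, that the edge (i, l) being updated attends over pivot nodes j, and that values combine the two legs (i, j) and (j, l) of each triangle; the module name and the exact placement of the projections are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class TriangularAttention(nn.Module):
    """Sketch of a single-head triangular attention update over edge states.

    Edge states x have shape (batch, n, n, d): one vector per ordered
    pair of input nodes, rather than one vector per node as in a
    standard Transformer. Hypothetical parameterization for illustration.
    """

    def __init__(self, d: int):
        super().__init__()
        self.q = nn.Linear(d, d, bias=False)   # query from the edge being updated
        self.k = nn.Linear(d, d, bias=False)   # key from one leg of the triangle
        self.v1 = nn.Linear(d, d, bias=False)  # value projections for the two legs
        self.v2 = nn.Linear(d, d, bias=False)
        self.scale = d ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Edge (i, l) attends over pivot nodes j; combining the two edges
        # (i, j) and (j, l) of each triangle is loosely analogous to
        # unifying two facts in logic programming.
        q = self.q(x)  # (b, n, n, d), indexed q[b, i, l]
        k = self.k(x)  # (b, n, n, d), indexed k[b, j, l]
        scores = torch.einsum('bild,bjld->bilj', q, k) * self.scale
        att = scores.softmax(dim=-1)  # normalize over pivots j
        # Triangle value: elementwise product of legs (i, j) and (j, l).
        # Materializing all triangles costs O(n^3 * d) memory.
        v = torch.einsum('bijd,bjld->biljd', self.v1(x), self.v2(x))
        return torch.einsum('bilj,biljd->bild', att, v)

# Usage: update edge states for a 5-node input with 16-dim edge vectors.
x = torch.randn(2, 5, 5, 16)
out = TriangularAttention(16)(x)
print(out.shape)  # torch.Size([2, 5, 5, 16])
```

Stacking such updates lets information propagate along multi-edge paths, which is the intuition behind the relational-reasoning gains the abstract reports.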
