
Enhancing the Transformer with Explicit Relational Encoding for Math Problem Solving

2019-10-15 · Code Available

Imanol Schlag, Paul Smolensky, Roland Fernandez, Nebojsa Jojic, Jürgen Schmidhuber, Jianfeng Gao


Abstract

We incorporate Tensor-Product Representations within the Transformer in order to better support the explicit representation of relation structure. Our Tensor-Product Transformer (TP-Transformer) sets a new state of the art on the recently introduced Mathematics Dataset containing 56 categories of free-form math word problems. The essential component of the model is a novel attention mechanism, called TP-Attention, which explicitly encodes the relations between each Transformer cell and the other cells from which values have been retrieved by attention. TP-Attention goes beyond linear combination of retrieved values, strengthening representation-building and resolving ambiguities introduced by multiple layers of standard attention. The TP-Transformer's attention maps give better insight into how it solves the Mathematics Dataset's challenging problems. Pretrained models and code will be made available after publication.
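The abstract describes TP-Attention only at a high level. The sketch below is one plausible reading, not the paper's released implementation: it assumes the tensor-product binding is contracted to an element-wise (Hadamard) product between each head's retrieved value (the "filler") and a learned relation vector (the "role") projected from the same input. The class name TPAttentionHead, the projection r, and the dimensions are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TPAttentionHead(nn.Module):
        """Single-head TP-Attention sketch: standard scaled dot-product
        attention whose retrieved value is bound to a learned relation
        (role) vector via an element-wise product."""

        def __init__(self, d_model: int, d_head: int):
            super().__init__()
            self.q = nn.Linear(d_model, d_head)
            self.k = nn.Linear(d_model, d_head)
            self.v = nn.Linear(d_model, d_head)
            self.r = nn.Linear(d_model, d_head)  # relation/role projection (assumed)
            self.scale = d_head ** -0.5

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, seq, d_model)
            q, k, v, r = self.q(x), self.k(x), self.v(x), self.r(x)
            attn = F.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
            retrieved = attn @ v      # standard attention: linear combination of values
            return retrieved * r      # TP binding: filler ⊙ role, beyond linear combination

    # Example: bind retrieved values to per-position relation vectors.
    head = TPAttentionHead(d_model=64, d_head=16)
    out = head(torch.randn(2, 10, 64))  # (2, 10, 16)

The element-wise product is where this differs from a standard head: each position's output is not just a weighted sum of values but that sum multiplied by a role vector specific to the querying cell, which is how the mechanism can disambiguate identical values retrieved in different relational contexts.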

Benchmark Results

Dataset               Model            Metric    Claimed   Verified   Status
Mathematics Dataset   TP-Transformer   Accuracy  0.82      —          Unverified
