Enhancing the Transformer with Explicit Relational Encoding for Math Problem Solving
Imanol Schlag, Paul Smolensky, Roland Fernandez, Nebojsa Jojic, Jürgen Schmidhuber, Jianfeng Gao
Code
- github.com/ischlag/TP-Transformer (official, referenced in paper) — PyTorch, ★ 0
- github.com/jlrussin/interpret-math-transformer — PyTorch, ★ 8
- github.com/andrear632/ProjectDeepLearning — framework not specified, ★ 0
Abstract
We incorporate Tensor-Product Representations within the Transformer in order to better support the explicit representation of relation structure. Our Tensor-Product Transformer (TP-Transformer) sets a new state of the art on the recently-introduced Mathematics Dataset containing 56 categories of free-form math word-problems. The essential component of the model is a novel attention mechanism, called TP-Attention, which explicitly encodes the relations between each Transformer cell and the other cells from which values have been retrieved by attention. TP-Attention goes beyond linear combination of retrieved values, strengthening representation-building and resolving ambiguities introduced by multiple layers of standard attention. The TP-Transformer's attention maps give better insights into how it is capable of solving the Mathematics Dataset's challenging problems. Pretrained models and code will be made available after publication.
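The sketch below illustrates the TP-Attention idea as described in the abstract: each head binds its attention-weighted value with a learned relation ("role") vector before the output projection, going beyond a plain linear combination of retrieved values. This is a minimal, hedged reconstruction, not the authors' official implementation (see the linked repositories for that); the use of a Hadamard product for binding and the `r_proj` relation projection are assumptions made for illustration.

```python
# Minimal sketch of a TP-style attention head (assumed binding via Hadamard product).
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class TPAttentionSketch(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        # Standard query/key/value projections plus an extra per-head
        # "relation" projection (the TP-specific part, assumed here).
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.r_proj = nn.Linear(d_model, d_model)  # relation / role vectors
        self.out_proj = nn.Linear(d_model, d_model)

    def _split(self, x: torch.Tensor) -> torch.Tensor:
        # (batch, seq, d_model) -> (batch, heads, seq, d_head)
        b, t, _ = x.shape
        return x.view(b, t, self.n_heads, self.d_head).transpose(1, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q, k, v, r = (self._split(p(x)) for p in
                      (self.q_proj, self.k_proj, self.v_proj, self.r_proj))
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_head)
        weights = F.softmax(scores, dim=-1)
        values = weights @ v      # standard attention output (retrieved values)
        bound = values * r        # bind retrieved values with relation vectors
        b, _, t, _ = bound.shape
        bound = bound.transpose(1, 2).reshape(b, t, -1)
        return self.out_proj(bound)

# Example usage on a toy batch.
layer = TPAttentionSketch(d_model=64, n_heads=8)
out = layer(torch.randn(2, 10, 64))
print(out.shape)  # torch.Size([2, 10, 64])
```

The design choice to bind per head means each retrieved value carries an explicit marker of the relation under which it was retrieved, which is what lets later layers disambiguate values gathered by different heads.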
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Mathematics Dataset | TP-Transformer | Accuracy | 0.82 | — | Unverified |