Logic and the 2-Simplicial Transformer
2020-05-01 · ICLR 2020
James Clift, Dmitry Doryn, Daniel Murfet, James Wallbridge
Code
- github.com/dmurfet/2simplicialtransformer (official TensorFlow implementation, ★ 21)
Abstract
We introduce the 2-simplicial Transformer, an extension of the Transformer which includes a form of higher-dimensional attention generalising the dot-product attention, and uses this attention to update entity representations with tensor products of value vectors. We show that this architecture is a useful inductive bias for logical reasoning in the context of deep reinforcement learning.
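To make the mechanism concrete, here is a minimal NumPy sketch of one 2-simplicial attention head. It assumes the simplest possible trilinear form, the elementwise triple product ⟨q, k¹, k²⟩ = Σ_d q_d k¹_d k²_d, as the stand-in for the generalised dot product; the paper's actual scalar triple product, normalisation, and combination with ordinary (1-simplicial) heads differ, and the function and weight names below (two_simplicial_attention, Wq, Wk1, etc.) are hypothetical, so consult the official repository for the real implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def two_simplicial_attention(E, Wq, Wk1, Wk2, Wv1, Wv2, Wo):
    """One 2-simplicial attention head over entity representations E (n, d).

    Entity i attends to *pairs* (j, k): the logit is a triple product
    <q_i, k1_j, k2_k>, and the update is the attention-weighted sum of
    tensor (outer) products v1_j (x) v2_k, projected back to d dims.
    """
    n, d = E.shape
    q, k1, k2 = E @ Wq, E @ Wk1, E @ Wk2        # one query, two key maps
    v1, v2 = E @ Wv1, E @ Wv2                   # two value maps

    # Triple-product logits: logits[i, j, k] = sum_d q[i,d] k1[j,d] k2[k,d].
    logits = np.einsum('id,jd,kd->ijk', q, k1, k2) / np.sqrt(d)
    # Softmax jointly over all (j, k) pairs for each query i.
    A = softmax(logits.reshape(n, n * n), axis=-1).reshape(n, n, n)

    # Tensor product of value vectors for every pair (j, k): shape (n, n, d, d).
    pair_vals = np.einsum('jd,ke->jkde', v1, v2)
    # Attention-weighted sum of pair values, flattened and projected to d.
    update = np.einsum('ijk,jkde->ide', A, pair_vals).reshape(n, d * d)
    return update @ Wo

# Tiny usage example with random weights.
rng = np.random.default_rng(0)
n, d = 5, 8
E = rng.normal(size=(n, d))
Wq, Wk1, Wk2, Wv1, Wv2 = (rng.normal(size=(d, d)) * d**-0.5 for _ in range(5))
Wo = rng.normal(size=(d * d, d)) * (d * d)**-0.5
print(two_simplicial_attention(E, Wq, Wk1, Wk2, Wv1, Wv2, Wo).shape)  # (5, 8)
```

Note the cost of this sketch: the logit tensor is O(n³) in the number of entities, versus O(n²) for standard dot-product attention, which is why the paper applies this form of attention to a small set of entity representations rather than to every token.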