
Mathematical Reasoning via Self-supervised Skip-tree Training

2020-06-08 · ICLR 2021

Markus N. Rabe, Dennis Lee, Kshitij Bansal, Christian Szegedy


Abstract

We examine whether self-supervised language modeling applied to mathematical formulas enables logical reasoning. We suggest several logical reasoning tasks that can be used to evaluate language models trained on formal mathematical statements, such as type inference, suggesting missing assumptions and completing equalities. To train language models for formal mathematics, we propose a novel skip-tree task. We find that models trained on the skip-tree task show surprisingly strong mathematical reasoning abilities, and outperform models trained on standard skip-sequence tasks. We also analyze the models' ability to formulate new conjectures by measuring how often the predictions are provable and useful in other proofs.
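The skip-tree task masks an entire subtree of a formula's syntax tree, rather than a contiguous token span as in skip-sequence objectives. A minimal sketch of how such training pairs could be generated, assuming formulas are given as s-expressions; the `<PREDICT>` token name and the helper functions are hypothetical, not the paper's actual pipeline:

```python
import random

def tokenize(formula):
    """Split an s-expression into parenthesis and symbol tokens."""
    return formula.replace("(", " ( ").replace(")", " ) ").split()

def subtree_spans(tokens):
    """Return (start, end) token spans of all parenthesized subtrees."""
    spans, stack = [], []
    for i, tok in enumerate(tokens):
        if tok == "(":
            stack.append(i)
        elif tok == ")":
            spans.append((stack.pop(), i + 1))
    return spans

def skip_tree_example(formula, rng=random):
    """Mask one random subtree: the source sequence contains a
    placeholder token in its place, and the target is the subtree."""
    tokens = tokenize(formula)
    start, end = rng.choice(subtree_spans(tokens))
    source = tokens[:start] + ["<PREDICT>"] + tokens[end:]
    target = tokens[start:end]
    return " ".join(source), " ".join(target)

src, tgt = skip_tree_example("(= (+ a b) (+ b a))", rng=random.Random(0))
```

Because the masked span is always a syntactically complete subterm, the model must predict a well-formed expression (e.g. a full operand of the equality) instead of an arbitrary token window.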
