Proof Artifact Co-training for Theorem Proving with Language Models

2021-02-11 · ICLR 2022 · Code Available

Jesse Michael Han, Jason Rute, Yuhuai Wu, Edward W. Ayers, Stanislas Polu

Abstract

Labeled data for imitation learning of theorem proving in large libraries of formalized mathematics is scarce as such libraries require years of concentrated effort by human specialists to be built. This is particularly challenging when applying large Transformer language models to tactic prediction, because the scaling of performance with respect to model size is quickly disrupted in the data-scarce, easily-overfitted regime. We propose PACT (Proof Artifact Co-Training), a general methodology for extracting abundant self-supervised data from kernel-level proof terms for co-training alongside the usual tactic prediction objective. We apply this methodology to Lean, an interactive proof assistant which hosts some of the most sophisticated formalized mathematics to date. We instrument Lean with a neural theorem prover driven by a Transformer language model and show that PACT improves theorem proving success rate on a held-out suite of test theorems from 32% to 48%.
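The core idea in the abstract, co-training one language model on scarce human-written tactic steps plus abundant examples mined from kernel-level proof terms, can be illustrated with a minimal sketch. Everything below is an illustrative assumption rather than the paper's actual data pipeline: the prompt formats, task tags, helper names, and mixing ratio are all hypothetical.

```python
import random

def format_example(prompt: str, completion: str) -> str:
    """Render one example as a single training string for a causal LM."""
    return prompt + completion

def cotraining_stream(tactic_examples, pact_examples, pact_fraction=0.7, seed=0):
    """Yield an endless mixed stream of training strings: with probability
    `pact_fraction` draw a self-supervised proof-artifact example, otherwise
    draw a human-written tactic-prediction example."""
    rng = random.Random(seed)
    while True:
        pool = pact_examples if rng.random() < pact_fraction else tactic_examples
        yield format_example(*rng.choice(pool))

# Illustrative data only: one human-written tactic step, and two
# self-supervised tasks (lemma naming, proof-term elaboration) of the kind
# that could be mined automatically from kernel-level proof terms.
tactic_examples = [("GOAL a + 0 = a PROOFSTEP ", "simp")]
pact_examples = [
    ("RESULT a + 0 = a NAME ", "add_zero"),
    ("TYPE a + 0 = a TERM ", "nat.add_zero a"),
]

stream = cotraining_stream(tactic_examples, pact_examples)
batch = [next(stream) for _ in range(4)]
print(batch)
```

The point of the mixing knob is that the self-supervised proof-artifact tasks are orders of magnitude more plentiful than tactic steps, so they can regularize the model in the data-scarce regime without drowning out the tactic-prediction objective.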

Tasks

Automated Theorem Proving

Benchmark Results

Dataset      | Model                      | Metric     | Claimed | Verified | Status
miniF2F-test | PACT (reproduced by Thor)  | cumulative | 24.6%   | n/a      | Unverified

Reproductions

None yet. Be the first to reproduce this paper.