Premise Selection for Theorem Proving by Deep Graph Embedding

2017-09-28 · NeurIPS 2017 · Code Available

Mingzhe Wang, Yihe Tang, Jian Wang, Jia Deng

Abstract

We propose a deep learning-based approach to the problem of premise selection: selecting mathematical statements relevant for proving a given conjecture. We represent a higher-order logic formula as a graph that is invariant to variable renaming but still fully preserves syntactic and semantic information. We then embed the graph into a vector via a novel embedding method that preserves the information of edge ordering. Our approach achieves state-of-the-art results on the HolStep dataset, improving the classification accuracy from 83% to 90.3%.
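The abstract's key idea is a graph representation of a higher-order logic formula that erases variable names (so the graph is invariant to renaming) while keeping edge ordering, which preserves argument positions. Below is a minimal illustrative sketch of that idea on toy terms; it is not the authors' code, and the term encoding (`'forall'`/`'app'`/`'var'`/`'const'` tuples) is an assumption made here for the example.

```python
# Hypothetical sketch of a renaming-invariant formula graph.
# Toy term encoding (an assumption, not from the paper):
#   ('forall', var_name, body)  |  ('app', fn_name, *args)
#   ('var', name)               |  ('const', name)

def term_to_graph(term):
    """Return (nodes, edges) for a toy term.

    Variable names are erased to a generic 'VAR' label, and every
    occurrence of a bound variable reuses its binder's node, so two
    alpha-equivalent formulas map to identical graphs. Each edge
    (parent, child, pos) records the child's position, preserving
    argument ordering as the abstract requires.
    """
    nodes, edges = [], []

    def add(label):
        nodes.append(label)
        return len(nodes) - 1

    def walk(t, env):
        kind = t[0]
        if kind == 'var':
            return env[t[1]]            # share the binder's node
        if kind == 'const':
            return add(t[1])
        if kind == 'forall':
            q = add('forall')
            v = add('VAR')              # name erased for invariance
            body = walk(t[2], {**env, t[1]: v})
            edges.append((q, v, 0))
            edges.append((q, body, 1))
            return q
        if kind == 'app':
            f = add(t[1])
            for i, arg in enumerate(t[2:]):
                edges.append((f, walk(arg, env), i))
            return f
        raise ValueError(f"unknown term kind: {kind}")

    walk(term, {})
    return nodes, edges
```

For example, `forall x. P(x)` and `forall y. P(y)` yield the exact same `(nodes, edges)` pair, while changing the predicate changes the graph; an embedding network run over such graphs therefore cannot distinguish renamed variants.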

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| HolStep (Conditional) | FormulaNet | Classification Accuracy | 0.90 | — | Unverified |
| HolStep (Conditional) | FormulaNet-basic | Classification Accuracy | 0.89 | — | Unverified |
| HolStep (Unconditional) | FormulaNet | Classification Accuracy | 0.90 | — | Unverified |
| HolStep (Unconditional) | FormulaNet-basic | Classification Accuracy | 0.89 | — | Unverified |
