Premise Selection for Theorem Proving by Deep Graph Embedding
2017-09-28 · NeurIPS 2017 · Code Available
Mingzhe Wang, Yihe Tang, Jian Wang, Jia Deng
Code
- github.com/princeton-vl/FormulaNet (official, PyTorch)
Abstract
We propose a deep learning-based approach to the problem of premise selection: selecting mathematical statements relevant for proving a given conjecture. We represent a higher-order logic formula as a graph that is invariant to variable renaming but still fully preserves syntactic and semantic information. We then embed the graph into a vector via a novel embedding method that preserves the information of edge ordering. Our approach achieves state-of-the-art results on the HolStep dataset, improving the classification accuracy from 83% to 90.3%.
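To make the abstract's key idea concrete, here is a minimal sketch (not the authors' code) of a variable-renaming-invariant graph construction: every occurrence of the same variable is merged into one shared node labeled with a generic token, so the variable's name never appears in the graph, while edges record the argument position so ordering information is preserved. The term encoding, the lowercase-means-variable convention, and all identifiers are assumptions made for illustration.

```python
# Sketch of a renaming-invariant formula graph (illustrative, not FormulaNet itself).
# A term is a string leaf or a tuple (op, arg1, ..., argN).

def formula_to_graph(term, var_nodes=None, nodes=None, edges=None):
    """Return (root_index, nodes, edges) for a term tree.

    nodes: list of node labels; variable names are hidden behind "VAR".
    edges: (parent, child, position) triples; position preserves edge ordering.
    """
    if var_nodes is None:
        var_nodes, nodes, edges = {}, [], []
    if isinstance(term, str):
        if term.islower():                  # assumed convention: lowercase = variable
            if term not in var_nodes:       # one shared node per distinct variable
                var_nodes[term] = len(nodes)
                nodes.append("VAR")         # generic label hides the variable's name
            return var_nodes[term], nodes, edges
        nodes.append(term)                  # constants/functions keep their names
        return len(nodes) - 1, nodes, edges
    op, *args = term
    nodes.append(op)
    parent = len(nodes) - 1
    for position, arg in enumerate(args):   # position records edge ordering
        child, _, _ = formula_to_graph(arg, var_nodes, nodes, edges)
        edges.append((parent, child, position))
    return parent, nodes, edges

# Alpha-equivalent formulas (differing only in bound-variable names)
# produce identical graphs:
_, n1, e1 = formula_to_graph(("forall", "x", ("P", "x", "C")))
_, n2, e2 = formula_to_graph(("forall", "y", ("P", "y", "C")))
assert (n1, e1) == (n2, e2)
```

An embedding model can then message-pass over these (node, ordered-edge) structures; because the graph carries no variable names, any renaming of the formula maps to the same input.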
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| HolStep (Conditional) | FormulaNet | Classification Accuracy | 0.903 | — | Unverified |
| HolStep (Conditional) | FormulaNet-basic | Classification Accuracy | 0.89 | — | Unverified |
| HolStep (Unconditional) | FormulaNet | Classification Accuracy | 0.9 | — | Unverified |
| HolStep (Unconditional) | FormulaNet-basic | Classification Accuracy | 0.89 | — | Unverified |