Convex and Bilevel Optimization for Neuro-Symbolic Inference and Learning
Charles Dickens, Changyu Gao, Connor Pryor, Stephen Wright, Lise Getoor
Code
- github.com/linqs/dickens-icml24 (official, in paper)
- github.com/convexbilevelnesylearning/experimentscripts (official, in paper)
- github.com/convexbilevelnesylearning/psl (official, in paper)
Abstract
We leverage convex and bilevel optimization techniques to develop a general gradient-based parameter learning framework for neural-symbolic (NeSy) systems. We demonstrate our framework with NeuPSL, a state-of-the-art NeSy architecture. To achieve this, we propose a smooth primal and dual formulation of NeuPSL inference and show that the learning gradients are functions of the optimal dual variables. Additionally, we develop a dual block coordinate descent algorithm for the new formulation that naturally exploits warm starts. This yields learning runtime improvements of over 100x compared with the current best NeuPSL inference method. Finally, we provide extensive empirical evaluations across 8 datasets covering a range of tasks and demonstrate that our learning framework achieves an improvement in prediction performance of up to 16 percentage points over alternative learning methods.
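To make the structure described above concrete, here is a minimal, hypothetical sketch (not the authors' implementation or the NeuPSL API) of the pattern the abstract describes: the inner inference problem is a smooth convex program solved in its dual by block coordinate descent with warm starts across outer iterations, and the outer learning gradient is read off the optimal dual variables via the envelope theorem. The toy inner problem, the function `dual_bcd`, and all variable names are illustrative assumptions.

```python
import numpy as np

# Toy inner problem:  V(w) = min_y 0.5*||y - c||^2  s.t.  A y <= b(w),
# with b(w) = b0 + w * d.  Its dual is
#   max_{lam >= 0}  -0.5*lam^T M lam + lam^T (A c - b(w)),   M = A A^T,
# and the envelope theorem gives the outer gradient  dV/dw = -lam*^T d.

def dual_bcd(M, q, lam, n_sweeps=200):
    """Cyclic coordinate ascent on the nonnegative dual; `lam` is the warm start."""
    for _ in range(n_sweeps):
        for i in range(len(q)):
            # Residual excluding coordinate i, then an exact 1-D maximization
            # projected onto lam_i >= 0.
            r = q[i] - M[i] @ lam + M[i, i] * lam[i]
            lam[i] = max(0.0, r / M[i, i])
    return lam

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))
c = rng.standard_normal(5)
b0, d = np.ones(3), rng.standard_normal(3)
M = A @ A.T

w, step = 0.5, 0.1
lam = np.zeros(3)                     # dual variables, reused as the warm start
for it in range(20):
    q = A @ c - (b0 + w * d)
    lam = dual_bcd(M, q, lam)         # warm-started inner dual solve
    grad_w = -lam @ d                 # learning gradient from optimal duals
    w -= step * grad_w                # outer gradient step on V(w)
```

Because the dual solution changes little between consecutive outer iterates, reusing `lam` as the warm start typically lets each inner solve converge in far fewer sweeps than a cold start, which is the mechanism behind the runtime improvements claimed above.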