Verifying And Interpreting Neural Networks using Finite Automata
2022-11-02
Marco Sälzer, Eric Alsmann, Florian Bruse, Martin Lange
Code: github.com/marcosaelzer/nn2nfa
Abstract
Verifying properties and interpreting the behaviour of deep neural networks (DNNs) is an important task given their black-box nature and their ubiquitous use in applications, including safety-critical ones. We propose an automata-theoretic approach to problems arising in DNN analysis. We show that the input-output behaviour of a DNN can be captured precisely by a (special) weak Büchi automaton, and we show how these automata can be used to address common DNN verification and interpretation tasks such as adversarial robustness or minimum sufficient reasons.
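The paper's actual construction (a weak Büchi automaton over infinite words encoding real-valued inputs and outputs) is beyond the scope of a short sketch, but the shape of the resulting verification question is classical: intersect the automaton describing the system's behaviour with an automaton describing the undesired behaviour, and check the product for emptiness. The toy NFAs, state names, and helper functions below are illustrative assumptions, not the paper's code.

```python
# Illustrative sketch (not the paper's construction): verification posed as
# an automata question. We build the product of a "behaviour" NFA and a
# "bad behaviour" NFA and check it for emptiness -- the same shape of
# question the paper reduces DNN robustness queries to.
from itertools import product


class NFA:
    def __init__(self, states, alphabet, delta, init, accepting):
        self.states, self.alphabet = states, alphabet
        self.delta = delta          # dict: (state, symbol) -> set of states
        self.init, self.accepting = init, accepting


def intersect(a, b):
    """Product automaton accepting the intersection of the two languages."""
    delta = {}
    for (p, q), s in product(product(a.states, b.states), a.alphabet):
        delta[((p, q), s)] = {(p2, q2)
                              for p2 in a.delta.get((p, s), set())
                              for q2 in b.delta.get((q, s), set())}
    return NFA(set(product(a.states, b.states)), a.alphabet, delta,
               (a.init, b.init),
               {(p, q) for p in a.accepting for q in b.accepting})


def nonempty(a):
    """Reachability check: is some accepting state reachable from init?"""
    seen, stack = {a.init}, [a.init]
    while stack:
        p = stack.pop()
        if p in a.accepting:
            return True
        for s in a.alphabet:
            for q in a.delta.get((p, s), set()):
                if q not in seen:
                    seen.add(q)
                    stack.append(q)
    return False


# Toy example: A accepts words containing a '1'; Bad accepts words ending in '0'.
A = NFA({0, 1}, {'0', '1'},
        {(0, '0'): {0}, (0, '1'): {1}, (1, '0'): {1}, (1, '1'): {1}},
        0, {1})
Bad = NFA({0, 1}, {'0', '1'},
          {(0, '0'): {1}, (0, '1'): {0}, (1, '0'): {1}, (1, '1'): {0}},
          0, {1})

# Nonempty intersection = a counterexample exists (e.g. the word "10").
print(nonempty(intersect(A, Bad)))  # → True
```

An empty intersection would mean no behaviour of the system falls in the bad set, i.e. the property holds; a nonempty one yields a concrete counterexample word, which in the paper's setting corresponds to a concrete violating input.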