
A Neural Model for Regular Grammar Induction

2022-09-23

Peter Belcák, David Hofer, Roger Wattenhofer


Abstract

Grammatical inference is a classical problem in computational learning theory and a topic of wider influence in natural language processing. We treat grammars as a model of computation and propose a novel neural approach to induction of regular grammars from positive and negative examples. Our model is fully explainable, its intermediate results are directly interpretable as partial parses, and it can be used to learn arbitrary regular grammars when provided with sufficient data. We find that our method consistently attains high recall and precision scores across a range of tests of varying complexity.
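To make the setting concrete, grammar induction from positive and negative examples means finding an automaton that accepts every positive string and rejects every negative one. The sketch below is not the paper's neural model; it is a hypothetical brute-force baseline that enumerates small DFAs until it finds one consistent with the labeled data, purely to illustrate the task.

```python
from itertools import product

def dfa_accepts(delta, accepting, s):
    """Run string s through a DFA given by transition dict delta; start state is 0."""
    state = 0
    for ch in s:
        state = delta[(state, ch)]
    return state in accepting

def induce_dfa(positives, negatives, alphabet=("a", "b"), max_states=3):
    """Return the first DFA with at most max_states states that accepts all
    positives and rejects all negatives, or None if no such DFA exists."""
    for n in range(1, max_states + 1):
        keys = [(q, ch) for q in range(n) for ch in alphabet]
        # Enumerate every transition function over n states.
        for targets in product(range(n), repeat=len(keys)):
            delta = dict(zip(keys, targets))
            # Enumerate every subset of accepting states.
            for mask in range(1 << n):
                accepting = {q for q in range(n) if (mask >> q) & 1}
                if all(dfa_accepts(delta, accepting, s) for s in positives) and \
                   not any(dfa_accepts(delta, accepting, s) for s in negatives):
                    return delta, accepting
    return None

# Toy target language: strings over {a, b} with an even number of a's.
pos = ["", "b", "aa", "aba", "baab"]
neg = ["a", "ab", "ba", "aaa"]
dfa = induce_dfa(pos, neg)
```

A two-state DFA suffices for this toy language, so the search terminates quickly; the exponential blow-up in the number of states is exactly why scalable approaches, such as the neural method proposed here, are of interest.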
