
Training for Gibbs Sampling on Conditional Random Fields with Neural Scoring Factors

2020-11-01 · EMNLP 2020 · Code Available

Sida Gao, Matthew R. Gormley


Abstract

Most recent improvements in NLP come from changes to the neural network architectures modeling the text input. Yet, state-of-the-art models often rely on simple approaches to model the label space, e.g. bigram Conditional Random Fields (CRFs) in sequence tagging. More expressive graphical models are rarely used due to their prohibitive computational cost. In this work, we present an approach for efficiently training and decoding hybrids of graphical models and neural networks based on Gibbs sampling. Our approach is the natural adaptation of SampleRank (Wick et al., 2011) to neural models, and is widely applicable to tasks beyond sequence tagging. We apply our approach to named entity recognition and present a neural skip-chain CRF model, for which exact inference is impractical. The skip-chain model improves over a strong baseline on three languages from CoNLL-02/03. We obtain new state-of-the-art results on Dutch.
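The core inference routine the abstract refers to — Gibbs sampling over a label sequence whose factors are produced by a neural scorer — can be sketched as below. This is a minimal illustration, not the paper's method: the emission and transition scores are toy stand-ins for neural scoring factors, and the function names (`gibbs_sample_tags`) are hypothetical. Each sweep resamples every position from its local conditional distribution given its current neighbors, which is exactly what makes the approach tractable for models (like skip-chain CRFs) where exact inference is impractical.

```python
import math
import random

def gibbs_sample_tags(emissions, transitions, num_sweeps=50, seed=0):
    """Gibbs sampling for a linear-chain CRF-style tagger (toy sketch).

    emissions:   list of per-position score lists (one score per label),
                 standing in for neural scoring factors.
    transitions: K x K bigram scores, transitions[prev][cur].
    Returns a sampled tag sequence after num_sweeps full sweeps.
    """
    rng = random.Random(seed)
    n = len(emissions)
    k = len(transitions)
    tags = [rng.randrange(k) for _ in range(n)]  # random initialization
    for _ in range(num_sweeps):
        for i in range(n):
            # Score each candidate label at position i given the
            # *current* labels of its neighbors (the Markov blanket).
            scores = []
            for y in range(k):
                s = emissions[i][y]
                if i > 0:
                    s += transitions[tags[i - 1]][y]
                if i < n - 1:
                    s += transitions[y][tags[i + 1]]
                scores.append(s)
            # Sample from the local conditional: softmax of the scores.
            m = max(scores)
            weights = [math.exp(s - m) for s in scores]
            r = rng.random() * sum(weights)
            acc = 0.0
            for y, w in enumerate(weights):
                acc += w
                if r <= acc:
                    tags[i] = y
                    break
    return tags
```

With strongly peaked emission scores the sampler concentrates on the highest-scoring sequence; a skip-chain variant would simply add extra terms to the per-position score for long-range factors, leaving the sampling loop unchanged.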
