
Bag of Experts Architectures for Model Reuse in Conversational Language Understanding

2018-06-01 · NAACL 2018

Rahul Jha, Alex Marin, Suvamsh Shivaprasad, Imed Zitouni


Abstract

Slot tagging, the task of detecting entities in input user utterances, is a key component of natural language understanding systems for personal digital assistants. Since each new domain requires a different set of slots, the annotation costs for labeling data to train slot tagging models increase rapidly as the number of domains grows. To tackle this, we describe Bag of Experts (BoE) architectures for model reuse in both LSTM- and CRF-based models. Extensive experimentation over a dataset of 10 domains drawn from data relevant to our commercial personal digital assistant shows that our BoE models outperform the baseline models by a statistically significant average margin of 5.06% in absolute F1-score when training with 2000 instances per domain, and achieve an even higher improvement of 12.16% when only 25% of the training data is used.
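The core idea of a Bag-of-Experts architecture, as the abstract describes it, is to reuse models trained on existing source domains when building a tagger for a new domain. The following is a minimal, hypothetical sketch of that feature-combination step: outputs of several frozen "expert" encoders (one per source domain) are concatenated with a target-domain encoder's output to form the per-token representation fed to a slot tagger. All names, dimensions, and the linear-encoder stand-ins are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_encoder(in_dim, out_dim):
    """Stand-in for a trained feature encoder (e.g. an LSTM layer):
    a fixed linear map followed by tanh. Hypothetical, for illustration."""
    w = rng.standard_normal((in_dim, out_dim)) / np.sqrt(in_dim)
    return lambda x: np.tanh(x @ w)

EMB_DIM, EXPERT_DIM, TARGET_DIM = 16, 8, 8

# Frozen experts reused from previously trained source domains.
experts = [make_encoder(EMB_DIM, EXPERT_DIM) for _ in range(3)]
# Encoder for the new target domain (the part that would be trained).
target_encoder = make_encoder(EMB_DIM, TARGET_DIM)

def boe_features(token_embeddings):
    """Concatenate each expert's output with the target encoder's output,
    per token; a downstream tagger would consume this representation."""
    parts = [enc(token_embeddings) for enc in experts]
    parts.append(target_encoder(token_embeddings))
    return np.concatenate(parts, axis=-1)

tokens = rng.standard_normal((5, EMB_DIM))  # 5 tokens of one utterance
feats = boe_features(tokens)
print(feats.shape)  # (5, 32): 3 experts x 8 dims + 8 target dims
```

Because the experts stay frozen, only the target-domain encoder and the tagger on top need annotated data from the new domain, which is what makes reuse attractive when labels are scarce.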
