BERT for Joint Intent Classification and Slot Filling
Qian Chen, Zhu Zhuo, Wen Wang
Code
- github.com/alibaba-damo-academy/spokennlp (TensorFlow) ★ 125
- github.com/MahmoudWahdan/dialog-nlu (TensorFlow) ★ 100
- github.com/VinAIResearch/JointIDSF (PyTorch) ★ 88
- github.com/dsindex/iclassifier (PyTorch) ★ 44
- github.com/sxjscience/GluonNLP-Slot-Filling (MXNet) ★ 0
- github.com/zhoucz97/JointBERT-paddle (PaddlePaddle) ★ 0
- github.com/mangushev/intent_slot (TensorFlow) ★ 0
- github.com/Polly42Rose/SiriusIntentPredictionSlotFilling (PyTorch) ★ 0
- github.com/sonos/svc-demographic-bias-assessment ★ 0
- github.com/yinghao1019/Joint_learn (PyTorch) ★ 0
Abstract
Intent classification and slot filling are two essential tasks for natural language understanding. They often suffer from small-scale human-labeled training data, resulting in poor generalization capability, especially for rare words. Recently, a new language representation model, BERT (Bidirectional Encoder Representations from Transformers), has enabled pre-training deep bidirectional representations on large-scale unlabeled corpora, and after simple fine-tuning it has produced state-of-the-art models for a wide variety of natural language processing tasks. However, there has not been much effort in exploring BERT for natural language understanding. In this work, we propose a joint intent classification and slot filling model based on BERT. Experimental results demonstrate that our proposed model achieves significant improvement on intent classification accuracy, slot filling F1, and sentence-level semantic frame accuracy on several public benchmark datasets, compared to attention-based recurrent neural network models and slot-gated models.
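The joint setup described in the abstract can be illustrated with a minimal sketch: one shared encoder feeds two classification heads, with the intent predicted from the `[CLS]` position and a slot label predicted per token, and the two cross-entropy losses summed into a single joint objective. The sketch below uses NumPy with random stand-in weights and toy dimensions (the paper fine-tunes BERT-base, hidden size 768, with learned heads); all names and sizes here are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over the last axis
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy stand-in dimensions (assumed; BERT-base uses hidden size 768)
T, d = 6, 16            # sequence length including [CLS], hidden size
n_intents, n_slots = 3, 5

H = rng.standard_normal((T, d))            # stand-in for BERT's token outputs
W_intent = rng.standard_normal((d, n_intents)) * 0.1
W_slot = rng.standard_normal((d, n_slots)) * 0.1

# Intent head: classify the whole utterance from the [CLS] vector H[0]
p_intent = softmax(H[0] @ W_intent)

# Slot head: classify each remaining token independently (e.g. BIO tags)
p_slots = softmax(H[1:] @ W_slot)

# Joint objective: sum of intent and slot cross-entropy losses,
# so fine-tuning optimizes both tasks at once
y_intent = 1                               # toy gold intent label
y_slots = np.array([0, 2, 2, 4, 0])        # toy gold slot labels per token
loss = -np.log(p_intent[y_intent]) \
       - np.log(p_slots[np.arange(T - 1), y_slots]).sum()
print(float(loss))
```

The "Joint BERT + CRF" variant in the results table replaces the independent per-token slot softmax with a CRF layer over the slot label sequence; that decoding step is omitted here for brevity.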
Tasks
- Intent Classification
- Slot Filling
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| ATIS | Joint BERT + CRF | Intent accuracy (%) | 97.9 | — | Unverified |
| ATIS | Joint BERT | Intent accuracy (%) | 97.5 | — | Unverified |