Speaker-Aware BERT for Multi-Turn Response Selection in Retrieval-Based Chatbots
Jia-Chen Gu, Tianda Li, Quan Liu, Zhen-Hua Ling, Zhiming Su, Si Wei, Xiaodan Zhu
Code
- github.com/JasonForJoy/SA-BERT (official, in paper, TensorFlow) ★ 75
- github.com/JasonForJoy/BERT-for-Response-Selection (TensorFlow) ★ 75
Abstract
In this paper, we study the problem of employing pre-trained language models for multi-turn response selection in retrieval-based chatbots. A new model, named Speaker-Aware BERT (SA-BERT), is proposed to make the model aware of speaker-change information, an important and intrinsic property of multi-turn dialogues. Furthermore, a speaker-aware disentanglement strategy is proposed to tackle entangled dialogues. This strategy selects a small number of the most important utterances as the filtered context, according to the speaker information they contain. Finally, domain adaptation is performed to incorporate in-domain knowledge into the pre-trained language model. Experiments on five public datasets show that our proposed model outperforms existing models on all metrics by large margins and achieves new state-of-the-art performance for multi-turn response selection.
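The disentanglement strategy described above can be illustrated with a minimal sketch. The function name, the speaker/mention heuristic, and the fallback to the most recent turns are assumptions made here for illustration; the paper selects utterances according to speaker information via the model, not necessarily this exact rule.

```python
def filter_context(utterances, response_speaker, max_utterances=4):
    """Illustrative speaker-aware context filtering (not the paper's exact rule).

    utterances: list of (speaker, text) tuples, oldest first.
    Keeps turns spoken by the responding speaker, or turns that mention
    that speaker by name; if too few are found, falls back to recency.
    """
    selected = []
    for i, (speaker, text) in enumerate(utterances):
        if speaker == response_speaker or response_speaker in text:
            selected.append(i)
    # Fall back to the most recent turns if speaker cues select too few.
    for i in range(len(utterances) - 1, -1, -1):
        if len(selected) >= max_utterances:
            break
        if i not in selected:
            selected.append(i)
    # Keep at most max_utterances, preserving the original turn order.
    selected = sorted(selected)[-max_utterances:]
    return [utterances[i] for i in selected]
```

For an entangled IRC-style log, `filter_context(log, "alice", max_utterances=3)` would keep alice's own turns and turns addressed to her (e.g. `"alice: hello"`), dropping unrelated interleaved chatter.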
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Douban | SA-BERT | MAP | 0.62 | — | Unverified |
| E-commerce | SA-BERT | R10@1 | 0.7 | — | Unverified |
| RRS | SA-BERT+BERT-FP | MAP | 0.7 | — | Unverified |
| RRS Ranking Test | SA-BERT+BERT-FP | NDCG@3 | 0.67 | — | Unverified |
| Ubuntu Dialogue (v1, Ranking) | SA-BERT | R10@1 | 0.86 | — | Unverified |
| Ubuntu IRC | SA-BERT | Accuracy (%) | 60.42 | — | Unverified |