Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models
Iulian V. Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, Joelle Pineau
Code
- github.com/michaelfarrell76/End-To-End-Generative-Dialogue (torch) ★ 138
- github.com/vamshi009/End-to-End-HRED-Dialogue-System (pytorch) ★ 0
- github.com/Tanasho0928/chat-oriented (pytorch) ★ 0
- github.com/hsgodhia/hred (pytorch) ★ 0
- github.com/wayalhruhi/julianser (none) ★ 0
- github.com/Tanasho0928/ncm (pytorch) ★ 0
- github.com/julianser/hed-dlg-truncated (none) ★ 0
Abstract
We investigate the task of building open-domain, conversational dialogue systems based on large dialogue corpora using generative models. Generative models produce system responses that are autonomously generated word by word, opening up the possibility for realistic, flexible interactions. In support of this goal, we extend the recently proposed hierarchical recurrent encoder-decoder neural network to the dialogue domain, and demonstrate that this model is competitive with state-of-the-art neural language models and back-off n-gram models. We investigate the limitations of this and similar approaches, and show how their performance can be improved by bootstrapping the learning from a larger question-answer pair corpus and from pretrained word embeddings.
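The core idea of the hierarchical recurrent encoder-decoder (HRED) described in the abstract is to stack two recurrent levels: a token-level encoder that summarizes each utterance into a vector, and a context-level encoder that consumes those utterance vectors in turn, so the decoder is conditioned on the whole dialogue history. The sketch below illustrates this structure with plain tanh RNNs and toy dimensions; all parameter names, sizes, and the softmax readout for the first decoder step are illustrative assumptions, not the paper's actual implementation (which uses GRU-style gated units and is trained end-to-end).

```python
import numpy as np

rng = np.random.default_rng(0)
V, E, H = 50, 16, 32  # toy vocab, embedding, and hidden sizes (assumed, not from the paper)

emb = rng.normal(scale=0.1, size=(V, E))  # word embedding table

def rnn(inputs, Wx, Wh, b):
    """Plain tanh RNN; returns the final hidden state."""
    h = np.zeros(Wh.shape[0])
    for x in inputs:
        h = np.tanh(Wx @ x + Wh @ h + b)
    return h

# Token-level (utterance) encoder parameters
Wx_u, Wh_u, b_u = 0.1 * rng.normal(size=(H, E)), 0.1 * rng.normal(size=(H, H)), np.zeros(H)
# Context-level (dialogue) encoder parameters
Wx_c, Wh_c, b_c = 0.1 * rng.normal(size=(H, H)), 0.1 * rng.normal(size=(H, H)), np.zeros(H)
# Decoder initial-state projection and vocabulary readout
W_init = 0.1 * rng.normal(size=(H, H))
W_out = 0.1 * rng.normal(size=(V, H))

def hred_encode(dialogue):
    """dialogue: list of utterances, each a list of token ids.

    Each utterance is first summarized by the token-level RNN,
    then the context RNN folds those summaries into one state.
    """
    c = np.zeros(H)
    for utterance in dialogue:
        u = rnn(emb[utterance], Wx_u, Wh_u, b_u)  # tokens -> utterance vector
        c = np.tanh(Wx_c @ u + Wh_c @ c + b_c)    # utterance vector -> updated context
    return c

def next_token_probs(context):
    """First decoder step: softmax distribution over the vocabulary."""
    h0 = np.tanh(W_init @ context)
    logits = W_out @ h0
    p = np.exp(logits - logits.max())
    return p / p.sum()

dialogue = [[3, 7, 12], [5, 9], [20, 4, 4, 8]]  # toy token ids for three utterances
ctx = hred_encode(dialogue)
probs = next_token_probs(ctx)
```

Because the context RNN carries state across utterances, the response distribution depends on the entire conversation so far rather than only on the last utterance, which is what distinguishes HRED from a flat sequence-to-sequence model.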