
Knowledge-Grounded Dialogue Generation with Pre-trained Language Models

2020-10-17 · EMNLP 2020 · Code Available

Xueliang Zhao, Wei Wu, Can Xu, Chongyang Tao, Dongyan Zhao, Rui Yan


Abstract

We study knowledge-grounded dialogue generation with pre-trained language models. To leverage redundant external knowledge under a capacity constraint, we propose equipping response generation, defined by a pre-trained language model, with a knowledge selection module, together with an unsupervised approach that jointly optimizes knowledge selection and response generation on unlabeled dialogues. Empirical results on two benchmarks indicate that our model significantly outperforms state-of-the-art methods in both automatic evaluation and human judgment.
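The abstract describes a select-then-generate pipeline: a knowledge selection module filters a redundant pool of external knowledge so that only the relevant pieces reach the capacity-limited pre-trained language model. The sketch below illustrates that pipeline shape only; it is not the authors' architecture or their unsupervised joint training. The BERT mean-pooled embedder, the dot-product selector, the prompt format, and the GPT-2 generator are all stand-in assumptions chosen for a runnable, minimal example.

```python
# Minimal select-then-generate sketch (NOT the paper's method): a simple
# embedding-similarity knowledge selector in front of an off-the-shelf
# pre-trained LM. The paper instead learns the selector jointly with the
# generator from unlabeled dialogues.
import torch
from transformers import AutoModel, AutoTokenizer, GPT2LMHeadModel, GPT2Tokenizer

# Stand-in encoder for the knowledge selection module.
enc_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

# Stand-in pre-trained language model for response generation.
gen_tok = GPT2Tokenizer.from_pretrained("gpt2")
generator = GPT2LMHeadModel.from_pretrained("gpt2")


@torch.no_grad()
def embed(text: str) -> torch.Tensor:
    """Mean-pooled BERT embedding of a text span."""
    ids = enc_tok(text, return_tensors="pt", truncation=True, max_length=128)
    hidden = encoder(**ids).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)       # (dim,)


@torch.no_grad()
def select_knowledge(context: str, candidates: list[str]) -> str:
    """Pick the candidate most similar to the dialogue context.

    This dot-product heuristic is a placeholder for the paper's learned
    knowledge selection module.
    """
    ctx = embed(context)
    scores = torch.stack([embed(k) @ ctx for k in candidates])
    return candidates[int(scores.argmax())]


@torch.no_grad()
def respond(context: str, candidates: list[str]) -> str:
    """Select one knowledge sentence, then condition the LM on it."""
    knowledge = select_knowledge(context, candidates)
    prompt = f"knowledge: {knowledge}\ncontext: {context}\nresponse:"
    ids = gen_tok(prompt, return_tensors="pt").input_ids
    out = generator.generate(
        ids,
        max_new_tokens=40,
        do_sample=False,
        pad_token_id=gen_tok.eos_token_id,
    )
    # Decode only the newly generated continuation.
    return gen_tok.decode(out[0, ids.shape[1]:], skip_special_tokens=True)


if __name__ == "__main__":
    ctx = "Have you seen Blade Runner?"
    facts = [
        "Blade Runner is a 1982 science fiction film directed by Ridley Scott.",
        "The Great Wall of China is over 13,000 miles long.",
    ]
    print(respond(ctx, facts))
```

The point of the selector is visible in the prompt construction: instead of concatenating the entire knowledge pool (which would overflow the LM's input budget, the capacity constraint the abstract refers to), only the single selected sentence is passed to the generator.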
