
PLATO-KAG: Unsupervised Knowledge-Grounded Conversation via Joint Modeling

2021-11-01 · EMNLP 2021 (NLP4ConvAI Workshop)

Xinxian Huang, Huang He, Siqi Bao, Fan Wang, Hua Wu, Haifeng Wang


Abstract

Large-scale conversation models increasingly leverage external knowledge to improve factual accuracy in response generation. Given the infeasibility of annotating external knowledge for large-scale dialogue corpora, it is desirable to learn knowledge selection and response generation in an unsupervised manner. In this paper, we propose PLATO-KAG (Knowledge-Augmented Generation), an unsupervised learning approach for end-to-end knowledge-grounded conversation modeling. For each dialogue context, the top-k relevant knowledge elements are selected and then employed in knowledge-grounded response generation. The two components, knowledge selection and response generation, are optimized jointly and effectively under a balanced objective. Experimental results on two publicly available datasets validate the superiority of PLATO-KAG.
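The abstract describes two coupled steps: scoring and selecting the top-k knowledge elements for a dialogue context, then training generation jointly with selection. The sketch below illustrates one common way such a joint objective can be set up, marginalizing the generation likelihood over the selection distribution; the dot-product scorer, function names, and marginalized loss here are illustrative assumptions, not the paper's exact formulation.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def select_top_k(context_vec, knowledge_vecs, k):
    """Score each knowledge element by dot product with the context
    embedding and keep the k highest-scoring candidates.

    Returns (indices, scores) of the selected elements.
    """
    scores = [sum(c * z for c, z in zip(context_vec, kv))
              for kv in knowledge_vecs]
    ranked = sorted(range(len(scores)),
                    key=lambda i: scores[i], reverse=True)[:k]
    return ranked, [scores[i] for i in ranked]

def joint_loss(sel_scores, gen_nlls):
    """Joint objective over the top-k selected knowledge (an assumed
    marginalized-likelihood form):

        L = -log sum_i p(z_i | c) * p(r | c, z_i)

    where p(z_i | c) is a softmax over selection scores and
    p(r | c, z_i) = exp(-gen_nlls[i]) is the generator likelihood.
    Both the selector (via sel_scores) and the generator (via
    gen_nlls) receive gradient from this single loss.
    """
    probs = softmax(sel_scores)
    likelihood = sum(p * math.exp(-nll)
                     for p, nll in zip(probs, gen_nlls))
    return -math.log(likelihood)

# Toy example: 3 knowledge vectors, keep the top 2 for the context.
context = [1.0, 0.0]
knowledge = [[0.9, 0.0], [0.1, 0.0], [0.5, 0.0]]
idx, scores = select_top_k(context, knowledge, k=2)
loss = joint_loss(scores, gen_nlls=[2.3, 2.3])
```

If every selected knowledge element yields the same generation NLL, the marginalized loss reduces to that NLL; knowledge that better supports the response lowers its NLL and, through the shared loss, pulls selection probability toward it, which is the intuition behind optimizing the two components jointly.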
