SOTAVerified

Negative Training for Neural Dialogue Response Generation

2019-03-06 · ACL 2020 · Code Available

Tianxing He, James Glass


Abstract

Although deep learning models have brought tremendous advances to open-domain dialogue response generation, recent results have revealed that trained models exhibit undesirable generation behaviors, such as malicious responses and generic (boring) responses. In this work, we propose a framework named "Negative Training" to minimize such behaviors. Given a trained model, the framework first finds generated samples that exhibit the undesirable behavior, and then uses them to provide negative training signals when fine-tuning the model. Our experiments show that negative training can significantly reduce the hit rate of malicious responses, and can also discourage frequent responses and improve response diversity.
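The core idea in the abstract — penalize flagged samples by pushing *down* their likelihood during fine-tuning — can be illustrated with a minimal sketch. This is a toy unigram model with hand-rolled gradients, not the paper's actual sequence-level procedure; the function names and the learning rate are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    """Convert logits to a probability distribution."""
    e = np.exp(z - z.max())
    return e / e.sum()

def negative_training_step(logits, bad_idx, lr=0.5):
    """One toy negative-training update (illustrative, not the paper's exact method).

    Standard likelihood training would ascend grad log p(bad_idx);
    negative training instead *descends* it, lowering the probability
    of the flagged (undesirable) response.
    """
    p = softmax(logits)
    # Gradient of log p[bad_idx] w.r.t. logits: one_hot(bad_idx) - p
    grad = -p
    grad[bad_idx] += 1.0
    # Move against the likelihood gradient of the flagged sample.
    return logits - lr * grad

# Four candidate responses, initially equally likely; response 2 is flagged.
logits = np.zeros(4)
for _ in range(10):
    logits = negative_training_step(logits, bad_idx=2)

p = softmax(logits)
# p[2] has dropped well below its initial 0.25; mass shifts to the others.
```

In the paper's setting the "flagged sample" is a full generated response (e.g. a malicious or overly frequent reply), and the update is applied to a neural dialogue model rather than a unigram table, but the sign flip on the likelihood gradient is the same mechanism.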
