Adversarial evaluation for open-domain dialogue generation
2017-08-01 · WS 2017
Elia Bruni, Raquel Fernández
Abstract
We investigate the potential of adversarial evaluation methods for open-domain dialogue generation systems, comparing the performance of a discriminative agent to that of humans on the same task. Our results show that the task is hard, both for automated models and humans, but that a discriminative agent can learn patterns that lead to above-chance performance.
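The setup described in the abstract can be illustrated with a minimal sketch: a binary discriminator is trained to tell human-written responses from machine-generated ones given the dialogue context, and its held-out accuracy is compared against chance (and, in the paper, against human judges on the same task). The sketch below uses a simple TF-IDF + logistic-regression classifier and toy data purely for illustration; it is not the authors' discriminative agent, and all example pairs are hypothetical.

```python
# Minimal sketch of adversarial evaluation for dialogue generation
# (illustrative only; not the paper's model). A discriminator is trained
# to distinguish human responses (label 1) from generated ones (label 0),
# given the dialogue context. Above-chance accuracy on held-out data means
# the generator's responses remain distinguishable from human ones.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Toy (context, response, label) triples; in practice these would come from
# a dialogue corpus and a trained response generator.
pairs = [
    ("how are you ?", "i'm fine , thanks for asking .", 1),
    ("how are you ?", "i am a i am a good .", 0),
    ("what do you do ?", "i teach high school math .", 1),
    ("what do you do ?", "i do n't know what you mean .", 0),
    ("where are you from ?", "a small town near lisbon .", 1),
    ("where are you from ?", "i am from the the city .", 0),
    ("do you have pets ?", "yes , two cats and a dog .", 1),
    ("do you have pets ?", "i have a lot of them them .", 0),
]

# Concatenate context and response so the discriminator can condition on both.
texts = [ctx + " [SEP] " + resp for ctx, resp, _ in pairs]
labels = [lab for _, _, lab in pairs]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, random_state=0, stratify=labels
)

# Bag-of-ngrams features stand in for the learned encodings a neural
# discriminator would use.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
clf = LogisticRegression()
clf.fit(vectorizer.fit_transform(X_train), y_train)

preds = clf.predict(vectorizer.transform(X_test))
print("discriminator accuracy:", accuracy_score(y_test, preds))  # chance = 0.5
```

On a realistically sized corpus, the same accuracy measurement would also be collected from human annotators shown identical context-response pairs, allowing the discriminator's performance to be compared directly to the human ceiling on the task.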