LEATHER: A Framework for Learning to Generate Human-like Text in Dialogue
Anthony Sicilia, Malihe Alikhani
Code: github.com/anthonysicilia/leather-aacl2022 (official implementation, PyTorch)
Abstract
Algorithms for text generation in dialogue can be misguided. For example, in task-oriented settings, reinforcement learning that optimizes only for task success can lead to abysmal lexical diversity. We hypothesize this is due to a poor theoretical understanding of the objectives of text generation and their relation to the learning process (i.e., model training). To this end, we propose a new theoretical framework for learning to generate text in dialogue. Compared to existing theories of learning, our framework allows for analysis of the multi-faceted goals inherent in text generation. We use our framework to develop theoretical guarantees for learners that adapt to unseen data. As an example, we apply our theory to study data shift within a cooperative learning algorithm proposed for the GuessWhat?! visual dialogue game. From this insight, we propose a new algorithm, and we demonstrate empirically that it improves both the task success and the human-likeness of the generated text. Finally, we show that statistics from our theory are empirically predictive of multiple qualities of the generated dialogue, suggesting the theory is useful for model selection when human evaluations are not available.