A Dynamic Speaker Model for Conversational Interactions
Hao Cheng, Hao Fang, Mari Ostendorf
Code: github.com/hao-cheng/dynamic_speaker_model
Abstract
Individual differences in speakers are reflected in their language use as well as in their interests and opinions. Characterizing these differences can be useful in human-computer interaction, as well as in the analysis of human-human conversations. In this work, we introduce a neural model for learning a dynamically updated speaker embedding in a conversational context. Initial model training is unsupervised, using context-sensitive language generation as an objective, with the context being the conversation history. Further fine-tuning can leverage task-dependent supervised training. The learned neural representation of speakers is shown to be useful for content ranking in a socialbot and for dialog act prediction in human-human conversations.
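The abstract describes a speaker embedding that is updated dynamically as a conversation unfolds. The following is a minimal, hypothetical sketch of one way such an update could work (a GRU-style gated blend of the running speaker state with an encoding of each new turn); the dimensions, parameter names, and update rule here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

EMB_DIM = 8  # illustrative embedding size, not from the paper

# Hypothetical parameters for a gated (GRU-style) speaker-state update.
W_z = rng.normal(scale=0.1, size=(EMB_DIM, 2 * EMB_DIM))
W_h = rng.normal(scale=0.1, size=(EMB_DIM, 2 * EMB_DIM))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def update_speaker_state(state, turn_vec):
    """Blend the running speaker embedding with the latest turn encoding."""
    x = np.concatenate([state, turn_vec])
    z = sigmoid(W_z @ x)      # update gate: how much to revise the state
    cand = np.tanh(W_h @ x)   # candidate new speaker state
    return (1 - z) * state + z * cand

state = np.zeros(EMB_DIM)     # speaker embedding at the start of a dialog
for _ in range(3):            # three conversation turns
    turn_vec = rng.normal(size=EMB_DIM)  # stand-in for an utterance encoding
    state = update_speaker_state(state, turn_vec)

print(state.shape)  # (8,)
```

In a full model, `turn_vec` would come from an utterance encoder and the resulting `state` would condition the language-generation objective during pretraining, then feed downstream classifiers (e.g., content ranking or dialog act prediction) during fine-tuning.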