SOTAVerified

Multi-Modal Open-Domain Dialogue

2020-10-02 · EMNLP 2021 · Unverified

Kurt Shuster, Eric Michael Smith, Da Ju, Jason Weston

Abstract

Recent work in open-domain conversational agents has demonstrated that significant improvements in model engagingness and humanness metrics can be achieved via massive scaling in both pre-training data and model size (Adiwardana et al., 2020; Roller et al., 2020). However, if we want to build agents with human-like abilities, we must expand beyond handling just text. A particularly important topic is the ability to see images and communicate about what is perceived. With the goal of engaging humans in multi-modal dialogue, we investigate combining components from state-of-the-art open-domain dialogue agents with those from state-of-the-art vision models. We study incorporating different image fusion schemes and domain-adaptive pre-training and fine-tuning strategies, and show that our best resulting model outperforms strong existing models in multi-modal dialogue while simultaneously performing as well as its predecessor (text-only) BlenderBot (Roller et al., 2020) in text-based conversation. We additionally investigate and incorporate safety components in our final model, and show that such efforts do not diminish model performance with respect to engagingness metrics.
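The core technical ingredient described above is fusing image-encoder features into a text-based dialogue transformer. As a rough illustration only (not the authors' implementation; the module names, dimensions, and the prepend-as-token scheme below are all assumptions), one simple fusion strategy projects a pooled image feature into the transformer's embedding space and prepends it to the token sequence:

```python
# Minimal sketch of one image-fusion scheme: project frozen image-encoder
# features into the dialogue transformer's embedding space and prepend them
# as an extra "token". All names and sizes here are illustrative assumptions.
import torch
import torch.nn as nn


class LateFusionEncoder(nn.Module):
    """Fuses a single pooled image feature vector with token embeddings."""

    def __init__(self, image_feat_dim=2048, d_model=512, vocab_size=8008,
                 n_heads=8, n_layers=2):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        # Linear projection from the image encoder's space into the text space.
        self.image_proj = nn.Linear(image_feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, image_feats, token_ids):
        # image_feats: (batch, image_feat_dim), e.g. pooled CNN features.
        # token_ids:   (batch, seq_len) dialogue-context token ids.
        img_tok = self.image_proj(image_feats).unsqueeze(1)  # (batch, 1, d_model)
        txt = self.token_emb(token_ids)                      # (batch, seq_len, d_model)
        fused = torch.cat([img_tok, txt], dim=1)             # prepend image "token"
        return self.encoder(fused)


# Usage with random stand-in inputs.
model = LateFusionEncoder()
feats = torch.randn(2, 2048)            # stand-in for frozen image-encoder output
ids = torch.randint(0, 8008, (2, 16))   # stand-in for tokenized dialogue context
out = model(feats, ids)                 # shape: (2, 17, 512)
```

The abstract notes that the authors compare several fusion schemes and pre-training strategies; this sketch corresponds only to the simplest treat-the-image-as-an-extra-token variant, kept deliberately small for illustration.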

Tasks

Benchmark Results

Dataset              | Model                  | Metric | Claimed | Verified | Status
-------------------- | ---------------------- | ------ | ------- | -------- | ----------
BlendedSkillTalk     | Multi-Modal BlenderBot | BLEU-4 | 1       | –        | Unverified
ConvAI2              | Multi-Modal BlenderBot | BLEU-4 | 1.1     | –        | Unverified
EmpatheticDialogues  | Multi-Modal BlenderBot | BLEU-4 | 1.5     | –        | Unverified
Image-Chat           | Multi-Modal BlenderBot | BLEU-4 | 40      | –        | Unverified
Wizard of Wikipedia  | Multi-Modal BlenderBot | BLEU-4 | 2.2     | –        | Unverified

Reproductions