
Audio-Visual Understanding of Passenger Intents for In-Cabin Conversational Agents

WS 2020 · 2020-07-08

Eda Okur, Shachi H. Kumar, Saurav Sahay, Lama Nachman


Abstract

Building multimodal dialogue understanding capabilities situated in the in-cabin context is crucial to enhance passenger comfort in autonomous vehicle (AV) interaction systems. To this end, understanding passenger intents from spoken interactions and vehicle vision systems is a key component for developing contextual and visually grounded conversational agents for AVs. Towards this goal, we explore AMIE (Automated-vehicle Multimodal In-cabin Experience), the in-cabin agent responsible for handling multimodal passenger-vehicle interactions. In this work, we discuss the benefits of a multimodal understanding of in-cabin utterances by incorporating verbal/language input together with non-verbal/acoustic and visual cues from inside and outside the vehicle. Our multimodal approach outperformed text-only baselines, achieving improved performance on intent detection.
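The idea of combining language, acoustic, and visual cues for intent detection can be illustrated with a minimal late-fusion sketch. This is an illustrative assumption, not the authors' actual pipeline: the intent labels, per-modality scores, and fusion weights below are hypothetical, standing in for the outputs of real text, audio, and vision models.

```python
# Hypothetical decision-level (late) fusion for multimodal intent
# detection: each modality produces class scores, which are combined
# as a weighted average of softmax probabilities. Labels and weights
# are illustrative, not from the paper.
import numpy as np

INTENTS = ["stop", "set_destination", "slow_down"]  # assumed label set

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse_intents(text_logits, audio_logits, vision_logits,
                 weights=(0.5, 0.25, 0.25)):
    """Weighted average of per-modality softmax scores over INTENTS."""
    probs = (weights[0] * softmax(np.asarray(text_logits, float))
             + weights[1] * softmax(np.asarray(audio_logits, float))
             + weights[2] * softmax(np.asarray(vision_logits, float)))
    return INTENTS[int(np.argmax(probs))], probs

# Text and audio lean toward "stop"; vision alone prefers "slow_down".
intent, probs = fuse_intents([2.0, 0.1, 0.3],
                             [1.5, 0.2, 0.9],
                             [0.4, 0.1, 2.2])
print(intent)  # → stop
```

Late fusion is only one choice; early fusion (concatenating modality features before a single classifier) is a common alternative when modality-specific models are not available.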
