
Deciding Whether to Ask Clarifying Questions in Large-Scale Spoken Language Understanding

2021-09-25

Joo-Kyung Kim, Guoyin Wang, Sungjin Lee, Young-Bum Kim


Abstract

A large-scale conversational agent can struggle to understand user utterances that contain various ambiguities, such as ASR ambiguity, intent ambiguity, and hypothesis ambiguity. When ambiguities are detected, the agent should engage in a clarifying dialog to resolve them before committing to actions. However, asking clarifying questions for every ambiguity occurrence would lead to too many questions, substantially hampering the user experience. To trigger clarifying questions only when necessary for user satisfaction, we propose a neural self-attentive model that leverages the ambiguous hypotheses together with contextual signals. We conduct extensive experiments on five common ambiguity types using real data from a large-scale commercial conversational agent and demonstrate significant improvement over a set of baseline approaches.
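The abstract describes a self-attentive model that jointly attends over ambiguous hypotheses and contextual signals, then makes a binary trigger decision. The sketch below illustrates that general architecture with untrained random weights; all function names, parameter shapes, and the mean-pooling and threshold choices are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over one sequence."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores, axis=-1) @ V

def should_clarify(hyp_embeds, ctx_embeds, params, threshold=0.5):
    """Decide whether to trigger a clarifying question (hypothetical sketch).

    hyp_embeds: (n_hyp, d) embeddings of the ambiguous hypotheses
    ctx_embeds: (n_ctx, d) embeddings of contextual signals
    """
    X = np.vstack([hyp_embeds, ctx_embeds])       # one joint input sequence
    H = self_attention(X, params["Wq"], params["Wk"], params["Wv"])
    pooled = H.mean(axis=0)                       # mean-pool attended states
    logit = pooled @ params["w"] + params["b"]    # linear decision layer
    prob = 1.0 / (1.0 + np.exp(-logit))           # P(ask a clarifying question)
    return prob, bool(prob > threshold)

d = 8
params = {
    "Wq": rng.normal(size=(d, d)), "Wk": rng.normal(size=(d, d)),
    "Wv": rng.normal(size=(d, d)), "w": rng.normal(size=d), "b": 0.0,
}
hyp = rng.normal(size=(3, d))   # e.g. 3 competing ASR/intent hypotheses
ctx = rng.normal(size=(2, d))   # e.g. 2 contextual signals
prob, ask = should_clarify(hyp, ctx, params)
print(f"P(clarify)={prob:.3f}, ask={ask}")
```

In a trained system, the decision layer would be fit so that the agent asks only when clarification is likely to improve user satisfaction, rather than on every detected ambiguity.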
