Attention Modulation for Zero-Shot Cross-Domain Dialogue State Tracking

2022-10-01 · COLING (CODI, CRAC) 2022 · Code Available

Mathilde Veron, Olivier Galibert, Guillaume Bernard, Sophie Rosset


Abstract

Dialogue state tracking (DST) is a core component of task-oriented dialogue systems, aiming to track the user's current goal throughout a dialogue. Recently, special attention has been paid to applying existing DST models to new domains, in other words performing zero-shot cross-domain transfer. While recent state-of-the-art models leverage large pre-trained language models, no work has been done on understanding and improving the results of early zero-shot models such as SUMBT. In this paper, we therefore propose to improve SUMBT's zero-shot results on MultiWOZ by using attention modulation during inference. This method significantly improves SUMBT's zero-shot results on two domains without degrading its initial performance, and has the great advantage of requiring no additional training.
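The abstract does not specify how the attention weights are modulated, so the following is only an illustrative sketch of the general idea of inference-time attention modulation: rescaling the attention logits with a temperature before the softmax, so that the distribution over dialogue-history tokens can be sharpened or flattened without any retraining. The function name `modulated_attention` and the temperature mechanism are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def modulated_attention(query, keys, values, temperature=1.0):
    """Scaled dot-product attention with an inference-time temperature.

    temperature < 1 sharpens the attention distribution; > 1 flattens it.
    This is an illustrative stand-in for the paper's modulation scheme.
    """
    d_k = query.shape[-1]
    logits = query @ keys.T / np.sqrt(d_k)   # standard scaled dot-product
    weights = softmax(logits / temperature)  # modulation applied at inference
    return weights @ values, weights
```

Because the modulation only rescales existing attention logits, it can be applied to a frozen, already-trained model, which matches the abstract's claim of needing no additional training.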
