Re-framing Incremental Deep Language Models for Dialogue Processing with Multi-task Learning
COLING 2020 · 2020-11-13 · Code available
Morteza Rohanian, Julian Hough
- Code: github.com/mortezaro/mtl-disfluency-detection (official, TensorFlow)
Abstract
We present a multi-task learning framework for training one universal incremental dialogue processing model on four tasks (disfluency detection, language modelling, part-of-speech tagging, and utterance segmentation) in a simple deep recurrent setting. We show that these tasks provide positive inductive biases to each other, with the optimal contribution of each task depending on how noisy that task is. Our live multi-task model outperforms its single-task counterparts, delivers competitive performance, and is a promising candidate for future use in conversational agents for psychiatric treatment.
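The abstract describes a single recurrent encoder shared across four sequence-labelling-style tasks. A minimal sketch of such a setup in TensorFlow (the framework of the linked repo) might look as follows; the layer sizes, label-set sizes, and per-task loss weights here are illustrative assumptions, not the paper's actual hyperparameters:

```python
import numpy as np
import tensorflow as tf

VOCAB = 10000  # assumed vocabulary size, for illustration only

# Shared recurrent encoder over the word sequence.
words = tf.keras.Input(shape=(None,), dtype="int32")
x = tf.keras.layers.Embedding(VOCAB, 128)(words)
shared = tf.keras.layers.LSTM(256, return_sequences=True)(x)

# One softmax head per task; label-set sizes are hypothetical.
heads = {
    "disfluency": 3,   # e.g. fluent / edit / repair tags
    "lm": VOCAB,       # next-word prediction
    "pos": 45,         # POS tag set size
    "utt_seg": 2,      # utterance boundary vs. non-boundary
}
outputs = [
    tf.keras.layers.Dense(n, activation="softmax", name=name)(shared)
    for name, n in heads.items()
]
model = tf.keras.Model(words, outputs)

# Per-task loss weights would be tuned to how noisy each task is;
# the values below are placeholders.
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    loss_weights={"disfluency": 1.0, "lm": 0.5, "pos": 0.5, "utt_seg": 0.5},
)

# Each head emits a per-timestep distribution over its label set.
preds = model(np.zeros((2, 7), dtype="int32"))
```

Because the encoder runs left-to-right over the input one token at a time, the same shared representation can serve all four heads incrementally, which is the property the paper exploits for live dialogue processing.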