Transformer for Emotion Recognition
2018-05-03
Jean-Benoit Delbrouck
- github.com/jbdel/OMG_UMONS_submission (official, TensorFlow)
Abstract
This paper describes the UMONS solution for the OMG-Emotion Challenge. We explore a context-dependent architecture where the arousal and valence of an utterance are predicted according to its surrounding context (i.e. the preceding and following utterances of the video). We report an improvement when taking into account context for both unimodal and multimodal predictions.
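The core idea of the context-dependent architecture is that each utterance's arousal and valence are predicted from the utterance itself plus its neighbours in the video. A minimal sketch of that context-window step, assuming zero-padding at video boundaries and a simple linear head (both illustrative choices, not details from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def context_features(utterances, i):
    """Concatenate the preceding, current, and following utterance features.
    Boundary utterances are zero-padded (an illustrative choice)."""
    d = utterances.shape[1]
    prev = utterances[i - 1] if i > 0 else np.zeros(d)
    nxt = utterances[i + 1] if i < len(utterances) - 1 else np.zeros(d)
    return np.concatenate([prev, utterances[i], nxt])

# Toy video: 5 utterances, each a 4-dim feature vector
# (stand-in for real unimodal or multimodal utterance embeddings).
video = rng.normal(size=(5, 4))

# Hypothetical linear head mapping context features to (arousal, valence);
# the actual model and weights are learned, not random as here.
W = rng.normal(size=(12, 2))
for i in range(len(video)):
    arousal, valence = context_features(video, i) @ W
```

The same windowing applies whether the per-utterance features come from a single modality or from a fused multimodal representation.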