Modulated Fusion using Transformer for Linguistic-Acoustic Emotion Recognition
2020-10-05 · EMNLP (nlpbt) 2020 · Code Available
Jean-Benoit Delbrouck, Noé Tits, Stéphane Dupont
- Code: github.com/jbdel/modulated_fusion_transformer (official, in paper, PyTorch, ★ 32)
Abstract
This paper presents a lightweight yet powerful solution for Emotion Recognition and Sentiment Analysis. We propose two architectures, based on Transformers and modulation, that combine linguistic and acoustic inputs from a wide range of datasets to challenge, and sometimes surpass, the state of the art in the field. To demonstrate the efficiency of our models, we carefully evaluate their performance on the IEMOCAP, MOSI, MOSEI and MELD datasets. The experiments can be directly replicated and the code is fully open for future research.
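To make the "modulation" idea concrete, below is a minimal NumPy sketch of a FiLM-style modulated fusion step, in which acoustic features generate a per-dimension scale and shift applied to linguistic Transformer features. This is an illustrative assumption about the mechanism, not the authors' exact implementation; all names (`film_modulate`, the weight matrices) are hypothetical.

```python
import numpy as np

def film_modulate(linguistic, acoustic, w_gamma, w_beta):
    """FiLM-style modulation (hypothetical sketch, not the paper's exact code).

    The acoustic representation is projected into a scale (gamma) and a
    shift (beta), which modulate the linguistic features element-wise.
    """
    gamma = acoustic @ w_gamma          # (batch, d_ling) scaling factors
    beta = acoustic @ w_beta            # (batch, d_ling) additive shifts
    return gamma * linguistic + beta    # modulated linguistic features

# Toy shapes: 2 utterances, 8-dim linguistic and 4-dim acoustic features.
rng = np.random.default_rng(0)
linguistic = rng.standard_normal((2, 8))
acoustic = rng.standard_normal((2, 4))
w_gamma = rng.standard_normal((4, 8))
w_beta = rng.standard_normal((4, 8))

fused = film_modulate(linguistic, acoustic, w_gamma, w_beta)
```

In a full model, `fused` would feed the subsequent Transformer layers before a classification head predicts the emotion or sentiment label.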