SOTAVerified

Multi-Modal Emotion recognition on IEMOCAP Dataset using Deep Learning

2018-04-16 · Code Available

Samarth Tripathi, Sarthak Tripathi, Homayoon Beigi


Abstract

Emotion recognition has become an important field of research in Human-Computer Interaction as techniques for modelling the various aspects of behaviour improve. As technology advances and our understanding of emotions deepens, there is a growing need for automatic emotion recognition systems. One direction this research is heading is the use of neural networks, which are adept at estimating complex functions that depend on a large number of diverse input sources. In this paper we exploit this effectiveness of neural networks to perform multimodal emotion recognition on the IEMOCAP dataset using data from speech, text, and motion capture of facial expressions, rotation, and hand movements. Prior research has concentrated on emotion detection from speech on the IEMOCAP dataset, but our approach is the first to use the multiple modes of data offered by IEMOCAP for more robust and accurate emotion detection.
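The multimodal approach described above can be sketched as feature-level fusion: each modality (speech, text, motion capture) is encoded separately and the encodings are concatenated before a final emotion classifier. The sketch below is a minimal NumPy illustration of that idea, not the authors' architecture; all feature sizes, the linear-ReLU encoders, and the four-class setup are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality encoder (a stand-in for the speech, text,
# and motion-capture networks a real system would use).
def encode(features, weight):
    # Linear projection followed by a ReLU nonlinearity.
    return np.maximum(weight @ features, 0.0)

# Assumed per-modality feature vectors -- sizes are illustrative only.
speech = rng.standard_normal(34)    # e.g. acoustic features per utterance
text = rng.standard_normal(300)     # e.g. averaged word embeddings
mocap = rng.standard_normal(165)    # e.g. facial/hand marker values

# Randomly initialised projection weights (64-dim encoding per modality).
W_s = rng.standard_normal((64, 34))
W_t = rng.standard_normal((64, 300))
W_m = rng.standard_normal((64, 165))

# Feature-level fusion: concatenate the per-modality encodings.
fused = np.concatenate([encode(speech, W_s),
                        encode(text, W_t),
                        encode(mocap, W_m)])

# Final classifier over 4 emotion classes (angry/happy/sad/neutral is
# a common four-class IEMOCAP setup), with a softmax over the logits.
W_out = rng.standard_normal((4, fused.size))
logits = W_out @ fused
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs.shape)  # (4,) -- one probability per emotion class
```

In a trained system the random weights would be learned end-to-end, and the single concatenation step is what lets the classifier draw on all three modalities at once rather than on speech alone.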

Tasks

Benchmark Results

Dataset | Model | Metric | Claimed | Verified | Status
Expressive hands and faces dataset (EHF) | SMPLify-X | v2v error | 52.9 | - | Unverified

Reproductions