
A Multi-modal Personality Prediction System

2020-12-01 · ICON 2020

Chanchal Suman, Aditya Gupta, Sriparna Saha, Pushpak Bhattacharyya


Abstract

Automatic prediction of personality traits has many real-life applications, e.g., in forensics, recommender systems, and personalized services. In this work, we propose a framework for predicting the personality traits of a user from videos. Ambient, facial, and audio features are extracted from the user's video and used for the final prediction. The visual and audio modalities are combined in two different ways: by averaging the predictions obtained from the individual modalities, and by concatenating the features in a multi-modal setting. The dataset released in ChaLearn-16 is used for evaluating the performance of the system. Experimental results illustrate that it is possible to obtain better performance with a handful of images, rather than using all the images present in the video.
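The two fusion strategies described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the feature dimensions, trait scores, and variable names are assumptions made for the example.

```python
import numpy as np

# Hypothetical per-modality predictions for the five personality traits
# (values are illustrative, not taken from the paper).
visual_pred = np.array([0.61, 0.48, 0.55, 0.70, 0.52])
audio_pred = np.array([0.57, 0.52, 0.49, 0.66, 0.58])

# Late fusion: average the predictions obtained from the
# individual modalities.
late_fused = (visual_pred + audio_pred) / 2.0

# Early fusion: concatenate per-modality feature vectors and feed
# the combined vector to a single predictor (feature sizes assumed).
visual_feat = np.random.rand(128)   # e.g. ambient/facial features
audio_feat = np.random.rand(64)     # e.g. audio features
multi_modal_feat = np.concatenate([visual_feat, audio_feat])

print(late_fused)                 # averaged trait scores
print(multi_modal_feat.shape)     # combined feature vector, (192,)
```

Late fusion keeps the per-modality models independent, while early fusion lets one model learn cross-modal interactions from the concatenated features.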
