
MONAH: Multi-Modal Narratives for Humans to analyze conversations

2021-01-18 · EACL 2021 · Code Available

Joshua Y. Kim, Greyson Y. Kim, Chunfeng Liu, Rafael A. Calvo, Silas C. R. Taylor, Kalina Yacef


Abstract

In conversational analyses, humans manually weave multimodal information into the transcripts, a significantly time-consuming process. We introduce a system that automatically expands the verbatim transcripts of video-recorded conversations using multimodal data streams. This system uses a set of preprocessing rules to weave multimodal annotations into the verbatim transcripts and promote interpretability. Our feature engineering contributions are two-fold: firstly, we identify the range of multimodal features relevant to detecting rapport-building; secondly, we expand the range of multimodal annotations and show that the expansion leads to statistically significant improvements in detecting rapport-building.
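The core idea of weaving multimodal annotations into a verbatim transcript can be sketched in a few lines. The data structures and annotation labels below are hypothetical illustrations, not the paper's actual preprocessing rules or formats: each utterance and each nonverbal annotation carries a timestamp, and merging them in time order yields an expanded, human-readable transcript.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    start: float  # onset time in seconds (hypothetical field names)
    text: str     # e.g. "patient smiles", "doctor nods"

def weave(utterances, annotations):
    """Interleave nonverbal annotations into a verbatim transcript.

    `utterances` is a list of (start_time, speaker, text) tuples; this is
    an illustrative structure, not the system's actual input format.
    """
    events = [(t, f"{spk}: {txt}") for t, spk, txt in utterances]
    events += [(a.start, f"[{a.text}]") for a in annotations]
    # Sort all events by onset time so annotations land between utterances.
    events.sort(key=lambda e: e[0])
    return "\n".join(line for _, line in events)

utterances = [(0.0, "Doctor", "How are you feeling today?"),
              (2.5, "Patient", "A little better, thanks.")]
annotations = [Annotation(1.2, "patient smiles"),
               Annotation(2.0, "doctor nods")]
print(weave(utterances, annotations))
```

In this sketch, the expanded transcript reads like stage directions inserted into the dialogue, which is the kind of interpretable output the abstract describes.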
