
A Large-Scale Corpus of E-mail Conversations with Standard and Two-Level Dialogue Act Annotations

COLING 2020 · 2020-12-01

Motoki Taniguchi, Yoshihiro Ueda, Tomoki Taniguchi, Tomoko Ohkuma


Abstract

We present a large-scale corpus of e-mail conversations with domain-agnostic and two-level dialogue act (DA) annotations, towards the goal of a better understanding of asynchronous conversations. We annotate over 6,000 messages and 35,000 sentences from more than 2,000 threads. For domain-independent and application-independent DA annotations, we adopt ISO standard 24617-2 as the annotation scheme. To assess the difficulty of DA recognition on our corpus, we evaluate several models, including a pre-trained contextual representation model, as our baselines. The experimental results show that BERT outperforms other neural network models, including previous state-of-the-art models, but still falls short of human performance. We also demonstrate that DA tags of two-level granularity enable a DA recognition model to learn efficiently via multi-task learning. An evaluation of a model trained on our corpus against other domains of asynchronous conversation demonstrates the domain independence of our DA annotations.
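The two-level multi-task setup described in the abstract can be sketched as a shared sentence representation feeding two classification heads, one per tag granularity, trained on the sum of both cross-entropy losses. This is a minimal NumPy illustration, not the paper's implementation: the dimensions, random toy data, and fixed linear encoder standing in for BERT are all assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative, not from the paper): 16-d sentence
# embeddings, 5 coarse DA tags, 12 fine-grained DA tags, 32 sentences.
D, H, C_COARSE, C_FINE, N = 16, 8, 5, 12, 32

# Toy data standing in for encoded sentences and their two-level DA labels.
X = rng.normal(size=(N, D))
y_coarse = rng.integers(0, C_COARSE, size=N)
y_fine = rng.integers(0, C_FINE, size=N)

# Shared encoder weights plus one output head per tag granularity.
W_shared = rng.normal(scale=0.1, size=(D, H))
W_coarse = rng.normal(scale=0.1, size=(H, C_COARSE))
W_fine = rng.normal(scale=0.1, size=(H, C_FINE))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def forward(X):
    h = np.tanh(X @ W_shared)  # shared representation used by both heads
    return h, softmax(h @ W_coarse), softmax(h @ W_fine)

def joint_loss(p_c, p_f):
    # Multi-task objective: sum of the cross-entropies at both granularities.
    ce_c = -np.log(p_c[np.arange(N), y_coarse]).mean()
    ce_f = -np.log(p_f[np.arange(N), y_fine]).mean()
    return ce_c + ce_f

h, p_c, p_f = forward(X)
loss_before = joint_loss(p_c, p_f)

# One full-batch gradient step on the two heads (shared encoder kept fixed
# here for brevity; in practice it would be fine-tuned jointly).
g_c = p_c.copy(); g_c[np.arange(N), y_coarse] -= 1
g_f = p_f.copy(); g_f[np.arange(N), y_fine] -= 1
W_coarse -= 0.1 * h.T @ g_c / N
W_fine -= 0.1 * h.T @ g_f / N

_, p_c, p_f = forward(X)
loss_after = joint_loss(p_c, p_f)
```

Because both heads share the same representation, gradients from the coarse and fine tag sets both shape the encoder in the jointly trained case, which is the mechanism behind the efficiency gain the abstract reports.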
