Multi-Task Bidirectional Transformer Representations for Irony Detection

2019-09-08

Chiyu Zhang, Muhammad Abdul-Mageed

Abstract

Supervised deep learning requires large amounts of training data. In the context of the FIRE2019 Arabic irony detection shared task (IDAT@FIRE2019), we show how we mitigate this need by fine-tuning pre-trained Bidirectional Encoder Representations from Transformers (BERT) on gold data in a multi-task setting. We further improve our models by continuing to pre-train BERT on "in-domain" data, thus alleviating an issue of dialect mismatch in the Google-released BERT model. Our best model achieves an 82.4 macro F1 score and has the unique advantage of being feature-engineering free (i.e., based exclusively on deep learning).
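The multi-task setup the abstract describes can be sketched as a training schedule in which batches from several tasks pass through one shared encoder, each followed by a task-specific output head. The sketch below is a minimal, hedged illustration of that schedule only: `encode`, the toy heads, and the task names are stand-ins invented here, not the paper's actual BERT model, data, or hyperparameters.

```python
import random

def encode(text):
    # Stand-in for the shared encoder (BERT in the paper): here just
    # a trivial two-dimensional feature vector derived from the text.
    return [len(text), text.count("!")]

def make_head(weights):
    # Stand-in for a per-task linear classification head.
    def head(features):
        return sum(w * f for w, f in zip(weights, features))
    return head

# Two hypothetical tasks sharing the encoder, each with its own head
# and its own (toy) batches of examples.
tasks = {
    "irony": {"head": make_head([0.1, 1.0]),
              "batches": [["so fun!!!", "great, rain again"]]},
    "sentiment": {"head": make_head([0.05, 0.5]),
                  "batches": [["I love this", "terrible film"]]},
}

def multitask_steps(tasks, epochs=1, seed=0):
    """Round-robin multi-task schedule: each step draws a batch from
    one task, runs it through the shared encoder, then through that
    task's own head. Returns a log of (task, batch_size) steps."""
    rng = random.Random(seed)
    log = []
    for _ in range(epochs):
        order = list(tasks)
        rng.shuffle(order)  # vary task order each epoch
        for name in order:
            for batch in tasks[name]["batches"]:
                scores = [tasks[name]["head"](encode(x)) for x in batch]
                log.append((name, len(scores)))
    return log

steps = multitask_steps(tasks)
```

In an actual implementation, `encode` would be the fine-tuned BERT encoder, the heads would be trainable linear layers, and each step would also compute a task loss and backpropagate through the shared parameters, which is what lets the auxiliary task regularize the irony classifier.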
