Transformer-based Multi-Task Learning for Adverse Effect Mention Analysis in Tweets

2021-06-01 · NAACL (SMM4H) 2021

George-Andrei Dima, Dumitru-Clementin Cercel, Mihai Dascalu


Abstract

This paper presents our contribution to the Social Media Mining for Health Applications Shared Task 2021. We addressed all three subtasks of Task 1: Subtask A (classification of tweets containing adverse effects), Subtask B (extraction of text spans containing adverse effects), and Subtask C (adverse effect resolution). We explored various pre-trained transformer-based language models and focused on a multi-task training architecture. For the first subtask, we also applied adversarial augmentation techniques and formed model ensembles to improve the robustness of the predictions. Our system ranked first in Subtask B with an F1 score of 0.51 (precision 0.514, recall 0.514). For Subtask A we obtained an F1 score of 0.44 (precision 0.49, recall 0.39), and for Subtask C an F1 score of 0.16 (precision 0.16, recall 0.17).
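The multi-task architecture mentioned in the abstract can be sketched as a shared transformer encoder with one output head per subtask. The sketch below is a minimal, hypothetical illustration: the layer sizes, head names, and use of the first token as a classification proxy are assumptions for the sake of the example, not the authors' actual implementation, which builds on pre-trained language models.

```python
# Hypothetical sketch of a multi-task setup: a shared transformer encoder
# with a tweet-level classification head (Subtask A) and a token-level
# span-tagging head (Subtask B). All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class MultiTaskAEModel(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64, n_tags=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Subtask A: does the tweet mention an adverse effect? (binary)
        self.cls_head = nn.Linear(d_model, 2)
        # Subtask B: per-token BIO tags marking adverse-effect spans
        self.span_head = nn.Linear(d_model, n_tags)

    def forward(self, token_ids):
        h = self.encoder(self.embed(token_ids))   # (batch, seq, d_model)
        cls_logits = self.cls_head(h[:, 0])       # first token as [CLS] proxy
        span_logits = self.span_head(h)           # per-token tag logits
        return cls_logits, span_logits

model = MultiTaskAEModel()
ids = torch.randint(0, 1000, (2, 16))            # two toy tweets, 16 tokens
cls_logits, span_logits = model(ids)
print(cls_logits.shape, span_logits.shape)       # torch.Size([2, 2]) torch.Size([2, 16, 3])
```

In training, the losses of the two heads would be summed (possibly weighted) so the shared encoder learns representations useful for both subtasks at once.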
