
Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020

2020-05-01 · LREC 2020 · Code Available

Sudhanshu Mishra, Shivangi Prasad, Shubhanshu Mishra


Abstract

We present the approach of our team '3Idiots' (referred to as 'sdhanshu' in the official rankings) for the Trolling, Aggression and Cyberbullying (TRAC) 2020 shared tasks. Our approach relies on fine-tuning various Transformer models on the different datasets. We also investigated the utility of task label marginalization, joint label classification, and joint training on multilingual datasets as possible improvements to our models. Our team came second in English sub-task A, a close fourth in English sub-task B, and third in the remaining four sub-tasks. We find the multilingual joint training approach to be the best trade-off between the computational efficiency of model deployment and evaluation performance. We open-source our approach at https://github.com/socialmediaie/TRAC2020.
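The joint label classification and task label marginalization mentioned in the abstract can be illustrated with a minimal sketch: a single classifier head predicts the joint label over the cross-product of sub-task A and sub-task B labels, and per-task probabilities are then recovered by summing the joint distribution over the other task's labels. This is an illustrative reconstruction, not the authors' code; the label names below follow the TRAC 2020 task definitions, and the example probabilities are made up.

```python
# Sketch (not the authors' implementation): task label marginalization
# over a joint-label classifier's output distribution.
import numpy as np

TASK_A = ["NAG", "CAG", "OAG"]   # sub-task A: aggression labels
TASK_B = ["NGEN", "GEN"]         # sub-task B: gendered/misogyny labels
# Joint classes are the cross-product: 3 x 2 = 6 labels, laid out row-major.
JOINT = [(a, b) for a in TASK_A for b in TASK_B]

def marginalize(joint_probs):
    """Split a joint-label distribution into per-task distributions."""
    grid = np.asarray(joint_probs).reshape(len(TASK_A), len(TASK_B))
    probs_a = grid.sum(axis=1)   # marginalize out sub-task B
    probs_b = grid.sum(axis=0)   # marginalize out sub-task A
    return probs_a, probs_b

# Hypothetical softmax output of a joint classifier for one post.
joint = [0.50, 0.10, 0.15, 0.05, 0.15, 0.05]
pa, pb = marginalize(joint)
print(TASK_A[int(np.argmax(pa))])  # -> NAG
print(TASK_B[int(np.argmax(pb))])  # -> NGEN
```

One model and one forward pass thus serve both sub-tasks, which is part of why joint training offers the deployment-efficiency trade-off the abstract highlights.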
