
Pay "Attention" to your Context when Classifying Abusive Language

2019-08-01 · WS 2019 · Code Available

Tuhin Chakrabarty, Kilol Gupta, Smaranda Muresan


Abstract

The goal of any social media platform is to facilitate healthy and meaningful interactions among its users. But more often than not, it has been found that it becomes an avenue for wanton attacks. We propose an experimental study that has three aims: 1) to provide us with a deeper understanding of current data sets that focus on different types of abusive language, which are sometimes overlapping (racism, sexism, hate speech, offensive language, and personal attacks); 2) to investigate what type of attention mechanism (contextual vs. self-attention) is better for abusive language detection using deep learning architectures; and 3) to investigate whether stacked architectures provide an advantage over simple architectures for this task.
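The abstract contrasts contextual attention with self-attention for pooling an encoder's hidden states before classification. As a rough illustration of the self-attention variant, the sketch below (an assumption, not the paper's implementation; all parameter names and sizes are invented for the example) applies additive attention over a toy sequence of hidden states, as one might over BiLSTM outputs, and pools them into a single context vector:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hidden states from an encoder (e.g., a BiLSTM): T timesteps, d dims.
# In a real model these come from the network; here they are random.
T, d = 5, 8
H = rng.normal(size=(T, d))

# Additive self-attention parameters (illustrative, randomly initialized;
# in practice learned jointly with the classifier).
W = rng.normal(size=(d, d))
v = rng.normal(size=(d,))

def self_attention(H, W, v):
    """Score each timestep, softmax the scores, and pool the sequence."""
    scores = np.tanh(H @ W) @ v            # one scalar score per timestep, shape (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()               # softmax over timesteps
    context = weights @ H                  # weighted sum of hidden states, shape (d,)
    return context, weights

context, weights = self_attention(H, W, v)
print(weights.round(3))  # attention distribution over the T timesteps
```

The resulting `context` vector would then feed a classification layer; a contextual-attention variant would instead score `H` against an external query vector rather than against learned parameters alone.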
