
Explain Thyself Bully: Sentiment Aided Cyberbullying Detection with Explanation

2024-01-17

Krishanu Maity, Prince Jha, Raghav Jain, Sriparna Saha, Pushpak Bhattacharyya


Abstract

Cyberbullying has become a serious problem with the growing popularity of social media networks and online communication platforms. While substantial research addresses cyberbullying detection in monolingual settings, very little work covers code-mixed languages or the explainability of cyberbullying models. Recent regulations, such as the "right to explanation" in the General Data Protection Regulation, have spurred research into interpretable models rather than a sole focus on performance. Motivated by this, we develop mExCB, the first interpretable multi-task model for automatic cyberbullying detection in code-mixed languages, which simultaneously solves several tasks: cyberbullying detection, explanation/rationale identification, target group detection, and sentiment analysis. We also introduce BullyExplain, the first benchmark dataset for explainable cyberbullying detection in a code-mixed language. Each post in BullyExplain is annotated with four labels: a bully label, a sentiment label, a target group, and rationales (for explainability), i.e., the phrases responsible for the post being annotated as bullying. The proposed multi-task framework, mExCB, based on CNN and GRU with word- and sub-sentence (SS)-level attention, outperforms several baselines and state-of-the-art models on the BullyExplain dataset.
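The abstract describes a shared encoder with word-level attention feeding several task heads. The following is a minimal, hypothetical numpy sketch of that multi-task idea — all names, dimensions, and label sets here are illustrative assumptions, not the paper's actual mExCB architecture (which uses trained CNN/GRU encoders):

```python
import numpy as np

# Toy sketch (assumed, not from the paper): random token states stand in for
# the CNN/GRU encoder output; word-level attention pools them, and the pooled
# vector is shared across classification heads, mirroring multi-task learning.

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def word_attention(H, w):
    """Score each token state with vector w, return the weighted sum."""
    scores = softmax(H @ w)        # attention weights, shape (T,)
    return scores @ H, scores      # pooled vector (d,), weights (T,)

T, d = 6, 8                        # toy sequence length and hidden size
H = rng.normal(size=(T, d))        # stand-in for encoder token states
w = rng.normal(size=d)             # attention parameter vector

pooled, attn = word_attention(H, w)

# Task heads sharing the pooled representation (label sets are toy examples).
heads = {
    "bully": rng.normal(size=(d, 2)),      # bully / not bully
    "sentiment": rng.normal(size=(d, 3)),  # negative / neutral / positive
    "target": rng.normal(size=(d, 4)),     # toy target-group classes
}
logits = {task: pooled @ W for task, W in heads.items()}

# Rationale identification can reuse attention: tokens weighted above the
# uniform baseline 1/T are flagged as candidate rationale phrases.
rationale_mask = attn > (1.0 / T)
predictions = {task: int(v.argmax()) for task, v in logits.items()}
```

In the actual model, the encoder parameters, attention vector, and head weights would be learned jointly, so supervision from sentiment and rationale labels can improve the bully-detection head through the shared representation.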
