
Comparative Studies of Detecting Abusive Language on Twitter

2018-08-30 · WS 2018 · Code Available

Younghun Lee, Seunghyun Yoon, Kyomin Jung


Abstract

The context-dependent nature of online aggression makes annotating large collections of data extremely difficult. Previously studied datasets in abusive language detection have been insufficient in size to efficiently train deep learning models. Recently, Hate and Abusive Speech on Twitter, a dataset much greater in size and reliability, has been released. However, this dataset has not yet been studied to its full potential. In this paper, we conduct the first comparative study of various learning models on Hate and Abusive Speech on Twitter, and discuss the possibility of using additional features and context data for improvement. Experimental results show that a bidirectional GRU network trained on word-level features, with Latent Topic Clustering modules, is the most accurate model, scoring 0.805 F1.
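As a rough illustration of the architecture the abstract names, the sketch below builds a bidirectional GRU classifier over word-level embeddings in PyTorch. The vocabulary size, embedding and hidden dimensions, and the four-way label set are illustrative assumptions, not the paper's settings, and the Latent Topic Clustering module is omitted here.

```python
import torch
import torch.nn as nn


class BiGRUClassifier(nn.Module):
    """Hedged sketch: bidirectional GRU over word embeddings.

    Hyperparameters are placeholders; the paper's Latent Topic
    Clustering module is not implemented in this sketch.
    """

    def __init__(self, vocab_size=10000, embed_dim=100,
                 hidden_dim=64, num_classes=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim,
                          batch_first=True, bidirectional=True)
        # Forward and backward final states are concatenated,
        # hence 2 * hidden_dim inputs to the classifier head.
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        x = self.embed(token_ids)            # (batch, seq, embed_dim)
        _, h = self.gru(x)                   # h: (2, batch, hidden_dim)
        h = torch.cat([h[0], h[1]], dim=-1)  # (batch, 2 * hidden_dim)
        return self.fc(h)                    # (batch, num_classes) logits


model = BiGRUClassifier()
# A batch of 8 token-ID sequences of length 20 stands in for tweets.
logits = model(torch.randint(0, 10000, (8, 20)))
print(logits.shape)
```

Training such a model would add a cross-entropy loss and an optimizer; the snippet only checks that the forward pass produces one logit vector per input sequence.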
