
Classification of Multimodal Hate Speech -- The Winning Solution of Hateful Memes Challenge

2020-12-02

Xiayu Zhong


Abstract

Hateful Memes is a new challenge set for multimodal classification, focusing on detecting hate speech in multimodal memes. Difficult examples are added to the dataset to make it hard to rely on unimodal signals, which means only multimodal models can succeed. According to Kiela et al., state-of-the-art methods perform poorly compared to humans (64.73% vs. 84.7% accuracy) on Hateful Memes. I propose a new model that combines multimodal models with rules, which achieves first place with an accuracy of 86.8% and an AUROC of 0.923. These rules are extracted from the training set and focus on improving the classification accuracy on difficult samples.
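The abstract does not detail how the rules interact with the multimodal model, but one plausible reading is that a rule extracted from the training set overrides the model's score when it fires. A minimal sketch of that idea, with purely illustrative rule phrases and thresholds (none of these names come from the paper):

```python
# Hedged sketch, NOT the author's released method: combine a multimodal
# model's hate-speech probability with simple rules mined from the
# training set. Rule contents and forced scores below are invented
# placeholders for illustration.

def apply_rules(model_prob, meme_text, rules):
    """Return the model probability, unless a training-set rule fires,
    in which case the rule's forced score takes precedence (targeting
    the 'difficult' examples the abstract mentions)."""
    lowered = meme_text.lower()
    for phrase, forced_prob in rules.items():
        if phrase in lowered:
            return forced_prob
    return model_prob

# Toy rule table: phrase seen in training memes -> forced probability.
toy_rules = {
    "placeholder hateful phrase": 0.95,
    "placeholder benign phrase": 0.05,
}

print(apply_rules(0.40, "A placeholder hateful phrase appears", toy_rules))
print(apply_rules(0.40, "nothing here matches any rule", toy_rules))
```

In this reading, the model remains the default scorer and the rules act only as targeted corrections, which is consistent with the abstract's claim that the rules are aimed at difficult samples rather than replacing the multimodal model.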
