
Pretrained Ensemble Learning for Fine-Grained Propaganda Detection

2019-11-01 · WS 2019

Ali Fadel, Ibraheem Tuffaha, Mahmoud Al-Ayyoub


Abstract

In this paper, we describe our team's effort on the sentence-level classification (SLC) task of the fine-grained propaganda detection shared task at the NLP4IF 2019 workshop, co-located with the EMNLP-IJCNLP 2019 conference. Our top-performing system averages the predictions of three pretrained models as an ensemble. The first two models use the uncased and cased versions of Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018), while the third model uses the Universal Sentence Encoder (USE) (Cer et al., 2018). Out of 26 participating teams, our system is ranked first with a 68.8312 F1-score on the development dataset and sixth with a 61.3870 F1-score on the testing dataset.
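The ensemble-average step described above can be sketched as follows. This is a minimal illustration, not the authors' code: the probability values are invented placeholders, and the 0.5 threshold is an assumed default for turning averaged scores into binary SLC labels.

```python
import numpy as np

# Hypothetical per-sentence propaganda probabilities from the three
# pretrained models (BERT uncased, BERT cased, USE).
# These numbers are illustrative only, not taken from the paper.
p_bert_uncased = np.array([0.9, 0.2, 0.6])
p_bert_cased = np.array([0.8, 0.1, 0.7])
p_use = np.array([0.7, 0.3, 0.5])

# Ensemble average: take the mean of the three models' probabilities,
# then threshold (assumed at 0.5) to obtain binary propaganda labels.
p_ensemble = (p_bert_uncased + p_bert_cased + p_use) / 3
labels = (p_ensemble >= 0.5).astype(int)

print(p_ensemble)  # averaged scores per sentence
print(labels)      # final binary predictions
```

Averaging probabilities (rather than majority-voting hard labels) lets a confident model outvote two marginal ones, which is a common reason ensemble averaging outperforms any single member.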
