
FiSSA at SemEval-2020 Task 9: Fine-tuned For Feelings

2020-07-24 · SemEval · Code Available

Bertelt Braaksma, Richard Scholtens, Stan van Suijlekom, Remy Wang, Ahmet Üstün

Abstract

In this paper, we present our approach to sentiment classification on Spanish-English code-mixed social media data for SemEval-2020 Task 9. We investigate the performance of various pre-trained Transformer models under different fine-tuning strategies, exploring both monolingual and multilingual models with the standard fine-tuning method. Additionally, we propose a custom model that we fine-tune in two steps: first with a language modeling objective, and then with a task-specific objective. Although two-step fine-tuning improves sentiment classification performance over the base model, the large multilingual XLM-RoBERTa model achieves the best weighted F1-score: 0.537 on development data and 0.739 on test data. With this score, our team, jupitter, placed tenth overall in the competition.
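The two-step recipe described in the abstract, continue pre-training the shared body with a language-modeling objective, then attach a task head and fine-tune for classification, can be illustrated with a deliberately tiny sketch. This is not the authors' code: the corpus, model shapes, and hyperparameters below are invented for illustration, and a toy bag-of-embeddings model stands in for XLM-RoBERTa.

```python
# Toy two-step fine-tuning sketch (illustrative only, not the paper's model).
# Step 1: train word embeddings with a next-word language-modeling objective.
# Step 2: reuse those embeddings and train a fresh sentiment-classification head.
import numpy as np

rng = np.random.default_rng(0)

corpus = [["the", "movie", "was", "great"],
          ["the", "movie", "was", "awful"],
          ["great", "fun"], ["awful", "mess"]]
vocab = sorted({w for s in corpus for w in s})
idx = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 8

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# --- Step 1: language-modeling objective (predict the next word) ---
E = rng.normal(0, 0.1, (V, D))       # word embeddings: the shared "body"
W_lm = rng.normal(0, 0.1, (D, V))    # LM output head, discarded after step 1
pairs = [(idx[s[t]], idx[s[t + 1]]) for s in corpus for t in range(len(s) - 1)]
for _ in range(300):
    for x, y in pairs:
        p = softmax(E[x] @ W_lm)
        g = p.copy(); g[y] -= 1.0            # d(cross-entropy)/d(logits)
        grad_E = W_lm @ g                    # gradient w.r.t. the embedding
        W_lm -= 0.1 * np.outer(E[x], g)
        E[x] -= 0.1 * grad_E

# --- Step 2: task-specific objective (sentiment classification) ---
train = [(["the", "movie", "was", "great"], 1),   # 1 = positive
         (["the", "movie", "was", "awful"], 0)]   # 0 = negative
W_cls = rng.normal(0, 0.1, (D, 2))   # fresh classification head
for _ in range(300):
    for words, y in train:
        h = E[[idx[w] for w in words]].mean(axis=0)  # mean-pooled sentence vector
        p = softmax(h @ W_cls)
        g = p.copy(); g[y] -= 1.0
        W_cls -= 0.1 * np.outer(h, g)    # head-only update, for simplicity

def predict(words):
    h = E[[idx[w] for w in words]].mean(axis=0)
    return int(np.argmax(h @ W_cls))
```

In the paper's actual setting, both steps would update a full pre-trained Transformer rather than a bag of embeddings, but the structure is the same: the language-modeling step adapts the shared representation to the code-mixed domain before the sentiment head ever sees a label.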
