
Lexicon-based Fine-tuning of Multilingual Language Models for Sentiment Analysis of Low-resource Languages

2022-01-16 · ACL ARR January 2022

Anonymous

Abstract

Massively multilingual language models (MMLMs) such as mBERT and XLM-R have shown good cross-lingual transferability. However, they are not specifically trained to capture cross-lingual signals with respect to sentiment words. In this paper, we use a sentiment lexicon from a high-resource language to generate an intermediate fine-tuning task for the MMLM before fine-tuning it on a low-resource sentiment classification task. We show that this intermediate task improves the mapping between similar sentiment words across languages and improves performance on the low-resource language's sentiment classification task.
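
The abstract describes a two-stage procedure: an intermediate fine-tuning task built from a high-resource sentiment lexicon, followed by standard fine-tuning on the low-resource sentiment corpus. Below is a minimal sketch in Python with Hugging Face Transformers, assuming the intermediate task is framed as word-level polarity classification over the lexicon; the toy lexicon, the `make_dataset` helper, and the placeholder low-resource data are illustrative assumptions, not the paper's actual setup.

```python
# Sketch of the two-stage pipeline: (1) intermediate fine-tuning on a
# high-resource sentiment lexicon, (2) downstream fine-tuning on the
# low-resource task. All data below is illustrative, not from the paper.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-multilingual-cased"  # mBERT; XLM-R: "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

def make_dataset(texts, labels):
    # Tokenize raw strings and bundle them with binary polarity labels.
    enc = tokenizer(list(texts), padding=True, truncation=True, return_tensors="pt")
    return TensorDataset(enc["input_ids"], enc["attention_mask"], torch.tensor(labels))

def fine_tune(dataset, epochs=3, lr=2e-5):
    # Plain supervised fine-tuning loop shared by both stages.
    loader = DataLoader(dataset, batch_size=8, shuffle=True)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for input_ids, attention_mask, labels in loader:
            loss = model(input_ids=input_ids, attention_mask=attention_mask,
                         labels=labels).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()

# Stage 1: intermediate task from a high-resource (English) sentiment lexicon --
# the model learns to classify sentiment-bearing words by polarity, which is
# intended to align sentiment words across languages in the shared encoder.
lexicon = [("excellent", 1), ("delightful", 1), ("awful", 0), ("terrible", 0)]
fine_tune(make_dataset(*zip(*lexicon)))

# Stage 2: downstream fine-tuning on the low-resource sentiment corpus.
# Replace the placeholders with real (sentence, label) pairs in the target language.
low_resource_data = [("<low-resource sentence>", 1), ("<low-resource sentence>", 0)]
fine_tune(make_dataset(*zip(*low_resource_data)))
```

Under this reading, the same classification head is reused across both stages since both are binary polarity tasks; if the downstream task had a different label set, the head would be reinitialized between stages.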
