SOTAVerified

Exploring Pretraining via Active Forgetting for Improving Cross Lingual Transfer for Decoder Language Models

2024-10-21 · Unverified

Divyanshu Aggarwal, Ashutosh Sathe, Sunayana Sitaram

Abstract

Large Language Models (LLMs) demonstrate exceptional capabilities on a multitude of NLP tasks. However, the efficacy of such models on languages other than English is often limited. Prior work has shown that encoder-only models such as BERT or XLM-RoBERTa achieve impressive cross-lingual transfer of their capabilities from English to other languages. In this work, we propose a pretraining strategy that uses active forgetting to achieve similar cross-lingual transfer in decoder-only LLMs. We show that LLMs pretrained with active forgetting adapt highly effectively to new and unseen languages. Through extensive experimentation, we find that LLMs pretrained with active forgetting learn better multilingual representations, which translates to better performance on many downstream tasks.
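The abstract does not spell out the mechanism, but "active forgetting" in pretraining is commonly implemented by periodically re-initializing the token-embedding layer while the rest of the model keeps its learned weights. The toy sketch below illustrates that schedule only; the function name, shapes, and the placeholder "update" step are hypothetical and not taken from the paper.

```python
import numpy as np

def pretrain_with_active_forgetting(num_steps, reset_every,
                                    vocab_size=8, dim=4, seed=0):
    """Toy illustration of an active-forgetting schedule: every
    `reset_every` steps the embedding matrix is re-initialized while the
    model "body" keeps training. Hypothetical simplification, not the
    paper's actual training code. Returns the number of resets."""
    rng = np.random.default_rng(seed)
    embeddings = rng.normal(size=(vocab_size, dim))
    body = rng.normal(size=(dim, dim))  # stands in for the transformer layers
    resets = 0
    for step in range(1, num_steps + 1):
        # Placeholder for a real gradient update to both components.
        embeddings += 0.01 * rng.normal(size=embeddings.shape)
        body += 0.01 * rng.normal(size=body.shape)
        if step % reset_every == 0:
            # Active forgetting: wipe the embeddings, keep the body intact.
            embeddings = rng.normal(size=(vocab_size, dim))
            resets += 1
    return resets

print(pretrain_with_active_forgetting(num_steps=100, reset_every=25))  # → 4
```

The intuition is that repeatedly forcing the model to relearn embeddings makes the body's representations less tied to any one token inventory, which is what makes later adaptation to unseen languages cheaper.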
