PAW at SemEval-2021 Task 2: Multilingual and Cross-lingual Word-in-Context Disambiguation: Exploring Cross Lingual Transfer, Augmentations and Adversarial Training

2021-08-01 · SEMEVAL

Harsh Goyal, Aadarsh Singh, Priyanshu Kumar

Abstract

We experiment with XLM-RoBERTa for Word-in-Context Disambiguation in the multilingual and cross-lingual settings, aiming to develop a single model with knowledge of both. We frame the task as binary classification and experiment with data augmentation and adversarial training techniques. We also explore a two-stage training technique. Our approaches prove beneficial for both performance and robustness.
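The binary classification framing described above pairs the two contexts of a target word and asks the model whether the word carries the same sense in both. The sketch below illustrates one plausible way to build such a pair input in the XLM-RoBERTa style; the exact input format (prepending the target word, the `</s></s>` pair separator) is an assumption for illustration, not the authors' published recipe.

```python
# Hedged sketch: framing Word-in-Context (WiC) as binary classification.
# The input layout below is an assumed, illustrative format, not the
# paper's exact preprocessing.

def build_wic_input(context1: str, context2: str, word: str) -> str:
    """Join the two contexts in an XLM-RoBERTa-style pair format,
    prepending the target word so the classifier knows which lemma
    to disambiguate. Label 1 = same sense, 0 = different sense."""
    sep = "</s></s>"  # XLM-R uses </s></s> between segments of a pair
    return f"{word} {sep} {context1} {sep} {context2}"

example = build_wic_input(
    "He sat on the bank of the river.",
    "She deposited money at the bank.",
    "bank",
)
print(example)
```

A sequence classification head over this single concatenated input would then output the same-sense / different-sense decision.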
