Persian Natural Language Inference: A Meta-learning approach

2022-05-18 · COLING 2022 · Code Available

Heydar Soudani, Mohammad Hassan Mojab, Hamid Beigy


Abstract

Incorporating information from other languages can improve results on tasks in low-resource languages. A powerful way to build functional natural language processing systems for low-resource languages is to combine multilingual pre-trained representations with cross-lingual transfer learning. In general, however, shared representations are learned separately, either across tasks or across languages. This paper proposes a meta-learning approach to natural language inference in Persian. The meta-learner alternately draws on information from a different task (such as question answering in Persian) and from other languages (such as natural language inference in English). We also investigate a task augmentation strategy for generating additional high-quality tasks. We evaluate the proposed method using four languages and an auxiliary task. The proposed model consistently outperforms the baseline, improving accuracy by roughly six percent. We also examine the effect of finding appropriate initial parameters, using zero-shot evaluation and CCA similarity.
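The abstract describes alternating meta-learning over auxiliary tasks and languages to find a good shared initialization. The paper's exact algorithm is not reproduced here; as a hedged illustration only, the sketch below uses Reptile-style meta-learning (a first-order alternative to MAML) on hypothetical toy 1-D regression "tasks" standing in for the Persian/English NLI and QA tasks. All task definitions, function names, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Hypothetical stand-in for one auxiliary task/language:
    fit y = a * x with a task-specific slope a."""
    a = rng.uniform(-2.0, 2.0)
    x = rng.uniform(-1.0, 1.0, size=32)
    return x, a * x

def inner_sgd(w, x, y, lr=0.1, steps=5):
    """Adapt the shared parameter w to one task with a few SGD steps."""
    for _ in range(steps):
        grad = 2.0 * np.mean((w * x - y) * x)  # d/dw of mean squared error
        w = w - lr * grad
    return w

def reptile(meta_steps=200, meta_lr=0.5):
    """Reptile meta-update: move the initialization toward the
    task-adapted parameters, alternating over sampled tasks."""
    w = 0.0  # shared initialization being meta-learned
    for _ in range(meta_steps):
        x, y = sample_task()            # alternate tasks/languages
        w_adapted = inner_sgd(w, x, y)  # inner-loop adaptation
        w = w + meta_lr * (w_adapted - w)
    return w

# After meta-training, the learned initialization adapts to a
# new task with only a few gradient steps.
w0 = reptile()
x_new, y_new = sample_task()
w_new = inner_sgd(w0, x_new, y_new)
```

The same two-loop structure (inner adaptation on a sampled task, outer update of the shared initialization) is what zero-shot evaluation of the initial parameters, mentioned in the abstract, would probe before any inner-loop steps.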
