Word Embeddings for Code-Mixed Language Processing

2018-10-01 · EMNLP 2018

Adithya Pratapa, Monojit Choudhury, Sunayana Sitaram

Abstract

We compare three existing bilingual word embedding approaches, and a novel approach of training skip-grams on synthetic code-mixed text generated through linguistic models of code-mixing, on two tasks: sentiment analysis and POS tagging for code-mixed text. Our results show that while CVM- and CCA-based embeddings perform as well as the proposed embedding technique on semantic and syntactic tasks respectively, the proposed approach provides the best performance on both tasks overall. Thus, this study demonstrates that existing bilingual embedding techniques are not ideal for code-mixed text processing, and that there is a need to learn multilingual word embeddings from code-mixed text.
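The core of the proposed approach is standard skip-gram training, applied to code-mixed sentences instead of monolingual text. As an illustrative sketch only (not the paper's implementation: the toy corpus, window size, dimensionality, and learning rate below are all invented for demonstration), a minimal skip-gram trainer with full-softmax SGD might look like this:

```python
import numpy as np

# Toy stand-in for synthetic code-mixed text (hypothetical romanized
# Hindi-English sentences; the paper generates such text with
# linguistic models of code-mixing).
corpus = [
    "movie bahut accha tha",
    "the movie was accha",
    "khana was very tasty",
    "the khana bahut tasty tha",
]
sentences = [s.split() for s in corpus]
vocab = sorted({w for s in sentences for w in s})
idx = {w: i for i, w in enumerate(vocab)}
V, D, window = len(vocab), 16, 2

# Collect (center, context) pairs within the window.
pairs = []
for s in sentences:
    for i, w in enumerate(s):
        for j in range(max(0, i - window), min(len(s), i + window + 1)):
            if j != i:
                pairs.append((idx[w], idx[s[j]]))

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(V, D))   # input (word) embeddings
W_out = rng.normal(scale=0.1, size=(V, D))  # output (context) embeddings

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

lr = 0.05
for epoch in range(200):
    for c, o in pairs:
        h = W_in[c].copy()
        grad_logits = softmax(W_out @ h)
        grad_logits[o] -= 1.0  # cross-entropy gradient w.r.t. the logits
        W_in[c] -= lr * (W_out.T @ grad_logits)
        W_out -= lr * np.outer(grad_logits, h)

def most_similar(word):
    """Nearest neighbour of `word` by cosine similarity of input vectors."""
    v = W_in[idx[word]]
    sims = W_in @ v / (np.linalg.norm(W_in, axis=1) * np.linalg.norm(v) + 1e-9)
    sims[idx[word]] = -1.0
    return vocab[int(sims.argmax())]
```

In practice one would use an off-the-shelf skip-gram implementation with negative sampling rather than this full-softmax loop; the point is only that the context windows span language boundaries, so words from both languages land in one shared embedding space.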
