Misspelling Oblivious Word Embeddings
Bora Edizel, Aleksandra Piktus, Piotr Bojanowski, Rui Ferreira, Edouard Grave, Fabrizio Silvestri
Code:
- bitbucket.org/bedizel/moe (official, in paper)
- github.com/facebookresearch/moe
- github.com/dleemiller/string-noise
Abstract
In this paper we present a method to learn word embeddings that are resilient to misspellings. Existing word embeddings have limited applicability to malformed text, which contains a non-negligible number of out-of-vocabulary words. We propose a method that combines FastText subword representations with a supervised task of learning misspelling patterns. In our method, misspellings of each word are embedded close to their correct variants. We train these embeddings on a new dataset that we are releasing publicly. Finally, we experimentally show the advantages of this approach on both intrinsic and extrinsic NLP tasks using public test sets.
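The subword mechanism underlying this approach can be illustrated in isolation. The sketch below (a simplified illustration, not the paper's implementation; the function name and default n-gram range are assumptions modeled on FastText's typical settings) extracts character n-grams with boundary markers, showing why a misspelled out-of-vocabulary word still shares many subword units with its correct variant:

```python
def char_ngrams(word, n_min=3, n_max=6):
    """Extract character n-grams with FastText-style boundary markers.

    The word is wrapped in '<' and '>' so prefix and suffix n-grams
    are distinguished from word-internal ones.
    """
    token = f"<{word}>"
    grams = set()
    for n in range(n_min, n_max + 1):
        for i in range(len(token) - n + 1):
            grams.add(token[i:i + n])
    return grams

# A misspelling ("langauge") shares most of its n-grams with the
# correct form ("language"), so a subword-sum embedding places the
# two words near each other even though the typo is out-of-vocabulary.
correct = char_ngrams("language")
typo = char_ngrams("langauge")
shared = correct & typo
print(f"shared n-grams: {len(shared)} of {len(correct | typo)} total")
```

Because a word's vector is the sum of its subword vectors, this overlap is what lets the typo inherit most of the correct word's representation; the paper's supervised misspelling objective then pulls the two even closer.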