Word Embeddings

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.

Techniques for learning word embeddings include Word2Vec, GloVe, and other neural network-based approaches that train on an NLP task such as language modeling or document classification.
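
As an illustration, the following is a minimal sketch of learning word embeddings with gensim's Word2Vec (gensim >= 4.0); the toy corpus and all hyperparameter values are illustrative assumptions, not a reference setup.

```python
# Minimal sketch: learning word embeddings with gensim's Word2Vec.
# The toy corpus and hyperparameters are illustrative assumptions only;
# real training uses corpora with millions of sentences.
from gensim.models import Word2Vec

# A tiny pre-tokenized corpus (each sentence is a list of words).
corpus = [
    ["king", "rules", "the", "kingdom"],
    ["queen", "rules", "the", "kingdom"],
    ["man", "walks", "in", "the", "city"],
    ["woman", "walks", "in", "the", "city"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=50,   # dimensionality of the learned vectors
    window=2,         # context window size
    min_count=1,      # keep every word in this toy vocabulary
    sg=1,             # 1 = skip-gram; 0 = CBOW
    epochs=100,       # tiny corpus, so train for more epochs
    seed=42,
)

# Each word is now mapped to a vector of real numbers.
vec = model.wv["king"]
print(vec.shape)  # (50,)

# Words used in similar contexts get similar vectors (cosine similarity).
print(model.wv.similarity("king", "queen"))
```

GloVe differs in that it factorizes a global word co-occurrence matrix rather than training on local context windows, but the result is the same kind of real-valued vector per word.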

(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)

Papers

Showing 2731–2740 of 4002 papers

Title | Hype
Why is unsupervised alignment of English embeddings from different algorithms so hard? | 0
Why Overfitting Isn't Always Bad: Retrofitting Cross-Lingual Word Embeddings to Dictionaries | 0
Why PairDiff works? -- A Mathematical Analysis of Bilinear Relational Compositional Operators for Analogy Detection | 0
WiC: the Word-in-Context Dataset for Evaluating Context-Sensitive Meaning Representations | 0
“Wikily” Supervised Neural Translation Tailored to Cross-Lingual Tasks | 0
Wild Devs' at SemEval-2017 Task 2: Using Neural Networks to Discover Word Similarity | 0
With More Contexts Comes Better Performance: Contextualized Sense Embeddings for All-Round Word Sense Disambiguation | 0
WLV-RIT at HASOC-Dravidian-CodeMix-FIRE2020: Offensive Language Identification in Code-switched YouTube Comments | 0
WMDO: Fluency-based Word Mover's Distance for Machine Translation Evaluation | 0
WOLVESAAR at SemEval-2016 Task 1: Replicating the Success of Monolingual Word Alignment and Neural Embeddings for Semantic Textual Similarity | 0

No leaderboard results yet.