
Word Embeddings

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.

Techniques for learning word embeddings include Word2Vec, GloVe, and other neural network-based approaches that train on an NLP task such as language modeling or document classification.
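As a concrete illustration, the following is a minimal sketch of learning word embeddings with Word2Vec. It assumes the gensim library (version 4.x, not mentioned on this page) and uses a hypothetical toy corpus; a real application would train on a large tokenized corpus.

```python
# Minimal Word2Vec sketch, assuming gensim 4.x (pip install gensim).
from gensim.models import Word2Vec

# Hypothetical toy corpus: each document is a list of tokens.
corpus = [
    ["word", "embeddings", "map", "words", "to", "vectors"],
    ["word2vec", "learns", "embeddings", "from", "context", "windows"],
    ["similar", "words", "get", "similar", "vectors"],
]

# Train a skip-gram model (sg=1); vector_size is the embedding dimension.
model = Word2Vec(
    sentences=corpus,
    vector_size=50,   # dimensionality of the real-valued vectors
    window=2,         # context window size
    min_count=1,      # keep every token in this tiny corpus
    sg=1,             # 1 = skip-gram, 0 = CBOW
    epochs=50,
)

# Each vocabulary word is now mapped to a dense vector of real numbers.
vec = model.wv["embeddings"]   # numpy array of shape (50,)
print(vec.shape)

# Nearest neighbours by cosine similarity in the embedding space.
print(model.wv.most_similar("words", topn=3))
```

After training, semantically related words end up close together in the vector space, which is what downstream NLP tasks exploit.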

(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)

Papers

Showing 3701–3710 of 4002 papers (page 371 of 401)

Do Not Harm Protected Groups in Debiasing Language Representation Models
Do not neglect related languages: The case of low-resource Occitan cross-lingual word embeddings
Don't Forget Cheap Training Signals Before Building Unsupervised Bilingual Word Embeddings
Do Nuclear Submarines Have Nuclear Captains? A Challenge Dataset for Commonsense Reasoning over Adjectives and Objects
DoTheMath at SemEval-2020 Task 12: Deep Neural Networks with Self Attention for Arabic Offensive Language Detection
Do We Need Neural Models to Explain Human Judgments of Acceptability?
The Influence of Down-Sampling Strategies on SVD Word Embedding Stability
Do Word Embeddings Really Understand Loughran-McDonald's Polarities?

No leaderboard results yet.