
Word Embeddings

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.

Techniques for learning word embeddings include Word2Vec, GloVe, and other neural network-based approaches that train on an NLP task such as language modeling or document classification.

(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)
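As an illustration of the idea, the sketch below trains a skip-gram Word2Vec model on a toy corpus using the gensim library (assumed installed, version 4.x; the corpus and all hyperparameters are made up for the example) and then reads a word's real-valued vector back out.

```python
from gensim.models import Word2Vec

# Toy tokenized corpus; real training uses millions of sentences.
sentences = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["the", "dog", "chases", "the", "cat"],
    ["the", "cat", "chases", "the", "mouse"],
]

# sg=1 selects the skip-gram objective (predict context words from the
# centre word); vector_size is the dimensionality of each word vector.
model = Word2Vec(
    sentences,
    vector_size=50,   # each word maps to a 50-dimensional real vector
    window=2,         # context window on either side of the centre word
    min_count=1,      # keep every word, even singletons (toy data only)
    sg=1,
    epochs=200,       # many passes, since the corpus is tiny
    seed=42,
)

# The learned mapping word -> vector of real numbers:
print(model.wv["king"][:5])            # first 5 coordinates of the vector

# Words used in similar contexts end up close in the vector space:
print(model.wv.most_similar("king", topn=3))
```

GloVe arrives at the same kind of object by a different route: it fits vectors to global word co-occurrence counts rather than to local context windows, but the end product is still one real-valued vector per vocabulary word.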

Papers

Showing papers 1751–1760 of 4002 (page 176 of 401)

- Contextualized Embeddings for Connective Disambiguation in Shallow Discourse Parsing
- Analysis of Italian Word Embeddings
- Contextualized Embeddings for Enriching Linguistic Analyses on Politeness
- Learning to Negate Adjectives with Bilinear Models
- How to represent a word and predict it, too: Improving tied architectures for language modelling
- Contextualized moral inference
- How Well Can We Predict Hypernyms from Word Embeddings? A Dataset-Centric Analysis
- Human-in-the-Loop Refinement of Word Embeddings
- HumorHawk at SemEval-2017 Task 6: Mixing Meaning and Sound for Humor Recognition
- Des pseudo-sens pour améliorer l'extraction de synonymes à partir de plongements lexicaux (Pseudo-senses for improving the extraction of synonyms from word embeddings)

Leaderboard

No leaderboard results yet.