
Word Embeddings

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.

Techniques for learning word embeddings include Word2Vec, GloVe, and other neural approaches that train on an NLP task such as language modeling or document classification. A minimal sketch of the idea is shown below.
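The following sketch illustrates training and querying word embeddings with gensim's Word2Vec implementation. It assumes gensim (version 4 or later) is installed, and the toy corpus and hyperparameter values are illustrative only, not a recommended configuration.

```python
# Minimal sketch: learn word embeddings with gensim's Word2Vec (skip-gram).
from gensim.models import Word2Vec

# Each "sentence" is a list of tokens; real corpora would be far larger.
corpus = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["word", "embeddings", "map", "words", "to", "vectors"],
]

# sg=1 selects the skip-gram objective; vector_size is the embedding dimension.
model = Word2Vec(
    sentences=corpus,
    vector_size=50,
    window=2,
    min_count=1,
    sg=1,
    epochs=100,
)

# Look up the learned vector for a word (a 50-dimensional numpy array).
vec = model.wv["king"]
print(vec.shape)  # (50,)

# Words whose vectors are closest to "king" by cosine similarity.
print(model.wv.most_similar("king", topn=3))
```

Pretrained embeddings (e.g. GloVe vectors) can be queried in the same way once loaded; only the training step differs.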

(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)

Papers

Showing 3911–3920 of 4002 papers

Title | Status | Hype
C-3MA: Tartu-Riga-Zurich Translation Systems for WMT17 | Code | 0
Modeling Input Uncertainty in Neural Network Dependency Parsing | Code | 0
HOTTER: Hierarchical Optimal Topic Transport with Explanatory Context Representations | Code | 0
How Abstract Is Linguistic Generalization in Large Language Models? Experiments with Argument Structure | Code | 0
How Contextual are Contextualized Word Representations? Comparing the Geometry of BERT, ELMo, and GPT-2 Embeddings | Code | 0
Unequal Representations: Analyzing Intersectional Biases in Word Embeddings Using Representational Similarity Analysis | Code | 0
Analytical Methods for Interpretable Ultradense Word Embeddings | Code | 0
RAPO: An Adaptive Ranking Paradigm for Bilingual Lexicon Induction | Code | 0
How does BERT capture semantics? A closer look at polysemous words | Code | 0
How does Grammatical Gender Affect Noun Representations in Gender-Marking Languages? | Code | 0
Page 392 of 401

No leaderboard results yet.