
Word Embeddings

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) in which words or phrases from a vocabulary are mapped to vectors of real numbers.
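To make the definition concrete, here is a minimal sketch of that mapping: an embedding is a lookup table from vocabulary indices into a matrix of real-valued vectors. The toy vocabulary, dimensionality, and random initialization below are illustrative assumptions, not part of any particular method; a trained model would learn the matrix entries.

```python
import numpy as np

# Toy vocabulary and embedding dimensionality (arbitrary values for illustration).
vocab = ["king", "queen", "man", "woman"]
dim = 8
word_to_id = {w: i for i, w in enumerate(vocab)}

# The embedding table: one real-valued vector per vocabulary word.
# Randomly initialized here; training would adjust these entries.
rng = np.random.default_rng(seed=0)
E = rng.normal(size=(len(vocab), dim))

def embed(word: str) -> np.ndarray:
    """Map a word to its vector of real numbers by table lookup."""
    return E[word_to_id[word]]

print(embed("king"))        # an 8-dimensional real vector
print(embed("king").shape)  # (8,)
```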

Techniques for learning word embeddings include Word2Vec, GloVe, and other neural network-based approaches that train on an NLP task such as language modeling or document classification.
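As a concrete example of the first of these, here is a minimal sketch of training a skip-gram Word2Vec model with the gensim library (assuming gensim >= 4.0); the toy corpus and hyperparameter values are illustrative only:

```python
from gensim.models import Word2Vec

# Tiny illustrative corpus: a list of tokenized sentences.
sentences = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["a", "man", "walks", "in", "the", "city"],
    ["a", "woman", "walks", "in", "the", "city"],
]

# Train a skip-gram model (sg=1); vector_size, window, and epochs
# are toy settings chosen for this example.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1, epochs=200)

# Look up the learned vector for a word and query its nearest neighbours.
vec = model.wv["king"]
print(vec.shape)                              # (50,)
print(model.wv.most_similar("king", topn=3))  # most similar words by cosine similarity
```

GloVe, by contrast, is fit to a global word co-occurrence matrix rather than trained by sliding a context window over the corpus.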

(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)

Papers

Showing 2411–2420 of 4002 papers

The Limits of Word Level Differential Privacy
The LMU Munich Unsupervised Machine Translation Systems
The LMU Munich Unsupervised Machine Translation System for WMT19
The Making of the Royal Society Corpus
Theoretical foundations and limits of word embeddings: what types of meaning can they capture?
The performance evaluation of Multi-representation in the Deep Learning models for Relation Extraction Task
The Role of Context Types and Dimensionality in Learning Word Embeddings
The Role of Protected Class Word Lists in Bias Identification of Contextualized Word Representations
The RWTH Aachen University English-German and German-English Unsupervised Neural Machine Translation Systems for WMT 2018
The SAME score: Improved cosine based bias score for word embeddings

Leaderboards

No leaderboard results yet.