
Word Embeddings

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.
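
For illustration, here is a minimal sketch of what that mapping looks like, assuming a toy four-word vocabulary and randomly initialized (untrained) vectors; the `cosine_similarity` helper is hypothetical, not from any particular library:

```python
import numpy as np

# Toy example: map each vocabulary word to a dense vector of real numbers.
# In practice these vectors are learned from data; here they are random.
vocab = ["king", "queen", "man", "woman"]
embedding_dim = 8

rng = np.random.default_rng(seed=0)
embeddings = {word: rng.standard_normal(embedding_dim) for word in vocab}

def cosine_similarity(u, v):
    """Cosine similarity, the usual way to compare embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

print(embeddings["king"])  # an 8-dimensional real-valued vector
print(cosine_similarity(embeddings["king"], embeddings["queen"]))
```

With trained vectors, semantically related words (e.g. "king" and "queen") end up with high cosine similarity; with random vectors, as here, the similarity is arbitrary.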

Techniques for learning word embeddings include count-based methods such as GloVe, which factorize word co-occurrence statistics, and neural network-based methods such as Word2Vec, which train on an NLP task such as language modeling or document classification.
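
As a rough sketch of the training workflow, here is a Word2Vec example assuming the gensim library (4.x API) and a toy corpus; the hyperparameter values are illustrative only, not recommendations:

```python
from gensim.models import Word2Vec

# Toy corpus: each sentence is a list of tokens. Real training uses
# millions of sentences; this only shows the shape of the API.
sentences = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["a", "man", "walks", "in", "the", "city"],
    ["a", "woman", "walks", "in", "the", "city"],
]

model = Word2Vec(
    sentences=sentences,
    vector_size=50,   # dimensionality of the learned vectors
    window=2,         # context window size
    min_count=1,      # keep every word in this tiny corpus
    sg=1,             # 1 = skip-gram; 0 = CBOW
    epochs=100,
    seed=0,
)

vector = model.wv["king"]                    # the learned 50-dim vector
print(model.wv.most_similar("king", topn=3)) # nearest neighbors by cosine
```

GloVe follows a different recipe (building a global co-occurrence matrix first), but the end product is the same: one real-valued vector per vocabulary word.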

(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)

Papers

Showing papers 1241-1250 of 4002 (page 125 of 401)

Title | Status | Hype
A Correspondence Variational Autoencoder for Unsupervised Acoustic Word Embeddings | – | 0
On Extending NLP Techniques from the Categorical to the Latent Space: KL Divergence, Zipf's Law, and Similarity Search | Code | 0
SChME at SemEval-2020 Task 1: A Model Ensemble for Detecting Lexical Semantic Change | Code | 0
A Computational Approach to Measuring the Semantic Divergence of Cognates | – | 0
Automatic Word Association Norms (AWAN) | – | 0
“Shakespeare in the Vectorian Age” – An evaluation of different word embeddings and NLP parameters for the detection of Shakespeare quotes | – | 0
Automatic Learning of Modality Exclusivity Norms with Crosslingual Word Embeddings | – | 0
Neural Networks approaches focused on French Spoken Language Understanding: application to the MEDIA Evaluation Task | Code | 0
DCC-Uchile at SemEval-2020 Task 1: Temporal Referencing Word Embeddings | – | 0
Joint Training for Learning Cross-lingual Embeddings with Sub-word Information without Parallel Corpora | – | 0

No leaderboard results yet.