
Word Embeddings

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.
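Concretely, an embedding is just a lookup from each vocabulary item to a fixed-length vector, with geometric closeness (e.g. cosine similarity) standing in for semantic relatedness. A minimal sketch of this idea in Python, with made-up vectors purely for illustration:

```python
# Each vocabulary word maps to a fixed-length vector of real numbers;
# the vectors below are invented for illustration, not learned values.
import numpy as np

embeddings = {
    "king":  np.array([0.50, 0.70, 0.10]),
    "queen": np.array([0.45, 0.72, 0.15]),
    "apple": np.array([0.90, 0.05, 0.60]),
}

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Related words should sit closer together in the vector space.
print(cosine(embeddings["king"], embeddings["queen"]))  # high (~0.99 here)
print(cosine(embeddings["king"], embeddings["apple"]))  # lower (~0.58 here)
```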

Techniques for learning word embeddings include Word2Vec, GloVe, and other neural-network-based approaches that train on an NLP task such as language modeling or document classification.
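As a rough sketch of how such embeddings are trained in practice, the following uses gensim's Word2Vec implementation (assuming gensim >= 4.0; the toy corpus and hyperparameters are illustrative only):

```python
# A minimal Word2Vec training sketch with gensim (>= 4.0).
# The corpus and hyperparameters are toy values for illustration.
from gensim.models import Word2Vec

# Each document is a pre-tokenized list of words.
corpus = [
    ["word", "embeddings", "map", "words", "to", "vectors"],
    ["word2vec", "learns", "embeddings", "from", "context"],
    ["glove", "learns", "embeddings", "from", "cooccurrence", "counts"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=50,   # dimensionality of the learned vectors
    window=2,         # context window size
    min_count=1,      # keep every word in this tiny corpus
    sg=1,             # 1 = skip-gram, 0 = CBOW
    epochs=50,
)

vec = model.wv["embeddings"]                       # learned 50-d vector
print(model.wv.most_similar("embeddings", topn=3)) # nearest neighbors
```

With a real corpus, `model.wv.most_similar` returning semantically related neighbors is a common sanity check on embedding quality.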

(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)

Papers

Showing 2541–2550 of 4002 papers

Title | Hype
Understanding Neural Machine Translation by Simplification: The Case of Encoder-free Models | 0
Understanding the Stability of Medical Concept Embeddings | 0
Understanding the Source of Semantic Regularities in Word Embeddings | 0
Understanding Undesirable Word Embedding Associations | 0
UnibucKernel: A kernel-based learning method for complex word identification | 0
Uniform Discretized Integrated Gradients: An effective attribution based method for explaining large language models | 0
Unifying Bayesian Inference and Vector Space Models for Improved Decipherment | 0
UniMelb at SemEval-2016 Task 3: Identifying Similar Questions by combining a CNN with String Similarity Measures | 0
UniMelb at SemEval-2018 Task 12: Generative Implication using LSTMs, Siamese Networks and Semantic Representations with Synonym Fuzzing | 0
UniPI at SemEval-2016 Task 4: Convolutional Neural Networks for Sentiment Classification | 0
Page 255 of 401

No leaderboard results yet.