
Word Embeddings

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.
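
For a concrete picture of that mapping, here is a minimal sketch in Python, assuming a hypothetical toy vocabulary and randomly initialized vectors (real models learn these vectors from data):

```python
import numpy as np

# Hypothetical toy vocabulary; real systems index tens of thousands of words.
vocab = {"king": 0, "queen": 1, "apple": 2}
dim = 4  # real embeddings typically use 50-300 dimensions

# The embedding table: each row is the real-valued vector for one word.
# Random here purely for illustration; training would learn these values.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), dim))

def embed(word: str) -> np.ndarray:
    """Map a word from the vocabulary to its vector."""
    return embeddings[vocab[word]]

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity, the usual relatedness measure between word vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(embed("king"), embed("queen")))
```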

Techniques for learning word embeddings include Word2Vec, GloVe, and other approaches, many of them neural networks trained on an NLP task such as language modeling or document classification.
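
As an illustrative sketch of one such technique, the snippet below trains skip-gram Word2Vec embeddings with the gensim library (gensim >= 4.0 assumed; the tiny corpus and hyperparameter values are illustrative assumptions, not taken from any paper listed here):

```python
# Minimal skip-gram Word2Vec training sketch with gensim (>= 4.0 assumed).
from gensim.models import Word2Vec

# Illustrative toy corpus: a list of tokenized sentences.
corpus = [
    ["word", "embeddings", "map", "words", "to", "vectors"],
    ["skip", "gram", "predicts", "context", "words", "from", "a", "target"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=50,  # dimensionality of the learned vectors
    window=2,        # context window size
    min_count=1,     # keep every word in this toy corpus
    sg=1,            # 1 = skip-gram, 0 = CBOW
    epochs=20,       # a few extra passes over the tiny corpus
)

vector = model.wv["embeddings"]                       # learned vector for one word
neighbours = model.wv.most_similar("words", topn=3)   # nearest words by cosine
```

With enough training text, most_similar returns semantically related words; on a toy corpus like this the neighbours are essentially noise.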

(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)

Papers

Showing papers 1101-1110 of 4002

Title | Code | Hype
SemGloVe: Semantic Co-occurrences for GloVe from BERT | Yes | 0
Corrected CBOW Performs as well as Skip-gram | Yes | 1
Deriving Contextualised Semantic Features from BERT (and Other Transformer Model) Embeddings | No | 0
Understanding and Improving Encoder Layer Fusion in Sequence-to-Sequence Learning | Yes | 1
DeepHateExplainer: Explainable Hate Speech Detection in Under-resourced Bengali Language | Yes | 0
WEmbSim: A Simple yet Effective Metric for Image Captioning | No | 0
Improved Biomedical Word Embeddings in the Transformer Era | Yes | 0
BERT Goes Shopping: Comparing Distributional Models for Product Representations | Yes | 1
Keyword-Guided Neural Conversational Model | Yes | 1
Model Choices Influence Attributive Word Associations: A Semi-supervised Analysis of Static Word Embeddings | No | 0
Page 111 of 401

No leaderboard results yet.