
Word Embeddings

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.
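As a minimal sketch of what this mapping looks like in practice (not taken from this page), the code below treats an embedding as a lookup into a matrix with one real-valued row per vocabulary word. The vocabulary, dimensionality, and vector values here are illustrative placeholders; a trained model would learn the values from data.

```python
import numpy as np

# Toy vocabulary: each word gets an integer index into the embedding matrix.
vocab = {"king": 0, "queen": 1, "man": 2, "woman": 3}
dim = 4  # embedding dimensionality (toy size; real models use 50-300+)

# Randomly initialized embedding matrix: one row of real numbers per word.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), dim))

def embed(word: str) -> np.ndarray:
    """Map a word to its real-valued vector via table lookup."""
    return embeddings[vocab[word]]

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity, the usual way to compare embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(embed("king"))                          # a 4-dimensional real vector
print(cosine(embed("king"), embed("queen")))  # similarity in [-1, 1]
```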

Techniques for learning word embeddings include Word2Vec, GloVe, and other neural network-based approaches that train on an NLP task such as language modeling or document classification.
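To make the Word2Vec case concrete, here is a hedged sketch using the gensim library (assuming gensim 4.x is installed); the two-sentence corpus and the parameter values are toy choices for illustration, not a recommended training setup.

```python
from gensim.models import Word2Vec

# Toy stand-in corpus: a list of tokenized sentences.
corpus = [
    ["word", "embeddings", "map", "words", "to", "vectors"],
    ["word2vec", "learns", "embeddings", "from", "context"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=50,  # dimensionality of the learned vectors
    window=2,        # context window size
    min_count=1,     # keep every word in this tiny corpus
    sg=1,            # 1 = skip-gram, 0 = CBOW
    epochs=50,       # extra passes so the toy corpus converges a little
)

vec = model.wv["embeddings"]  # the learned real-valued vector for a word
print(vec.shape)              # (50,)
print(model.wv.most_similar("word", topn=3))
```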

(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)

Papers

Showing 551-560 of 4002 papers

| Title | Status | Hype |
| --- | --- | --- |
| Estimating word co-occurrence probabilities from pretrained static embeddings using a log-bilinear model |  | 0 |
| Sense Embeddings are also Biased – Evaluating Social Biases in Static and Contextualised Sense Embeddings |  | 0 |
| Caveats of Measuring Semantic Change of Cognates and Borrowings using Multilingual Word Embeddings | Code | 0 |
| SSNCSE_NLP@LT-EDI-ACL2022: Homophobia/Transphobia Detection in Multiple Languages using SVM Classifiers and BERT-based Transformers |  | 0 |
| Evaluating Biomedical Word Embeddings for Vocabulary Alignment at Scale in the UMLS Metathesaurus Using Siamese Networks |  | 0 |
| Roadblocks in Gender Bias Measurement for Diachronic Corpora | Code | 0 |
| Learning Bias-reduced Word Embeddings Using Dictionary Definitions | Code | 1 |
| Binary Encoded Word Mover’s Distance |  | 0 |
| English-Malay Word Embeddings Alignment for Cross-lingual Emotion Classification with Hierarchical Attention Network |  | 0 |
| “Vaderland”, “Volk” and “Natie”: Semantic Change Related to Nationalism in Dutch Literature Between 1700 and 1880 Captured with Dynamic Bernoulli Word Embeddings |  | 0 |
Page 56 of 401

Leaderboard

No leaderboard results yet.