
Word Embeddings

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.

Techniques for learning word embeddings include Word2Vec, GloVe, and other neural-network-based approaches that train on an NLP task such as language modeling or document classification.
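The mapping described above can be illustrated with a minimal sketch: a toy vocabulary whose words are assigned real-valued vectors, compared with cosine similarity. The vectors here are randomly initialized for demonstration only; in practice they would be learned by a method such as Word2Vec or GloVe, and the vocabulary and dimensionality are illustrative assumptions.

```python
import numpy as np

# Toy vocabulary with randomly initialized 4-dimensional embeddings.
# Real systems learn these vectors from data (e.g. with Word2Vec or GloVe);
# the words and dimension here are assumptions for illustration.
rng = np.random.default_rng(0)
vocab = ["king", "queen", "man", "woman"]
dim = 4
embeddings = {word: rng.normal(size=dim) for word in vocab}

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors, in [-1, 1]."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Look up two words and compare their vectors.
sim = cosine_similarity(embeddings["king"], embeddings["queen"])
print(f"similarity(king, queen) = {sim:.3f}")
```

With trained embeddings, semantically related words (e.g. "king" and "queen") tend to have higher cosine similarity than unrelated ones; with the random vectors above, the score is arbitrary.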

(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)

Papers

Showing 451–460 of 4002 papers

Title | Status | Hype
Gender bias in (non)-contextual clinical word embeddings for stereotypical medical categories | – | 0
Massively Multilingual Lexical Specialization of Multilingual Transformers | – | 0
Knowing Where and What: Unified Word Block Pretraining for Document Understanding | Code | 0
SoundChoice: Grapheme-to-Phoneme Models with Semantic Disambiguation | – | 0
Exploring Wasserstein Distance across Concept Embeddings for Ontology Matching | – | 0
Stroke-Based Autoencoders: Self-Supervised Learners for Efficient Zero-Shot Chinese Character Recognition | – | 0
A Context-Sensitive Word Embedding Approach for The Detection of Troll Tweets | – | 0
A methodology to characterize bias and harmful stereotypes in natural language processing in Latin America | Code | 0
Myers-Briggs personality classification from social media text using pre-trained language models | – | 0
TurkishDelightNLP: A Neural Turkish NLP Toolkit | Code | 0
Page 46 of 401

No leaderboard results yet.