
Word Embeddings

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.

Techniques for learning word embeddings include Word2Vec, GloVe, and other neural network-based approaches that train on an NLP task such as language modeling or document classification.
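As a minimal sketch of how such embeddings can be trained in practice, the Python snippet below fits a small skip-gram Word2Vec model with the gensim library (gensim 4.x API assumed) on a toy corpus and queries nearest neighbours. The corpus, hyperparameters, and variable names are illustrative placeholders, not taken from this page.

# Minimal sketch (assumes gensim >= 4.0 is installed); toy corpus and
# hyperparameters are illustrative placeholders only.
from gensim.models import Word2Vec

# Each "sentence" is a pre-tokenized list of words.
corpus = [
    ["word", "embeddings", "map", "words", "to", "vectors"],
    ["word2vec", "learns", "embeddings", "from", "context", "windows"],
    ["glove", "learns", "embeddings", "from", "co-occurrence", "statistics"],
]

# Skip-gram (sg=1) Word2Vec: predict surrounding context words from the centre word.
model = Word2Vec(
    sentences=corpus,
    vector_size=50,   # dimensionality of the real-valued word vectors
    window=2,         # context window size
    min_count=1,      # keep every word in this tiny corpus
    sg=1,             # 1 = skip-gram, 0 = CBOW
    epochs=100,
)

vec = model.wv["embeddings"]                          # 50-dimensional numpy vector for one word
print(vec.shape)                                      # (50,)
print(model.wv.most_similar("embeddings", topn=3))    # nearest neighbours by cosine similarity

In a real setting the same call is made on a large tokenized corpus; pretrained GloVe or Word2Vec vectors can also be loaded instead of training from scratch.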

(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)

Papers

Showing 651–660 of 4002 papers

Title | Status | Hype
Knowing Where and What: Unified Word Block Pretraining for Document Understanding | Code | 0
SoundChoice: Grapheme-to-Phoneme Models with Semantic Disambiguation | - | 0
Exploring Wasserstein Distance across Concept Embeddings for Ontology Matching | - | 0
Stroke-Based Autoencoders: Self-Supervised Learners for Efficient Zero-Shot Chinese Character Recognition | - | 0
A Context-Sensitive Word Embedding Approach for The Detection of Troll Tweets | - | 0
A methodology to characterize bias and harmful stereotypes in natural language processing in Latin America | Code | 0
Myers-Briggs personality classification from social media text using pre-trained language models | - | 0
Team Stanford ACMLab at SemEval 2022 Task 4: Textual Analysis of PCL Using Contextual Word Embeddings | - | 0
A Comparative Study on Word Embeddings and Social NLP Tasks | Code | 0
Interpreting Emoji with Emoji | - | 0
Page 66 of 401

No leaderboard results yet.