
Word Embeddings

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.
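
As a toy illustration of this mapping, the snippet below assigns made-up 4-dimensional vectors to three words and compares them with cosine similarity. The words and vector values are invented for illustration and do not come from any trained model.

```python
import numpy as np

# Hypothetical 4-dimensional embeddings; the values are made up
# for illustration, not taken from any trained model.
embeddings = {
    "king":  np.array([0.8, 0.1, 0.6, 0.3]),
    "queen": np.array([0.7, 0.2, 0.6, 0.4]),
    "apple": np.array([0.1, 0.9, 0.2, 0.8]),
}

def cosine(u, v):
    """Cosine similarity: higher means closer in embedding space."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(embeddings["king"], embeddings["queen"]))  # relatively high
print(cosine(embeddings["king"], embeddings["apple"]))  # relatively low
```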

Techniques for learning word embeddings include Word2Vec, GloVe, and other approaches that train on an NLP task such as language modeling or document classification.
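
As a minimal sketch of how such embeddings can be trained in practice, the following uses gensim's Word2Vec on a tiny corpus. The corpus and all hyperparameter values are illustrative assumptions, not settings from any paper listed below.

```python
from gensim.models import Word2Vec

# Toy corpus: each sentence is a list of tokens.
sentences = [
    ["word", "embeddings", "map", "words", "to", "vectors"],
    ["word2vec", "learns", "embeddings", "from", "context"],
    ["glove", "learns", "embeddings", "from", "cooccurrence", "statistics"],
]

# Train a small skip-gram model (sg=1); vector_size, window, min_count,
# and epochs are illustrative hyperparameters, not recommended values.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1,
                 sg=1, epochs=50)

vec = model.wv["embeddings"]   # 50-dimensional real-valued vector
print(vec.shape)               # (50,)

# Nearest neighbors in embedding space; on a corpus this small the
# similarities are essentially noise, but the API usage is the same
# as on a real corpus.
print(model.wv.most_similar("embeddings", topn=3))
```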

(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)

Papers

Showing 841–850 of 4,002 papers

Title | Status | Hype
Unsupervised Matching of Data and Text | Code | 0
Identification of Biased Terms in News Articles by Comparison of Outlet-specific Word Embeddings | - | 0
Word Embeddings via Causal Inference: Gender Bias Reducing and Semantic Information Preserving | Code | 0
Combining Textual Features for the Detection of Hateful and Offensive Language | Code | 0
Few-Shot NLU with Vector Projection Distance and Abstract Triangular CRF | - | 0
Emotion-Cause Pair Extraction in Customer Reviews | - | 0
BERTMap: A BERT-based Ontology Alignment System | - | 0
Inferring Prototypes for Multi-Label Few-Shot Image Classification with Word Vector Guided Attention | - | 0
Sdutta at ComMA@ICON: A CNN-LSTM Model for Hate Detection | - | 0
Retrofitting of Pre-trained Emotion Words with VAD-dimensions and the Plutchik Emotions | - | 0

No leaderboard results yet.