
Word Embeddings

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.

Techniques for learning word embeddings include Word2Vec, GloVe, and other neural network-based approaches that are trained on an NLP task such as language modeling or document classification.

(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)
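As a concrete illustration of the idea above, the sketch below trains a small Word2Vec model and looks up the learned vectors. It assumes the gensim library and a toy corpus with illustrative hyperparameters; none of these are prescribed by this page.

```python
# Minimal Word2Vec sketch using gensim (assumed library; any tokenized corpus works).
from gensim.models import Word2Vec

# Tiny tokenized corpus; a real setup would use a large text collection.
sentences = [
    ["word", "embeddings", "map", "words", "to", "vectors"],
    ["word2vec", "learns", "vectors", "from", "context", "windows"],
    ["glove", "learns", "vectors", "from", "cooccurrence", "statistics"],
]

# vector_size, window, min_count, and epochs are illustrative values only.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, epochs=50)

vec = model.wv["vectors"]                        # 50-dimensional real-valued vector
print(vec.shape)                                 # (50,)
print(model.wv.most_similar("vectors", topn=3))  # nearest neighbors in embedding space
```

Each word is thereby mapped to a real-valued vector, and semantic relatedness can be read off as proximity in that vector space.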

Papers

Showing 1031-1040 of 4002 papers

Title | Status | Hype
Beyond Offline Mapping: Learning Cross-lingual Word Embeddings through Context Anchoring | | 0
UMUTeam at SemEval-2021 Task 7: Detecting and Rating Humor and Offense with Linguistic Features and Word Embeddings | Code | 0
Lifelong Learning of Topics and Domain-Specific Word Embeddings | Code | 0
Arabic aspect sentiment polarity classification using BERT | | 0
Language Models as Zero-shot Visual Semantic Learners | | 0
Stress Test Evaluation of Biomedical Word Embeddings | Code | 0
Theoretical foundations and limits of word embeddings: what types of meaning can they capture? | | 0
Debiasing Multilingual Word Embeddings: A Case Study of Three Indian Languages | Code | 0
Improved Text Classification via Contrastive Adversarial Training | | 0
Using Adversarial Debiasing to Remove Bias from Word Embeddings | | 0
Page 104 of 401

No leaderboard results yet.