
Word Embeddings

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.

Techniques for learning word embeddings include Word2Vec, GloVe, and other neural network-based approaches that train on an NLP task such as language modeling or document classification.
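The core idea above, words represented as vectors of real numbers whose geometry reflects meaning, can be illustrated with a minimal sketch. The vectors here are tiny hand-picked toy values, not the output of a trained model such as Word2Vec or GloVe (real embeddings typically have 100-300 dimensions):

```python
import numpy as np

# Toy 4-dimensional embeddings, hand-picked for illustration only.
# A trained model (e.g. Word2Vec or GloVe) would learn these from a corpus.
embeddings = {
    "king":  np.array([0.80, 0.65, 0.10, 0.05]),
    "queen": np.array([0.75, 0.70, 0.12, 0.08]),
    "apple": np.array([0.05, 0.10, 0.90, 0.70]),
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors; 1.0 means same direction."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Under a good embedding, semantically related words lie closer together.
sim_royal = cosine_similarity(embeddings["king"], embeddings["queen"])
sim_fruit = cosine_similarity(embeddings["king"], embeddings["apple"])
```

Here `sim_royal` comes out much higher than `sim_fruit`, which is the property downstream NLP tasks exploit when they consume embeddings as features.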

(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)

Papers

Showing 226-250 of 4,002 papers

Title | Status | Hype
Towards Better Chinese-centric Neural Machine Translation for Low-resource Languages | Code | 1
Tracing Origins: Coreference-aware Machine Reading Comprehension | Code | 1
Cross-Lingual Word Embedding Refinement by ℓ1 Norm Optimisation | Code | 1
Transition-based Semantic Dependency Parsing with Pointer Networks | Code | 1
TU Wien @ TREC Deep Learning '19 -- Simple Contextualization for Re-ranking | Code | 1
Combining Static Word Embeddings and Contextual Representations for Bilingual Lexicon Induction | Code | 1
Understanding and Improving Encoder Layer Fusion in Sequence-to-Sequence Learning | Code | 1
Understanding the Origins of Bias in Word Embeddings | Code | 1
Universal Sentence Encoder | Code | 1
Comparative Evaluation of Pretrained Transfer Learning Models on Automatic Short Answer Grading | Code | 1
Unsupervised Multilingual Word Embedding with Limited Resources using Neural Language Models | Code | 1
Compass-aligned Distributional Embeddings for Studying Semantic Differences across Corpora | Code | 1
Compositional Demographic Word Embeddings | Code | 1
FAME: Feature-Based Adversarial Meta-Embeddings for Robust Input Representations | Code | 1
CTRAN: CNN-Transformer-based Network for Natural Language Understanding | Code | 1
Conditional probing: measuring usable information beyond a baseline | Code | 1
Decoupled Textual Embeddings for Customized Image Generation | Code | 1
Visual Question Generation from Radiology Images | Code | 1
Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation | Code | 1
Adversarial Training for Commonsense Inference | Code | 1
Context-aware Feature Generation for Zero-shot Semantic Segmentation | Code | 1
Hierarchical Density Order Embeddings | Code | 1
Multimodal Word Distributions | Code | 1
Contextual String Embeddings for Sequence Labeling | Code | 0
Contrastive Learning in Distilled Models | Code | 0

No leaderboard results yet.