
Word Embeddings

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.

Techniques for learning word embeddings include Word2Vec, GloVe, and neural network-based approaches that train on an NLP task such as language modeling or document classification.

(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)
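To make the idea concrete, here is a minimal sketch of the mapping described above: words are assigned real-valued vectors, and semantic relatedness is measured by cosine similarity. The toy vectors below are hand-picked for illustration, not trained by any of the techniques listed on this page.

```python
import numpy as np

# Toy embedding table: each word maps to a vector of real numbers.
# These vectors are illustrative assumptions, not learned embeddings.
embeddings = {
    "king":  np.array([0.80, 0.65, 0.10]),
    "queen": np.array([0.75, 0.70, 0.20]),
    "apple": np.array([0.10, 0.05, 0.90]),
}

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Related words are assigned nearby vectors, so their cosine is higher.
sim_royal = cosine_similarity(embeddings["king"], embeddings["queen"])
sim_fruit = cosine_similarity(embeddings["king"], embeddings["apple"])
print(sim_royal > sim_fruit)  # the related pair scores higher
```

In a trained model such as Word2Vec or GloVe, the vectors come from optimizing an objective over a large corpus rather than being set by hand; the lookup-and-compare pattern stays the same.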

Papers

Showing 51–100 of 4,002 papers

Every paper listed below has code available (status: Code) and a hype score of 1.

- Enhancing High-order Interaction Awareness in LLM-based Recommender Model
- Entity Resolution with Hierarchical Graph Attention Networks
- Brain2Word: Decoding Brain Activity for Language Generation
- Fair Embedding Engine: A Library for Analyzing and Mitigating Gender Bias in Word Embeddings
- Fine-Tuning CLIP's Last Visual Projector: A Few-Shot Cornucopia
- FreeLB: Enhanced Adversarial Training for Natural Language Understanding
- GLOW: Global Weighted Self-Attention Network for Web Search
- Gender Bias in Contextualized Word Embeddings
- Applying Occam's Razor to Transformer-Based Dependency Parsing: What Works, What Doesn't, and What is Really Necessary
- A Graph Convolutional Topic Model for Short and Noisy Text Streams
- GREEK-BERT: The Greeks visiting Sesame Street
- Hierarchical Density Order Embeddings
- Circumventing Concept Erasure Methods For Text-to-Image Generative Models
- Compositional Demographic Word Embeddings
- Apples to Apples: A Systematic Evaluation of Topic Models
- A Source-Criticism Debiasing Method for GloVe Embeddings
- AnomalyLLM: Few-shot Anomaly Edge Detection for Dynamic Graphs using Large Language Models
- A Neural Few-Shot Text Classification Reality Check
- Backpack Language Models
- A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings
- BERT for Monolingual and Cross-Lingual Reverse Dictionary
- BERT Goes Shopping: Comparing Distributional Models for Product Representations
- A Comprehensive Analysis of Static Word Embeddings for Turkish
- Can a Fruit Fly Learn Word Embeddings?
- Classification Benchmarks for Under-resourced Bengali Language based on Multichannel Convolutional-LSTM Network
- CODER: Knowledge infused cross-lingual medical term embedding for term normalization
- ADEPT: A DEbiasing PrompT Framework
- Combining Static Word Embeddings and Contextual Representations for Bilingual Lexicon Induction
- ALL-IN-1: Short Text Classification with One Model for All Languages
- All Word Embeddings from One Embedding
- AI4Bharat-IndicNLP Corpus: Monolingual Corpora and Word Embeddings for Indic Languages
- Contextualized Embeddings based Transformer Encoder for Sentence Similarity Modeling in Answer Selection Task
- Cooperative Self-training of Machine Reading Comprehension
- Cross-lingual Transfer for Text Classification with Dictionary-based Heterogeneous Graph
- Cross-Lingual Word Embedding Refinement by ℓ1 Norm Optimisation
- CTRAN: CNN-Transformer-based Network for Natural Language Understanding
- Zero-Shot Semantic Segmentation
- FAME: Feature-Based Adversarial Meta-Embeddings for Robust Input Representations
- Decoupled Textual Embeddings for Customized Image Generation
- Deep Representation Learning of Electronic Health Records to Unlock Patient Stratification at Scale
- DeFINE: DEep Factorized INput Token Embeddings for Neural Sequence Modeling
- Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases
- “Did you really mean what you said?”: Sarcasm Detection in Hindi-English Code-Mixed Data using Bilingual Word Embeddings
- DiffEditor: Enhancing Speech Editing with Semantic Enrichment and Acoustic Consistency
- Affective and Contextual Embedding for Sarcasm Detection
- Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation
- Dynamic Contextualized Word Embeddings
- Effective Seed-Guided Topic Discovery by Integrating Multiple Types of Contexts
- Embarrassingly Simple Unsupervised Aspect Extraction
- ALIGN-MLM: Word Embedding Alignment is Crucial for Multilingual Pre-training
Page 2 of 81

No leaderboard results yet.