
Word Embeddings

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.
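The mapping can be pictured as a lookup table from words to dense real-valued vectors, with word similarity then measured geometrically, most often by cosine similarity. A minimal sketch with made-up 4-dimensional vectors (real embeddings are learned from corpora and typically have 100–300 dimensions):

```python
import math

# Toy 4-dimensional embeddings; values are invented for illustration only.
embeddings = {
    "king":  [0.80, 0.65, 0.10, 0.20],
    "queen": [0.78, 0.70, 0.12, 0.75],
    "apple": [0.10, 0.05, 0.90, 0.40],
}

def cosine(u, v):
    """Cosine similarity between two vectors: dot product over norms."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Semantically related words sit closer together in the vector space.
sim_related = cosine(embeddings["king"], embeddings["queen"])
sim_unrelated = cosine(embeddings["king"], embeddings["apple"])
```

With these toy vectors, `sim_related` comes out well above `sim_unrelated`, which is the property learned embeddings are trained to exhibit.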

Techniques for learning word embeddings include Word2Vec, GloVe, and other neural-network-based approaches that train on an NLP task such as language modeling or document classification.
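Word2Vec's skip-gram variant, for instance, learns vectors by training each word to predict its neighbours. Below is a toy pure-Python sketch of skip-gram with one negative sample per pair; the corpus, dimensionality, and hyperparameters are illustrative only, and real implementations (e.g. gensim's) are far more efficient:

```python
import math
import random

corpus = "the king rules the land the queen rules the land".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
dim, window, lr, epochs = 8, 2, 0.05, 200

random.seed(0)
# "Input" and "output" vectors per word, initialised with small random values.
W_in  = [[random.uniform(-0.5, 0.5) / dim for _ in range(dim)] for _ in vocab]
W_out = [[random.uniform(-0.5, 0.5) / dim for _ in range(dim)] for _ in vocab]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

for _ in range(epochs):
    for pos, word in enumerate(corpus):
        for off in range(-window, window + 1):
            ctx = pos + off
            if off == 0 or ctx < 0 or ctx >= len(corpus):
                continue
            center = idx[word]
            # One observed (positive) context word plus one random negative.
            pairs = ((idx[corpus[ctx]], 1.0), (random.randrange(len(vocab)), 0.0))
            for target, label in pairs:
                v, u = W_in[center], W_out[target]
                score = sigmoid(sum(a * b for a, b in zip(v, u)))
                grad = lr * (label - score)  # logistic-loss gradient step
                for d in range(dim):
                    v[d], u[d] = v[d] + grad * u[d], u[d] + grad * v[d]

# W_in now holds the learned word vectors, one per vocabulary entry.
```

After training, `W_in[idx["king"]]` is the embedding for "king"; on a realistic corpus, words appearing in similar contexts end up with similar vectors.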

(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)

Papers

Showing papers 226–250 of 4002 (page 10 of 161)

Title | Status | Hype
Lego: Learning to Disentangle and Invert Personalized Concepts Beyond Object Appearance in Text-to-Image Diffusion Models | - | 0
Multilingual Word Embeddings for Low-Resource Languages using Anchors and a Chain of Related Languages | - | 0
Bit Cipher -- A Simple yet Powerful Word Representation System that Integrates Efficiently with Language Models | - | 0
Compositional Fusion of Signals in Data Embedding | - | 0
Spoken Word2Vec: Learning Skipgram Embeddings from Speech | Code | 0
OFA: A Framework of Initializing Unseen Subword Embeddings for Efficient Large-scale Multilingual Continued Pretraining | Code | 1
Solving ARC visual analogies with neural embeddings and vector arithmetic: A generalized method | Code | 0
Word Definitions from Large Language Models | - | 0
How Abstract Is Linguistic Generalization in Large Language Models? Experiments with Argument Structure | Code | 0
MatNexus: A Comprehensive Text Mining and Analysis Suite for Materials Discovery | - | 0
Explainable Identification of Hate Speech towards Islam using Graph Neural Networks | - | 0
An Embedded Diachronic Sense Change Model with a Case Study from Ancient Greek | Code | 0
Evaluation Framework for Understanding Sensitive Attribute Association Bias in Latent Factor Recommendation Algorithms | - | 0
ProMap: Effective Bilingual Lexicon Induction via Language Model Prompting | Code | 0
Do Not Harm Protected Groups in Debiasing Language Representation Models | - | 0
MLFMF: Data Sets for Machine Learning for Mathematical Formalization | Code | 1
Analogical Proportions and Creativity: A Preliminary Study | - | 0
GARI: Graph Attention for Relative Isomorphism of Arabic Word Embeddings | Code | 0
ChatGPT-guided Semantics for Zero-shot Learning | Code | 0
An Interpretable Deep-Learning Framework for Predicting Hospital Readmissions From Electronic Health Records | - | 0
Swap and Predict -- Predicting the Semantic Changes in Words across Corpora by Context Swapping | Code | 0
Enhancing Interpretability using Human Similarity Judgements to Prune Word Embeddings | - | 0
Generative Adversarial Training for Text-to-Speech Synthesis Based on Raw Phonetic Input and Explicit Prosody Modelling | Code | 2
Can language models learn analogical reasoning? Investigating training objectives and comparisons to human performance | Code | 0
Breaking Down Word Semantics from Pre-trained Language Models through Layer-wise Dimension Selection | - | 0
