
Word Embeddings

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.

Techniques for learning word embeddings include Word2Vec, GloVe, and other neural network-based approaches that train the embeddings on an NLP task such as language modeling or document classification.
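As a concrete sketch of how such embeddings are learned, the snippet below trains a small skip-gram Word2Vec model using the gensim library. The toy corpus and every hyperparameter value are illustrative assumptions, not settings taken from any paper listed on this page.

```python
# Minimal sketch: learning word embeddings with skip-gram Word2Vec.
# Corpus and hyperparameters are illustrative assumptions.
from gensim.models import Word2Vec

# Tiny tokenized corpus; real training uses millions of sentences.
sentences = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["embeddings", "map", "words", "to", "vectors"],
]

model = Word2Vec(
    sentences,
    vector_size=50,   # dimensionality of the real-valued vectors
    window=2,         # context window around each target word
    min_count=1,      # keep every word in this toy vocabulary
    sg=1,             # 1 = skip-gram, 0 = CBOW
    epochs=100,       # many passes, since the corpus is tiny
)

# Each word is now mapped to a dense vector of real numbers.
vec = model.wv["king"]          # numpy array of shape (50,)
print(vec.shape)

# Cosine similarity in the learned space reflects co-occurrence patterns.
print(model.wv.most_similar("king", topn=3))
```

Skip-gram (sg=1) learns vectors by predicting surrounding context words from each target word; on a corpus this small, many epochs are needed before nearest-neighbor queries like most_similar become meaningful.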

(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)

Papers

Showing 241–250 of 4,002 papers

Title | Status | Hype
MLFMF: Data Sets for Machine Learning for Mathematical Formalization | Code | 1
Analogical Proportions and Creativity: A Preliminary Study | - | 0
GARI: Graph Attention for Relative Isomorphism of Arabic Word Embeddings | Code | 0
ChatGPT-guided Semantics for Zero-shot Learning | Code | 0
An Interpretable Deep-Learning Framework for Predicting Hospital Readmissions From Electronic Health Records | - | 0
Swap and Predict -- Predicting the Semantic Changes in Words across Corpora by Context Swapping | Code | 0
Enhancing Interpretability using Human Similarity Judgements to Prune Word Embeddings | - | 0
Generative Adversarial Training for Text-to-Speech Synthesis Based on Raw Phonetic Input and Explicit Prosody Modelling | Code | 2
Can language models learn analogical reasoning? Investigating training objectives and comparisons to human performance | Code | 0
Breaking Down Word Semantics from Pre-trained Language Models through Layer-wise Dimension Selection | - | 0
Page 25 of 401

Leaderboards

No leaderboard results yet.