
Word Embeddings

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.
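Once words are mapped to real-valued vectors, semantic relatedness can be measured geometrically, most commonly with cosine similarity. A minimal sketch below illustrates this; the words and 4-dimensional vectors are hypothetical toy values, not output from any trained model:

```python
import numpy as np

# Hypothetical 4-dimensional embeddings; real models typically use 100-300+ dimensions.
embeddings = {
    "king":  np.array([0.80, 0.65, 0.10, 0.20]),
    "queen": np.array([0.75, 0.70, 0.15, 0.25]),
    "apple": np.array([0.10, 0.20, 0.90, 0.70]),
}

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors: u.v / (|u| |v|)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high, near 1.0
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # noticeably lower
```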

Techniques for learning word embeddings include Word2Vec, GloVe, and other neural approaches that train on an NLP task such as language modeling or document classification.
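As an illustration, a Word2Vec model can be trained with the gensim library. This is a minimal sketch assuming gensim 4.x; the tiny corpus and hyperparameter values are illustrative only, not taken from any of the papers listed below:

```python
from gensim.models import Word2Vec

# Tiny illustrative corpus: a list of tokenized sentences.
sentences = [
    ["word", "embeddings", "map", "words", "to", "vectors"],
    ["word2vec", "learns", "embeddings", "from", "context", "windows"],
    ["glove", "learns", "embeddings", "from", "cooccurrence", "statistics"],
]

# Skip-gram Word2Vec (sg=1) with a small vector size suited to toy data.
model = Word2Vec(sentences, vector_size=32, window=3, min_count=1, sg=1, epochs=50)

vector = model.wv["embeddings"]          # the learned 32-dimensional vector
similar = model.wv.most_similar("word")  # nearest neighbors by cosine similarity
print(vector.shape, similar[:3])
```

On real corpora, `min_count` is usually raised to prune rare words, and `vector_size` is increased to give the model enough capacity.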

(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)

Papers

Showing 2841–2850 of 4002 papers (page 285 of 401)

Title | Status | Hype
Misinforming LLMs: vulnerabilities, challenges and opportunities | | 0
Decoupled Vocabulary Learning Enables Zero-Shot Translation from Unseen Languages | | 0
A Simplified Retriever to Improve Accuracy of Phenotype Normalizations by Large Language Models | | 0
300-sparsans at SemEval-2018 Task 9: Hypernymy as interaction of sparse attributes | | 0
4chan & 8chan embeddings | | 0
A bag-of-concepts model improves relation extraction in a narrow knowledge domain with limited data | | 0
A Bayesian approach to uncertainty in word embedding bias estimation | | 0
ABDN at SemEval-2018 Task 10: Recognising Discriminative Attributes using Context Embeddings and WordNet | | 0
Abelian Neural Networks | | 0
A bilingual approach to specialised adjectives through word embeddings in the karstology domain | | 0
