
Word Embeddings

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.
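
To make the mapping concrete, the sketch below builds a toy lookup table of hand-picked 3-dimensional vectors (hypothetical values, not trained embeddings) and compares words by cosine similarity, the usual measure of closeness in embedding space.

```python
import numpy as np

# Toy embedding table: each word maps to a vector of real numbers.
# The values here are illustrative, not learned from data.
embeddings = {
    "king":  np.array([0.8, 0.3, 0.1]),
    "queen": np.array([0.7, 0.4, 0.1]),
    "apple": np.array([0.1, 0.1, 0.9]),
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors; near 1.0 means similar."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low
```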

Common techniques for learning word embeddings include Word2Vec, GloVe, and other neural network-based approaches that train on an auxiliary NLP task such as language modeling or document classification.
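
As a minimal training sketch, the example below fits Word2Vec embeddings with the gensim library; the tiny corpus and all hyperparameter values are illustrative assumptions, not settings taken from any paper listed below.

```python
from gensim.models import Word2Vec

# Tiny illustrative corpus; real training uses millions of sentences.
corpus = [
    ["word", "embeddings", "map", "words", "to", "vectors"],
    ["word2vec", "learns", "embeddings", "from", "context"],
    ["glove", "factorizes", "a", "cooccurrence", "matrix"],
]

# vector_size sets the embedding dimensionality; min_count=1 keeps
# every word because the corpus is so small.
model = Word2Vec(sentences=corpus, vector_size=50, window=2,
                 min_count=1, epochs=50)

vec = model.wv["embeddings"]          # a 50-dimensional real-valued vector
print(model.wv.most_similar("word"))  # nearest neighbours in embedding space
```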

(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)

Papers

Showing 501-510 of 4002 papers

Title | Status | Hype
Interpretable Neural Embeddings with Sparse Self-Representation | - | 0
Semi-automated extraction of research topics and trends from NCI funding in radiological sciences from 2000-2020 | - | 0
Constructing Colloquial Dataset for Persian Sentiment Analysis of Social Microblogs | Code | 0
A Bayesian approach to uncertainty in word embedding bias estimation | - | 0
"Definition Modeling: To model definitions." Generating Definitions With Little to No Semantics | - | 0
Contrastive Loss is All You Need to Recover Analogies as Parallel Lines | Code | 0
Does mBERT understand Romansh? Evaluating word embeddings using word alignment | Code | 0
Curatr: A Platform for Semantic Analysis and Curation of Historical Literary Texts | - | 0
Enhancing Topic Extraction in Recommender Systems with Entropy Regularization | - | 0
Izindaba-Tindzaba: Machine learning news categorisation for Long and Short Text for isiZulu and Siswati | Code | 0

Leaderboards

No leaderboard results yet.