
Word Embeddings

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.

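At its core, an embedding is just a lookup table: each vocabulary entry is assigned a row in a real-valued matrix. A minimal sketch of that idea, assuming a toy vocabulary and randomly initialized vectors (a trained model would learn these values instead):

```python
import numpy as np

# Toy vocabulary; real systems use tens of thousands of words or subwords.
vocab = {"king": 0, "queen": 1, "man": 2, "woman": 3}
dim = 4  # embedding dimension (commonly 50-300 in practice)

# The embedding table: one real-valued vector per vocabulary entry.
# Random here; training replaces these with learned values.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), dim))

def embed(word: str) -> np.ndarray:
    """Map a word to its vector by table lookup."""
    return embeddings[vocab[word]]

print(embed("queen"))  # a 4-dimensional real vector
```
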
Techniques for learning word embeddings include Word2Vec, GloVe, and other neural network-based approaches that train on an NLP task such as language modeling or document classification.

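As an illustration of the training side, here is a hedged sketch using skip-gram Word2Vec via the gensim library (assuming gensim 4.x is installed; the corpus and hyperparameters are toy values, not a recommended configuration):

```python
from gensim.models import Word2Vec

# A tiny tokenized corpus; real training uses millions of sentences.
corpus = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["the", "man", "walks", "in", "the", "city"],
    ["the", "woman", "walks", "in", "the", "city"],
]

# Skip-gram Word2Vec: learn vectors by predicting context words
# from each center word.
model = Word2Vec(
    sentences=corpus,
    vector_size=16,  # dimensionality of the learned vectors
    window=2,        # context window size
    min_count=1,     # keep every word in this toy corpus
    sg=1,            # 1 = skip-gram, 0 = CBOW
    epochs=50,
)

print(model.wv["queen"])              # the learned 16-d vector
print(model.wv.most_similar("king"))  # nearest neighbours in vector space
```
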
(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)

Papers

Showing 1011–1020 of 4002 papers

Title | Status | Hype
Clustering Word Embeddings with Self-Organizing Maps. Application on LaRoSeDa - A Large Romanian Sentiment Data Set | — | 0
Evaluating Neural Word Embeddings for Sanskrit | Code | 0
NuPS: A Parameter Server for Machine Learning with Non-Uniform Parameter Access | Code | 1
Self-Supervised Euphemism Detection and Identification for Content Moderation | Code | 1
Deep Neural Approaches to Relation Triplets Extraction: A Comprehensive Survey | — | 0
Locally-Contextual Nonlinear CRFs for Sequence Labeling | — | 0
Probabilistic Analogical Mapping with Semantic Relation Networks | — | 0
Extending Multi-Sense Word Embedding to Phrases and Sentences for Unsupervised Semantic Applications | — | 0
Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models | Code | 1
An Introduction to Robust Graph Convolutional Networks | — | 0

No leaderboard results yet.