
Word Embeddings

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.
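To make the "words mapped to vectors" idea concrete, here is a minimal sketch that looks up pretrained GloVe vectors via the gensim library's downloader. The model name glove-wiki-gigaword-50 and the download step are assumptions about the reader's setup, not something this page prescribes:

    # Minimal sketch: looking up pretrained word vectors with gensim.
    # Assumes gensim is installed and the model can be downloaded.
    import gensim.downloader as api

    # Load 50-dimensional GloVe vectors trained on Wikipedia + Gigaword.
    vectors = api.load("glove-wiki-gigaword-50")  # returns a KeyedVectors object

    king = vectors["king"]   # a 50-dimensional NumPy array of real numbers
    print(king.shape)        # (50,)

    # Geometric closeness in the embedding space tracks semantic relatedness:
    # cosine similarity lies in [-1, 1], higher for related words.
    print(vectors.similarity("king", "queen"))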

Techniques for learning word embeddings include Word2Vec, GloVe, and other neural network-based approaches that train on an NLP task such as language modeling or document classification.
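As an illustration of the training side, the sketch below fits a small skip-gram Word2Vec model with gensim. The toy corpus and hyperparameters are placeholders chosen for brevity, not values taken from any paper listed here:

    # Minimal sketch: training skip-gram Word2Vec on a toy corpus with gensim.
    # Corpus and hyperparameters are illustrative placeholders.
    from gensim.models import Word2Vec

    corpus = [
        ["word", "embeddings", "map", "words", "to", "dense", "vectors"],
        ["glove", "and", "word2vec", "learn", "vectors", "from", "text"],
        ["language", "modeling", "is", "one", "training", "objective"],
    ]

    model = Word2Vec(
        sentences=corpus,
        vector_size=50,   # dimensionality of the learned vectors
        window=3,         # context window size
        min_count=1,      # keep every word in this tiny corpus
        sg=1,             # 1 = skip-gram, 0 = CBOW
        epochs=50,        # tiny corpus, so train for more passes
    )

    # Each vocabulary word now maps to a 50-dimensional real-valued vector.
    print(model.wv["vectors"].shape)          # (50,)
    print(model.wv.most_similar("vectors"))   # nearest neighbours by cosine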

(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)

Papers

Showing 2881-2890 of 4002 papers

Title | Status | Hype
Understanding and Improving Multi-Sense Word Embeddings via Extended Robust Principal Component Analysis | | 0
Exploring Word Sense Disambiguation Abilities of Neural Machine Translation Systems (Non-archival Extended Abstract) | | 0
A Fast Deep Learning Model for Textual Relevance in Biomedical Information Retrieval | | 0
URLNet: Learning a URL Representation with Deep Learning for Malicious URL Detection | Code | 0
A Neurobiologically Motivated Analysis of Distributional Semantic Models | | 0
Semantic projection: recovering human knowledge of multiple, distinct object features from word embeddings | | 0
Disunited Nations? A Multiplex Network Approach to Detecting Preference Affinity Blocs using Texts and Votes | | 0
A novel methodology on distributed representations of proteins using their interacting ligands | | 0
Unsupervised Open Relation Extraction | Code | 0
Early Detection of Social Media Hoaxes at Scale | | 0

No leaderboard results yet.