
Word Embeddings

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.
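As a minimal illustration of this mapping, a toy table of word vectors can be queried with cosine similarity; the words and vector values below are invented for the example, not taken from any trained model.

```python
import numpy as np

# Toy embedding table: each word maps to a dense real-valued vector.
# The values are hypothetical, chosen so that "king" and "queen"
# lie closer to each other than either does to "apple".
embeddings = {
    "king":  np.array([0.80, 0.65, 0.10]),
    "queen": np.array([0.75, 0.70, 0.15]),
    "apple": np.array([0.10, 0.05, 0.90]),
}

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

sim_kq = cosine_similarity(embeddings["king"], embeddings["queen"])
sim_ka = cosine_similarity(embeddings["king"], embeddings["apple"])
# Semantically related words end up closer in the vector space.
print(sim_kq > sim_ka)
```

In a real system the table would hold tens of thousands of rows learned from a corpus, but lookup and similarity work exactly as above.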

Techniques for learning word embeddings include Word2Vec, GloVe, and other neural-network-based approaches that train on an NLP task such as language modeling or document classification.
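To make the training idea concrete, the following is a compressed NumPy sketch of skip-gram with negative sampling, the objective behind one Word2Vec variant. The corpus, hyperparameters, and update loop are illustrative assumptions, not the implementation from any paper listed below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny illustrative corpus; real Word2Vec training uses billions of tokens.
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
word2id = {w: i for i, w in enumerate(vocab)}
ids = [word2id[w] for w in corpus]

dim, window, lr, epochs, neg_k = 16, 2, 0.05, 200, 3
W_in = rng.normal(scale=0.1, size=(len(vocab), dim))   # target-word vectors
W_out = rng.normal(scale=0.1, size=(len(vocab), dim))  # context-word vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(epochs):
    for pos, center in enumerate(ids):
        for off in range(-window, window + 1):
            ctx_pos = pos + off
            if off == 0 or ctx_pos < 0 or ctx_pos >= len(ids):
                continue
            # One observed (positive) pair plus k random negative samples.
            samples = [(ids[ctx_pos], 1.0)] + [
                (int(rng.integers(len(vocab))), 0.0) for _ in range(neg_k)
            ]
            for out_id, label in samples:
                score = sigmoid(W_in[center] @ W_out[out_id])
                grad = score - label           # gradient of the logistic loss
                g_in = grad * W_out[out_id].copy()
                W_out[out_id] -= lr * grad * W_in[center]
                W_in[center] -= lr * g_in

# After training, rows of W_in are the learned embeddings; words sharing
# contexts (e.g. "cat" and "dog" in "the _ sat") tend to drift together.
```

GloVe instead fits embeddings to global co-occurrence counts, but it produces the same kind of dense vector table as output.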

(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)

Papers

Showing 2426–2450 of 4002 papers

Title | Status | Hype
A Framework for Understanding the Role of Morphology in Universal Dependency Parsing | | 0
Unsupervised Bilingual Lexicon Induction via Latent Variable Models | | 0
CEA LIST: Processing Low-Resource Languages for CoNLL 2018 | Code | 0
Self-Governing Neural Networks for On-Device Short Text Classification | Code | 0
Joint Learning for Targeted Sentiment Analysis | | 0
Learning Representations for Detecting Abusive Language | | 0
IRISA at SMM4H 2018: Neural Network and Bagging for Tweet Classification | | 0
CARER: Contextualized Affect Representations for Emotion Recognition | Code | 0
Unsupervised Parallel Sentence Extraction from Comparable Corpora | | 0
Self-training improves Recurrent Neural Networks performance for Temporal Relation Extraction | | 0
Resources to Examine the Quality of Word Embedding Models Trained on n-Gram Data | | 0
Learning Text Representations for 500K Classification Tasks on Named Entity Disambiguation | Code | 0
Learning Unsupervised Word Translations Without Adversaries | | 0
InferLite: Simple Universal Sentence Representations from Natural Language Inference Data | | 0
In-domain Context-aware Token Embeddings Improve Biomedical Named Entity Recognition | | 0
Word Relation Autoencoder for Unseen Hypernym Extraction Using Word Embeddings | | 0
Refining Pretrained Word Embeddings Using Layer-wise Relevance Propagation | | 0
Word Embeddings for Code-Mixed Language Processing | | 0
A Morphology-Based Representation Model for LSTM-Based Dependency Parsing of Agglutinative Languages | Code | 0
Coming to Your Senses: on Controls and Evaluation Sets in Polysemy Research | | 0
Bringing Order to Neural Word Embeddings with Embeddings Augmented by Random Permutations (EARP) | | 0
Limbic: Author-Based Sentiment Aspect Modeling Regularized with Word Embeddings and Discourse Relations | | 0
Improved Dependency Parsing using Implicit Word Connections Learned from Unlabeled Data | | 0
Linking News Sentiment to Microblogs: A Distributional Semantics Approach to Enhance Microblog Sentiment Classification | | 0
Implicit Subjective and Sentimental Usages in Multi-sense Word Embeddings | | 0
Page 98 of 161

No leaderboard results yet.