
Word Embeddings

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.
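
For intuition, here is a toy sketch of that mapping, using made-up 3-dimensional vectors (real embeddings typically have 50 to 300 dimensions) and cosine similarity to compare them:

```python
import numpy as np

# Hypothetical 3-dimensional embeddings (values invented for illustration only).
embeddings = {
    "king":  np.array([0.8, 0.6, 0.1]),
    "queen": np.array([0.7, 0.7, 0.1]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine(u, v):
    """Cosine similarity: closer to 1.0 means the words are nearer in the space."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(embeddings["king"], embeddings["queen"]))  # high: related words
print(cosine(embeddings["king"], embeddings["apple"]))  # low: unrelated words
```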

Techniques for learning word embeddings include Word2Vec, GloVe, and other neural network-based approaches that train on an NLP task such as language modeling or document classification.
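
As a concrete sketch, the following trains a small Word2Vec model with the gensim library (a toolkit assumed here for illustration; this page does not prescribe one) on a toy corpus and queries the learned vectors:

```python
from gensim.models import Word2Vec

# Toy corpus: a real setup would use millions of tokenized sentences.
sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
    ["cats", "and", "dogs", "are", "pets"],
]

# Skip-gram Word2Vec; vector_size, window, and epochs are illustrative
# hyperparameters, not values recommended by this page.
model = Word2Vec(
    sentences,
    vector_size=50,   # dimensionality of the embedding space
    window=2,         # context window around each target word
    min_count=1,      # keep every word in this tiny corpus
    sg=1,             # 1 = skip-gram, 0 = CBOW
    epochs=100,
    seed=42,
)

# Each vocabulary word is now mapped to a 50-dimensional real vector.
print(model.wv["cat"].shape)                 # (50,)
print(model.wv.most_similar("cat", topn=3))  # nearest neighbors in the space
```

GloVe, by contrast, is fit to global word co-occurrence counts rather than trained with a sliding window, but the resulting vectors are consumed the same way: as a lookup table from words to real-valued vectors.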

(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)

Papers

Showing 2791–2800 of 4002 papers

Title | Status | Hype
Line-a-line: A Tool for Annotating Word-Alignments | - | 0
Linguistic change and historical periodization of Old Literary Finnish | - | 0
Linguistic Regularities in Sparse and Explicit Word Representations | - | 0
Linking News Sentiment to Microblogs: A Distributional Semantics Approach to Enhance Microblog Sentiment Classification | - | 0
Linking Tweets with Monolingual and Cross-Lingual News using Transformed Word Embeddings | - | 0
LIPN-IIMAS at SemEval-2016 Task 1: Random Forest Regression Experiments on Align-and-Differentiate and Word Embeddings penalizing strategies | - | 0
LIPN-IIMAS at SemEval-2017 Task 1: Subword Embeddings, Attention Recurrent Neural Networks and Cross Word Alignment for Semantic Textual Similarity | - | 0
LIPN-UAM at EmoInt-2017: Combination of Lexicon-based features and Sentence-level Vector Representations for Emotion Intensity Determination | - | 0
LIS at SemEval-2018 Task 2: Mixing Word Embeddings and Bag of Features for Multilingual Emoji Prediction | - | 0
Literal, Metphorical or Both? Detecting Metaphoricity in Isolated Adjective-Noun Phrases | - | 0
Page 280 of 401

No leaderboard results yet.