
Word Embeddings

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.
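
As a minimal sketch of this mapping (the vocabulary and vector values below are illustrative assumptions, not learned from data), a word embedding can be implemented as a lookup from word indices into a matrix of real-valued vectors:

```python
import numpy as np

# Toy vocabulary: each word is assigned an integer index (illustrative only).
vocab = {"king": 0, "queen": 1, "man": 2, "woman": 3}

# Embedding matrix: one real-valued vector per word. A real system learns
# these values from data; here they are random placeholders.
rng = np.random.default_rng(seed=0)
embeddings = rng.normal(size=(len(vocab), 4))

def embed(word: str) -> np.ndarray:
    """Map a word to its vector of real numbers via a table lookup."""
    return embeddings[vocab[word]]

print(embed("queen"))  # a 4-dimensional real-valued vector
```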

Techniques for learning word embeddings include Word2Vec, GloVe, and other neural network-based approaches that train on an NLP task such as language modeling or document classification.
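
As a rough sketch of such training (not the setup of any paper listed below; the toy corpus and hyperparameters are assumptions), the gensim library's Word2Vec implementation learns embeddings from a tokenized corpus:

```python
from gensim.models import Word2Vec  # assumes gensim 4.x is installed

# Tiny pre-tokenized corpus; a real corpus would be far larger.
corpus = [
    ["word", "embeddings", "map", "words", "to", "vectors"],
    ["word2vec", "learns", "vectors", "from", "local", "context", "windows"],
    ["glove", "fits", "vectors", "to", "global", "cooccurrence", "statistics"],
]

# Train a small skip-gram model (sg=1); hyperparameters are illustrative only.
model = Word2Vec(corpus, vector_size=50, window=2, min_count=1, sg=1, epochs=100)

vector = model.wv["vectors"]                     # learned 50-dimensional embedding
print(model.wv.most_similar("vectors", topn=3))  # nearest neighbours by cosine similarity
```

Setting sg=0 instead selects the CBOW objective, which predicts a word from its surrounding context rather than the context from the word.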

(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)

Papers

Showing 1401–1410 of 4002 papers

LT3 at SemEval-2020 Task 9: Cross-lingual Embeddings for Sentiment Analysis of Hinglish Social Media Text
Latte-Mix: Measuring Sentence Semantic Similarity with Latent Categorical Mixtures
What makes multilingual BERT multilingual?
Image Captioning with Visual Object Representations Grounded in the Textual Modality
Generating Fact Checking Summaries for Web Claims (code available)
Multi-Adversarial Learning for Cross-Lingual Word Embeddings
A Self-supervised Representation Learning of Sentence Structure for Authorship Attribution (code available)
From Language to Language-ish: How Brain-Like is an LSTM's Representation of Nonsensical Language Stimuli?
Legal Document Classification: An Application to Law Area Prediction of Petitions to Public Prosecution Service
BRUMS at SemEval-2020 Task 3: Contextualised Embeddings for Predicting the (Graded) Effect of Context in Word Similarity (code available)
