
Word Embeddings

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.
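
As a concrete illustration of this word-to-vector mapping, the sketch below uses NumPy with hand-picked 4-dimensional vectors (invented for illustration; learned embeddings typically have 50-1000 dimensions) to show how semantic similarity between words reduces to a geometric comparison between their vectors:

```python
import numpy as np

# Hypothetical 4-dimensional embeddings, hand-picked for illustration only;
# real embeddings are learned from data, not chosen by hand.
embeddings = {
    "king":  np.array([0.8, 0.1, 0.7, 0.2]),
    "queen": np.array([0.7, 0.2, 0.8, 0.2]),
    "apple": np.array([0.1, 0.9, 0.1, 0.6]),
}

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors; values near 1.0 mean the
    # words point in similar directions in the embedding space.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low
```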

Techniques for learning word embeddings include Word2Vec, GloVe, and other neural network-based approaches that are trained on an NLP task such as language modeling or document classification. A minimal training run is sketched below.
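
The following is a minimal sketch of training Word2Vec, assuming the gensim library (4.x API, where the dimensionality parameter is `vector_size`); the toy corpus is invented purely to show the API shape:

```python
from gensim.models import Word2Vec

# Toy corpus: each sentence is a list of tokens. A real corpus would
# contain millions of sentences; this only demonstrates the interface.
sentences = [
    ["word", "embeddings", "map", "words", "to", "vectors"],
    ["word2vec", "learns", "embeddings", "from", "local", "context"],
    ["glove", "learns", "embeddings", "from", "cooccurrence", "counts"],
]

# Train a skip-gram model (sg=1) producing 50-dimensional vectors.
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=1)

vector = model.wv["embeddings"]                     # learned 50-d vector
print(model.wv.most_similar("embeddings", topn=3))  # nearest neighbours
```

GloVe, by contrast, fits vectors to global word co-occurrence statistics rather than predicting words from their local context.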

(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)

Papers

Showing papers 601–610 of 4,002 (page 61 of 401)

| Title | Status | Hype |
| --- | --- | --- |
| Sense Embeddings are also Biased -- Evaluating Social Biases in Static and Contextualised Sense Embeddings | Code | 0 |
| VAST: The Valence-Assessing Semantics Test for Contextualizing Language Models | Code | 0 |
| Contrastive Visual Semantic Pretraining Magnifies the Semantics of Natural Language Representations | | 0 |
| Survey on Automated Short Answer Grading with Deep Learning: from Word Embeddings to Transformers | | 0 |
| Using Word Embeddings to Analyze Protests News | | 0 |
| Semi-constraint Optimal Transport for Entity Alignment with Dangling Cases | Code | 1 |
| TextConvoNet: A Convolutional Neural Network based Architecture for Text Classification | | 0 |
| Unsupervised Alignment of Distributional Word Embeddings | | 0 |
| Plumeria at SemEval-2022 Task 6: Robust Approaches for Sarcasm Detection for English and Arabic Using Transformers and Data Augmentation | Code | 0 |
| Automated Single-Label Patent Classification using Ensemble Classifiers | | 0 |
