
Word Embeddings

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.

Techniques for learning word embeddings include Word2Vec, GloVe, and other neural-network-based approaches that train on an NLP task such as language modeling or document classification.

(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)
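
To make the definition concrete, here is a minimal sketch of learning embeddings with the gensim library's Word2Vec implementation; the toy corpus and hyperparameter values are illustrative assumptions, not drawn from any paper listed below.

    # A minimal sketch of learning word embeddings with gensim's Word2Vec.
    # The corpus and hyperparameters are illustrative assumptions only.
    from gensim.models import Word2Vec

    # Each sentence is a list of tokens; a real corpus would be far larger.
    corpus = [
        ["word", "embeddings", "map", "words", "to", "vectors"],
        ["word2vec", "learns", "embeddings", "from", "surrounding", "context"],
        ["similar", "words", "end", "up", "with", "similar", "vectors"],
    ]

    # vector_size: dimensionality of the embedding space
    # window: number of context words on each side of the target word
    # min_count=1 keeps even rare tokens in this tiny toy vocabulary
    model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, epochs=50)

    vec = model.wv["embeddings"]   # a 50-dimensional vector of real numbers
    print(vec.shape)               # (50,)
    print(model.wv.most_similar("words", topn=3))

GloVe, by contrast, is typically trained on global word co-occurrence counts rather than local context windows, and the resulting vectors are usually loaded from precomputed files.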

Papers

Showing 3161–3170 of 4002 papers

Title | Status | Hype
Data-Driven Detection of General Chiasmi Using Lexical and Semantic Features | Code | 0
Black is to Criminal as Caucasian is to Police: Detecting and Removing Multiclass Bias in Word Embeddings | Code | 0
Data-driven models and computational tools for neurolinguistics: a language technology perspective | Code | 0
Authorless Topic Models: Biasing Models Away from Known Structure | Code | 0
Context Selection for Embedding Models | Code | 0
Subspace Detours: Building Transport Plans that are Optimal on Subspace Projections | Code | 0
DataStories at SemEval-2017 Task 4: Deep LSTM with Attention for Message-level and Topic-based Sentiment Analysis | Code | 0
Inducing a Lexicon of Abusive Words – a Feature-Based Approach | Code | 0
Substitute Based SCODE Word Embeddings in Supervised NLP Tasks | Code | 0
Multimodal Review Generation with Privacy and Fairness Awareness | Code | 0
Page 317 of 401

No leaderboard results yet.