
Word Embeddings

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.
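To make the definition concrete, here is a minimal pure-Python sketch of what "words mapped to vectors of real numbers" buys you: semantically related words sit closer together under cosine similarity. The 4-dimensional vectors below are hand-picked toy values for illustration, not the output of any real trained model.

```python
import math

# Hypothetical toy embeddings: each vocabulary word maps to a dense
# vector of real numbers (real models use 100-1000 dimensions).
embeddings = {
    "king":  [0.80, 0.65, 0.10, 0.05],
    "queen": [0.78, 0.60, 0.12, 0.90],
    "apple": [0.05, 0.10, 0.90, 0.20],
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors; 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Related words end up closer in the vector space than unrelated ones.
sim_related = cosine_similarity(embeddings["king"], embeddings["queen"])
sim_unrelated = cosine_similarity(embeddings["king"], embeddings["apple"])
print(sim_related > sim_unrelated)  # prints True for these toy vectors
```

This geometric view is what downstream NLP tasks exploit: a classifier or translation model consumes the vectors instead of raw word identities.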

Techniques for learning word embeddings include Word2Vec, GloVe, and other neural-network-based approaches that train on an NLP task such as language modeling or document classification.
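As a sketch of how such training works, the following pure-Python snippet implements the skip-gram-with-negative-sampling objective popularized by Word2Vec on a toy corpus: each observed (center, context) pair is pushed toward a score of 1, and randomly sampled pairs toward 0. All hyperparameters (dimension, window, learning rate, epochs) are illustrative, not tuned, and this is a didactic sketch rather than a production implementation.

```python
import math
import random

# Toy corpus; real training uses billions of tokens.
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
word2id = {w: i for i, w in enumerate(vocab)}

dim, window, lr, epochs, neg_k = 8, 2, 0.05, 50, 3
rng = random.Random(0)
# Two vector tables, as in Word2Vec: input (center) and output (context).
W_in = [[rng.uniform(-0.5, 0.5) / dim for _ in range(dim)] for _ in vocab]
W_out = [[0.0] * dim for _ in vocab]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_pair(center, context, label):
    """One SGD step on a (center, context) pair with binary label."""
    v, u = W_in[center], W_out[context]
    score = sigmoid(sum(a * b for a, b in zip(v, u)))
    grad = (label - score) * lr
    for d in range(dim):
        v_d = v[d]
        v[d] += grad * u[d]
        u[d] += grad * v_d

for _ in range(epochs):
    for i, w in enumerate(corpus):
        c = word2id[w]
        for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
            if j == i:
                continue
            train_pair(c, word2id[corpus[j]], 1)   # observed pair: positive
            for _ in range(neg_k):                 # random words: negatives
                train_pair(c, rng.randrange(len(vocab)), 0)

# After training, each word's embedding is its row of W_in.
print(len(W_in[word2id["cat"]]))  # prints 8
```

GloVe differs in that it fits vectors to global co-occurrence counts rather than streaming over (center, context) pairs, but the end product is the same: one real-valued vector per word.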

(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)

Papers

Showing papers 3926-3950 of 4002

Title | Status | Hype
Sparse Victory -- A Large Scale Systematic Comparison of count-based and prediction-based vectorizers for text classification | Code | 0
"Wikily" Supervised Neural Translation Tailored to Cross-Lingual Tasks | Code | 0
A Bi-Encoder LSTM Model For Learning Unstructured Dialogs | Code | 0
The Pupil Has Become the Master: Teacher-Student Model-Based Word Embedding Distillation with Ensemble Learning | Code | 0
Specialising Word Vectors for Lexical Entailment | Code | 0
How to evaluate word embeddings? On importance of data efficiency and simple supervised tasks | Code | 0
How to Evaluate Word Representations of Informal Domain? | Code | 0
How to Generate a Good Word Embedding? | Code | 0
How to (Properly) Evaluate Cross-Lingual Word Embeddings: On Strong Baselines, Comparative Analyses, and Some Misconceptions | Code | 0
Specializing Word Embeddings (for Parsing) by Information Bottleneck | Code | 0
How to Train good Word Embeddings for Biomedical NLP | Code | 0
Modelling Salient Features as Directions in Fine-Tuned Semantic Spaces | Code | 0
Word Embeddings Are Capable of Capturing Rhythmic Similarity of Words | Code | 0
Vocabulary Adaptation for Domain Adaptation in Neural Machine Translation | Code | 0
Humor in Word Embeddings: Cockamamie Gobbledegook for Nincompoops | Code | 0
An Embedded Diachronic Sense Change Model with a Case Study from Ancient Greek | Code | 0
Model Transfer for Tagging Low-resource Languages using a Bilingual Dictionary | Code | 0
BULNER: BUg Localization with word embeddings and NEtwork Regularization | Code | 0
Building Sequential Inference Models for End-to-End Response Selection | Code | 0
A Neighbourhood-Aware Differential Privacy Mechanism for Static Word Embeddings | Code | 0
SPINE: SParse Interpretable Neural Embeddings | Code | 0
Will LLMs Replace the Encoder-Only Models in Temporal Relation Classification? | Code | 0
MoNoise: Modeling Noise Using a Modular Normalization System | Code | 0
Monolingual and Parallel Corpora for Kangri Low Resource Language | Code | 0
Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models | Code | 0
Page 158 of 161

No leaderboard results yet.