
Word Embeddings

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.
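
As a minimal illustration of the core idea, the sketch below stores a toy vocabulary as real-valued NumPy vectors and compares them by cosine similarity. The words and vector values are made up for illustration, not learned from data.

```python
# Minimal sketch: a vocabulary mapped to vectors of real numbers,
# compared by cosine similarity. Vectors here are illustrative, not trained.
import numpy as np

embeddings = {
    "king":  np.array([0.8, 0.3, 0.1]),
    "queen": np.array([0.7, 0.4, 0.1]),
    "apple": np.array([0.1, 0.1, 0.9]),
}

def cosine(u, v):
    # Cosine similarity: dot product normalized by vector lengths.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(embeddings["king"], embeddings["queen"]))  # high: related words
print(cosine(embeddings["king"], embeddings["apple"]))  # low: unrelated words
```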

Techniques for learning word embeddings include Word2Vec, GloVe, and other approaches that train on an auxiliary NLP task such as language modeling or document classification.
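
As a concrete example of one such technique, the sketch below trains skip-gram Word2Vec vectors with the gensim library (assumes gensim 4.x is installed); the toy corpus and hyperparameters are illustrative only, not a reference setup.

```python
# Sketch: training Word2Vec embeddings with gensim on a toy corpus.
from gensim.models import Word2Vec

# Toy corpus: each document is a pre-tokenized list of words (illustrative).
corpus = [
    ["word", "embeddings", "map", "words", "to", "vectors"],
    ["king", "queen", "man", "woman"],
    ["glove", "and", "word2vec", "learn", "dense", "vectors"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=50,   # dimensionality of the learned vectors
    window=2,         # context window size
    min_count=1,      # keep every word in this tiny corpus
    sg=1,             # 1 = skip-gram, 0 = CBOW
    epochs=50,
)

vec = model.wv["king"]                # a 50-dimensional real-valued vector
print(model.wv.most_similar("king"))  # nearest neighbors by cosine similarity
```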

(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)

Papers

Showing 471–480 of 4002 papers

Title | Status | Hype
Enhancing Interpretability using Human Similarity Judgements to Prune Word Embeddings | - | 0
An Interpretable Deep-Learning Framework for Predicting Hospital Readmissions From Electronic Health Records | - | 0
Can language models learn analogical reasoning? Investigating training objectives and comparisons to human performance | Code | 0
Breaking Down Word Semantics from Pre-trained Language Models through Layer-wise Dimension Selection | - | 0
A Process for Topic Modelling Via Word Embeddings | - | 0
Detecting Unseen Multiword Expressions in American Sign Language | - | 0
A Neighbourhood-Aware Differential Privacy Mechanism for Static Word Embeddings | Code | 0
Exploring Embeddings for Measuring Text Relatedness: Unveiling Sentiments and Relationships in Online Comments | - | 0
Leveraging Pretrained Image-text Models for Improving Audio-Visual Learning | - | 0
Neural approaches to spoken content embedding | - | 0
Page 48 of 401

No leaderboard results yet.