
Word Embeddings

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.

Techniques for learning word embeddings include Word2Vec, GloVe, and other neural network-based approaches that are trained on an NLP task such as language modeling or document classification.

(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)
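As an illustration of the word-to-vector mapping described above, here is a minimal training sketch using the gensim library's Word2Vec implementation (gensim >= 4.0 is assumed; the page itself does not prescribe any particular toolkit, and the toy corpus and hyperparameters below are illustrative only):

```python
# Minimal sketch: learning word embeddings with skip-gram Word2Vec.
# Assumes gensim >= 4.0 is installed; corpus and parameters are toy values.
from gensim.models import Word2Vec

# Toy corpus: each document is a list of tokens.
corpus = [
    ["word", "embeddings", "map", "words", "to", "vectors"],
    ["glove", "and", "word2vec", "learn", "word", "vectors"],
    ["vectors", "capture", "semantic", "similarity", "between", "words"],
]

# Train a skip-gram model (sg=1) producing 50-dimensional vectors.
model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, sg=1, epochs=50)

# Each vocabulary word is now mapped to a real-valued vector.
vec = model.wv["word"]   # numpy array of shape (50,)
print(vec.shape)

# Cosine similarity over the learned vectors gives nearest neighbours.
print(model.wv.most_similar("vectors", topn=3))
```

On a real corpus, the nearest-neighbour query returns semantically related words; with the tiny corpus above it merely demonstrates the API.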

Papers

Showing 3731–3740 of 4002 papers

Title | Status | Hype
The Foundational Capabilities of Large Language Models in Predicting Postoperative Risks Using Clinical Notes | Code | 0
The Limitations of Cross-language Word Embeddings Evaluation | Code | 0
ChatGPT-guided Semantics for Zero-shot Learning | Code | 0
A Causal Inference Method for Reducing Gender Bias in Word Embedding Relations | Code | 0
Lost in Evaluation: Misleading Benchmarks for Bilingual Dictionary Induction | Code | 0
Feature-Dependent Confusion Matrices for Low-Resource NER Labeling with Noisy Labels | Code | 0
Low-Dimensional Structure in the Space of Language Representations is Reflected in Brain Responses | Code | 0
Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms | Code | 0
A Hierarchically-Labeled Portuguese Hate Speech Dataset | Code | 0
Pretrained Transformers for Simple Question Answering over Knowledge Graphs | Code | 0

No leaderboard results yet.