
Word Embeddings

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.

Techniques for learning word embeddings include Word2Vec, GloVe, and other neural network-based approaches that train on an NLP task such as language modeling or document classification.
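As a minimal sketch of the idea, the snippet below trains a small Word2Vec model with the gensim library on a toy corpus (the corpus and all parameter choices here are illustrative assumptions, not from this page) and then looks up the learned real-valued vector for a word and its nearest neighbours:

```python
# Minimal sketch: learning word embeddings with Word2Vec (gensim),
# on a tiny illustrative corpus. Corpus and hyperparameters are
# assumptions chosen for demonstration only.
from gensim.models import Word2Vec

# Toy corpus: each document is a list of tokens.
corpus = [
    ["word", "embeddings", "map", "words", "to", "vectors"],
    ["word2vec", "learns", "vectors", "from", "context", "windows"],
    ["similar", "words", "get", "similar", "vectors"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=50,   # dimensionality of the real-valued vectors
    window=2,         # context window size
    min_count=1,      # keep every token in this tiny corpus
    sg=1,             # 1 = skip-gram, 0 = CBOW
    epochs=100,
    seed=42,
)

# Each vocabulary word is now mapped to a 50-dimensional vector.
vec = model.wv["vectors"]
print(vec.shape)  # (50,)

# Nearest neighbours by cosine similarity in the embedding space.
print(model.wv.most_similar("words", topn=3))
```

On a realistic corpus, semantically related words end up with nearby vectors, which is what downstream NLP tasks exploit.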

(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)

Papers

Showing 111–120 of 4002 papers

Title | Status | Hype
Misinforming LLMs: vulnerabilities, challenges and opportunities | - | 0
Ontological Relations from Word Embeddings | - | 0
Appformer: A Novel Framework for Mobile App Usage Prediction Leveraging Progressive Multi-Modal Data Fusion and Feature Extraction | - | 0
You shall know a piece by the company it keeps. Chess plays as a data for word2vec models | - | 0
The BIAS Detection Framework: Bias Detection in Word Embeddings and Language Models for European Languages | Code | 0
On Initializing Transformers with Pre-trained Embeddings | - | 0
R-SFLLM: Jamming Resilient Framework for Split Federated Learning with Large Language Models | - | 0
WSI-VQA: Interpreting Whole Slide Images by Generative Visual Question Answering | Code | 2
Cross-Lingual Word Alignment for ASEAN Languages with Contrastive Learning | - | 0
Investigating the Contextualised Word Embedding Dimensions Specified for Contextual and Temporal Semantic Changes | Code | 0
