Word Embeddings

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.
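As a minimal sketch of the idea (the vectors below are hand-picked for illustration, not learned from data), each vocabulary word is mapped to a dense real-valued vector, and semantic relatedness between words can then be measured with cosine similarity:

```python
import numpy as np

# Toy embedding table: each vocabulary word maps to a dense vector.
# Real embeddings are learned from a corpus; these values are
# hand-picked purely for illustration.
vocab = {"king": 0, "queen": 1, "apple": 2}
embeddings = np.array([
    [0.90, 0.80, 0.10],  # king
    [0.85, 0.82, 0.15],  # queen
    [0.10, 0.05, 0.90],  # apple
])

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors; 1.0 means same direction."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

king = embeddings[vocab["king"]]
queen = embeddings[vocab["queen"]]
apple = embeddings[vocab["apple"]]

print(cosine_similarity(king, queen))  # high: semantically related words
print(cosine_similarity(king, apple))  # low: unrelated words
```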

Techniques for learning word embeddings include Word2Vec, GloVe, and other neural network-based approaches that train on an NLP task such as language modeling or document classification.
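As one concrete possibility, the sketch below trains Word2Vec embeddings with the gensim library; the corpus and hyperparameters are illustrative assumptions chosen for a toy example, not anything prescribed on this page:

```python
from gensim.models import Word2Vec

# Tiny illustrative corpus; a real model needs far more text.
sentences = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["apples", "grow", "on", "trees"],
]

# Hyperparameters here are assumptions sized to the toy corpus.
model = Word2Vec(
    sentences,
    vector_size=50,  # dimensionality of the learned vectors
    window=2,        # context window size
    min_count=1,     # keep every word, even rare ones
    epochs=50,       # extra passes since the corpus is tiny
)

vector = model.wv["king"]             # the learned 50-d vector for "king"
print(model.wv.most_similar("king"))  # nearest neighbours in embedding space
```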

(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)

Papers

Showing 2801–2810 of 4002 papers (page 281 of 401)

Title | Status | Hype
BiLSTM-CRF for Persian Named-Entity Recognition ArmanPersoNERCorpus: the First Entity-Annotated Persian Dataset | Code | 0
Can Domain Adaptation be Handled as Analogies? | | 0
KIT-Multi: A Translation-Oriented Multilingual Embedding Corpus | | 0
DeepTC -- An Extension of DKPro Text Classification for Fostering Reproducibility of Deep Learning Experiments | | 0
Designing a Russian Idiom-Annotated Corpus | | 0
SemR-11: A Multi-Lingual Gold-Standard for Semantic Similarity and Relatedness for Eleven Languages | | 0
SenSALDO: Creating a Sentiment Lexicon for Swedish | | 0
The German Reference Corpus DeReKo: New Developments -- New Opportunities | | 0
Grapheme-level Awareness in Word Embeddings for Morphologically Rich Languages | | 0
SimPA: A Sentence-Level Simplification Corpus for the Public Administration Domain | | 0
