SOTAVerified

Sentence Embeddings

Papers

Showing 26–50 of 615 papers

Title | Status | Hype
Is Neural Topic Modelling Better than Clustering? An Empirical Study on Clustering with Contextual Embeddings for Topics | Code | 1
DialogueCSE: Dialogue-based Contrastive Learning of Sentence Embeddings | Code | 1
KDMCSE: Knowledge Distillation Multimodal Sentence Embeddings with Adaptive Angular Margin Contrastive Learning | Code | 1
A Corpus for Multilingual Document Classification in Eight Languages | Code | 1
Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation | Code | 1
Efficient and Flexible Topic Modeling using Pretrained Embeddings and Bag of Sentences | Code | 1
IESTAC: English-Italian Parallel Corpus for End-to-End Speech-to-Text Machine Translation | Code | 1
Deep Representational Re-tuning using Contrastive Tension | Code | 1
DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations | Code | 1
DefSent: Sentence Embeddings using Definition Sentences | Code | 1
A Paragraph-level Multi-task Learning Model for Scientific Fact-Verification | Code | 1
AdaSent: Efficient Domain-Adapted Sentence Embeddings for Few-Shot Classification | Code | 1
Improved Universal Sentence Embeddings with Prompt-based Contrastive Learning and Energy-based Learning | Code | 1
An Unsupervised Sentence Embedding Method by Mutual Information Maximization | Code | 1
Contrastive Learning of Sentence Embeddings from Scratch | Code | 1
DistilCSE: Effective Knowledge Distillation For Contrastive Sentence Embeddings | Code | 1
Disentangling Semantics and Syntax in Sentence Embeddings with Pre-trained Language Models | Code | 1
English Contrastive Learning Can Learn Universal Cross-lingual Sentence Embeddings | Code | 1
ESimCSE: Enhanced Sample Building Method for Contrastive Learning of Unsupervised Sentence Embedding | Code | 1
FinEAS: Financial Embedding Analysis of Sentiment | Code | 1
Generating Datasets with Pretrained Language Models | Code | 1
A Sentence is Worth 128 Pseudo Tokens: A Semantic-Aware Contrastive Learning Framework for Sentence Embeddings | Code | 1
Beyond Prompting: Making Pre-trained Language Models Better Zero-shot Learners by Clustering Representations | Code | 1
Improving Contrastive Learning of Sentence Embeddings with Case-Augmented Positives and Retrieved Negatives | Code | 1
Characterising the Creative Process in Humans and Large Language Models | Code | 1
Page 2 of 25

No leaderboard results yet.