SOTAVerified

Semantic Textual Similarity

Semantic textual similarity deals with determining how similar two pieces of text are. This can take the form of assigning a score from 1 to 5. Related tasks include paraphrase and duplicate identification.

Image source: Learning Semantic Textual Similarity from Conversations
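In practice, the scoring described above is usually approximated by encoding each sentence into a vector and comparing the vectors with cosine similarity. A minimal sketch in pure Python; the embeddings here are hypothetical placeholders (a real system would obtain them from a sentence encoder such as SBERT or SimCSE):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors:
    # 1.0 = same direction (very similar), 0.0 = orthogonal (unrelated).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 4-dimensional embeddings of two sentences.
emb_1 = [0.1, 0.3, 0.5, 0.1]
emb_2 = [0.2, 0.3, 0.4, 0.1]

sim = cosine_similarity(emb_1, emb_2)

# One crude way to map the [-1, 1] cosine score onto the 1-5
# annotation scale used by STS benchmarks (a linear rescaling;
# real systems typically learn this mapping instead).
sts_score = 1 + 2 * (sim + 1)
print(round(sts_score, 2))
```

Most of the benchmark entries below score models not on the raw values but on how well their predicted similarities correlate with human judgments.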

Papers

Showing 51-100 of 2381 papers

Title | Status | Hype
DrBenchmark: A Large Language Understanding Evaluation Benchmark for French Biomedical Domain | Code | 1
Automatic Generation of Topic Labels | Code | 1
Balancing Lexical and Semantic Quality in Abstractive Summarization | Code | 1
3D-AVS: LiDAR-based 3D Auto-Vocabulary Segmentation | Code | 1
EASE: Entity-Aware Contrastive Learning of Sentence Embedding | Code | 1
Attentive Normalization for Conditional Image Generation | Code | 1
Efficient Mask Correction for Click-Based Interactive Image Segmentation | Code | 1
ELITE: Embedding-Less retrieval with Iterative Text Exploration | Code | 1
DiffSim: Taming Diffusion Models for Evaluating Visual Similarity | Code | 1
Entity Concept-enhanced Few-shot Relation Extraction | Code | 1
DenoSent: A Denoising Objective for Self-Supervised Sentence Representation Learning | Code | 1
Big Bird: Transformers for Longer Sequences | Code | 1
Demystifying and Extracting Fault-indicating Information from Logs for Failure Diagnosis | Code | 1
Describing Sets of Images with Textual-PCA | Code | 1
DIP: Dual Incongruity Perceiving Network for Sarcasm Detection | Code | 1
Deep Fusion Transformer Network with Weighted Vector-Wise Keypoints Voting for Robust 6D Object Pose Estimation | Code | 1
ARMAN: Pre-training with Semantically Selecting and Reordering of Sentences for Persian Abstractive Summarization | Code | 1
Deep Metric Learning by Online Soft Mining and Class-Aware Attention | Code | 1
C-STS: Conditional Semantic Textual Similarity | Code | 1
Cross-lingual Text Classification with Heterogeneous Graph Neural Network | Code | 1
Debiased Contrastive Learning of Unsupervised Sentence Representations | Code | 1
Deep Representational Re-tuning using Contrastive Tension | Code | 1
DistilCSE: Effective Knowledge Distillation For Contrastive Sentence Embeddings | Code | 1
ContraCLM: Contrastive Learning For Causal Language Model | Code | 1
Context Compression for Auto-regressive Transformers with Sentinel Tokens | Code | 1
Reconstruct Your Previous Conversations! Comprehensively Investigating Privacy Leakage Risks in Conversations with GPT Models | Code | 1
ConSERT: A Contrastive Framework for Self-Supervised Sentence Representation Transfer | Code | 1
ComStreamClust: a communicative multi-agent approach to text clustering in streaming data | Code | 1
A Semantic-based Method for Unsupervised Commonsense Question Answering | Code | 1
Audio-Visual Class-Incremental Learning | Code | 1
Context-Aware Semantic Similarity Measurement for Unsupervised Word Sense Disambiguation | Code | 1
DataSculpt: Crafting Data Landscapes for Long-Context LLMs through Multi-Objective Partitioning | Code | 1
Towards Better Understanding of User Satisfaction in Open-Domain Conversational Search | Code | 1
Are Large Language Models Ready for Healthcare? A Comparative Study on Clinical Language Understanding | Code | 1
CmdCaliper: A Semantic-Aware Command-Line Embedding Model and Dataset for Security Research | Code | 1
Improved Universal Sentence Embeddings with Prompt-based Contrastive Learning and Energy-based Learning | Code | 1
Attributable Visual Similarity Learning | Code | 1
DeepSim: Semantic similarity metrics for learned image registration | Code | 1
A Simple Long-Tailed Recognition Baseline via Vision-Language Model | Code | 1
A Sentence is Worth 128 Pseudo Tokens: A Semantic-Aware Contrastive Learning Framework for Sentence Embeddings | Code | 1
A Statistical Framework for Low-bitwidth Training of Deep Neural Networks | Code | 1
AstroCLIP: A Cross-Modal Foundation Model for Galaxies | Code | 1
An Efficient Self-Supervised Cross-View Training For Sentence Embedding | Code | 1
DialogueCSE: Dialogue-based Contrastive Learning of Sentence Embeddings | Code | 1
CODER: Knowledge infused cross-lingual medical term embedding for term normalization | Code | 1
An Unsupervised Sentence Embedding Method by Mutual Information Maximization | Code | 1
AutoGCL: Automated Graph Contrastive Learning via Learnable View Generators | Code | 1
DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter | Code | 1
Clustering-Aware Negative Sampling for Unsupervised Sentence Representation | Code | 1
Compositional Evaluation on Japanese Textual Entailment and Similarity | Code | 1
Page 2 of 48

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | SMART-RoBERTa | Dev Pearson Correlation | 92.8 | - | Unverified
2 | DeBERTa (large) | Accuracy | 92.5 | - | Unverified
3 | SMART-BERT | Dev Pearson Correlation | 90 | - | Unverified
4 | MT-DNN-SMART | Pearson Correlation | 0.93 | - | Unverified
5 | StructBERTRoBERTa ensemble | Pearson Correlation | 0.93 | - | Unverified
6 | Mnet-Sim | Pearson Correlation | 0.93 | - | Unverified
7 | XLNet (single model) | Pearson Correlation | 0.93 | - | Unverified
8 | ALBERT | Pearson Correlation | 0.93 | - | Unverified
9 | T5-11B | Pearson Correlation | 0.93 | - | Unverified
10 | RoBERTa | Pearson Correlation | 0.92 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | AnglE-UAE | Spearman Correlation | 84.54 | - | Unverified
2 | ST5-XXL | Spearman Correlation | 82.63 | - | Unverified
3 | ST5-Large | Spearman Correlation | 81.83 | - | Unverified
4 | ST5-XL | Spearman Correlation | 81.66 | - | Unverified
5 | ST5-Base | Spearman Correlation | 81.14 | - | Unverified
6 | MPNet-multilingual | Spearman Correlation | 80.73 | - | Unverified
7 | SGPT-5.8B-nli | Spearman Correlation | 80.53 | - | Unverified
8 | MPNet | Spearman Correlation | 80.28 | - | Unverified
9 | MiniLM-L12 | Spearman Correlation | 79.8 | - | Unverified
10 | SimCSE-BERT-sup | Spearman Correlation | 79.12 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | MT-DNN-SMART | Accuracy | 93.7 | - | Unverified
2 | ALBERT | Accuracy | 93.4 | - | Unverified
3 | RoBERTa (ensemble) | Accuracy | 92.3 | - | Unverified
4 | BigBird | F1 | 91.5 | - | Unverified
5 | StructBERTRoBERTa ensemble | Accuracy | 91.5 | - | Unverified
6 | FLOATER-large | Accuracy | 91.4 | - | Unverified
7 | SMART | Accuracy | 91.3 | - | Unverified
8 | RoBERTa-large 355M (MLP quantized vector-wise, fine-tuned) | Accuracy | 91 | - | Unverified
9 | RoBERTa-large 355M + Entailment as Few-shot Learner | F1 | 91 | - | Unverified
10 | SpanBERT | Accuracy | 90.9 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PromCSE-RoBERTa-large (0.355B) | Spearman Correlation | 0.82 | - | Unverified
2 | PromptEOL+CSE+LLaMA-30B | Spearman Correlation | 0.82 | - | Unverified
3 | PromptEOL+CSE+OPT-13B | Spearman Correlation | 0.82 | - | Unverified
4 | SimCSE-RoBERTa-large | Spearman Correlation | 0.82 | - | Unverified
5 | PromptEOL+CSE+OPT-2.7B | Spearman Correlation | 0.81 | - | Unverified
6 | SentenceBERT | Spearman Correlation | 0.75 | - | Unverified
7 | SRoBERTa-NLI-base | Spearman Correlation | 0.74 | - | Unverified
8 | SRoBERTa-NLI-large | Spearman Correlation | 0.74 | - | Unverified
9 | Dino (STS 🦕) | Spearman Correlation | 0.74 | - | Unverified
10 | SBERT-NLI-large | Spearman Correlation | 0.74 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | AnglE-LLaMA-7B | Spearman Correlation | 0.91 | - | Unverified
2 | AnglE-LLaMA-7B-v2 | Spearman Correlation | 0.91 | - | Unverified
3 | PromptEOL+CSE+LLaMA-30B | Spearman Correlation | 0.9 | - | Unverified
4 | PromptEOL+CSE+OPT-13B | Spearman Correlation | 0.9 | - | Unverified
5 | PromptEOL+CSE+OPT-2.7B | Spearman Correlation | 0.9 | - | Unverified
6 | PromCSE-RoBERTa-large (0.355B) | Spearman Correlation | 0.89 | - | Unverified
7 | Trans-Encoder-BERT-large-bi (unsup.) | Spearman Correlation | 0.89 | - | Unverified
8 | Trans-Encoder-BERT-large-cross (unsup.) | Spearman Correlation | 0.88 | - | Unverified
9 | Trans-Encoder-RoBERTa-large-cross (unsup.) | Spearman Correlation | 0.88 | - | Unverified
10 | SimCSE-RoBERTa-large | Spearman Correlation | 0.87 | - | Unverified
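The Pearson and Spearman correlations reported in the tables above measure how well a model's predicted similarity scores track human gold annotations: Pearson captures linear agreement on the raw scores, while Spearman compares only the rankings. A minimal sketch of both metrics on hypothetical gold/prediction pairs (no tie handling, unlike production implementations such as SciPy's):

```python
import math

def pearson(x, y):
    # Linear correlation between predictions and gold scores.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    # Rank correlation: Pearson applied to the ranks of the values.
    # (This simple version assumes no tied scores.)
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))

# Hypothetical gold annotations (1-5 scale) and model predictions
# for five sentence pairs; real benchmarks use thousands of pairs.
gold = [5.0, 4.2, 1.0, 3.3, 2.8]
pred = [4.8, 4.5, 1.2, 3.0, 2.5]

print(round(pearson(gold, pred), 3))
print(round(spearman(gold, pred), 3))
```

Spearman is the more common headline metric for sentence-embedding models, since a monotone rescaling of the cosine scores should not change a model's ranking quality.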