SOTAVerified

Semantic Textual Similarity

Semantic textual similarity is the task of determining how similar two pieces of text are. This can take the form of assigning a similarity score on a scale from 1 to 5. Related tasks include paraphrase and duplicate identification.
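As an illustrative sketch (not any particular system's method), many STS models embed each sentence as a vector and score the pair by cosine similarity; the toy vectors below stand in for a real encoder's output:

```python
import math

def cosine_similarity(u: list[float], v: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical embeddings: two paraphrases and one unrelated sentence.
emb_a = [0.9, 0.1, 0.2]   # "A man is playing a guitar."
emb_b = [0.8, 0.2, 0.3]   # "Someone plays the guitar."
emb_c = [0.1, 0.9, 0.1]   # "The stock market fell today."

sim_ab = cosine_similarity(emb_a, emb_b)  # high: near-paraphrases
sim_ac = cosine_similarity(emb_a, emb_c)  # low: unrelated topics

# One simple (non-standardized) way to map cosine in [-1, 1] onto the
# 1-to-5 annotation scale for comparison against gold labels:
score_ab = 1 + 4 * (sim_ab + 1) / 2
```

Real systems replace the toy vectors with a trained sentence encoder; the cosine-plus-rescaling step is one common convention, not a fixed standard.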

Image source: Learning Semantic Textual Similarity from Conversations

Papers

Showing 176–200 of 2381 papers

Title | Status | Hype
Fast and Accurate Deep Bidirectional Language Representations for Unsupervised Learning | Code | 1
Efficient Neural Ranking using Forward Indexes | Code | 1
FedSSA: Semantic Similarity-based Aggregation for Efficient Model-Heterogeneous Personalized Federated Learning | Code | 1
Few-Shot Object Detection via Association and DIscrimination | Code | 1
Balancing Lexical and Semantic Quality in Abstractive Summarization | Code | 1
FOCUS: Effective Embedding Initialization for Monolingual Specialization of Multilingual Models | Code | 1
Frequency-driven Imperceptible Adversarial Attack on Semantic Similarity | Code | 1
Generating Natural Language Attacks in a Hard Label Black Box Setting | Code | 1
Global Explainability of BERT-Based Evaluation Metrics by Disentangling along Linguistic Factors | Code | 1
Hard negative examples are hard, but useful | Code | 1
High Temporal Consistency through Semantic Similarity Propagation in Semi-Supervised Video Semantic Segmentation for Autonomous Flight | Code | 1
Histopathology Whole Slide Image Analysis with Heterogeneous Graph Representation Learning | Code | 1
How to Train BERT with an Academic Budget | Code | 1
Improving Contrastive Learning of Sentence Embeddings from AI Feedback | Code | 1
Improving Language Understanding by Generative Pre-Training | Code | 1
Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning | Code | 1
Inv-Entropy: A Fully Probabilistic Framework for Uncertainty Quantification in Language Models | Code | 1
Are Large Language Models Ready for Healthcare? A Comparative Study on Clinical Language Understanding | Code | 1
Just Rank: Rethinking Evaluation with Word and Sentence Similarities | Code | 1
KLUE: Korean Language Understanding Evaluation | Code | 1
KorNLI and KorSTS: New Benchmark Datasets for Korean Natural Language Understanding | Code | 1
Label Noise Reduction in Entity Typing by Heterogeneous Partial-Label Embedding | Code | 1
Language-agnostic BERT Sentence Embedding | Code | 1
Attributable Visual Similarity Learning | Code | 1
Attentive Normalization for Conditional Image Generation | Code | 1
Page 8 of 96

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | SMART-RoBERTa | Dev Pearson Correlation | 92.8 | — | Unverified
2 | DeBERTa (large) | Accuracy | 92.5 | — | Unverified
3 | SMART-BERT | Dev Pearson Correlation | 90 | — | Unverified
4 | MT-DNN-SMART | Pearson Correlation | 0.93 | — | Unverified
5 | StructBERT RoBERTa ensemble | Pearson Correlation | 0.93 | — | Unverified
6 | Mnet-Sim | Pearson Correlation | 0.93 | — | Unverified
7 | XLNet (single model) | Pearson Correlation | 0.93 | — | Unverified
8 | ALBERT | Pearson Correlation | 0.93 | — | Unverified
9 | T5-11B | Pearson Correlation | 0.93 | — | Unverified
10 | RoBERTa | Pearson Correlation | 0.92 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | AnglE-UAE | Spearman Correlation | 84.54 | — | Unverified
2 | ST5-XXL | Spearman Correlation | 82.63 | — | Unverified
3 | ST5-Large | Spearman Correlation | 81.83 | — | Unverified
4 | ST5-XL | Spearman Correlation | 81.66 | — | Unverified
5 | ST5-Base | Spearman Correlation | 81.14 | — | Unverified
6 | MPNet-multilingual | Spearman Correlation | 80.73 | — | Unverified
7 | SGPT-5.8B-nli | Spearman Correlation | 80.53 | — | Unverified
8 | MPNet | Spearman Correlation | 80.28 | — | Unverified
9 | MiniLM-L12 | Spearman Correlation | 79.8 | — | Unverified
10 | SimCSE-BERT-sup | Spearman Correlation | 79.12 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | MT-DNN-SMART | Accuracy | 93.7 | — | Unverified
2 | ALBERT | Accuracy | 93.4 | — | Unverified
3 | RoBERTa (ensemble) | Accuracy | 92.3 | — | Unverified
4 | BigBird | F1 | 91.5 | — | Unverified
5 | StructBERT RoBERTa ensemble | Accuracy | 91.5 | — | Unverified
6 | FLOATER-large | Accuracy | 91.4 | — | Unverified
7 | SMART | Accuracy | 91.3 | — | Unverified
8 | RoBERTa-large 355M (MLP quantized vector-wise, fine-tuned) | Accuracy | 91 | — | Unverified
9 | RoBERTa-large 355M + Entailment as Few-shot Learner | F1 | 91 | — | Unverified
10 | SpanBERT | Accuracy | 90.9 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PromCSE-RoBERTa-large (0.355B) | Spearman Correlation | 0.82 | — | Unverified
2 | PromptEOL+CSE+LLaMA-30B | Spearman Correlation | 0.82 | — | Unverified
3 | PromptEOL+CSE+OPT-13B | Spearman Correlation | 0.82 | — | Unverified
4 | SimCSE-RoBERTa-large | Spearman Correlation | 0.82 | — | Unverified
5 | PromptEOL+CSE+OPT-2.7B | Spearman Correlation | 0.81 | — | Unverified
6 | SentenceBERT | Spearman Correlation | 0.75 | — | Unverified
7 | SRoBERTa-NLI-base | Spearman Correlation | 0.74 | — | Unverified
8 | SRoBERTa-NLI-large | Spearman Correlation | 0.74 | — | Unverified
9 | Dino (STS/🦕) | Spearman Correlation | 0.74 | — | Unverified
10 | SBERT-NLI-large | Spearman Correlation | 0.74 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | AnglE-LLaMA-7B | Spearman Correlation | 0.91 | — | Unverified
2 | AnglE-LLaMA-7B-v2 | Spearman Correlation | 0.91 | — | Unverified
3 | PromptEOL+CSE+LLaMA-30B | Spearman Correlation | 0.9 | — | Unverified
4 | PromptEOL+CSE+OPT-13B | Spearman Correlation | 0.9 | — | Unverified
5 | PromptEOL+CSE+OPT-2.7B | Spearman Correlation | 0.9 | — | Unverified
6 | PromCSE-RoBERTa-large (0.355B) | Spearman Correlation | 0.89 | — | Unverified
7 | Trans-Encoder-BERT-large-bi (unsup.) | Spearman Correlation | 0.89 | — | Unverified
8 | Trans-Encoder-BERT-large-cross (unsup.) | Spearman Correlation | 0.88 | — | Unverified
9 | Trans-Encoder-RoBERTa-large-cross (unsup.) | Spearman Correlation | 0.88 | — | Unverified
10 | SimCSE-RoBERTa-large | Spearman Correlation | 0.87 | — | Unverified