SOTAVerified

Semantic Textual Similarity

Semantic textual similarity deals with determining how similar two pieces of text are. This can take the form of assigning a similarity score, for example from 1 to 5. Related tasks include paraphrase and duplicate identification.
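A typical system maps each sentence to a vector and scores the pair by vector similarity. The leaderboard models below use learned embeddings; the sketch here is only an illustration of the scoring scheme, using bag-of-words counts instead of a trained encoder, with the cosine similarity rescaled onto the 1-5 range mentioned above (function names are illustrative):

```python
from collections import Counter
from math import sqrt

def cosine_similarity(a: Counter, b: Counter) -> float:
    # Dot product over shared terms, normalized by both vector lengths.
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def sts_score(s1: str, s2: str) -> float:
    # Map cosine similarity in [0, 1] onto a 1-5 STS-style scale.
    # A real system would replace the Counter features with sentence
    # embeddings from a model such as those in the tables below.
    sim = cosine_similarity(Counter(s1.lower().split()), Counter(s2.lower().split()))
    return 1.0 + 4.0 * sim
```

Identical sentences score 5, sentences with no shared words score 1; graded human judgments fall in between.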

Image source: Learning Semantic Textual Similarity from Conversations

Papers

Showing 226–250 of 2381 papers

| Title | Status | Hype |
|---|---|---|
| RoMe: A Robust Metric for Evaluating Natural Language Generation | Code | 1 |
| DrBenchmark: A Large Language Understanding Evaluation Benchmark for French Biomedical Domain | Code | 1 |
| Do Vision and Language Encoders Represent the World Similarly? | Code | 1 |
| DriveDiTFit: Fine-tuning Diffusion Transformers for Autonomous Driving | Code | 1 |
| SAMScore: A Content Structural Similarity Metric for Image Translation Evaluation | Code | 1 |
| R&R: Metric-guided Adversarial Sentence Generation | Code | 1 |
| A Statistical Framework for Low-bitwidth Training of Deep Neural Networks | Code | 1 |
| AstroCLIP: A Cross-Modal Foundation Model for Galaxies | Code | 1 |
| EASE: Entity-Aware Contrastive Learning of Sentence Embedding | Code | 1 |
| A Deep Reinforced Model for Zero-Shot Cross-Lingual Summarization with Bilingual Semantic Similarity Rewards | Code | 1 |
| Efficient Mask Correction for Click-Based Interactive Image Segmentation | Code | 1 |
| Self-Supervised Document Similarity Ranking via Contextualized Language Models and Hierarchical Inference | Code | 1 |
| ELITE: Embedding-Less retrieval with Iterative Text Exploration | Code | 1 |
| Semantic Pyramid for Image Generation | Code | 1 |
| Encoding Surgical Videos as Latent Spatiotemporal Graphs for Object and Anatomy-Driven Reasoning | Code | 1 |
| Entailment as Few-Shot Learner | Code | 1 |
| Entity Concept-enhanced Few-shot Relation Extraction | Code | 1 |
| Attentive Normalization for Conditional Image Generation | Code | 1 |
| SemEval-2024 Task 1: Semantic Textual Relatedness for African and Asian Languages | Code | 1 |
| AMR-DA: Data Augmentation by Abstract Meaning Representation | Code | 1 |
| Catch-A-Waveform: Learning to Generate Audio from a Single Short Example | Code | 1 |
| Evaluating Multimodal Representations on Visual Semantic Textual Similarity | Code | 1 |
| Sentence-T5: Scalable Sentence Encoders from Pre-trained Text-to-Text Models | Code | 1 |
| Explainable Legal Case Matching via Inverse Optimal Transport-based Rationale Extraction | Code | 1 |
| Few-Shot Image Classification Benchmarks are Too Far From Reality: Build Back Better with Semantic Task Sampling | Code | 1 |
Page 10 of 96

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SMART-RoBERTa | Dev Pearson Correlation | 92.8 | | Unverified |
| 2 | DeBERTa (large) | Accuracy | 92.5 | | Unverified |
| 3 | SMART-BERT | Dev Pearson Correlation | 90 | | Unverified |
| 4 | MT-DNN-SMART | Pearson Correlation | 0.93 | | Unverified |
| 5 | StructBERT RoBERTa ensemble | Pearson Correlation | 0.93 | | Unverified |
| 6 | Mnet-Sim | Pearson Correlation | 0.93 | | Unverified |
| 7 | XLNet (single model) | Pearson Correlation | 0.93 | | Unverified |
| 8 | ALBERT | Pearson Correlation | 0.93 | | Unverified |
| 9 | T5-11B | Pearson Correlation | 0.93 | | Unverified |
| 10 | RoBERTa | Pearson Correlation | 0.92 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | AnglE-UAE | Spearman Correlation | 84.54 | | Unverified |
| 2 | ST5-XXL | Spearman Correlation | 82.63 | | Unverified |
| 3 | ST5-Large | Spearman Correlation | 81.83 | | Unverified |
| 4 | ST5-XL | Spearman Correlation | 81.66 | | Unverified |
| 5 | ST5-Base | Spearman Correlation | 81.14 | | Unverified |
| 6 | MPNet-multilingual | Spearman Correlation | 80.73 | | Unverified |
| 7 | SGPT-5.8B-nli | Spearman Correlation | 80.53 | | Unverified |
| 8 | MPNet | Spearman Correlation | 80.28 | | Unverified |
| 9 | MiniLM-L12 | Spearman Correlation | 79.8 | | Unverified |
| 10 | SimCSE-BERT-sup | Spearman Correlation | 79.12 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MT-DNN-SMART | Accuracy | 93.7 | | Unverified |
| 2 | ALBERT | Accuracy | 93.4 | | Unverified |
| 3 | RoBERTa (ensemble) | Accuracy | 92.3 | | Unverified |
| 4 | BigBird | F1 | 91.5 | | Unverified |
| 5 | StructBERT RoBERTa ensemble | Accuracy | 91.5 | | Unverified |
| 6 | FLOATER-large | Accuracy | 91.4 | | Unverified |
| 7 | SMART | Accuracy | 91.3 | | Unverified |
| 8 | RoBERTa-large 355M (MLP quantized vector-wise, fine-tuned) | Accuracy | 91 | | Unverified |
| 9 | RoBERTa-large 355M + Entailment as Few-shot Learner | F1 | 91 | | Unverified |
| 10 | SpanBERT | Accuracy | 90.9 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | PromCSE-RoBERTa-large (0.355B) | Spearman Correlation | 0.82 | | Unverified |
| 2 | PromptEOL+CSE+LLaMA-30B | Spearman Correlation | 0.82 | | Unverified |
| 3 | PromptEOL+CSE+OPT-13B | Spearman Correlation | 0.82 | | Unverified |
| 4 | SimCSE-RoBERTa-large | Spearman Correlation | 0.82 | | Unverified |
| 5 | PromptEOL+CSE+OPT-2.7B | Spearman Correlation | 0.81 | | Unverified |
| 6 | SentenceBERT | Spearman Correlation | 0.75 | | Unverified |
| 7 | SRoBERTa-NLI-base | Spearman Correlation | 0.74 | | Unverified |
| 8 | SRoBERTa-NLI-large | Spearman Correlation | 0.74 | | Unverified |
| 9 | Dino (STS/🦕) | Spearman Correlation | 0.74 | | Unverified |
| 10 | SBERT-NLI-large | Spearman Correlation | 0.74 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | AnglE-LLaMA-7B | Spearman Correlation | 0.91 | | Unverified |
| 2 | AnglE-LLaMA-7B-v2 | Spearman Correlation | 0.91 | | Unverified |
| 3 | PromptEOL+CSE+LLaMA-30B | Spearman Correlation | 0.9 | | Unverified |
| 4 | PromptEOL+CSE+OPT-13B | Spearman Correlation | 0.9 | | Unverified |
| 5 | PromptEOL+CSE+OPT-2.7B | Spearman Correlation | 0.9 | | Unverified |
| 6 | PromCSE-RoBERTa-large (0.355B) | Spearman Correlation | 0.89 | | Unverified |
| 7 | Trans-Encoder-BERT-large-bi (unsup.) | Spearman Correlation | 0.89 | | Unverified |
| 8 | Trans-Encoder-BERT-large-cross (unsup.) | Spearman Correlation | 0.88 | | Unverified |
| 9 | Trans-Encoder-RoBERTa-large-cross (unsup.) | Spearman Correlation | 0.88 | | Unverified |
| 10 | SimCSE-RoBERTa-large | Spearman Correlation | 0.87 | | Unverified |
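The Pearson and Spearman correlations reported above measure agreement between a system's similarity scores and human judgments. Both can be sketched in pure Python; note that the simple rank step below ignores ties, which standard implementations such as `scipy.stats.spearmanr` handle via average ranks:

```python
from statistics import mean

def pearson(xs, ys):
    # Covariance of x and y divided by the product of their standard deviations.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def spearman(xs, ys):
    # Pearson correlation of the rank-transformed values
    # (naive ranking; no tie handling in this sketch).
    def ranks(vs):
        order = sorted(range(len(vs)), key=vs.__getitem__)
        r = [0.0] * len(vs)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(xs), ranks(ys))
```

Pearson rewards a linear relationship between scores and gold labels, while Spearman only requires a monotonic one, which is why the two leaderboard metrics can rank the same models differently.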