SOTAVerified

Semantic Textual Similarity

Semantic textual similarity deals with determining how similar two pieces of text are. This can take the form of assigning a score from 1 to 5. Related tasks include paraphrase and duplicate identification.

Image source: Learning Semantic Textual Similarity from Conversations

Papers

Showing 51–75 of 2381 papers

Title | Status | Hype
Contrastive Prompting Enhances Sentence Embeddings in LLMs through Inference-Time Steering | Code | 0
Efficient Heuristics Generation for Solving Combinatorial Optimization Problems Using Large Language Models | Code | 0
One-Step Offline Distillation of Diffusion-based Models via Koopman Modeling | Code | 1
Community Search in Time-dependent Road-social Attributed Networks | – | 0
Fine-Grained ECG-Text Contrastive Learning via Waveform Understanding Enhancement | – | 0
ELITE: Embedding-Less retrieval with Iterative Text Exploration | Code | 1
Temporally-Grounded Language Generation: A Benchmark for Real-Time Vision-Language Models | Code | 0
Evaluations at Work: Measuring the Capabilities of GenAI in Use | – | 0
AI-enhanced semantic feature norms for 786 concepts | – | 0
FlowDreamer: A RGB-D World Model with Flow-based Motion Representations for Robot Manipulation | – | 0
LDIR: Low-Dimensional Dense and Interpretable Text Embeddings with Relative Representations | Code | 0
Towards Automated Situation Awareness: A RAG-Based Framework for Peacebuilding Reports | – | 0
A 2D Semantic-Aware Position Encoding for Vision Transformers | – | 0
TrialMatchAI: An End-to-End AI-powered Clinical Trial Recommendation System to Streamline Patient-to-Trial Matching | – | 0
Are LLMs complicated ethical dilemma analyzers? | Code | 0
Hypernym Mercury: Token Optimization Through Semantic Field Constriction And Reconstruction From Hypernyms. A New Text Compression Method | – | 0
Concept-Level Explainability for Auditing & Steering LLM Responses | Code | 0
Jailbreaking the Text-to-Video Generative Models | – | 0
Estimating Quality in Therapeutic Conversations: A Multi-Dimensional Natural Language Processing Framework | – | 0
Sparse Attention Remapping with Clustering for Efficient LLM Decoding on PIM | – | 0
Stealthy LLM-Driven Data Poisoning Attacks Against Embedding-Based Retrieval-Augmented Recommender Systems | – | 0
R&B: Domain Regrouping and Data Mixture Balancing for Efficient Foundation Model Training | – | 0
Retrieval-Enhanced Few-Shot Prompting for Speech Event Extraction | – | 0
Homa at SemEval-2025 Task 5: Aligning Librarian Records with OntoAligner for Subject Tagging | – | 0
20min-XD: A Comparable Corpus of Swiss News Articles | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | SMARTRoBERTa | Dev Pearson Correlation | 92.8 | – | Unverified
2 | DeBERTa (large) | Accuracy | 92.5 | – | Unverified
3 | SMART-BERT | Dev Pearson Correlation | 90 | – | Unverified
4 | MT-DNN-SMART | Pearson Correlation | 0.93 | – | Unverified
5 | StructBERTRoBERTa ensemble | Pearson Correlation | 0.93 | – | Unverified
6 | Mnet-Sim | Pearson Correlation | 0.93 | – | Unverified
7 | XLNet (single model) | Pearson Correlation | 0.93 | – | Unverified
8 | T5-11B | Pearson Correlation | 0.93 | – | Unverified
9 | ALBERT | Pearson Correlation | 0.93 | – | Unverified
10 | RoBERTa | Pearson Correlation | 0.92 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | AnglE-UAE | Spearman Correlation | 84.54 | – | Unverified
2 | ST5-XXL | Spearman Correlation | 82.63 | – | Unverified
3 | ST5-Large | Spearman Correlation | 81.83 | – | Unverified
4 | ST5-XL | Spearman Correlation | 81.66 | – | Unverified
5 | ST5-Base | Spearman Correlation | 81.14 | – | Unverified
6 | MPNet-multilingual | Spearman Correlation | 80.73 | – | Unverified
7 | SGPT-5.8B-nli | Spearman Correlation | 80.53 | – | Unverified
8 | MPNet | Spearman Correlation | 80.28 | – | Unverified
9 | MiniLM-L12 | Spearman Correlation | 79.8 | – | Unverified
10 | SimCSE-BERT-sup | Spearman Correlation | 79.12 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | MT-DNN-SMART | Accuracy | 93.7 | – | Unverified
2 | ALBERT | Accuracy | 93.4 | – | Unverified
3 | RoBERTa (ensemble) | Accuracy | 92.3 | – | Unverified
4 | BigBird | F1 | 91.5 | – | Unverified
5 | StructBERTRoBERTa ensemble | Accuracy | 91.5 | – | Unverified
6 | FLOATER-large | Accuracy | 91.4 | – | Unverified
7 | SMART | Accuracy | 91.3 | – | Unverified
8 | RoBERTa-large 355M (MLP quantized vector-wise, fine-tuned) | Accuracy | 91 | – | Unverified
9 | RoBERTa-large 355M + Entailment as Few-shot Learner | F1 | 91 | – | Unverified
10 | SpanBERT | Accuracy | 90.9 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PromCSE-RoBERTa-large (0.355B) | Spearman Correlation | 0.82 | – | Unverified
2 | PromptEOL+CSE+LLaMA-30B | Spearman Correlation | 0.82 | – | Unverified
3 | PromptEOL+CSE+OPT-13B | Spearman Correlation | 0.82 | – | Unverified
4 | SimCSE-RoBERTa-large | Spearman Correlation | 0.82 | – | Unverified
5 | PromptEOL+CSE+OPT-2.7B | Spearman Correlation | 0.81 | – | Unverified
6 | SentenceBERT | Spearman Correlation | 0.75 | – | Unverified
7 | SRoBERTa-NLI-base | Spearman Correlation | 0.74 | – | Unverified
8 | SRoBERTa-NLI-large | Spearman Correlation | 0.74 | – | Unverified
9 | Dino (STS 🦕) | Spearman Correlation | 0.74 | – | Unverified
10 | SBERT-NLI-large | Spearman Correlation | 0.74 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | AnglE-LLaMA-7B | Spearman Correlation | 0.91 | – | Unverified
2 | AnglE-LLaMA-7B-v2 | Spearman Correlation | 0.91 | – | Unverified
3 | PromptEOL+CSE+LLaMA-30B | Spearman Correlation | 0.9 | – | Unverified
4 | PromptEOL+CSE+OPT-13B | Spearman Correlation | 0.9 | – | Unverified
5 | PromptEOL+CSE+OPT-2.7B | Spearman Correlation | 0.9 | – | Unverified
6 | PromCSE-RoBERTa-large (0.355B) | Spearman Correlation | 0.89 | – | Unverified
7 | Trans-Encoder-BERT-large-bi (unsup.) | Spearman Correlation | 0.89 | – | Unverified
8 | Trans-Encoder-BERT-large-cross (unsup.) | Spearman Correlation | 0.88 | – | Unverified
9 | Trans-Encoder-RoBERTa-large-cross (unsup.) | Spearman Correlation | 0.88 | – | Unverified
10 | SimCSE-RoBERTa-large | Spearman Correlation | 0.87 | – | Unverified
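The leaderboards above score models by Pearson correlation (linear agreement between predicted and gold similarity scores) or Spearman correlation (agreement on the ordering of pairs). A minimal pure-Python sketch of both metrics follows; the `gold` and `pred` values are made-up toy data, and the Spearman implementation skips tie correction, which is fine here because all values are distinct.

```python
import math

def pearson(x, y):
    """Pearson correlation: linear agreement between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman correlation: Pearson on the ranks, so only ordering matters.
    (No tie correction -- adequate for toy data with distinct values.)"""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))

gold = [5.0, 4.2, 3.1, 1.5, 0.2]       # hypothetical human annotations (1-5 scale extended to 0)
pred = [0.95, 0.80, 0.60, 0.30, 0.05]  # hypothetical model cosine similarities
print(round(pearson(gold, pred), 3))
print(round(spearman(gold, pred), 3))  # 1.0: the two orderings match exactly
```

Because Spearman only depends on rank order, it is invariant to any monotonic rescaling of a model's scores, which is why many of the embedding leaderboards above report it rather than Pearson.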