SOTAVerified

Semantic Textual Similarity

Semantic textual similarity deals with determining how similar two pieces of text are. This can take the form of assigning a similarity score, for example on a scale from 1 to 5. Related tasks are paraphrase and duplicate identification.
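A common approach (a generic illustration, not the method of any particular system listed below) is to embed each sentence as a vector and map the cosine similarity of the two vectors onto the 1-5 scale. A minimal sketch, using toy bag-of-words counts in place of a learned sentence encoder:

```python
import math
from collections import Counter

def bow_vector(sentence):
    # Toy "embedding": raw token counts. Real systems use learned encoders
    # (e.g. sentence-transformer models), not bag-of-words.
    return Counter(sentence.lower().split())

def cosine(u, v):
    # Cosine similarity between two sparse count vectors.
    dot = sum(u[t] * v[t] for t in u)
    norm_u = math.sqrt(sum(c * c for c in u.values()))
    norm_v = math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def sts_score(s1, s2):
    # Map cosine similarity in [0, 1] onto the 1-5 STS scale.
    return 1 + 4 * cosine(bow_vector(s1), bow_vector(s2))

print(round(sts_score("a cat sits on the mat", "a cat sits on the mat"), 2))  # -> 5.0
```

Identical sentences score 5, sentences with no shared tokens score 1; everything in between depends on the quality of the embedding, which is what the models on this page compete on.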

Image source: Learning Semantic Textual Similarity from Conversations

Papers

Showing 151–200 of 2381 papers

| Title | Status | Hype |
|---|---|---|
| Efficient Neural Ranking using Forward Indexes | Code | 1 |
| A Simple Long-Tailed Recognition Baseline via Vision-Language Model | Code | 1 |
| DrBenchmark: A Large Language Understanding Evaluation Benchmark for French Biomedical Domain | Code | 1 |
| Few-Shot Class-Incremental Learning via Training-Free Prototype Calibration | Code | 1 |
| DriveDiTFit: Fine-tuning Diffusion Transformers for Autonomous Driving | Code | 1 |
| FNet: Mixing Tokens with Fourier Transforms | Code | 1 |
| Binary Code Summarization: Benchmarking ChatGPT/GPT-4 and Other Large Language Models | Code | 1 |
| Big Bird: Transformers for Longer Sequences | Code | 1 |
| Global Explainability of BERT-Based Evaluation Metrics by Disentangling along Linguistic Factors | Code | 1 |
| Graph-based Semantical Extractive Text Analysis | Code | 1 |
| High Temporal Consistency through Semantic Similarity Propagation in Semi-Supervised Video Semantic Segmentation for Autonomous Flight | Code | 1 |
| HiHPQ: Hierarchical Hyperbolic Product Quantization for Unsupervised Image Retrieval | Code | 1 |
| Calibrating Higher-Order Statistics for Few-Shot Class-Incremental Learning with Pre-trained Vision Transformers | Code | 1 |
| CALM : A Multi-task Benchmark for Comprehensive Assessment of Language Model Bias | Code | 1 |
| PatentSBERTa: A Deep NLP based Hybrid Model for Patent Distance and Classification using Augmented SBERT | Code | 1 |
| Improving Contrastive Learning of Sentence Embeddings from AI Feedback | Code | 1 |
| A Statistical Framework for Low-bitwidth Training of Deep Neural Networks | Code | 1 |
| A Sentence is Worth 128 Pseudo Tokens: A Semantic-Aware Contrastive Learning Framework for Sentence Embeddings | Code | 1 |
| Catch-A-Waveform: Learning to Generate Audio from a Single Short Example | Code | 1 |
| An Unsupervised Sentence Embedding Method by Mutual Information Maximization | Code | 1 |
| CgAT: Center-Guided Adversarial Training for Deep Hashing-Based Retrieval | Code | 1 |
| CDF-RAG: Causal Dynamic Feedback for Adaptive Retrieval-Augmented Generation | Code | 1 |
| Cue-CoT: Chain-of-thought Prompting for Responding to In-depth Dialogue Questions with LLMs | Code | 1 |
| Charformer: Fast Character Transformers via Gradient-based Subword Tokenization | Code | 1 |
| CODER: Knowledge infused cross-lingual medical term embedding for term normalization | Code | 1 |
| Class-relation Knowledge Distillation for Novel Class Discovery | Code | 1 |
| Clustering-Aware Negative Sampling for Unsupervised Sentence Representation | Code | 1 |
| CmdCaliper: A Semantic-Aware Command-Line Embedding Model and Dataset for Security Research | Code | 1 |
| Compositional Evaluation on Japanese Textual Entailment and Similarity | Code | 1 |
| KorNLI and KorSTS: New Benchmark Datasets for Korean Natural Language Understanding | Code | 1 |
| Label Noise Reduction in Entity Typing by Heterogeneous Partial-Label Embedding | Code | 1 |
| Language-agnostic BERT Sentence Embedding | Code | 1 |
| Do Vision and Language Encoders Represent the World Similarly? | Code | 1 |
| ComStreamClust: a communicative multi-agent approach to text clustering in streaming data | Code | 1 |
| Dual-Modal Attention-Enhanced Text-Video Retrieval with Triplet Partial Margin Contrastive Learning | Code | 1 |
| Linked Credibility Reviews for Explainable Misinformation Detection | Code | 1 |
| ConSERT: A Contrastive Framework for Self-Supervised Sentence Representation Transfer | Code | 1 |
| AutoKG: Efficient Automated Knowledge Graph Generation for Language Models | Code | 1 |
| Context Compression for Auto-regressive Transformers with Sentinel Tokens | Code | 1 |
| Context-Aware Semantic Similarity Measurement for Unsupervised Word Sense Disambiguation | Code | 1 |
| Towards Better Understanding of User Satisfaction in Open-Domain Conversational Search | Code | 1 |
| Distinguish Confusion in Legal Judgment Prediction via Revised Relation Knowledge | Code | 1 |
| Are Large Language Models Ready for Healthcare? A Comparative Study on Clinical Language Understanding | Code | 1 |
| Reconstruct Your Previous Conversations! Comprehensively Investigating Privacy Leakage Risks in Conversations with GPT Models | Code | 1 |
| A Semantic-based Method for Unsupervised Commonsense Question Answering | Code | 1 |
| MENLI: Robust Evaluation Metrics from Natural Language Inference | Code | 1 |
| Mining Gaze for Contrastive Learning toward Computer-Assisted Diagnosis | Code | 1 |
| Mitigating Hallucinations in Large Vision-Language Models by Adaptively Constraining Information Flow | Code | 1 |
| A large-scale computational study of content preservation measures for text style transfer and paraphrase generation | Code | 1 |
| Bootstrapped Unsupervised Sentence Representation Learning | Code | 1 |
Page 4 of 48

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SMARTRoBERTa | Dev Pearson Correlation | 92.8 | | Unverified |
| 2 | DeBERTa (large) | Accuracy | 92.5 | | Unverified |
| 3 | SMART-BERT | Dev Pearson Correlation | 90 | | Unverified |
| 4 | MT-DNN-SMART | Pearson Correlation | 0.93 | | Unverified |
| 5 | StructBERTRoBERTa ensemble | Pearson Correlation | 0.93 | | Unverified |
| 6 | Mnet-Sim | Pearson Correlation | 0.93 | | Unverified |
| 7 | XLNet (single model) | Pearson Correlation | 0.93 | | Unverified |
| 8 | ALBERT | Pearson Correlation | 0.93 | | Unverified |
| 9 | T5-11B | Pearson Correlation | 0.93 | | Unverified |
| 10 | RoBERTa | Pearson Correlation | 0.92 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | AnglE-UAE | Spearman Correlation | 84.54 | | Unverified |
| 2 | ST5-XXL | Spearman Correlation | 82.63 | | Unverified |
| 3 | ST5-Large | Spearman Correlation | 81.83 | | Unverified |
| 4 | ST5-XL | Spearman Correlation | 81.66 | | Unverified |
| 5 | ST5-Base | Spearman Correlation | 81.14 | | Unverified |
| 6 | MPNet-multilingual | Spearman Correlation | 80.73 | | Unverified |
| 7 | SGPT-5.8B-nli | Spearman Correlation | 80.53 | | Unverified |
| 8 | MPNet | Spearman Correlation | 80.28 | | Unverified |
| 9 | MiniLM-L12 | Spearman Correlation | 79.8 | | Unverified |
| 10 | SimCSE-BERT-sup | Spearman Correlation | 79.12 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MT-DNN-SMART | Accuracy | 93.7 | | Unverified |
| 2 | ALBERT | Accuracy | 93.4 | | Unverified |
| 3 | RoBERTa (ensemble) | Accuracy | 92.3 | | Unverified |
| 4 | BigBird | F1 | 91.5 | | Unverified |
| 5 | StructBERTRoBERTa ensemble | Accuracy | 91.5 | | Unverified |
| 6 | FLOATER-large | Accuracy | 91.4 | | Unverified |
| 7 | SMART | Accuracy | 91.3 | | Unverified |
| 8 | RoBERTa-large 355M (MLP quantized vector-wise, fine-tuned) | Accuracy | 91 | | Unverified |
| 9 | RoBERTa-large 355M + Entailment as Few-shot Learner | F1 | 91 | | Unverified |
| 10 | SpanBERT | Accuracy | 90.9 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | PromCSE-RoBERTa-large (0.355B) | Spearman Correlation | 0.82 | | Unverified |
| 2 | PromptEOL+CSE+LLaMA-30B | Spearman Correlation | 0.82 | | Unverified |
| 3 | PromptEOL+CSE+OPT-13B | Spearman Correlation | 0.82 | | Unverified |
| 4 | SimCSE-RoBERTa-large | Spearman Correlation | 0.82 | | Unverified |
| 5 | PromptEOL+CSE+OPT-2.7B | Spearman Correlation | 0.81 | | Unverified |
| 6 | SentenceBERT | Spearman Correlation | 0.75 | | Unverified |
| 7 | SRoBERTa-NLI-base | Spearman Correlation | 0.74 | | Unverified |
| 8 | SRoBERTa-NLI-large | Spearman Correlation | 0.74 | | Unverified |
| 9 | Dino (STS/🦕) | Spearman Correlation | 0.74 | | Unverified |
| 10 | SBERT-NLI-large | Spearman Correlation | 0.74 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | AnglE-LLaMA-7B | Spearman Correlation | 0.91 | | Unverified |
| 2 | AnglE-LLaMA-7B-v2 | Spearman Correlation | 0.91 | | Unverified |
| 3 | PromptEOL+CSE+LLaMA-30B | Spearman Correlation | 0.9 | | Unverified |
| 4 | PromptEOL+CSE+OPT-13B | Spearman Correlation | 0.9 | | Unverified |
| 5 | PromptEOL+CSE+OPT-2.7B | Spearman Correlation | 0.9 | | Unverified |
| 6 | PromCSE-RoBERTa-large (0.355B) | Spearman Correlation | 0.89 | | Unverified |
| 7 | Trans-Encoder-BERT-large-bi (unsup.) | Spearman Correlation | 0.89 | | Unverified |
| 8 | Trans-Encoder-BERT-large-cross (unsup.) | Spearman Correlation | 0.88 | | Unverified |
| 9 | Trans-Encoder-RoBERTa-large-cross (unsup.) | Spearman Correlation | 0.88 | | Unverified |
| 10 | SimCSE-RoBERTa-large | Spearman Correlation | 0.87 | | Unverified |
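The tables above report Pearson and Spearman correlation between a system's predicted similarity scores and human gold ratings. A self-contained sketch of both metrics, evaluated on hypothetical toy data (the `gold`/`pred` values below are made up for illustration):

```python
import math

def pearson(x, y):
    # Pearson r: covariance normalized by the product of standard deviations.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    # Spearman rho: Pearson correlation of the rank-transformed values.
    # (Ties are not averaged here; fine for distinct values.)
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))

gold = [4.5, 1.0, 3.2, 2.8, 5.0]   # hypothetical human ratings (1-5 scale)
pred = [4.1, 1.5, 3.0, 3.1, 4.8]   # hypothetical system scores

print(round(pearson(gold, pred), 3))   # -> 0.989
print(round(spearman(gold, pred), 3))  # -> 0.9
```

Pearson rewards linear agreement with the exact gold values, while Spearman only cares about ranking the pairs in the right order, which is why leaderboards often report one or the other depending on the benchmark.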