SOTAVerified

MRPC

Papers

Showing 1–25 of 30 papers

Title | Status | Hype
Abstract Meaning Representation-Based Logic-Driven Data Augmentation for Logical Reasoning | Code | 1
SupCL-Seq: Supervised Contrastive Learning for Downstream Optimized Sequence Representations | Code | 1
Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning | Code | 1
BET: A Backtranslation Approach for Easy Data Augmentation in Transformer-based Paraphrase Identification Context | Code | 1
Catastrophic Forgetting in LLMs: A Comparative Analysis Across Language Tasks | | 0
Exploring RWKV for Sentence Embeddings: Layer-wise Analysis and Baseline Comparison for Semantic Similarity | Code | 0
Generating Synthetic Datasets for Few-shot Prompt Tuning | | 0
Unlocking the Global Synergies in Low-Rank Adapters | | 0
A General and Flexible Multi-concept Parsing Framework for Multilingual Semantic Matching | | 0
Empirical Analysis of Efficient Fine-Tuning Methods for Large Pre-Trained Language Models | | 0
DACBERT: Leveraging Dependency Agreement for Cost-Efficient Bert Pretraining | | 0
MerA: Merging Pretrained Adapters For Few-Shot Learning | | 0
Gradient-Based Word Substitution for Obstinate Adversarial Examples Generation in Language Models | | 0
Typhoon: Towards an Effective Task-Specific Masking Strategy for Pre-trained Language Models | | 0
Enhancing Text Generation with Cooperative Training | Code | 0
CKG: Dynamic Representation Based on Context and Knowledge Graph | | 0
Enhancing Task-Specific Distillation in Small Data Regimes through Language Generation | | 0
An Automatic and Efficient BERT Pruning for Edge AI Systems | | 0
LM-BFF-MS: Improving Few-Shot Fine-tuning of Language Models based on Multiple Soft Demonstration Memory | Code | 0
Towards Better Characterization of Paraphrases | Code | 0
DRONE: Data-aware Low-rank Compression for Large NLP Models | | 0
Efficient Multi-Task Auxiliary Learning: Selecting Auxiliary Data by Feature Similarity | Code | 0
Assessing the Eligibility of Backtranslated Samples Based on Semantic Similarity for the Paraphrase Identification Task | | 0
Data-aware Low-Rank Compression for Large NLP Models | | 0
Page 1 of 2

No leaderboard results yet.