SOTAVerified

QNLI

Papers

Showing 1–19 of 19 papers

Title | Status | Hype
How to Distill your BERT: An Empirical Study on the Impact of Weight Initialisation and Distillation Objectives | Code | 1
Abstract Meaning Representation-Based Logic-Driven Data Augmentation for Logical Reasoning | Code | 1
Enhancing LLM Robustness to Perturbed Instructions: An Empirical Study | Code | 0
Privacy-preserving Fine-tuning of Large Language Models through Flatness | - | 0
Here's a Free Lunch: Sanitizing Backdoored Models with Model Merge | Code | 0
NewsQs: Multi-Source Question Generation for the Inquiring Mind | - | 0
Sensi-BERT: Towards Sensitivity Driven Fine-Tuning for Parameter-Efficient BERT | - | 0
Meta-training with Demonstration Retrieval for Efficient Few-shot Learning | - | 0
Two-in-One: A Model Hijacking Attack Against Text Generation Models | - | 0
Few-shot Multimodal Multitask Multilingual Learning | - | 0
An Automatic and Efficient BERT Pruning for Edge AI Systems | - | 0
Learning Rate Curriculum | Code | 0
EnCBP: A New Benchmark Dataset for Finer-Grained Cultural Background Prediction in English | - | 0
DAWSON: Data Augmentation using Weak Supervision On Natural Language | - | 0
How effective is BERT without word ordering? Implications for language understanding and data privacy | - | 0
KI-BERT: Infusing Knowledge Context for Better Language and Domain Understanding | - | 0
Tell Me How to Ask Again: Question Data Augmentation with Controllable Rewriting in Continuous Space | Code | 0
Margin-Based Regularization and Selective Sampling in Deep Neural Networks | - | 0
On the Importance of Local Information in Transformer Based Models | - | 0

No leaderboard results yet.