SOTAVerified

QNLI

Papers

Showing 1–10 of 19 papers

Title | Status | Hype
How to Distill your BERT: An Empirical Study on the Impact of Weight Initialisation and Distillation Objectives | Code | 1
Abstract Meaning Representation-Based Logic-Driven Data Augmentation for Logical Reasoning | Code | 1
EnCBP: A New Benchmark Dataset for Finer-Grained Cultural Background Prediction in English | — | 0
Few-shot Multimodal Multitask Multilingual Learning | — | 0
Two-in-One: A Model Hijacking Attack Against Text Generation Models | — | 0
DAWSON: Data Augmentation using Weak Supervision On Natural Language | — | 0
On the Importance of Local Information in Transformer Based Models | — | 0
Privacy-preserving Fine-tuning of Large Language Models through Flatness | — | 0
Sensi-BERT: Towards Sensitivity Driven Fine-Tuning for Parameter-Efficient BERT | — | 0
An Automatic and Efficient BERT Pruning for Edge AI Systems | — | 0

No leaderboard results yet.