
SST-2

Papers

Showing 26–50 of 66 papers

Title | Status | Hype
Don't Retrain, Just Rewrite: Countering Adversarial Perturbations by Rewriting Text | - | 0
From Shortcuts to Triggers: Backdoor Defense with Denoised PoE | Code | 0
Text Classification via Large Language Models | Code | 1
Two-in-One: A Model Hijacking Attack Against Text Generation Models | - | 0
Masked Language Model Based Textual Adversarial Example Detection | Code | 0
TrojText: Test-time Invisible Textual Trojan Insertion | Code | 0
Few-shot Multimodal Multitask Multilingual Learning | - | 0
Bag of Tricks for Effective Language Model Pretraining and Downstream Adaptation: A Case Study on GLUE | - | 0
BDMMT: Backdoor Sample Detection for Language Models through Model Mutation Testing | - | 0
Adaptive Deep Neural Network Inference Optimization with EENet | Code | 1
ScaleFL: Resource-Adaptive Federated Learning With Heterogeneous Clients | Code | 1
RPN: A Word Vector Level Data Augmentation Algorithm in Deep Learning for Language Understanding | Code | 0
Uncertainty Sentence Sampling by Virtual Adversarial Perturbation | - | 0
Are Sample-Efficient NLP Models More Robust? | - | 0
Enhancing Task-Specific Distillation in Small Data Regimes through Language Generation | - | 0
ELECTRA is a Zero-Shot Learner, Too | Code | 0
Improving the Adversarial Robustness of NLP Models by Information Bottleneck | Code | 0
Leveraging QA Datasets to Improve Generative Data Augmentation | Code | 0
LM-BFF-MS: Improving Few-Shot Fine-tuning of Language Models based on Multiple Soft Demonstration Memory | Code | 0
TangoBERT: Reducing Inference Cost by using Cascaded Architecture | - | 0
A Generative Language Model for Few-shot Aspect-Based Sentiment Analysis | Code | 1
EnCBP: A New Benchmark Dataset for Finer-Grained Cultural Background Prediction in English | - | 0
Generating Training Data with Language Models: Towards Zero-Shot Language Understanding | Code | 1
How Emotionally Stable is ALBERT? Testing Robustness with Stochastic Weight Averaging on a Sentiment Analysis Task | Code | 0
DAWSON: Data Augmentation using Weak Supervision On Natural Language | - | 0
Page 2 of 3

No leaderboard results yet.