SOTAVerified

Adversarial Robustness

Adversarial robustness measures how well a machine learning model maintains its performance when its inputs are deliberately perturbed by an adversary, across a variety of attack types.
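As a concrete illustration of the kind of attack these papers defend against, below is a minimal sketch of the Fast Gradient Sign Method (FGSM) applied to a toy logistic-regression model. It is not taken from any paper on this page; the function name and toy weights are illustrative.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Fast Gradient Sign Method (FGSM) against a logistic-regression model.

    The gradient of the binary cross-entropy loss w.r.t. the input x is
    (sigmoid(w.x + b) - y) * w; the attack steps eps in its sign direction,
    producing an L_inf-bounded adversarial example.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # predicted probability of class 1
    grad = (p - y) * w                       # dLoss/dx for cross-entropy loss
    return x + eps * np.sign(grad)           # perturbation bounded by eps

# Toy model: the decision depends only on the first feature.
w = np.array([2.0, 0.0])
b = 0.0
x = np.array([0.2, 0.5])                     # clean input, true label 1

x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.5)

clean_pred = int((x @ w + b) > 0)            # 1: correctly classified
adv_pred = int((x_adv @ w + b) > 0)          # 0: the small perturbation flips it
```

A robustness benchmark like the one below then reports accuracy on such perturbed inputs rather than on clean data.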

Papers

Showing 1–10 of 1746 papers (page 1 of 175)

| Title | Status | Hype |
|---|---|---|
| Bridging Robustness and Generalization Against Word Substitution Attacks in NLP via the Growth Bound Matrix Approach | Code | 0 |
| Tail-aware Adversarial Attacks: A Distributional Approach to Efficient LLM Jailbreaking | — | 0 |
| Rectifying Adversarial Sample with Low Entropy Prior for Test-Time Defense | — | 0 |
| Evaluating the Evaluators: Trust in Adversarial Robustness Tests | — | 0 |
| Is Reasoning All You Need? Probing Bias in the Age of Reasoning Language Models | — | 0 |
| NIC-RobustBench: A Comprehensive Open-Source Toolkit for Neural Image Compression and Robustness Analysis | Code | 1 |
| PRISON: Unmasking the Criminal Potential of Large Language Models | — | 0 |
| Intriguing Frequency Interpretation of Adversarial Robustness for CNNs and ViTs | — | 0 |
| NAP-Tuning: Neural Augmented Prompt Tuning for Adversarially Robust Vision-Language Models | — | 0 |
| Canonical Latent Representations in Conditional Diffusion Models | — | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeBERTa (single model) | Accuracy | 0.61 | — | Unverified |
| 2 | ALBERT (single model) | Accuracy | 0.59 | — | Unverified |
| 3 | T5 (single model) | Accuracy | 0.57 | — | Unverified |
| 4 | SMART_RoBERTa (single model) | Accuracy | 0.54 | — | Unverified |
| 5 | FreeLB (single model) | Accuracy | 0.5 | — | Unverified |
| 6 | RoBERTa (single model) | Accuracy | 0.5 | — | Unverified |
| 7 | InfoBERT (single model) | Accuracy | 0.46 | — | Unverified |
| 8 | ELECTRA (single model) | Accuracy | 0.42 | — | Unverified |
| 9 | BERT (single model) | Accuracy | 0.34 | — | Unverified |
| 10 | SMART_BERT (single model) | Accuracy | 0.3 | — | Unverified |