SOTAVerified

Adversarial Robustness

Adversarial Robustness evaluates how well machine learning models withstand adversarial attacks: inputs deliberately perturbed to induce incorrect predictions.
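As a minimal illustration of the kind of attack these evaluations cover (not tied to any specific paper or benchmark below), the fast gradient sign method (FGSM) perturbs an input in the direction of the sign of the loss gradient. A toy sketch on a linear logistic classifier, with all weights and values purely illustrative:

```python
import numpy as np

# Toy FGSM sketch on a linear logistic classifier.
# All names and values here are illustrative, not from any listed paper.
rng = np.random.default_rng(0)
w = rng.normal(size=4)        # fixed classifier weights
x = rng.normal(size=4)        # clean input
y = 1.0                       # true label in {-1, +1}

def loss(x_):
    # logistic loss: log(1 + exp(-y * <w, x>))
    return np.log1p(np.exp(-y * (w @ x_)))

# gradient of the loss with respect to the input
grad_x = -y * w / (1.0 + np.exp(y * (w @ x)))

eps = 0.1
# FGSM step: an L_inf-bounded perturbation of size eps
x_adv = x + eps * np.sign(grad_x)

print(f"clean loss {loss(x):.4f} -> adversarial loss {loss(x_adv):.4f}")
```

For a linear model this step provably increases the loss while keeping every coordinate of the perturbation within eps, which is why FGSM-style attacks are a common baseline in robustness evaluations.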

Papers

Showing 1–25 of 1746 papers

| Title | Status | Hype |
| --- | --- | --- |
| AugLy: Data Augmentations for Robustness | Code | 5 |
| LORE: Lagrangian-Optimized Robust Embeddings for Visual Encoders | Code | 4 |
| Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples | Code | 3 |
| Quantifying the robustness of deep multispectral segmentation models against natural perturbations and data poisoning | Code | 3 |
| Adversarial Robustness Toolbox v1.0.0 | Code | 3 |
| Improving Alignment and Robustness with Circuit Breakers | Code | 3 |
| CLAIMED, a visual and scalable component library for Trusted AI | Code | 2 |
| On Evaluating Adversarial Robustness of Large Vision-Language Models | Code | 2 |
| Authorship Obfuscation in Multilingual Machine-Generated Text Detection | Code | 2 |
| Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints | Code | 2 |
| RAID: A Shared Benchmark for Robust Evaluation of Machine-Generated Text Detectors | Code | 2 |
| MIBench: A Comprehensive Framework for Benchmarking Model Inversion Attack and Defense | Code | 2 |
| A Survey on Deep Neural Network Pruning-Taxonomy, Comparison, Analysis, and Recommendations | Code | 2 |
| ALERT: A Comprehensive Benchmark for Assessing Large Language Models' Safety through Red Teaming | Code | 2 |
| An Unsupervised Approach to Achieve Supervised-Level Explainability in Healthcare Records | Code | 2 |
| One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models | Code | 2 |
| Dissecting Adversarial Robustness of Multimodal LM Agents | Code | 2 |
| Artificial Kuramoto Oscillatory Neurons | Code | 2 |
| Adversarial Attack and Defense in Deep Ranking | Code | 1 |
| GenoArmory: A Unified Evaluation Framework for Adversarial Attacks on Genomic Foundation Models | Code | 1 |
| Adversarial Attack and Defense Strategies for Deep Speaker Recognition Systems | Code | 1 |
| Adversarial Robustification via Text-to-Image Diffusion Models | Code | 1 |
| AdvDrop: Adversarial Attack to DNNs by Dropping Information | Code | 1 |
| Adversarially Robust Distillation | Code | 1 |
| Adversarial Attack on Deep Learning-Based Splice Localization | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | DeBERTa (single model) | Accuracy | 0.61 | | Unverified |
| 2 | ALBERT (single model) | Accuracy | 0.59 | | Unverified |
| 3 | T5 (single model) | Accuracy | 0.57 | | Unverified |
| 4 | SMART_RoBERTa (single model) | Accuracy | 0.54 | | Unverified |
| 5 | FreeLB (single model) | Accuracy | 0.5 | | Unverified |
| 6 | RoBERTa (single model) | Accuracy | 0.5 | | Unverified |
| 7 | InfoBERT (single model) | Accuracy | 0.46 | | Unverified |
| 8 | ELECTRA (single model) | Accuracy | 0.42 | | Unverified |
| 9 | BERT (single model) | Accuracy | 0.34 | | Unverified |
| 10 | SMART_BERT (single model) | Accuracy | 0.3 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Mixed classifier | Accuracy | 95.23 | | Unverified |
| 2 | Stochastic-LWTA/PGD/WideResNet-34-10 | Accuracy | 92.26 | | Unverified |
| 3 | Stochastic-LWTA/PGD/WideResNet-34-5 | Accuracy | 91.88 | | Unverified |
| 4 | GLOT-DR | Accuracy | 84.13 | | Unverified |
| 5 | TRADES-ANCRA/ResNet18 | Accuracy | 81.7 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | ResNet-50 (SGD, Cosine) | Accuracy | 77.4 | | Unverified |
| 2 | ResNet-50 (SGD, Step) | Accuracy | 76.9 | | Unverified |
| 3 | DeiT-S (AdamW, Cosine) | Accuracy | 76.8 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 76.4 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | DeiT-S (AdamW, Cosine) | Accuracy | 12.2 | | Unverified |
| 2 | ResNet-50 (SGD, Cosine) | Accuracy | 3.3 | | Unverified |
| 3 | ResNet-50 (SGD, Step) | Accuracy | 3.2 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 3.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | ResNet-50 (AdamW, Cosine) | mean Corruption Error (mCE) | 59.3 | | Unverified |
| 2 | ResNet-50 (SGD, Step) | mean Corruption Error (mCE) | 57.9 | | Unverified |
| 3 | ResNet-50 (SGD, Cosine) | mean Corruption Error (mCE) | 56.9 | | Unverified |
| 4 | DeiT-S (AdamW, Cosine) | mean Corruption Error (mCE) | 48 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | DeiT-S (AdamW, Cosine) | Accuracy | 13 | | Unverified |
| 2 | ResNet-50 (SGD, Cosine) | Accuracy | 8.4 | | Unverified |
| 3 | ResNet-50 (SGD, Step) | Accuracy | 8.3 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 8.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Mixed Classifier | Clean Accuracy | 85.21 | | Unverified |
| 2 | ResNet18/MART-ANCRA | Clean Accuracy | 60.1 | | Unverified |