SOTAVerified

Adversarial Robustness

Adversarial Robustness evaluates how well machine learning models withstand adversarial attacks: inputs deliberately perturbed so that the model produces incorrect predictions.
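As a concrete illustration of the kind of attack these papers study, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the simplest adversarial attacks. The classifier here is a toy logistic-regression model chosen purely for illustration; the weights, inputs, and `eps` value are made up.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """One-step FGSM against a logistic-regression classifier:
    move x by eps in the direction of the sign of the loss
    gradient with respect to x (an L-infinity-bounded step)."""
    p = sigmoid(w @ x + b)            # predicted probability of class 1
    grad_x = (p - y) * w              # d(cross-entropy loss)/dx
    return x + eps * np.sign(grad_x)  # perturbed (adversarial) input

# Toy example: a point the model scores as class 1 (score > 0).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0

x_adv = fgsm_attack(x, y, w, b, eps=1.0)
# The attack lowers the model's score on the true class; with this
# eps the perturbed point crosses the decision boundary.
```

Robustness benchmarks like the ones below typically report accuracy on such perturbed inputs (often under stronger, iterative attacks such as PGD) rather than on clean data.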

Papers

Showing 1201–1250 of 1746 papers

| Title | Status | Hype |
|---|---|---|
| Game-Theoretic Defenses for Robust Conformal Prediction Against Adversarial Attacks in Medical Imaging | | 0 |
| GARNET: A Spectral Approach to Robust and Scalable Graph Neural Networks | | 0 |
| General Coded Computing: Adversarial Settings | | 0 |
| Generalizability of Adversarial Robustness Under Distribution Shifts | | 0 |
| Generalizable Deepfake Detection with Phase-Based Motion Analysis | | 0 |
| Generalization Certificates for Adversarially Robust Bayesian Linear Regression | | 0 |
| Generalization Error Analysis of Neural networks with Gradient Based Regularization | | 0 |
| Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness | | 0 |
| Incorporating Hidden Layer representation into Adversarial Attacks and Defences | | 0 |
| Generalized but not Robust? Comparing the Effects of Data Modification Methods on Out-of-Domain Generalization and Adversarial Robustness | | 0 |
| Generalizing and Improving Jacobian and Hessian Regularization | | 0 |
| Generate and Verify: Semantically Meaningful Formal Analysis of Neural Network Perception Systems | | 0 |
| Generating Structured Adversarial Attacks Using Frank-Wolfe Method | | 0 |
| GenFighter: A Generative and Evolutive Textual Attack Removal | | 0 |
| GenLabel: Mixup Relabeling using Generative Models | | 0 |
| GenMix: Effective Data Augmentation with Generative Diffusion Model Image Editing | | 0 |
| GHN-Q: Parameter Prediction for Unseen Quantized Convolutional Architectures via Graph Hypernetworks | | 0 |
| Global Adversarial Robustness Guarantees for Neural Networks | | 0 |
| GNN-Ensemble: Towards Random Decision Graph Neural Networks | | 0 |
| GPS: Graph Contrastive Learning via Multi-scale Augmented Views from Adversarial Pooling | | 0 |
| GradDiv: Adversarial Robustness of Randomized Neural Networks via Gradient Diversity Regularization | | 0 |
| GridMix: Strong regularization through local context mapping | | 0 |
| Grimm: A Plug-and-Play Perturbation Rectifier for Graph Neural Networks Defending against Poisoning Attacks | | 0 |
| Guess First to Enable Better Compression and Adversarial Robustness | | 0 |
| Guidance Through Surrogate: Towards a Generic Diagnostic Attack | | 0 |
| Guided Interpolation for Adversarial Training | | 0 |
| Harmonizing Feature Maps: A Graph Convolutional Approach for Enhancing Adversarial Robustness | | 0 |
| Hear No Evil: Towards Adversarial Robustness of Automatic Speech Recognition via Multi-Task Learning | | 0 |
| Heterogeneous Architecture Search Approach within Adversarial Dynamic Defense Framework | | 0 |
| Hierarchical Binding in Convolutional Neural Networks Confers Adversarial Robustness | | 0 |
| Hierarchical Contextual Manifold Alignment for Structuring Latent Representations in Large Language Models | | 0 |
| Hierarchical Verification for Adversarial Robustness | | 0 |
| Holistic Adversarially Robust Pruning | | 0 |
| Holistic Adversarial Robustness of Deep Learning Models | | 0 |
| Homophily-Driven Sanitation View for Robust Graph Contrastive Learning | | 0 |
| On Transfer of Adversarial Robustness from Pretraining to Downstream Tasks | | 0 |
| How and When Adversarial Robustness Transfers in Knowledge Distillation? | | 0 |
| How benign is benign overfitting? | | 0 |
| How Benign is Benign Overfitting? | | 0 |
| How Do Diffusion Models Improve Adversarial Robustness? | | 0 |
| How do SGD hyperparameters in natural training affect adversarial robustness? | | 0 |
| Towards Adversarially Robust Recommendation from Adaptive Fraudster Detection | | 0 |
| How Robust are Randomized Smoothing based Defenses to Data Poisoning? | | 0 |
| How to beat a Bayesian adversary | | 0 |
| How to Enhance Downstream Adversarial Robustness (almost) without Touching the Pre-Trained Foundation Model? | | 0 |
| How to Select One Among All? An Empirical Study Towards the Robustness of Knowledge Distillation in Natural Language Understanding | | 0 |
| Hybrid Deep Learning Model using SPCAGAN Augmentation for Insider Threat Analysis | | 0 |
| Hydra: An Agentic Reasoning Approach for Enhancing Adversarial Robustness and Mitigating Hallucinations in Vision-Language Models | | 0 |
| Hyper Adversarial Tuning for Boosting Adversarial Robustness of Pretrained Large Vision Models | | 0 |
| Hyperbolic Contrastive Learning | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeBERTa (single model) | Accuracy | 0.61 | | Unverified |
| 2 | ALBERT (single model) | Accuracy | 0.59 | | Unverified |
| 3 | T5 (single model) | Accuracy | 0.57 | | Unverified |
| 4 | SMART_RoBERTa (single model) | Accuracy | 0.54 | | Unverified |
| 5 | FreeLB (single model) | Accuracy | 0.5 | | Unverified |
| 6 | RoBERTa (single model) | Accuracy | 0.5 | | Unverified |
| 7 | InfoBERT (single model) | Accuracy | 0.46 | | Unverified |
| 8 | ELECTRA (single model) | Accuracy | 0.42 | | Unverified |
| 9 | BERT (single model) | Accuracy | 0.34 | | Unverified |
| 10 | SMART_BERT (single model) | Accuracy | 0.3 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Mixed classifier | Accuracy | 95.23 | | Unverified |
| 2 | Stochastic-LWTA/PGD/WideResNet-34-10 | Accuracy | 92.26 | | Unverified |
| 3 | Stochastic-LWTA/PGD/WideResNet-34-5 | Accuracy | 91.88 | | Unverified |
| 4 | GLOT-DR | Accuracy | 84.13 | | Unverified |
| 5 | TRADES-ANCRA/ResNet18 | Accuracy | 81.7 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet-50 (SGD, Cosine) | Accuracy | 77.4 | | Unverified |
| 2 | ResNet-50 (SGD, Step) | Accuracy | 76.9 | | Unverified |
| 3 | DeiT-S (AdamW, Cosine) | Accuracy | 76.8 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 76.4 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeiT-S (AdamW, Cosine) | Accuracy | 12.2 | | Unverified |
| 2 | ResNet-50 (SGD, Cosine) | Accuracy | 3.3 | | Unverified |
| 3 | ResNet-50 (SGD, Step) | Accuracy | 3.2 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 3.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet-50 (AdamW, Cosine) | mean Corruption Error (mCE) | 59.3 | | Unverified |
| 2 | ResNet-50 (SGD, Step) | mean Corruption Error (mCE) | 57.9 | | Unverified |
| 3 | ResNet-50 (SGD, Cosine) | mean Corruption Error (mCE) | 56.9 | | Unverified |
| 4 | DeiT-S (AdamW, Cosine) | mean Corruption Error (mCE) | 48 | | Unverified |
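The table above reports mean Corruption Error (mCE). As commonly defined for ImageNet-C-style benchmarks (Hendrycks & Dietterich, 2019), each corruption's error is normalized by a baseline model's error (historically AlexNet) before averaging, so lower is better and 100 matches the baseline. A minimal sketch with made-up error rates:

```python
import numpy as np

def mean_corruption_error(model_err, baseline_err):
    """mCE as defined for ImageNet-C-style benchmarks.

    model_err, baseline_err: arrays of shape (n_corruptions, n_severities)
    holding top-1 error rates in [0, 1]. Summing over severities and
    dividing by the baseline normalizes each corruption's difficulty
    before taking the mean across corruptions.
    """
    ce = model_err.sum(axis=1) / baseline_err.sum(axis=1)
    return 100.0 * ce.mean()

# Hypothetical numbers (15 corruptions, 5 severities): a model that
# halves the baseline's error everywhere gets an mCE of 50.
model = np.full((15, 5), 0.30)
base = np.full((15, 5), 0.60)
mce = mean_corruption_error(model, base)
```

Because mCE is relative to a fixed baseline, it can be compared across models even when the individual corruptions differ widely in difficulty.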
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeiT-S (AdamW, Cosine) | Accuracy | 13 | | Unverified |
| 2 | ResNet-50 (SGD, Cosine) | Accuracy | 8.4 | | Unverified |
| 3 | ResNet-50 (SGD, Step) | Accuracy | 8.3 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 8.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Mixed Classifier | Clean Accuracy | 85.21 | | Unverified |
| 2 | ResNet18/MART-ANCRA | Clean Accuracy | 60.1 | | Unverified |