SOTAVerified

Adversarial Robustness

Adversarial robustness measures how well machine learning models withstand adversarial attacks: inputs with small, deliberately crafted perturbations designed to cause misclassification.
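To make the idea concrete, here is a minimal sketch of one classic attack, the Fast Gradient Sign Method (FGSM), applied to a toy logistic-regression classifier. The classifier weights and inputs are illustrative values chosen for the example, not taken from any paper listed below.

```python
import numpy as np

def fgsm_attack(x, w, b, y, epsilon):
    """Fast Gradient Sign Method on a logistic-regression classifier.

    Perturbs input x by epsilon in the direction that increases the loss:
        x_adv = x + epsilon * sign(grad_x loss)
    """
    # Forward pass: sigmoid probability of class 1
    z = np.dot(w, x) + b
    p = 1.0 / (1.0 + np.exp(-z))
    # Gradient of the binary cross-entropy loss w.r.t. the input is (p - y) * w
    grad_x = (p - y) * w
    return x + epsilon * np.sign(grad_x)

# Toy classifier that labels x correctly before the attack (z = 1.5 > 0)
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, -0.5])   # true label: 1
x_adv = fgsm_attack(x, w, b, y=1.0, epsilon=0.8)
# The bounded perturbation flips the sign of the logit, so the prediction flips
print(np.dot(w, x) + b > 0, np.dot(w, x_adv) + b > 0)  # prints: True False
```

A robust model, by contrast, should keep its prediction stable for every perturbation within the epsilon ball; the defenses and certification methods in the paper list below aim at exactly that property.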

Papers

Showing 1551–1575 of 1746 papers

| Title | Status | Hype |
|-------|--------|------|
| SOAR: Second-Order Adversarial Regularization | | 0 |
| Improving out-of-distribution generalization via multi-task self-supervised pretraining | | 0 |
| Towards Deep Learning Models Resistant to Large Perturbations | Code | 0 |
| Challenging the adversarial robustness of DNNs based on error-correcting output codes | | 0 |
| Defense Through Diverse Directions | | 0 |
| Architectural Resilience to Foreground-and-Background Adversarial Noise | Code | 0 |
| SAT: Improving Adversarial Training via Curriculum-Based Loss Smoothing | | 0 |
| Metrics and methods for robustness evaluation of neural networks with generative models | Code | 0 |
| Understanding the Intrinsic Robustness of Image Distributions using Conditional Generative Models | Code | 0 |
| Defense-PointNet: Protecting PointNet Against Adversarial Attacks | | 0 |
| Can we have it all? On the Trade-off between Spatial and Adversarial Robustness of Neural Networks | Code | 0 |
| Towards Certifiable Adversarial Sample Detection | | 0 |
| Individual Fairness Revisited: Transferring Techniques from Adversarial Robustness | | 0 |
| Scalable Quantitative Verification For Deep Neural Networks | | 0 |
| CEB Improves Model Robustness | Code | 0 |
| Semialgebraic Optimization for Lipschitz Constants of ReLU Networks | Code | 0 |
| Assessing the Adversarial Robustness of Monte Carlo and Distillation Methods for Deep Bayesian Neural Network Classification | | 0 |
| Guess First to Enable Better Compression and Adversarial Robustness | | 0 |
| RECAST: Interactive Auditing of Automatic Toxicity Detection Models | | 0 |
| Optimal Statistical Guarantees for Adversarially Robust Gaussian Classification | | 0 |
| Adversarially Robust Neural Networks via Optimal Control: Bridging Robustness with Lyapunov Stability | | 0 |
| Smooth Kernels Improve Adversarial Robustness and Perceptually-Aligned Gradients | | 0 |
| Adversarial Robustness via Runtime Masking and Cleansing | | 0 |
| Implicit Euler Skip Connections: Enhancing Adversarial Robustness via Numerical Stability | | 0 |
| Optimising Neural Network Architectures for Provable Adversarial Robustness | | 0 |
Page 63 of 70

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | DeBERTa (single model) | Accuracy | 0.61 | | Unverified |
| 2 | ALBERT (single model) | Accuracy | 0.59 | | Unverified |
| 3 | T5 (single model) | Accuracy | 0.57 | | Unverified |
| 4 | SMART_RoBERTa (single model) | Accuracy | 0.54 | | Unverified |
| 5 | FreeLB (single model) | Accuracy | 0.5 | | Unverified |
| 6 | RoBERTa (single model) | Accuracy | 0.5 | | Unverified |
| 7 | InfoBERT (single model) | Accuracy | 0.46 | | Unverified |
| 8 | ELECTRA (single model) | Accuracy | 0.42 | | Unverified |
| 9 | BERT (single model) | Accuracy | 0.34 | | Unverified |
| 10 | SMART_BERT (single model) | Accuracy | 0.3 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Mixed classifier | Accuracy | 95.23 | | Unverified |
| 2 | Stochastic-LWTA/PGD/WideResNet-34-10 | Accuracy | 92.26 | | Unverified |
| 3 | Stochastic-LWTA/PGD/WideResNet-34-5 | Accuracy | 91.88 | | Unverified |
| 4 | GLOT-DR | Accuracy | 84.13 | | Unverified |
| 5 | TRADES-ANCRA/ResNet18 | Accuracy | 81.7 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | ResNet-50 (SGD, Cosine) | Accuracy | 77.4 | | Unverified |
| 2 | ResNet-50 (SGD, Step) | Accuracy | 76.9 | | Unverified |
| 3 | DeiT-S (AdamW, Cosine) | Accuracy | 76.8 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 76.4 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | DeiT-S (AdamW, Cosine) | Accuracy | 12.2 | | Unverified |
| 2 | ResNet-50 (SGD, Cosine) | Accuracy | 3.3 | | Unverified |
| 3 | ResNet-50 (SGD, Step) | Accuracy | 3.2 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 3.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | ResNet-50 (AdamW, Cosine) | mean Corruption Error (mCE) | 59.3 | | Unverified |
| 2 | ResNet-50 (SGD, Step) | mean Corruption Error (mCE) | 57.9 | | Unverified |
| 3 | ResNet-50 (SGD, Cosine) | mean Corruption Error (mCE) | 56.9 | | Unverified |
| 4 | DeiT-S (AdamW, Cosine) | mean Corruption Error (mCE) | 48 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | DeiT-S (AdamW, Cosine) | Accuracy | 13 | | Unverified |
| 2 | ResNet-50 (SGD, Cosine) | Accuracy | 8.4 | | Unverified |
| 3 | ResNet-50 (SGD, Step) | Accuracy | 8.3 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 8.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Mixed Classifier | Clean Accuracy | 85.21 | | Unverified |
| 2 | ResNet18/MART-ANCRA | Clean Accuracy | 60.1 | | Unverified |