Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation that changes a machine learning model's prediction. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
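As a concrete illustration, the classic fast gradient sign method (FGSM) of Goodfellow et al. (2015) builds such a perturbation in a single gradient step. The following is a minimal PyTorch sketch, not any particular paper's implementation; `model`, `x`, `y`, and the budget `epsilon` are assumed placeholders, with pixel values in [0, 1].

```python
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step L-infinity attack: move each pixel by epsilon in the
    direction that increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Signed gradient step, then clip back to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```

A model is considered robust at budget epsilon if its prediction on `x_adv` still matches `y`.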

Papers

Showing 1426–1450 of 1808 papers (page 58 of 73)

Title | Status | Hype
Exacerbating Algorithmic Bias through Fairness Attacks | Code | 0
Self-Supervised Contrastive Learning with Adversarial Perturbations for Defending Word Substitution-based Attacks | Code | 0
PermuteAttack: Counterfactual Explanation of Machine Learning Credit Scorecards | Code | 0
EvoBA: An Evolution Strategy as a Strong Baseline for Black-Box Adversarial Attacks | Code | 0
GenAttack: Practical Black-box Attacks with Gradient-Free Optimization | Code | 0
Person Text-Image Matching via Text-Feature Interpretability Embedding and External Attack Node Implantation | Code | 0
Classification-by-Components: Probabilistic Modeling of Reasoning over a Set of Components | Code | 0
Evaluating the Validity of Word-level Adversarial Attacks with Large Language Models | Code | 0
Evaluating the Robustness of Geometry-Aware Instance-Reweighted Adversarial Training | Code | 0
Taking Care of The Discretization Problem: A Comprehensive Study of the Discretization Problem and A Black-Box Adversarial Attack in Discrete Integer Domain | Code | 0
Evaluating the Robustness of Adversarial Defenses in Malware Detection Systems | Code | 0
Generate synthetic samples from tabular data | Code | 0
Threatening Patch Attacks on Object Detection in Optical Remote Sensing Images | Code | 0
Perturbations in the Wild: Leveraging Human-Written Text Perturbations for Realistic Adversarial Attack and Defense | Code | 0
Adversarial Attack and Defense for Non-Parametric Two-Sample Tests | Code | 0
Cheating Automatic Short Answer Grading: On the Adversarial Usage of Adjectives and Adverbs | Code | 0
Generating Natural Adversarial Examples | Code | 0
Generating Natural Language Adversarial Examples through Probability Weighted Word Saliency | Code | 0
Certified Defenses against Adversarial Examples | Code | 0
A practical approach to evaluating the adversarial distance for machine learning classifiers | Code | 0
Any Target Can be Offense: Adversarial Example Generation via Generalized Latent Infection | Code | 0
Generating Textual Adversaries with Minimal Perturbation | Code | 0
Generating Unrestricted 3D Adversarial Point Clouds | Code | 0
CAPAA: Classifier-Agnostic Projector-Based Adversarial Attack | Code | 0
Adversarial attacks on neural networks through canonical Riemannian foliations | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet20 | Test Accuracy | 89.95 | 89.95 | Community Verified (1)
2 | Xu et al. | Attack: PGD20 | 78.68 | - | Unverified
3 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | - | Unverified
4 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | - | Unverified
5 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | - | Unverified
6 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | - | Unverified
7 | XU-Net | Robust Accuracy | 1 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | - | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | - | Unverified
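
The AutoAttack rows refer to the parameter-free attack ensemble of Croce & Hein (2020), which combines two APGD variants with the FAB and Square attacks and is a common standard for reporting robust accuracy. A sketch of how the reference `autoattack` package is typically driven; the eps value and batch size are assumptions, and `model`, `x_test`, `y_test` are placeholders, so check the package documentation for the exact interface.

```python
from autoattack import AutoAttack  # pip install autoattack

def evaluate_autoattack(model, x_test, y_test, eps=8 / 255, batch_size=256):
    """Run the standard AutoAttack ensemble against a classifier.
    `model` maps [0, 1] images to logits; eps is the L-infinity budget."""
    model.eval()
    adversary = AutoAttack(model, norm='Linf', eps=eps, version='standard')
    # Returns the adversarial examples and logs per-attack and final
    # robust accuracy as it runs.
    x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=batch_size)
    return x_adv
```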