
Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation that changes a machine learning model's prediction. The perturbation can be so small as to be imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
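
To make the definition concrete, below is a minimal sketch of the classic fast gradient sign method (FGSM; Goodfellow et al., 2015), one of the simplest such attacks, assuming a PyTorch image classifier with inputs in [0, 1]. The names `model`, `image`, `label`, and the `epsilon` budget are illustrative placeholders, not taken from the source above.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=8 / 255):
    """Return `image` plus a small perturbation that pushes the model
    toward misclassifying it (untargeted FGSM sketch)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step once in the direction that increases the loss, bounded by epsilon.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```

With a small `epsilon` the perturbed image is visually indistinguishable from the original, yet the model's prediction can flip; stronger iterative attacks (e.g. PGD, AutoAttack) refine this same gradient-ascent idea.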

Papers

Showing 551–575 of 1808 papers

Title | Status | Hype
Multi-objective Evolutionary Search of Variable-length Composite Semantic Perturbations | — | 0
Single-Class Target-Specific Attack against Interpretable Deep Learning Systems | Code | 0
Adversarial Self-Attack Defense and Spatial-Temporal Relation Mining for Visible-Infrared Video Person Re-Identification | Code | 0
Brightness-Restricted Adversarial Attack Patch | — | 0
Adversarial Attacks and Defenses on 3D Point Cloud Classification: A Survey | — | 0
Defense against Adversarial Cloud Attack on Remote Sensing Salient Object Detection | — | 0
Post-train Black-box Defense via Bayesian Boundary Correction | — | 0
Evaluating Similitude and Robustness of Deep Image Denoising Models via Adversarial Attack | — | 0
Towards Sybil Resilience in Decentralized Learning | — | 0
Cross-lingual Cross-temporal Summarization: Dataset, Models, Evaluation | Code | 0
Adversarial Attacks Neutralization via Data Set Randomization | — | 0
Physics-constrained Attack against Convolution-based Human Motion Prediction | Code | 0
Sample Attackability in Natural Language Adversarial Attacks | Code | 0
You Don't Need Robust Machine Learning to Manage Adversarial Attack Risks | — | 0
A Relaxed Optimization Approach for Adversarial Attacks against Neural Machine Translation Models | — | 0
Malafide: a novel adversarial convolutive noise attack against deepfake and spoofing detection systems | Code | 0
I See Dead People: Gray-Box Adversarial Attack on Image-To-Text Models | — | 0
Detecting Adversarial Directions in Deep Reinforcement Learning to Make Robust Decisions | — | 0
COVER: A Heuristic Greedy Adversarial Attack on Prompt-based Learning in Language Models | — | 0
Adversarial Evasion Attacks Practicality in Networks: Testing the Impact of Dynamic Learning | — | 0
Expanding Scope: Adapting English Adversarial Attacks to Chinese | Code | 0
Mitigating Evasion Attacks in Federated Learning-Based Signal Classifiers | — | 0
Towards Resilient and Secure Smart Grids against PMU Adversarial Attacks: A Deep Learning-Based Robust Data Engineering Approach | Code | 0
A Robust Likelihood Model for Novelty Detection | — | 0
Adversarial alignment: Breaking the trade-off between the strength of an attack and its relevance to human perception | — | 0
Page 23 of 73

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | — | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | — | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | — | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | — | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | — | Unverified
6 | XU-Net | Robust Accuracy | 1 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | — | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | — | Unverified
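
The "Attack: PGD20" rows above report robust accuracy under a 20-step projected gradient descent attack. Below is a minimal sketch of such an evaluation loop, assuming a PyTorch classifier in eval mode and inputs in [0, 1]; the `epsilon` and `alpha` values are common illustrative defaults, not the settings used by these entries.

```python
import torch
import torch.nn.functional as F

def pgd20_robust_accuracy(model, loader, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """Fraction of examples still classified correctly after a 20-step
    L-infinity PGD attack (Madry et al., 2018) with a random start."""
    correct = total = 0
    for x, y in loader:
        # Random start inside the epsilon-ball around the clean input.
        x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1)
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            with torch.no_grad():
                x_adv = x_adv + alpha * grad.sign()                # ascend the loss
                x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)   # project to the ball
                x_adv = x_adv.clamp(0, 1)                          # stay a valid image
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / total
```

AutoAttack, the other metric listed, is a parameter-free ensemble of attacks that typically yields lower (harder to game) robust-accuracy numbers than a fixed PGD schedule.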