SOTAVerified

Adversarial Attack

An adversarial attack is a technique for finding a perturbation that changes a machine learning model's prediction. The perturbation can be so small as to be imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
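The idea can be sketched with the Fast Gradient Sign Method (FGSM), one of the simplest gradient-based attacks: perturb the input in the direction of the sign of the loss gradient with respect to the input. The snippet below is a minimal, self-contained illustration on a toy logistic-regression classifier; the weights, input, and `fgsm_perturb` helper are made up for demonstration and are not from any paper listed here.

```python
import numpy as np

def predict(w, b, x):
    """Logistic-regression score; class 1 if the score is >= 0.5."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm_perturb(w, b, x, y, eps):
    """FGSM step: move the input by eps in the sign of the input gradient
    of the cross-entropy loss. For logistic regression that gradient is
    (p - y) * w, so no autodiff framework is needed for this sketch."""
    p = predict(w, b, x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy classifier (hypothetical weights) and a correctly classified input.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.3, 0.1])   # score ~0.62 -> class 1
y = 1.0                    # true label

x_adv = fgsm_perturb(w, b, x, y, eps=0.3)
print(predict(w, b, x) >= 0.5)      # True: originally classified as class 1
print(predict(w, b, x_adv) >= 0.5)  # False: the small perturbation flips the label
```

Each coordinate of the input moves by at most `eps`, which is what bounds the perturbation's size; stronger attacks in the papers below (e.g. PGD, AutoAttack) iterate this kind of step under the same norm constraint.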

Papers

Showing 1251–1275 of 1808 papers

Title | Hype
Gradient-guided Unsupervised Text Style Transfer via Contrastive Learning | 0
Evaluating the Robustness of the "Ensemble Everything Everywhere" Defense | 0
GradMDM: Adversarial Attack on Dynamic Networks | 0
Graphfool: Targeted Label Adversarial Attack on Graph Embedding | 0
GraphMU: Repairing Robustness of Graph Neural Networks via Machine Unlearning | 0
Gray-box Adversarial Attack of Deep Reinforcement Learning-based Trading Agents | 0
GreedyPixel: Fine-Grained Black-Box Adversarial Attack Via Greedy Algorithm | 0
Hardware Accelerator for Adversarial Attacks on Deep Learning Neural Networks | 0
Harmonic Adversarial Attack Method | 0
Harmonicity Plays a Critical Role in DNN Based Versus in Biologically-Inspired Monaural Speech Segregation Systems | 0
Headless Horseman: Adversarial Attacks on Transfer Learning Models | 0
Hear No Evil: Towards Adversarial Robustness of Automatic Speech Recognition via Multi-Task Learning | 0
Heating up decision boundaries: isocapacitory saturation, adversarial scenarios and generalization bounds | 0
Hessian-Aware Zeroth-Order Optimization for Black-Box Adversarial Attack | 0
Heterogeneous Architecture Search Approach within Adversarial Dynamic Defense Framework | 0
Heterogeneous Multi-Player Multi-Armed Bandits Robust To Adversarial Attacks | 0
HGAttack: Transferable Heterogeneous Graph Adversarial Attack | 0
Hiding Backdoors within Event Sequence Data via Poisoning Attacks | 0
Hijacking Vision-and-Language Navigation Agents with Adversarial Environmental Attacks | 0
Holistic Approach to Measure Sample-level Adversarial Vulnerability and its Utility in Building Trustworthy Systems | 0
Homomorphic Encryption and Federated Learning based Privacy-Preserving CNN Training: COVID-19 Detection Use-Case | 0
How Sampling Impacts the Robustness of Stochastic Neural Networks | 0
Hybrid Classical-Quantum Deep Learning Models for Autonomous Vehicle Traffic Image Classification Under Adversarial Attack | 0
Hydra: An Agentic Reasoning Approach for Enhancing Adversarial Robustness and Mitigating Hallucinations in Vision-Language Models | 0
HyperAttack: Multi-Gradient-Guided White-box Adversarial Structure Attack of Hypergraph Neural Networks | 0
Page 51 of 73

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | | Unverified
6 | XU-Net | Robust Accuracy | 1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | | Unverified