
Adversarial Attack

An adversarial attack is a technique for finding a perturbation of an input that changes a machine learning model's prediction. The perturbation can be so small as to be imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
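
A classic instance is the Fast Gradient Sign Method (FGSM) of Goodfellow et al., which perturbs an input in the direction of the sign of the loss gradient. Below is a minimal sketch in PyTorch, assuming a classifier `model` that takes batched inputs in [0, 1]; the function name and the `epsilon` default are illustrative choices, not taken from any paper listed here.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=8 / 255):
    """Return adversarial examples inside an L-infinity ball of radius epsilon."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the direction that increases the loss, then clip to the valid pixel range.
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

A single gradient step like this is cheap but weak; most of the attacks in the list below (PGD, momentum-based attacks, Elastic-Net, zeroth-order methods) can be read as stronger, iterative or gradient-free refinements of the same idea.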

Papers

Showing 1776–1800 of 1808 papers

| Title | Status | Hype |
| --- | --- | --- |
| Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples | Code | 0 |
| Certified Defenses against Adversarial Examples | Code | 0 |
| Deflecting Adversarial Attacks with Pixel Deflection | Code | 0 |
| Generalizable Data-free Objective for Crafting Universal Adversarial Perturbations | Code | 1 |
| Query-Efficient Black-box Adversarial Examples (superceded) | Code | 0 |
| Defense against Adversarial Attacks Using High-Level Representation Guided Denoiser | Code | 0 |
| Model Extraction Warning in MLaaS Paradigm | | 0 |
| Linear system security -- detection and correction of adversarial attacks in the noise-free case | | 0 |
| Provable defenses against adversarial examples via the convex outer adversarial polytope | Code | 0 |
| Generating Natural Adversarial Examples | Code | 0 |
| Boosting Adversarial Attacks with Momentum | Code | 0 |
| Standard detectors aren't (currently) fooled by physical adversarial stop signs | | 0 |
| Resilient Learning-Based Control for Synchronization of Passive Multi-Agent Systems under Attack | | 0 |
| EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples | Code | 0 |
| ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models | Code | 0 |
| Class-based Prediction Errors to Detect Hate Speech with Out-of-vocabulary Words | | 0 |
| Adversarial-Playground: A Visualization Suite Showing How Adversarial Examples Fool Deep Learning | Code | 0 |
| Foolbox: A Python toolbox to benchmark the robustness of machine learning models | Code | 2 |
| Towards Deep Learning Models Resistant to Adversarial Attacks | Code | 1 |
| Adversarial and Clean Data Are Not Twins | Code | 0 |
| Biologically inspired protection of deep networks from adversarial attacks | | 0 |
| Adversarial Examples for Semantic Segmentation and Object Detection | Code | 1 |
| Tactics of Adversarial Attack on Deep Reinforcement Learning Agents | | 0 |
| On Detecting Adversarial Perturbations | Code | 0 |
| Adversarial Images for Variational Autoencoders | Code | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | ResNet20 | Test Accuracy | 89.95 | 89.95 (1) | Community Verified |
| 2 | Xu et al. | Attack: PGD20 | 78.68 | | Unverified |
| 3 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | | Unverified |
| 4 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | | Unverified |
| 5 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | | Unverified |
| 6 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | | Unverified |
| 7 | XU-Net | Robust Accuracy | 1 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | | Unverified |
| 2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | | Unverified |
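
The "Attack: PGD20" and "Attack: AutoAttack" rows report robust accuracy: the fraction of test inputs still classified correctly after the named attack (PGD20 is projected gradient descent with 20 iterations; AutoAttack is the parameter-free attack ensemble of Croce and Hein). The sketch below shows a PGD20 robust-accuracy evaluation in PyTorch, assuming a classifier `model` with inputs in [0, 1]; the `epsilon` and `alpha` defaults are common CIFAR-10 L-infinity settings, not values from these tables.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """Projected gradient descent in an L-infinity ball (PGD20 with steps=20)."""
    # Random start inside the epsilon-ball, as in Madry et al. (2018).
    adv = images + torch.empty_like(images).uniform_(-epsilon, epsilon)
    adv = adv.clamp(0.0, 1.0)
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()
            # Project back into the epsilon-ball around the clean images.
            adv = images + (adv - images).clamp(-epsilon, epsilon)
            adv = adv.clamp(0.0, 1.0)
    return adv.detach()

def robust_accuracy(model, loader, **attack_kwargs):
    """Fraction of examples still classified correctly after the attack."""
    correct = total = 0
    for images, labels in loader:
        adv = pgd_attack(model, images, labels, **attack_kwargs)
        with torch.no_grad():
            correct += (model(adv).argmax(dim=1) == labels).sum().item()
        total += labels.numel()
    return correct / total
```

PGD is a useful first check, but claimed numbers here are typically validated against stronger ensembles such as AutoAttack, since single attacks can overestimate robustness when gradients are obfuscated.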