
Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation that changes the prediction of a machine learning model. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
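For illustration, here is a minimal sketch of the classic one-step Fast Gradient Sign Method (FGSM), which finds such a perturbation in a single gradient step. It assumes a PyTorch classifier `model`, a batch of inputs `x` scaled to [0, 1], true labels `y`, and an illustrative budget `epsilon`; none of these names or values come from the papers listed below.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step Fast Gradient Sign Method (Goodfellow et al., 2015).

    Perturbs each pixel of `x` by +/- epsilon in the direction that
    increases the classification loss.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    # Step along the gradient sign, then clamp back to the valid
    # image range so the result is still a legal input.
    x_adv = (x + epsilon * grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()
```

Stronger attacks, such as the PGD and AutoAttack entries in the benchmark tables below, iterate this kind of step under the same perturbation budget.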

Papers

Showing 601–650 of 1808 papers

Title | Status | Hype
AN-GCN: An Anonymous Graph Convolutional Network Defense Against Edge-Perturbing Attack | — | 0
A Non-monotonic Smooth Activation Function | — | 0
Evaluations and Methods for Explanation through Robustness Analysis | — | 0
Experimental robustness benchmark of quantum neural network on a superconducting quantum processor | — | 0
Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples | — | 0
Exploiting Vulnerability of Pooling in Convolutional Neural Networks by Strict Layer-Output Manipulation for Adversarial Attacks | — | 0
Fair Robust Active Learning by Joint Inconsistency | — | 0
Adversarial defenses via a mixture of generators | — | 0
DIMBA: Discretely Masked Black-Box Attack in Single Object Tracking | — | 0
Attacking c-MARL More Effectively: A Data Driven Approach | — | 0
Adversarial Defense based on Structure-to-Signal Autoencoders | — | 0
Differentially Private Reward Estimation with Preference Feedback | — | 0
Evaluating Neural Model Robustness for Machine Comprehension | — | 0
Evaluating Similitude and Robustness of Deep Image Denoising Models via Adversarial Attack | — | 0
Adversarial Data Encryption | — | 0
A critique of the DeepSec Platform for Security Analysis of Deep Learning Models | — | 0
An Empirical Study towards Characterizing Deep Learning Development and Deployment across Different Frameworks and Platforms | — | 0
Device-aware Optical Adversarial Attack for a Portable Projector-camera System | — | 0
DeTrigger: A Gradient-Centric Approach to Backdoor Attack Mitigation in Federated Learning | — | 0
Adversarial Color Projection: A Projector-based Physical Attack to DNNs | — | 0
Evaluating Deep Learning Models and Adversarial Attacks on Accelerometer-Based Gesture Authentication | — | 0
Evaluating the Robustness of LiDAR Point Cloud Tracking Against Adversarial Attack | — | 0
Adversarial Body Shape Search for Legged Robots | — | 0
An Empirical Study on Adversarial Attack on NMT: Languages and Positions Matter | — | 0
DFT-Based Adversarial Attack Detection in MRI Brain Imaging: Enhancing Diagnostic Accuracy in Alzheimer's Case Studies | — | 0
ErasableMask: A Robust and Erasable Privacy Protection Scheme against Black-box Face Recognition Models | — | 0
A Sweet Rabbit Hole by DARCY: Using Honeypots to Detect Universal Trigger's Adversarial Attacks | — | 0
An Explainable Adversarial Robustness Metric for Deep Learning Neural Networks | — | 0
Detecting and Segmenting Adversarial Graphics Patterns from Images | — | 0
Adversarial Client Detection via Non-parametric Subspace Monitoring in the Internet of Federated Things | — | 0
Evading Detection Actively: Toward Anti-Forensics against Forgery Localization | — | 0
Differential Privacy in Personalized Pricing with Nonparametric Demand Models | — | 0
An Incremental Gray-box Physical Adversarial Attack on Neural Network Training | — | 0
Diffusion Attack: Leveraging Stable Diffusion for Naturalistic Image Attacking | — | 0
3DGAA: Realistic and Robust 3D Gaussian-based Adversarial Attack for Autonomous Driving | — | 0
An Empirical Analysis of Federated Learning Models Subject to Label-Flipping Adversarial Attack | — | 0
Anomaly Detection in Unsupervised Surveillance Setting Using Ensemble of Multimodal Data with Adversarial Defense | — | 0
Detecting Adversarial Directions in Deep Reinforcement Learning to Make Robust Decisions | — | 0
An Efficient and Margin-Approaching Zero-Confidence Adversarial Attack | — | 0
EVALOOP: Assessing LLM Robustness in Programming from a Self-consistency Perspective | — | 0
Design of secure and robust cognitive system for malware detection | — | 0
Adversarial-Aware Deep Learning System based on a Secondary Classical Machine Learning Verification Approach | — | 0
Derivation of Information-Theoretically Optimal Adversarial Attacks with Applications to Robust Machine Learning | — | 0
OGAN: Disrupting Deepfakes with an Adversarial Attack that Survives Training | — | 0
Democratic Training Against Universal Adversarial Perturbations | — | 0
Activation Learning by Local Competitions | — | 0
Distillation-Enhanced Physical Adversarial Attacks | — | 0
A Novel Deep Learning based Model to Defend Network Intrusion Detection System against Adversarial Attacks | — | 0
Enhancing Transferability of Adversarial Attacks with GE-AdvGAN+: A Comprehensive Framework for Gradient Editing | — | 0
Analyzing the Noise Robustness of Deep Neural Networks | — | 0
Page 13 of 37

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | — | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | — | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | — | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | — | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | — | Unverified
6 | XU-Net | Robust Accuracy | 1 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | — | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | — | Unverified
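The "Attack: PGD20" rows report robust accuracy under a 20-step Projected Gradient Descent attack. As a rough sketch of what such an evaluation runs, assuming again a PyTorch classifier `model` and inputs in [0, 1] (the `epsilon` and `alpha` defaults are common conventions for image benchmarks, not values taken from these entries):

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """Projected Gradient Descent; steps=20 gives the PGD20 setting above.

    Repeats FGSM-style steps of size `alpha`, projecting back into the
    L-infinity ball of radius `epsilon` around the clean input each time.
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the epsilon-ball, then into the valid pixel range.
        x_adv = torch.clamp(x_adv, x - epsilon, x + epsilon).clamp(0.0, 1.0)
    return x_adv
```

Robust accuracy is then the fraction of test inputs the model still classifies correctly after this loop; the AutoAttack rows play the same role but use an ensemble of parameter-free attacks, which typically yields lower (more conservative) numbers.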