
Adversarial Attack

An adversarial attack is a technique for finding a perturbation that changes the prediction of a machine learning model. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
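To make the definition concrete, below is a minimal, hedged sketch of one of the simplest perturbation searches, a fast-gradient-sign-style attack. It is illustrative only and is not taken from the source paper above; `model`, `image`, `label`, and `epsilon` are placeholder names assuming a PyTorch image classifier with inputs in [0, 1].

```python
# Illustrative FGSM-style sketch: nudge an input in the direction that increases
# the classification loss, so the prediction may flip while the change stays
# within a small L-infinity budget. Names and values here are assumptions.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=8 / 255):
    """Return an adversarial example inside an L-infinity ball of radius epsilon."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step along the sign of the gradient, then clip back to a valid image range.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0, 1).detach()
```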

Papers

Showing 1201-1250 of 1808 papers

Title | Status | Hype
A Survey On Universal Adversarial Attack | Code | 1
Evaluating the Robustness of Geometry-Aware Instance-Reweighted Adversarial Training | Code | 0
Model-Agnostic Defense for Lane Detection against Adversarial Attack | Code | 0
Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints | Code | 2
Graphfool: Targeted Label Adversarial Attack on Graph Embedding | - | 0
Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits | Code | 1
On Fast Adversarial Robustness Adaptation in Model-Agnostic Meta-Learning | Code | 1
Fortify Machine Learning Production Systems: Detect and Classify Adversarial Attacks | - | 0
CAP-GAN: Towards Adversarial Robustness with Cycle-consistent Attentional Purification | - | 0
Certifiably Robust Variational Autoencoders | - | 0
Adversarial Attack on Network Embeddings via Supervised Network Poisoning | Code | 0
Adversarially robust deepfake media detection using fused convolutional neural network predictions | - | 0
Enhancing Real-World Adversarial Patches through 3D Modeling of Complex Target Scenes | Code | 0
RoBIC: A benchmark suite for assessing classifiers robustness | Code | 0
Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples | - | 0
Audio Adversarial Examples: Attacks Using Vocal Masks | - | 0
Improving Neural Network Robustness through Neighborhood Preserving Layers | - | 0
Towards Universal Physical Attacks On Cascaded Camera-Lidar 3D Object Detection Models | - | 0
Investigating the significance of adversarial attacks and their relation to interpretability for radar-based human activity recognition systems | - | 0
Visual explanation of black-box model: Similarity Difference and Uniqueness (SIDU) method | Code | 0
Towards Practical Robustness Analysis for DNNs based on PAC-Model Learning | Code | 0
Generating Black-Box Adversarial Examples in Sparse Domain | - | 0
Robust Reinforcement Learning on State Observations with Learned Optimal Adversary | Code | 1
PICA: A Pixel Correlation-based Attentional Black-box Adversarial Attack | - | 0
Attention-Guided Black-box Adversarial Attacks with Large-Scale Multiobjective Evolutionary Optimization | - | 0
Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions | - | 0
Heating up decision boundaries: isocapacitory saturation, adversarial scenarios and generalization bounds | - | 0
Untargeted, Targeted and Universal Adversarial Attacks and Defenses on Time Series | - | 0
Robustness of on-device Models: Adversarial Attack to Deep Learning Models on Android Apps | Code | 1
Random Transformation of Image Brightness for Adversarial Attack | Code | 0
Exploring Adversarial Fake Images on Face Manifold | - | 0
Adversarial Attack Attribution: Discovering Attributable Signals in Adversarial ML Attacks | - | 0
Robust Text CAPTCHAs Using Adversarial Examples | - | 0
Local Competition and Stochasticity for Adversarial Robustness in Deep Learning | - | 0
Towards Robustness of Deep Neural Networks via Regularization | - | 0
Meta-Attack: Class-Agnostic and Model-Agnostic Physical Adversarial Attack | - | 0
Adversarial Attack on Deep Cross-Modal Hamming Retrieval | - | 0
Consistency-Sensitivity Guided Ensemble Black-Box Adversarial Attacks in Low-Dimensional Spaces | - | 0
Adversarial Example Detection Using Latent Neighborhood Graph | - | 0
Stabilized Medical Attacks | - | 0
Learn2Weight: Weights Transfer Defense against Similar-domain Adversarial Attacks | - | 0
Identifying Informative Latent Variables Learned by GIN via Mutual Information | - | 0
An Adversarial Attack via Feature Contributive Regions | - | 0
Practical Order Attack in Deep Ranking | - | 0
Black-Box Adversarial Attacks on Graph Neural Networks as An Influence Maximization Problem | - | 0
AT-GAN: An Adversarial Generative Model for Non-constrained Adversarial Examples | - | 0
Patch-wise++ Perturbation for Adversarial Targeted Attacks | Code | 1
Black-box Adversarial Attacks on Monocular Depth Estimation Using Evolutionary Multi-objective Optimization | - | 0
Sparse Adversarial Attack to Object Detection | Code | 1
Adjust-free adversarial example generation in speech recognition using evolutionary multi-objective optimization under black-box condition | - | 0
Page 25 of 37

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | - | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | - | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | - | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | - | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | - | Unverified
6 | XU-Net | Robust Accuracy | 1 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | - | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | - | Unverified
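For context on how numbers like the "Attack: PGD20" entries above are usually produced, here is a hedged sketch of measuring robust accuracy under a 20-step projected gradient descent attack. The data loader, step size, and epsilon are assumptions for illustration and are not taken from the tables above.

```python
# Sketch of robust-accuracy evaluation under a PGD-20 attack: run 20 steps of
# projected gradient ascent on the loss per input, then report the percentage
# of test examples still classified correctly. All constants are assumptions.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, step=2 / 255, steps=20):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Ascend the loss, then project back into the L-infinity ball around x.
        x_adv = (x_adv + step * grad.sign()).detach()
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0, 1)
    return x_adv

def robust_accuracy(model, loader, **attack_kwargs):
    correct = total = 0
    for x, y in loader:
        preds = model(pgd_attack(model, x, y, **attack_kwargs)).argmax(dim=1)
        correct += (preds == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total  # percentage, comparable to the "Claimed" column
```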