
Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation that changes a machine learning model's prediction. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
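
For intuition, the sketch below shows the Fast Gradient Sign Method (FGSM) of Goodfellow et al., one of the simplest attacks of this kind, written in PyTorch. The function name and the epsilon budget are illustrative assumptions, not details taken from the source paper above.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=8 / 255):
        # Illustrative FGSM sketch; epsilon is an assumed L-infinity budget.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)  # loss w.r.t. the true label y
        loss.backward()
        # Step in the direction that increases the loss, then clamp so the
        # perturbed image stays in the valid pixel range.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0, 1).detach()

A successful attack flips model(x_adv).argmax(1) away from y even though x_adv differs from x by at most epsilon per pixel.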

Papers

Showing 1301–1350 of 1808 papers

Title | Status | Hype
Universal Adversarial Perturbations and Image Spam Classifiers | - | 0
Towards Evaluating the Robustness of Deep Diagnostic Models by Adversarial Attack | Code | 0
A Modified Drake Equation for Assessing Adversarial Risk to Machine Learning Models | - | 0
Evaluating the Robustness of Geometry-Aware Instance-Reweighted Adversarial Training | Code | 0
A Brief Survey on Deep Learning Based Data Hiding | - | 0
Model-Agnostic Defense for Lane Detection against Adversarial Attack | Code | 0
Graphfool: Targeted Label Adversarial Attack on Graph Embedding | - | 0
Fortify Machine Learning Production Systems: Detect and Classify Adversarial Attacks | - | 0
CAP-GAN: Towards Adversarial Robustness with Cycle-consistent Attentional Purification | - | 0
Certifiably Robust Variational Autoencoders | - | 0
Adversarial Attack on Network Embeddings via Supervised Network Poisoning | Code | 0
Adversarially robust deepfake media detection using fused convolutional neural network predictions | - | 0
Enhancing Real-World Adversarial Patches through 3D Modeling of Complex Target Scenes | Code | 0
RoBIC: A benchmark suite for assessing classifiers robustness | Code | 0
Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples | - | 0
Audio Adversarial Examples: Attacks Using Vocal Masks | - | 0
Improving Neural Network Robustness through Neighborhood Preserving Layers | - | 0
Visual explanation of black-box model: Similarity Difference and Uniqueness (SIDU) method | Code | 0
Towards Universal Physical Attacks On Cascaded Camera-Lidar 3D Object Detection Models | - | 0
Investigating the significance of adversarial attacks and their relation to interpretability for radar-based human activity recognition systems | - | 0
Towards Practical Robustness Analysis for DNNs based on PAC-Model Learning | Code | 0
Generating Black-Box Adversarial Examples in Sparse Domain | - | 0
PICA: A Pixel Correlation-based Attentional Black-box Adversarial Attack | - | 0
Attention-Guided Black-box Adversarial Attacks with Large-Scale Multiobjective Evolutionary Optimization | - | 0
Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions | - | 0
Heating up decision boundaries: isocapacitory saturation, adversarial scenarios and generalization bounds | - | 0
Untargeted, Targeted and Universal Adversarial Attacks and Defenses on Time Series | - | 0
Random Transformation of Image Brightness for Adversarial Attack | Code | 0
Exploring Adversarial Fake Images on Face Manifold | - | 0
Adversarial Attack Attribution: Discovering Attributable Signals in Adversarial ML Attacks | - | 0
Robust Text CAPTCHAs Using Adversarial Examples | - | 0
Local Competition and Stochasticity for Adversarial Robustness in Deep Learning | - | 0
Towards Robustness of Deep Neural Networks via Regularization | - | 0
Consistency-Sensitivity Guided Ensemble Black-Box Adversarial Attacks in Low-Dimensional Spaces | - | 0
Adversarial Attack on Deep Cross-Modal Hamming Retrieval | - | 0
Learn2Weight: Weights Transfer Defense against Similar-domain Adversarial Attacks | - | 0
Black-Box Adversarial Attacks on Graph Neural Networks as An Influence Maximization Problem | - | 0
Stabilized Medical Attacks | - | 0
Identifying Informative Latent Variables Learned by GIN via Mutual Information | - | 0
Practical Order Attack in Deep Ranking | - | 0
Meta-Attack: Class-Agnostic and Model-Agnostic Physical Adversarial Attack | - | 0
AT-GAN: An Adversarial Generative Model for Non-constrained Adversarial Examples | - | 0
Adversarial Example Detection Using Latent Neighborhood Graph | - | 0
An Adversarial Attack via Feature Contributive Regions | - | 0
Black-box Adversarial Attacks on Monocular Depth Estimation Using Evolutionary Multi-objective Optimization | - | 0
Adjust-free adversarial example generation in speech recognition using evolutionary multi-objective optimization under black-box condition | - | 0
Blurring Fools the Network -- Adversarial Attacks by Feature Peak Suppression and Gaussian Blurring | - | 0
Exploiting Vulnerability of Pooling in Convolutional Neural Networks by Strict Layer-Output Manipulation for Adversarial Attacks | - | 0
Variational Quantum Cloning: Improving Practicality for Quantum Cryptanalysis | - | 0
A Hierarchical Feature Constraint to Camouflage Medical Adversarial Attacks | Code | 0
Page 27 of 37

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | - | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | - | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | - | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | - | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | - | Unverified
6 | XU-Net | Robust Accuracy | 1 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | - | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | - | Unverified
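
The "Attack: PGD20" and "Attack: AutoAttack" rows report robust accuracy, i.e., accuracy measured on adversarially perturbed test inputs. As a hedged illustration, the sketch below shows how a PGD20 number is typically computed: 20 steps of projected gradient descent inside an L-infinity ball, followed by accuracy on the perturbed test set. The epsilon and step size are common CIFAR-style defaults, not values taken from the tables above.

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
        # Random start inside the epsilon ball, as in Madry et al. (2018).
        x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1)
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            with torch.no_grad():
                x_adv = x_adv + alpha * grad.sign()               # ascent step
                x_adv = torch.min(torch.max(x_adv, x - epsilon),  # project back
                                  x + epsilon).clamp(0, 1)        # into the ball
        return x_adv.detach()

    def robust_accuracy(model, loader):
        # Fraction of test inputs still classified correctly after PGD20.
        correct = total = 0
        for x, y in loader:
            x_adv = pgd_attack(model, x, y)
            with torch.no_grad():
                correct += (model(x_adv).argmax(1) == y).sum().item()
            total += y.numel()
        return 100.0 * correct / total

AutoAttack numbers are produced the same way but with a stronger, parameter-free ensemble of attacks, which is why they are generally lower than PGD20 numbers for the same model.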