Adversarial Attack

An Adversarial Attack is a technique for finding an input perturbation that changes the prediction of a machine learning model. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
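To make the definition concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM), one of the simplest adversarial attacks. It is written in PyTorch; the function name, the epsilon default, and the assumption that inputs are scaled to [0, 1] are illustrative choices, not details taken from the source above.

```python
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step FGSM (hypothetical helper, for illustration only).

    model: any classifier returning logits
    x:     input batch, assumed scaled to [0, 1]
    y:     integer class labels
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step every input feature by epsilon in the direction that
    # increases the loss, then clamp back to the valid input range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

If the model's prediction on `x_adv` differs from its prediction on `x`, the attack has succeeded; robust-accuracy metrics like those in the benchmark tables further down this page count how often it does not.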

Papers

Showing 851–900 of 1808 papers

Title | Status | Hype
Generating Valid and Natural Adversarial Examples with Large Language Models | | 0
Generating Watermarked Adversarial Texts | | 0
A Computationally Efficient Method for Defending Adversarial Deep Learning Attacks | | 0
Generative Adversarial Patches for Physical Attacks on Cross-Modal Pedestrian Re-Identification | | 0
Improving the Robustness of Adversarial Attacks Using an Affine-Invariant Gradient Estimator | | 0
Defending Against Adversarial Examples by Regularized Deep Embedding | | 0
Global Robustness Verification Networks | | 0
Golden Ratio Search: A Low-Power Adversarial Attack for Deep Learning based Modulation Classification | | 0
Defending against Adversarial Attack towards Deep Neural Networks via Collaborative Multi-task Training | | 0
Improving Robustness of Malware Classifiers using Adversarial Strings Generated from Perturbed Latent Representations | | 0
Gradient-guided Unsupervised Text Style Transfer via Contrastive Learning | | 0
Evaluating the Robustness of the "Ensemble Everything Everywhere" Defense | | 0
Improving Transferable Targeted Adversarial Attack via Normalized Logit Calibration and Truncated Feature Mixing | | 0
GradMDM: Adversarial Attack on Dynamic Networks | | 0
Beyond Dropout: Robust Convolutional Neural Networks Based on Local Feature Masking | | 0
Graphfool: Targeted Label Adversarial Attack on Graph Embedding | | 0
Defending Against Adversarial Attack in ECG Classification with Adversarial Distillation Training | | 0
GraphMU: Repairing Robustness of Graph Neural Networks via Machine Unlearning | | 0
A Differentiable Language Model Adversarial Attack on Text Classifiers | | 0
Gray-box Adversarial Attack of Deep Reinforcement Learning-based Trading Agents | | 0
Adversarial Attacks to Machine Learning-Based Smart Healthcare Systems | | 0
Deep-RBF Networks Revisited: Robust Classification with Rejection | | 0
DeepPeep: Exploiting Design Ramifications to Decipher the Architecture of Compact DNNs | | 0
Adversarial Robustness in Deep Learning: Attacks on Fragile Neurons | | 0
An Adversarial Approach to Evaluating the Robustness of Event Identification Models | | 0
Improving Network Interpretability via Explanation Consistency Evaluation | | 0
Bias Field Poses a Threat to DNN-based X-Ray Recognition | | 0
Deep Learning for Robust and Explainable Models in Computer Vision | | 0
Deep Learning Defenses Against Adversarial Examples for Dynamic Risk Assessment | | 0
Harmonic Adversarial Attack Method | | 0
Harmonicity Plays a Critical Role in DNN Based Versus in Biologically-Inspired Monaural Speech Segregation Systems | | 0
Biologically inspired protection of deep networks from adversarial attacks | | 0
Adversarial Attacks on Traffic Sign Recognition: A Survey | | 0
Deep Learning-based Multi-Organ CT Segmentation with Adversarial Data Augmentation | | 0
Hear No Evil: Towards Adversarial Robustness of Automatic Speech Recognition via Multi-Task Learning | | 0
Heating up decision boundaries: isocapacitory saturation, adversarial scenarios and generalization bounds | | 0
An ADMM-Based Universal Framework for Adversarial Attacks on Deep Neural Networks | | 0
Adversarial Attack Against Images Classification based on Generative Adversarial Networks | | 0
Heterogeneous Multi-Player Multi-Armed Bandits Robust To Adversarial Attacks | | 0
Black-box Adversarial Attacks against Dense Retrieval Models: A Multi-view Contrastive Learning Method | | 0
HGAttack: Transferable Heterogeneous Graph Adversarial Attack | | 0
Improving Gradient-based Adversarial Training for Text Classification by Contrastive Learning and Auto-Encoder | | 0
Hiding Backdoors within Event Sequence Data via Poisoning Attacks | | 0
Improving Neural Network Robustness through Neighborhood Preserving Layers | | 0
DeepGauge: Multi-Granularity Testing Criteria for Deep Learning Systems | | 0
Boosting Adversarial Transferability for Hyperspectral Image Classification Using 3D Structure-invariant Transformation and Intermediate Feature Distance | | 0
Hijacking Vision-and-Language Navigation Agents with Adversarial Environmental Attacks | | 0
Holistic Approach to Measure Sample-level Adversarial Vulnerability and its Utility in Building Trustworthy Systems | | 0
Enhancing Transferability of Adversarial Examples with Spatial Momentum | | 0
An Actor-Critic Method for Simulation-Based Optimization | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | | Unverified
6 | XU-Net | Robust Accuracy | 1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | | Unverified
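For readers unfamiliar with the metrics above: "Attack: PGD20" conventionally reports robust accuracy under 20 steps of projected gradient descent (PGD) within a small L-infinity ball, and AutoAttack is a stronger, parameter-free ensemble of attacks; higher numbers mean a more robust model. Below is a minimal PGD-20 sketch under the same conventions as the FGSM example earlier on this page; the default epsilon and step size alpha are common CIFAR-10 settings, used here only as assumptions.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """Iterated FGSM projected onto the L-infinity ball of radius epsilon.

    Defaults (epsilon=8/255, alpha=2/255, steps=20) are common CIFAR-10
    settings, assumed here for illustration only.
    """
    x = x.clone().detach()
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the epsilon ball
        # around the clean input and into the valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```

Robust accuracy is then the fraction of test examples for which `model(pgd_attack(model, x, y)).argmax(-1)` still equals `y`.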