
Adversarial Attack

An adversarial attack is a technique for finding a perturbation that changes the prediction of a machine learning model. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
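
To make the definition concrete, the sketch below implements the fast gradient sign method (FGSM), one of the simplest attacks of this kind: a single signed-gradient step that increases the model's loss, bounded by a budget epsilon small enough to keep the change imperceptible. This is an illustrative example, not a method taken from this page; the PyTorch classifier `model`, the image batch `x` in [0, 1], the labels `y`, and the default `epsilon` are all assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One signed-gradient step of size epsilon (FGSM sketch).

    Assumes `model` is a PyTorch classifier and `x` is an image batch
    in [0, 1] with integer class labels `y`; all names are illustrative.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)  # the loss the attacker increases
    loss.backward()
    # Shift every pixel by +/- epsilon in the direction that raises the
    # loss, then clamp back to the valid image range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```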

Papers

Showing 801–850 of 1808 papers

Title | Status | Hype
Adversarial Attacks and Defenses on 3D Point Cloud Classification: A Survey | | 0
ADMM based Distributed State Observer Design under Sparse Sensor Attacks | | 0
Absum: Simple Regularization Method for Reducing Structural Sensitivity of Convolutional Neural Networks | | 0
Boosting Adversarial Transferability through Enhanced Momentum | | 0
Boosting Adversarial Transferability of MLP-Mixer | | 0
Adversarial training with perturbation generator networks | | 0
Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey | | 0
Blurring Fools the Network -- Adversarial Attacks by Feature Peak Suppression and Gaussian Blurring | | 0
Blind Pre-Processing: A Robust Defense Method Against Adversarial Examples | | 0
Blessing in Disguise: Designing Robust Turing Test by Employing Algorithm Unrobustness | | 0
Adversarial Threat Vectors and Risk Mitigation for Retrieval-Augmented Generation Systems | | 0
Adjust-free adversarial example generation in speech recognition using evolutionary multi-objective optimization under black-box condition | | 0
Black-box Targeted Adversarial Attack on Segment Anything (SAM) | | 0
Black-Box Sparse Adversarial Attack via Multi-Objective Optimisation | | 0
Socialbots on Fire: Modeling Adversarial Behaviors of Socialbots via Multi-Agent Hierarchical Reinforcement Learning | | 0
Black-Box Decision based Adversarial Attack with Symmetric α-stable Distribution | | 0
Black-box Adversarial ML Attack on Modulation Classification | | 0
Adversarial Semantic and Label Perturbation Attack for Pedestrian Attribute Recognition | | 0
Black-box Adversarial Attacks on Monocular Depth Estimation Using Evolutionary Multi-objective Optimization | | 0
Adversarial Attacks and Defences for Skin Cancer Classification | | 0
A Brief Survey on Deep Learning Based Data Hiding | | 0
Boosting Adversarial Transferability for Hyperspectral Image Classification Using 3D Structure-invariant Transformation and Intermediate Feature Distance | | 0
Adversarial Attack for Asynchronous Event-based Data | | 0
Black-Box Adversarial Attacks on Graph Neural Networks as An Influence Maximization Problem | | 0
Black-box Adversarial Attacks on Commercial Speech Platforms with Minimal Information | | 0
Black-box Adversarial Attacks against Dense Retrieval Models: A Multi-view Contrastive Learning Method | | 0
Black-Box Adversarial Attack on Vision Language Models for Autonomous Driving | | 0
Adversarial Sampling for Fairness Testing in Deep Neural Network | | 0
Biologically inspired protection of deep networks from adversarial attacks | | 0
Bio-Inspired Adversarial Attack Against Deep Neural Networks | | 0
Adversarial Attacks against Deep Saliency Models | | 0
Bias Field Poses a Threat to DNN-based X-Ray Recognition | | 0
BiasAdv: Bias-Adversarial Augmentation for Model Debiasing | | 0
Adversarial Robustness through Dynamic Ensemble Learning | | 0
Beyond Score Changes: Adversarial Attack on No-Reference Image Quality Assessment from Two Perspectives | | 0
Adversarial Robustness in Deep Learning: Attacks on Fragile Neurons | | 0
Adversarial Attacks Against Deep Learning Systems for ICD-9 Code Assignment | | 0
Beyond Dropout: Robust Convolutional Neural Networks Based on Local Feature Masking | | 0
Beyond Classification: Evaluating Diffusion Denoised Smoothing for Security-Utility Trade off | | 0
Adversarial Robustness for Machine Learning Cyber Defenses Using Log Data | | 0
A Differentiable Language Model Adversarial Attack on Text Classifiers | | 0
A Branch and Bound Framework for Stronger Adversarial Attacks of ReLU Networks | | 0
Best Practices for Noise-Based Augmentation to Improve the Performance of Deployable Speech-Based Emotion Recognition Systems | | 0
Adversarial Robustness for Deep Learning-based Wildfire Prediction Models | | 0
Benign Adversarial Attack: Tricking Models for Goodness | | 0
Generating Semantically Valid Adversarial Questions for TableQA | | 0
Benchmarking the Physical-world Adversarial Robustness of Vehicle Detection | | 0
Adversarial Relighting Against Face Recognition | | 0
AdversariaL attacK sAfety aLIgnment (ALKALI): Safeguarding LLMs through GRACE: Geometric Representation-Aware Contrastive Enhancement - Introducing Adversarial Vulnerability Quality Index (AVQI) | | 0
Generating Semantic Adversarial Examples via Feature Manipulation | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | | Unverified
6 | XU-Net | Robust Accuracy | 1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | | Unverified
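
In the tables above, "Attack: PGD20" and "Attack: AutoAttack" name the evaluation attack: PGD20 is 20 iterations of projected gradient descent inside a small perturbation ball (Madry et al., 2018, cited as [madry2018]), and AutoAttack is a parameter-free ensemble of attacks; the Claimed column appears to report accuracy under the listed attack. Below is a minimal PGD sketch for context; the classifier `model`, the L-infinity budget `epsilon`, the step size `alpha`, and the [0, 1] input range are illustrative assumptions, not taken from the benchmark entries.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """Projected gradient descent in an L-inf ball; steps=20 gives PGD20.

    The defaults are common CIFAR-10 L-inf settings, used here purely
    for illustration.
    """
    x = x.detach()
    # Random start inside the epsilon-ball, as in Madry et al. (2018).
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascent step, then project back onto the ball and the image range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

def robust_accuracy(model, x, y, **attack_kwargs):
    """Fraction of inputs still classified correctly after the attack."""
    preds = model(pgd_attack(model, x, y, **attack_kwargs)).argmax(dim=1)
    return (preds == y).float().mean().item()
```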