SOTAVerified

Adversarial Attack

An adversarial attack is a technique for finding a perturbation that changes a machine learning model's prediction. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
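The definition above can be illustrated with the Fast Gradient Sign Method (FGSM), one classic way to construct such a perturbation (this is an illustrative example, not the method of the source paper). The sketch below attacks a toy logistic-regression classifier with hand-picked weights; `fgsm_perturb` takes one step of size `eps` in the direction of the sign of the loss gradient with respect to the input, which is enough to flip the model's prediction here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """One-step FGSM: move x by eps along the sign of the loss gradient."""
    p = sigmoid(w @ x + b)   # predicted probability of class 1
    grad_x = (p - y) * w     # d(cross-entropy loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

# Toy linear classifier (weights are illustrative, not from any paper)
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.3, -0.2])    # clean input with true label y = 1
y = 1.0

x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
pred_clean = sigmoid(w @ x + b) > 0.5        # True: correctly classified
pred_adv = sigmoid(w @ x_adv + b) > 0.5      # False: prediction flipped
print(pred_clean, pred_adv)
```

Note that the perturbation is bounded in the L-infinity norm by `eps`, which is why FGSM-style attacks can stay small per pixel while still changing the output.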

Papers

Showing 701-750 of 1808 papers

Title | Status | Hype
Controversial stimuli: pitting neural networks against each other as models of human recognition | Code | 0
AICAttack: Adversarial Image Captioning Attack with Attention-Based Optimization | Code | 0
From Flexibility to Manipulation: The Slippery Slope of XAI Evaluation | Code | 0
Accelerating Monte Carlo Bayesian Inference via Approximating Predictive Uncertainty over Simplex | Code | 0
Enhancing Adversarial Robustness with Conformal Prediction: A Framework for Guaranteed Model Reliability | Code | 0
A Hierarchical Feature Constraint to Camouflage Medical Adversarial Attacks | Code | 0
Forging and Removing Latent-Noise Diffusion Watermarks Using a Single Image | Code | 0
Foiling Explanations in Deep Neural Networks | Code | 0
Transferability Bound Theory: Exploring Relationship between Adversarial Transferability and Flatness | Code | 0
Enhancing Real-World Adversarial Patches through 3D Modeling of Complex Target Scenes | Code | 0
Improving Sequence Modeling Ability of Recurrent Neural Networks via Sememes | Code | 0
Hidden Activations Are Not Enough: A General Approach to Neural Network Predictions | Code | 0
Real-world adversarial attack on MTCNN face detection system | Code | 0
FMM-Attack: A Flow-based Multi-modal Adversarial Attack on Video-based LLMs | Code | 0
FenceBox: A Platform for Defeating Adversarial Examples with Data Augmentation Techniques | Code | 0
Resilience of Named Entity Recognition Models under Adversarial Attack | Code | 0
Feature Space Perturbations Yield More Transferable Adversarial Examples | Code | 0
Federated Zeroth-Order Optimization using Trajectory-Informed Surrogate Gradients | Code | 0
FireBERT: Hardening BERT-based classifiers against adversarial attack | Code | 0
Fast Inference of Removal-Based Node Influence | Code | 0
Adversarial Attacks on Data Attribution | Code | 0
Fashion-Guided Adversarial Attack on Person Segmentation | Code | 0
A Targeted Universal Attack on Graph Convolutional Network | Code | 0
Revisiting DeepFool: generalization and improvement | Code | 0
Physics-constrained Attack against Convolution-based Human Motion Prediction | Code | 0
Combining Generators of Adversarial Malware Examples to Increase Evasion Rate | Code | 0
ColorFool: Semantic Adversarial Colorization | Code | 0
Robust Fair Clustering: A Novel Fairness Attack and Defense Framework | Code | 0
A Theoretical View of Linear Backpropagation and Its Convergence | Code | 0
Fast Adversarial CNN-based Perturbation Attack of No-Reference Image Quality Metrics | Code | 0
FDA: Feature Disruptive Attack | Code | 0
A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees | Code | 0
Evaluating the Robustness of Geometry-Aware Instance-Reweighted Adversarial Training | Code | 0
A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks | Code | 0
Class-Conditioned Transformation for Enhanced Robust Image Classification | Code | 0
Exploiting vulnerabilities of deep neural networks for privacy protection | Code | 0
Exploring the Vulnerability of Natural Language Processing Models via Universal Adversarial Texts | Code | 0
Classification-by-Components: Probabilistic Modeling of Reasoning over a Set of Components | Code | 0
Explainable Graph Neural Networks Under Fire | Code | 0
EvoBA: An Evolution Strategy as a Strong Baseline for Black-Box Adversarial Attacks | Code | 0
Explaining Adversarial Robustness of Neural Networks from Clustering Effect Perspective | Code | 0
Exacerbating Algorithmic Bias through Fairness Attacks | Code | 0
Explainable and Safe Reinforcement Learning for Autonomous Air Mobility | Code | 0
Extending Adversarial Attacks to Produce Adversarial Class Probability Distributions | Code | 0
GenAttack: Practical Black-box Attacks with Gradient-Free Optimization | Code | 0
Excess Capacity and Backdoor Poisoning | Code | 0
Improved Network Robustness with Adversary Critic | Code | 0
CharBot: A Simple and Effective Method for Evading DGA Classifiers | | 0
A Framework for Adversarial Analysis of Decision Support Systems Prior to Deployment | | 0
Channel Effects on Surrogate Models of Adversarial Attacks against Wireless Signal Classifiers | | 0
Page 15 of 37

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | | Unverified
6 | XU-Net | Robust Accuracy | 1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | | Unverified