SOTAVerified

Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation of an input that changes the prediction of a machine learning model. The perturbation can be very small, even imperceptible to the human eye.
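As a concrete illustration (not taken from the source above), here is a minimal sketch of the classic Fast Gradient Sign Method, one well-known way to craft such a perturbation. The toy logistic-regression "model", its weights, and the eps budget are all hypothetical:

```python
import numpy as np

def fgsm_perturb(w, b, x, y, eps=0.1):
    """FGSM sketch for a toy binary logistic model p = sigmoid(w.x + b).

    y is the true label in {0, 1}. The input x is nudged by eps in the
    sign of the loss gradient, which tends to increase the cross-entropy
    loss and can flip the model's prediction.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad_x = (p - y) * w                      # d(cross-entropy)/dx
    # One signed step of size eps, clipped back to the valid input range.
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)
```

With a hypothetical weight vector such as `w = [4, -4]`, an input like `x = [0.6, 0.5]` (correctly classified as class 1) can be pushed across the decision boundary by a perturbation of at most eps per coordinate.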

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks

Papers

Showing 551–600 of 1808 papers

Title | Status | Hype
GenAttack: Practical Black-box Attacks with Gradient-Free Optimization | Code | 0
Rob-GAN: Generator, Discriminator, and Adversarial Attacker | Code | 0
From Adversarial Arms Race to Model-centric Evaluation: Motivating a Unified Automatic Robustness Evaluation Framework | Code | 0
From Flexibility to Manipulation: The Slippery Slope of XAI Evaluation | Code | 0
Defending against Whitebox Adversarial Attacks via Randomized Discretization | Code | 0
An adversarial attack approach for eXplainable AI evaluation on deepfake detection models | Code | 0
An Adversarial Attack Analysis on Malicious Advertisement URL Detection Framework | Code | 0
Generate synthetic samples from tabular data | Code | 0
An Evasion Attack against Stacked Capsule Autoencoder | Code | 0
Foiling Explanations in Deep Neural Networks | Code | 0
An Adversarial Approach for Explaining the Predictions of Deep Neural Networks | Code | 0
FireBERT: Hardening BERT-based classifiers against adversarial attack | Code | 0
FMM-Attack: A Flow-based Multi-modal Adversarial Attack on Video-based LLMs | Code | 0
Forging and Removing Latent-Noise Diffusion Watermarks Using a Single Image | Code | 0
DeepFool: a simple and accurate method to fool deep neural networks | Code | 0
Adversarial Attacks on Spiking Convolutional Neural Networks for Event-based Vision | Code | 0
Feature Space Perturbations Yield More Transferable Adversarial Examples | Code | 0
A Multi-task Adversarial Attack Against Face Authentication | Code | 0
Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses | Code | 0
Decorrelative Network Architecture for Robust Electrocardiogram Classification | Code | 0
FDA: Feature Disruptive Attack | Code | 0
Federated Zeroth-Order Optimization using Trajectory-Informed Surrogate Gradients | Code | 0
Generating Natural Adversarial Examples | Code | 0
Adversarial Attacks on Parts of Speech: An Empirical Study in Text-to-Image Generation | Code | 0
Decision-based Universal Adversarial Attack | Code | 0
Fashion-Guided Adversarial Attack on Person Segmentation | Code | 0
Decision-BADGE: Decision-based Adversarial Batch Attack with Directional Gradient Estimation | Code | 0
Extending Adversarial Attacks to Produce Adversarial Class Probability Distributions | Code | 0
Deep-Dup: An Adversarial Weight Duplication Attack Framework to Crush Deep Neural Network in Multi-Tenant FPGA | Code | 0
Amoeba: Circumventing ML-supported Network Censorship via Adversarial Reinforcement Learning | Code | 0
DD-RobustBench: An Adversarial Robustness Benchmark for Dataset Distillation | Code | 0
Exploiting vulnerabilities of deep neural networks for privacy protection | Code | 0
Deep generative models as an adversarial attack strategy for tabular machine learning | Code | 0
FenceBox: A Platform for Defeating Adversarial Examples with Data Augmentation Techniques | Code | 0
Exploring the Vulnerability of Natural Language Processing Models via Universal Adversarial Texts | Code | 0
Transferability Bound Theory: Exploring Relationship between Adversarial Transferability and Flatness | Code | 0
Fast Adversarial CNN-based Perturbation Attack of No-Reference Image Quality Metrics | Code | 0
Data-Driven Subsampling in the Presence of an Adversarial Actor | Code | 0
Data-Driven Falsification of Cyber-Physical Systems | Code | 0
Explainable and Safe Reinforcement Learning for Autonomous Air Mobility | Code | 0
Excess Capacity and Backdoor Poisoning | Code | 0
Adversarial Attacks on Large Language Models Using Regularized Relaxation | Code | 0
Expanding Scope: Adapting English Adversarial Attacks to Chinese | Code | 0
Explainable Graph Neural Networks Under Fire | Code | 0
Adversarial Attack via Dual-Stage Network Erosion | Code | 0
Defending Pre-trained Language Models from Adversarial Word Substitutions Without Performance Sacrifice | Code | 0
Defending Substitution-Based Profile Pollution Attacks on Sequential Recommenders | Code | 0
DAmageNet: A Universal Adversarial Dataset | Code | 0
EvoBA: An Evolution Strategy as a Strong Baseline for Black-Box Adversarial Attacks | Code | 0
Exacerbating Algorithmic Bias through Fairness Attacks | Code | 0
Page 12 of 37

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | | Unverified
6 | XU-Net | Robust Accuracy | 1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | | Unverified
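For context on metrics like "Attack: PGD20" above: such entries typically report robust accuracy, i.e. the fraction of test inputs a model still classifies correctly after a 20-step Projected Gradient Descent attack. A minimal sketch on a toy binary logistic model (the model, weights, and step sizes here are all hypothetical, not from this leaderboard):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_perturb(w, b, x, y, eps=0.1, alpha=0.02, steps=20):
    """PGD-20 sketch for a toy logistic model p = sigmoid(w.x + b):
    repeated gradient-sign steps, each projected back into an
    eps-ball (L-infinity) around the original input x."""
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_adv + b)
        grad_x = (p - y) * w                     # d(cross-entropy)/dx
        x_adv = x_adv + alpha * np.sign(grad_x)  # signed ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps) # project to eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)         # stay in valid input range
    return x_adv

def robust_accuracy(w, b, X, Y, **attack_kwargs):
    """Fraction of examples still classified correctly after the attack."""
    correct = 0
    for x, y in zip(X, Y):
        x_adv = pgd_perturb(w, b, x, y, **attack_kwargs)
        pred = int(w @ x_adv + b > 0)
        correct += int(pred == y)
    return correct / len(X)
```

A model can have perfect clean accuracy yet much lower robust accuracy: inputs near the decision boundary are flipped by the attack, while inputs farther than eps from the boundary survive it.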