Adversarial Attack

An adversarial attack is a technique for finding a perturbation to an input that changes a machine learning model's prediction. The perturbation can be so small as to be imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
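
For a concrete sense of how such a perturbation is found, the sketch below implements the Fast Gradient Sign Method (FGSM), one of the simplest gradient-based attacks. This is a minimal PyTorch illustration, not the method of any particular paper listed below; the model, labels, epsilon budget, and [0, 1] pixel range are all assumptions of the sketch.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step FGSM: perturb x to increase the loss, bounded by epsilon."""
    # Track gradients on a detached copy of the input.
    x = x.clone().detach().requires_grad_(True)
    # The loss the attacker wants to increase.
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the sign of the input gradient, then clip back to the
    # (assumed) valid pixel range [0, 1].
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Stronger attacks such as PGD iterate this step and project the result back into the epsilon-ball around the original input.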

Papers

Showing 401–450 of 1808 papers

Title | Status | Hype
Hard-label based Small Query Black-box Adversarial Attack | Code | 0
Adversarial Attack on Graph Structured Data | Code | 0
Adversarial Manhole: Challenging Monocular Depth Estimation and Semantic Segmentation Models with Patch Attack | Code | 0
Geometry-Aware Generation of Adversarial Point Clouds | Code | 0
A White-Box False Positive Adversarial Attack Method on Contrastive Loss Based Offline Handwritten Signature Verification Models | Code | 0
Graph Adversarial Immunization for Certifiable Robustness | Code | 0
Generate synthetic samples from tabular data | Code | 0
A Uniform Framework for Anomaly Detection in Deep Neural Networks | Code | 0
Functional Adversarial Attacks | Code | 0
Generating Natural Adversarial Examples | Code | 0
From Flexibility to Manipulation: The Slippery Slope of XAI Evaluation | Code | 0
From Adversarial Arms Race to Model-centric Evaluation: Motivating a Unified Automatic Robustness Evaluation Framework | Code | 0
Adversarial Privacy-preserving Filter | Code | 0
Adversarial Attack on Network Embeddings via Supervised Network Poisoning | Code | 0
Rob-GAN: Generator, Discriminator, and Adversarial Attacker | Code | 0
Forging and Removing Latent-Noise Diffusion Watermarks Using a Single Image | Code | 0
Attention Masks Help Adversarial Attacks to Bypass Safety Detectors | Code | 0
GenAttack: Practical Black-box Attacks with Gradient-Free Optimization | Code | 0
Generating Natural Language Adversarial Examples through Probability Weighted Word Saliency | Code | 0
Is AmI (Attacks Meet Interpretability) Robust to Adversarial Examples? | Code | 0
FenceBox: A Platform for Defeating Adversarial Examples with Data Augmentation Techniques | Code | 0
Hierarchical Perceptual Noise Injection for Social Media Fingerprint Privacy Protection | Code | 0
Federated Zeroth-Order Optimization using Trajectory-Informed Surrogate Gradients | Code | 0
FireBERT: Hardening BERT-based classifiers against adversarial attack | Code | 0
Attack Transferability Characterization for Adversarially Robust Multi-label Classification | Code | 0
Feature Space Perturbations Yield More Transferable Adversarial Examples | Code | 0
Transferability Bound Theory: Exploring Relationship between Adversarial Transferability and Flatness | Code | 0
BERTops: Studying BERT Representations under a Topological Lens | Code | 0
FDA: Feature Disruptive Attack | Code | 0
Adversarial Attack Generation Empowered by Min-Max Optimization | Code | 0
Adaptive Image Transformations for Transfer-based Adversarial Attack | Code | 0
Fast Inference of Removal-Based Node Influence | Code | 0
Beyond Hard Samples: Robust and Effective Grammatical Error Correction with Cycle Self-Augmenting | Code | 0
Beyond Model Interpretability: On the Faithfulness and Adversarial Robustness of Contrastive Textual Explanations | Code | 0
Adversarial Laser Spot: Robust and Covert Physical-World Attack to DNNs | Code | 0
Fast Adversarial CNN-based Perturbation Attack of No-Reference Image Quality Metrics | Code | 0
Attack-agnostic Adversarial Detection on Medical Data Using Explainable Machine Learning | Code | 0
Fashion-Guided Adversarial Attack on Person Segmentation | Code | 0
FMM-Attack: A Flow-based Multi-modal Adversarial Attack on Video-based LLMs | Code | 0
A Distributed Black-Box Adversarial Attack Based on Multi-Group Particle Swarm Optimization | Code | 0
BitAbuse: A Dataset of Visually Perturbed Texts for Defending Phishing Attacks | Code | 0
Bitstream Collisions in Neural Image Compression via Adversarial Perturbations | Code | 0
Exploiting vulnerabilities of deep neural networks for privacy protection | Code | 0
A Theoretical View of Linear Backpropagation and Its Convergence | Code | 0
Adversarial Self-Attack Defense and Spatial-Temporal Relation Mining for Visible-Infrared Video Person Re-Identification | Code | 0
Visual explanation of black-box model: Similarity Difference and Uniqueness (SIDU) method | Code | 0
Explainable Graph Neural Networks Under Fire | Code | 0
Black-box Adversarial Attacks on Network-wide Multi-step Traffic State Prediction Models | Code | 0
Explaining Adversarial Robustness of Neural Networks from Clustering Effect Perspective | Code | 0
Excess Capacity and Backdoor Poisoning | Code | 0
Page 9 of 37

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | – | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | – | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | – | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | – | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | – | Unverified
6 | XU-Net | Robust Accuracy | 1 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | – | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | – | Unverified
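
For reference, the PGD20 metric above denotes robust accuracy under a 20-step Projected Gradient Descent attack (Madry et al., 2018), and AutoAttack is a parameter-free ensemble of attacks commonly used for robustness evaluation. Below is a minimal PGD sketch under the same assumptions as the FGSM example earlier (PyTorch, L-infinity budget, [0, 1] pixel range); the step size alpha and epsilon are illustrative, not the values used in these benchmark entries.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """Multi-step PGD; steps=20 corresponds to the "PGD20" metric above."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the epsilon-ball
        # around x and the (assumed) valid pixel range [0, 1].
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.clamp(x_adv, x - epsilon, x + epsilon).clamp(0.0, 1.0)
    return x_adv.detach()
```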