
Adversarial Attack

An adversarial attack is a technique for finding a perturbation of an input that changes a machine learning model's prediction. The perturbation can be very small, often imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
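
As one concrete, classic example (not drawn from the page above), the Fast Gradient Sign Method (FGSM) of Goodfellow et al. perturbs an input by epsilon times the sign of the loss gradient. A minimal PyTorch sketch, assuming a classifier `model`, a batch of inputs `x` with pixels in [0, 1], and integer labels `y`:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step FGSM: nudge x along the sign of the loss gradient.

    epsilon = 8/255 is a common perturbation budget for images in [0, 1];
    it is an illustrative default, not a value taken from this page.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp back
    # to the valid pixel range so the result is still a legal image.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Even this single gradient step is often enough to flip the prediction of an undefended model while leaving the image visually unchanged.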

Papers

Showing 351–375 of 1808 papers (page 15 of 73)

| Title | Status | Hype |
| --- | --- | --- |
| Adversarial Attack and Defense for Non-Parametric Two-Sample Tests | Code | 0 |
| Adversarial Self-Defense for Cycle-Consistent GANs | Code | 0 |
| AdjointDEIS: Efficient Gradients for Diffusion Models | Code | 0 |
| Adversarial Self-Attack Defense and Spatial-Temporal Relation Mining for Visible-Infrared Video Person Re-Identification | Code | 0 |
| FenceBox: A Platform for Defeating Adversarial Examples with Data Augmentation Techniques | Code | 0 |
| Adversarial sample generation and training using geometric masks for accurate and resilient license plate character recognition | Code | 0 |
| A Distributed Black-Box Adversarial Attack Based on Multi-Group Particle Swarm Optimization | Code | 0 |
| FireBERT: Hardening BERT-based classifiers against adversarial attack | Code | 0 |
| FDA: Feature Disruptive Attack | Code | 0 |
| Feature Space Perturbations Yield More Transferable Adversarial Examples | Code | 0 |
| Adversarial Robustness for Visual Grounding of Multimodal Large Language Models | Code | 0 |
| Federated Zeroth-Order Optimization using Trajectory-Informed Surrogate Gradients | Code | 0 |
| Transferability Bound Theory: Exploring Relationship between Adversarial Transferability and Flatness | Code | 0 |
| ADef: an Iterative Algorithm to Construct Adversarial Deformations | Code | 0 |
| Adversarial Robustness Analysis of Vision-Language Models in Medical Image Segmentation | Code | 0 |
| Fast Inference of Removal-Based Node Influence | Code | 0 |
| Fashion-Guided Adversarial Attack on Person Segmentation | Code | 0 |
| Adversarial Purification of Information Masking | Code | 0 |
| Extending Adversarial Attacks to Produce Adversarial Class Probability Distributions | Code | 0 |
| Fast Adversarial CNN-based Perturbation Attack of No-Reference Image Quality Metrics | Code | 0 |
| Adversarial Privacy-preserving Filter | Code | 0 |
| Adversarial-Playground: A Visualization Suite Showing How Adversarial Examples Fool Deep Learning | Code | 0 |
| Adversarial Attack on Network Embeddings via Supervised Network Poisoning | Code | 0 |
| Role of Spatial Context in Adversarial Robustness for Object Detection | Code | 0 |
| Exploiting vulnerabilities of deep neural networks for privacy protection | Code | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Xu et al. | Attack: PGD20 | 78.68 | | Unverified |
| 2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | | Unverified |
| 3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | | Unverified |
| 4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | | Unverified |
| 5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | | Unverified |
| 6 | XU-Net | Robust Accuracy | 1 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | | Unverified |
| 2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | | Unverified |
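
For readers unfamiliar with the metrics: "Attack: PGD20" reports accuracy on examples perturbed by 20 steps of Projected Gradient Descent (PGD) [madry2018], and AutoAttack is a standard parameter-free ensemble of attacks. A minimal sketch of a PGD-20 robust-accuracy evaluation in PyTorch, assuming a classifier `model` and a batch `(x, y)` with pixels in [0, 1]; the `epsilon` and `alpha` defaults are common L-infinity choices, not values taken from this leaderboard:

```python
import torch
import torch.nn.functional as F

def pgd20_robust_accuracy(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """Robust accuracy under 20-step L-infinity PGD (an illustrative sketch)."""
    # Random start inside the epsilon-ball around x.
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()                       # ascent step
            x_adv = torch.max(torch.min(x_adv, x + epsilon), x - epsilon)   # project to ball
            x_adv = x_adv.clamp(0.0, 1.0)                                   # valid pixels
    preds = model(x_adv.detach()).argmax(dim=1)
    return (preds == y).float().mean().item()
```

A model's PGD20 number is then this accuracy averaged over the test set; because PGD is a fixed first-order attack, AutoAttack typically reports lower (and more trustworthy) robust accuracy for the same model.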