
Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation that changes a machine learning model's prediction. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
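
To make the definition concrete, here is a minimal sketch of the classic Fast Gradient Sign Method (FGSM), which perturbs an input by a small, sign-bounded step in the direction that increases the model's loss. The model, input, and label below are illustrative placeholders, not taken from any paper listed on this page.

```python
# Minimal FGSM sketch, assuming a PyTorch classifier.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return x perturbed by epsilon * sign(grad of loss w.r.t. x)."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # One signed gradient step, then clamp back to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

# Toy usage with a randomly initialized model (placeholder, not a benchmark model).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)   # an input "image" in [0, 1]
y = torch.tensor([3])          # its (assumed) true label
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())  # max perturbation stays within epsilon
```

The sign of the gradient, rather than the gradient itself, is used so that every pixel moves by at most epsilon, which is what keeps the perturbation imperceptibly small under the L-infinity norm.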

Papers

Showing 351–400 of 1808 papers

Title | Status | Hype
Graph Adversarial Immunization for Certifiable Robustness | Code | 0
Adversarial Self-Defense for Cycle-Consistent GANs | Code | 0
AdjointDEIS: Efficient Gradients for Diffusion Models | Code | 0
Graph-based methods coupled with specific distributional distances for adversarial attack detection | Code | 0
Grey-box Adversarial Attack And Defence For Sentiment Classification | Code | 0
InstructTA: Instruction-Tuned Targeted Attack for Large Vision-Language Models | Code | 0
Adversarial Self-Attack Defense and Spatial-Temporal Relation Mining for Visible-Infrared Video Person Re-Identification | Code | 0
Generating Natural Language Adversarial Examples through Probability Weighted Word Saliency | Code | 0
Adversarial sample generation and training using geometric masks for accurate and resilient license plate character recognition | Code | 0
A Distributed Black-Box Adversarial Attack Based on Multi-Group Particle Swarm Optimization | Code | 0
Generating Textual Adversaries with Minimal Perturbation | Code | 0
Generate synthetic samples from tabular data | Code | 0
Adversarial Robustness for Visual Grounding of Multimodal Large Language Models | Code | 0
Generating Natural Adversarial Examples | Code | 0
Generating Unrestricted 3D Adversarial Point Clouds | Code | 0
ADef: an Iterative Algorithm to Construct Adversarial Deformations | Code | 0
Adversarial Robustness Analysis of Vision-Language Models in Medical Image Segmentation | Code | 0
From Flexibility to Manipulation: The Slippery Slope of XAI Evaluation | Code | 0
From Adversarial Arms Race to Model-centric Evaluation: Motivating a Unified Automatic Robustness Evaluation Framework | Code | 0
Adversarial Purification of Information Masking | Code | 0
Rob-GAN: Generator, Discriminator, and Adversarial Attacker | Code | 0
GenAttack: Practical Black-box Attacks with Gradient-Free Optimization | Code | 0
Adversarial Privacy-preserving Filter | Code | 0
Adversarial-Playground: A Visualization Suite Showing How Adversarial Examples Fool Deep Learning | Code | 0
Adversarial Attack on Network Embeddings via Supervised Network Poisoning | Code | 0
Role of Spatial Context in Adversarial Robustness for Object Detection | Code | 0
Foiling Explanations in Deep Neural Networks | Code | 0
Adversarial Attack on Large Language Models using Exponentiated Gradient Descent | Code | 0
A black-box adversarial attack for poisoning clustering | Code | 0
Adversarial Metric Attack and Defense for Person Re-identification | Code | 0
FMM-Attack: A Flow-based Multi-modal Adversarial Attack on Video-based LLMs | Code | 0
Forging and Removing Latent-Noise Diffusion Watermarks Using a Single Image | Code | 0
Federated Zeroth-Order Optimization using Trajectory-Informed Surrogate Gradients | Code | 0
Feature Space Perturbations Yield More Transferable Adversarial Examples | Code | 0
FenceBox: A Platform for Defeating Adversarial Examples with Data Augmentation Techniques | Code | 0
A Uniform Framework for Anomaly Detection in Deep Neural Networks | Code | 0
Functional Adversarial Attacks | Code | 0
FireBERT: Hardening BERT-based classifiers against adversarial attack | Code | 0
Fast Inference of Removal-Based Node Influence | Code | 0
Fast Adversarial CNN-based Perturbation Attack of No-Reference Image Quality Metrics | Code | 0
Extending Adversarial Attacks to Produce Adversarial Class Probability Distributions | Code | 0
Exploring the Vulnerability of Natural Language Processing Models via Universal Adversarial Texts | Code | 0
Fashion-Guided Adversarial Attack on Person Segmentation | Code | 0
Attention Masks Help Adversarial Attacks to Bypass Safety Detectors | Code | 0
Adversarial Manhole: Challenging Monocular Depth Estimation and Semantic Segmentation Models with Patch Attack | Code | 0
Adversarial Attack on Graph Structured Data | Code | 0
FDA: Feature Disruptive Attack | Code | 0
Transferability Bound Theory: Exploring Relationship between Adversarial Transferability and Flatness | Code | 0
Geometry-Aware Generation of Adversarial Point Clouds | Code | 0
Exact Adversarial Attack to Image Captioning via Structured Output Learning with Latent Variables | Code | 0
Page 8 of 37

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | – | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | – | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | – | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | – | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | – | Unverified
6 | XU-Net | Robust Accuracy | 1 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | – | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | – | Unverified
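
For context on the metrics above: "Attack: PGD20" and "Attack: AutoAttack" report robust accuracy, i.e. the share of test inputs a model still classifies correctly while under a 20-step Projected Gradient Descent attack or the AutoAttack ensemble, respectively. Below is a minimal sketch of how robust accuracy under PGD-20 is typically computed; the epsilon and step-size values are common CIFAR-style defaults, and the model and loader names are placeholders, not figures or settings taken from this leaderboard.

```python
# Hedged sketch: robust accuracy under a 20-step L-infinity PGD attack.
import torch
import torch.nn as nn

def pgd_attack(model, x, y, epsilon=8/255, alpha=2/255, steps=20):
    """PGD: repeated signed gradient steps, projected back into the
    epsilon-ball around the clean input after every step."""
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            # Project into [x - eps, x + eps] and the valid pixel range.
            x_adv = torch.max(torch.min(x_adv, x + epsilon), x - epsilon)
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

def robust_accuracy(model, loader):
    """Percentage of test examples still classified correctly after PGD-20."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x_adv = pgd_attack(model, x, y)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total
```

A higher number is better here: a claimed value of 78.68 under PGD20 means the model keeps its correct prediction on roughly 79% of test inputs despite the attack, which is why stronger attacks such as AutoAttack generally yield lower numbers for the same model.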