
Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation that changes a machine learning model's prediction. The perturbation can be so small as to be imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
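To make the definition concrete, the canonical example is the Fast Gradient Sign Method (FGSM) of Goodfellow et al. (2015), which builds such a perturbation in a single gradient step. Below is a minimal sketch, assuming a PyTorch classifier `model` and a labeled input batch `(x, y)` with pixels in [0, 1]; these names are placeholders, not anything from the papers listed on this page.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Perturb x by epsilon * sign(grad_x loss): a one-step white-box
    attack whose perturbation is typically too small to see."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that most increases the loss, then clamp
    # back to a valid pixel range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```

A budget such as epsilon = 8/255 (the common CIFAR-10 setting) is usually invisible to a human yet often enough to flip an undefended model's prediction.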

Papers

Showing 1501–1550 of 1808 papers

Title | Status | Hype
Bio-Inspired Adversarial Attack Against Deep Neural Networks | – | 0
Biologically inspired protection of deep networks from adversarial attacks | – | 0
SelfDefend: LLMs Can Defend Themselves against Jailbreaking in a Practical Manner | – | 0
SELF-KNOWLEDGE DISTILLATION ADVERSARIAL ATTACK | – | 0
Black-Box Adversarial Attack on Vision Language Models for Autonomous Driving | – | 0
Black-box Adversarial Attacks against Dense Retrieval Models: A Multi-view Contrastive Learning Method | – | 0
Black-box Adversarial Attacks on Commercial Speech Platforms with Minimal Information | – | 0
Black-Box Adversarial Attacks on Graph Neural Networks as An Influence Maximization Problem | – | 0
Black-box Adversarial Attacks on Monocular Depth Estimation Using Evolutionary Multi-objective Optimization | – | 0
Adversarial Attacks in Multimodal Systems: A Practitioner's Survey | – | 0
Self-Supervised Adversarial Example Detection by Disentangled Representation | – | 0
Attention, Please! Adversarial Defense via Activation Rectification and Preservation | – | 0
Black-box Adversarial ML Attack on Modulation Classification | – | 0
Black-Box Decision based Adversarial Attack with Symmetric α-stable Distribution | – | 0
Black-Box Sparse Adversarial Attack via Multi-Objective Optimisation | – | 0
Black-box Targeted Adversarial Attack on Segment Anything (SAM) | – | 0
blessing in disguise: Designing Robust Turing Test by Employing Algorithm Unrobustness | – | 0
Blind Pre-Processing: A Robust Defense Method Against Adversarial Examples | – | 0
Enhancing Transformation-based Defenses using a Distribution Classifier | – | 0
Blurring Fools the Network -- Adversarial Attacks by Feature Peak Suppression and Gaussian Blurring | – | 0
Self-Supervised Contrastive Learning with Adversarial Perturbations for Robust Pretrained Language Models | – | 0
Self-Supervised Representation Learning for Adversarial Attack Detection | – | 0
Boosting Adversarial Transferability of MLP-Mixer | – | 0
Boosting Adversarial Transferability through Enhanced Momentum | – | 0
Boosting Adversarial Transferability using Dynamic Cues | – | 0
Semantic Adversarial Attacks on Face Recognition through Significant Attributes | – | 0
Attention-Guided Black-box Adversarial Attacks with Large-Scale Multiobjective Evolutionary Optimization | – | 0
Boosting Adversarial Transferability via High-Frequency Augmentation and Hierarchical-Gradient Fusion | – | 0
Boosting Black-Box Adversarial Attacks with Meta Learning | – | 0
Adversarial Attacks for Optical Flow-Based Action Recognition Classifiers | – | 0
Boosting Decision-Based Black-Box Adversarial Attack with Gradient Priors | – | 0
Attack Type Agnostic Perceptual Enhancement of Adversarial Images | – | 0
Attack Tree Analysis for Adversarial Evasion Attacks | – | 0
Attack to Fool and Explain Deep Networks | – | 0
Attacks on State-of-the-Art Face Recognition using Attentional Adversarial Attack Generative Network | – | 0
Adversarial Attacks for Multi-view Deep Models | – | 0
Adversarial Attacks and Mitigation for Anomaly Detectors of Cyber-Physical Systems | – | 0
Semantic Autoencoder and Its Potential Usage for Adversarial Attack | – | 0
Breaking the False Sense of Security in Backdoor Defense through Re-Activation Attack | – | 0
Bregman Linearized Augmented Lagrangian Method for Nonconvex Constrained Stochastic Zeroth-order Optimization | – | 0
Attack-SAM: Towards Attacking Segment Anything Model With Adversarial Examples | – | 0
Bridge the Gap Between CV and NLP! A Gradient-based Textual Adversarial Attack Framework | – | 0
UNICAD: A Unified Approach for Attack Detection, Noise Reduction and Novel Class Identification | – | 0
Brightness-Restricted Adversarial Attack Patch | – | 0
Making Corgis Important for Honeycomb Classification: Adversarial Attacks on Concept-based Explainability Tools | – | 0
BruSLeAttack: A Query-Efficient Score-Based Black-Box Sparse Adversarial Attack | – | 0
Btech thesis report on adversarial attack detection and purification of adverserially attacked images | – | 0
BufferSearch: Generating Black-Box Adversarial Texts With Lower Queries | – | 0
Adversarial Attacks and Dimensionality in Text Classifiers | – | 0
CAAD 2018: Iterative Ensemble Adversarial Attack | – | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | – | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | – | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | – | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | – | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | – | Unverified
6 | XU-Net | Robust Accuracy | 1 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | – | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | – | Unverified
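For context on the metrics above: "Attack: PGD20" conventionally reports robust accuracy (presumably in percent here) under 20 steps of projected gradient descent, the attack of Madry et al. (2018) cited in the table as [madry2018], while "Attack: AutoAttack" uses Croce and Hein's parameter-free ensemble of attacks. A hedged sketch of 20-step PGD follows, again assuming placeholder names `model`, `x`, `y` and pixels in [0, 1]:

```python
import torch
import torch.nn.functional as F

def pgd20(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
    # Random start inside the L-infinity ball, as in Madry et al. (2018).
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()  # gradient-ascent step
            # Project back into the epsilon-ball around x, then into range.
            x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```

Robust accuracy is then the fraction of test inputs the model still classifies correctly after this attack; lower numbers mean the attack succeeds more often.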