
Adversarial Attack

An Adversarial Attack is a technique for finding an input perturbation that changes the prediction of a machine learning model. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
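The idea is easiest to see in code. Below is a minimal sketch of a one-step gradient-sign (FGSM-style) attack; the PyTorch classifier, labels, inputs in [0, 1], and the epsilon budget are illustrative assumptions, not details taken from this page.

    # Hypothetical sketch: one signed-gradient step on the input (FGSM-style).
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon=8 / 255):
        """Craft an adversarial example with a single gradient-sign step, keeping pixels in [0, 1]."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # Nudge each pixel by epsilon in the direction that increases the loss.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()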

Papers

Showing 1201–1225 of 1808 papers

Title | Status | Hype
When Side-Channel Attacks Break the Black-Box Property of Embedded Artificial Intelligence | - | 0
Membership Inference Attacks Against Temporally Correlated Data in Deep Reinforcement Learning | - | 0
White-Box Target Attack for EEG-Based BCI Regression Problems | - | 0
Wicked Oddities: Selectively Poisoning for Effective Clean-Label Backdoor Attacks | - | 0
XSub: Explanation-Driven Adversarial Attack against Blackbox Classifiers via Feature Substitution | - | 0
Yet another but more efficient black-box adversarial attack: tiling and evolution strategies | - | 0
You Don't Need Robust Machine Learning to Manage Adversarial Attack Risks | - | 0
Zero-Query Transfer Attacks on Context-Aware Object Detectors | - | 0
Zeroth-Order Stochastic Alternating Direction Method of Multipliers for Nonconvex Nonsmooth Optimization | - | 0
ZhichunRoad at SemEval-2022 Task 2: Adversarial Training and Contrastive Learning for Multiword Representations | - | 0
Feature-Filter: Detecting Adversarial Examples through Filtering off Recessive Features | - | 0
Feature Importance Guided Attack: A Model Agnostic Adversarial Attack | - | 0
Feature Unlearning for Pre-trained GANs and VAEs | - | 0
Feature Visualization within an Automated Design Assessment leveraging Explainable Artificial Intelligence Methods | - | 0
FedDef: Defense Against Gradient Leakage in Federated Learning-based Network Intrusion Detection Systems | - | 0
Few-Features Attack to Fool Machine Learning Models through Mask-Based GAN | - | 0
Learning Transferable Adversarial Robust Representations via Multi-view Consistency | - | 0
F&F Attack: Adversarial Attack against Multiple Object Trackers by Inducing False Negatives and False Positives | - | 0
FineFool: Fine Object Contour Attack via Attention | - | 0
FlippedRAG: Black-Box Opinion Manipulation Adversarial Attacks to Retrieval-Augmented Generation Models | - | 0
Fooling Adversarial Training with Inducing Noise | - | 0
Fooling Adversarial Training with Induction Noise | - | 0
FoolSDEdit: Deceptively Steering Your Edits Towards Targeted Attribute-aware Distribution | - | 0
Forbidden Facts: An Investigation of Competing Objectives in Llama-2 | - | 0
Fortify Machine Learning Production Systems: Detect and Classify Adversarial Attacks | - | 0
Page 49 of 73

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | - | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | - | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | - | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | - | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | - | Unverified
6 | XU-Net | Robust Accuracy | 1 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | - | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | - | Unverified
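For context, a "PGD20" figure like those in the tables above is typically produced by running 20 L-infinity projected-gradient-descent steps per test example and then measuring the share of adversarial examples the model still classifies correctly. The sketch below illustrates that procedure; the model, data loader, epsilon, and step size are illustrative assumptions, not values from this leaderboard.

    # Hypothetical sketch of a PGD20 robust-accuracy evaluation (not the benchmark's exact protocol).
    import torch
    import torch.nn.functional as F

    def pgd20(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
        """L-infinity PGD: random start, 20 signed-gradient steps, projection after each step."""
        x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0.0, 1.0)
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            # Project back into the epsilon ball around the clean input and the valid pixel range.
            x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0.0, 1.0)
        return x_adv.detach()

    def robust_accuracy(model, loader, device="cpu"):
        """Percentage of test examples still classified correctly under the PGD20 attack."""
        model.eval()
        correct, total = 0, 0
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            x_adv = pgd20(model, x, y)
            with torch.no_grad():
                correct += (model(x_adv).argmax(dim=1) == y).sum().item()
            total += y.size(0)
        return 100.0 * correct / total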