SOTAVerified

Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation of an input that changes a machine learning model's prediction. The perturbation can be very small and imperceptible to human eyes.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
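The definition above can be illustrated with a minimal sketch of the fast gradient sign method (FGSM), one of the simplest adversarial attacks, applied to a toy logistic-regression model. The weights, input, and epsilon budget below are invented for illustration and do not come from any paper on this page.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to class 1."""
    return sigmoid(np.dot(w, x) + b)

def fgsm_perturb(w, b, x, y, eps):
    """One FGSM step: move x in the direction that increases the loss.

    For logistic regression with cross-entropy loss, the gradient of the
    loss with respect to the input is (p - y) * w, so the attack adds
    eps * sign((p - y) * w) to the input.
    """
    p = predict(w, b, x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy model and input (illustrative values, not from any benchmark).
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.3, -0.1, 0.2])  # clean input, classified as class 1
y = 1.0                          # true label

x_adv = fgsm_perturb(w, b, x, y, eps=0.4)

print(predict(w, b, x) > 0.5)      # clean prediction: class 1 (True)
print(predict(w, b, x_adv) > 0.5)  # adversarial prediction flips (False)
```

Each coordinate of the input moves by at most eps, yet the predicted class flips; with high-dimensional inputs such as images, a much smaller eps per pixel suffices, which is why the perturbation can be imperceptible.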

Papers

Showing 851–900 of 1808 papers

Title | Status | Hype
Physical-World Optical Adversarial Attacks on 3D Face Recognition | - | 0
Recipe2Vec: Multi-modal Recipe Representation Learning with Graph Neural Networks | Code | 1
Adversarial Attack on Attackers: Post-Process to Mitigate Black-Box Score-Based Query Attacks | Code | 1
Phrase-level Textual Adversarial Attack with Label Preservation | Code | 1
Adversarial Body Shape Search for Legged Robots | - | 0
Transferable Physical Attack against Object Detection with Separable Attention | - | 0
Sparse Adversarial Attack in Multi-agent Reinforcement Learning | - | 0
3D-VFD: A Victim-free Detector against 3D Adversarial Point Clouds | - | 0
Learn2Weight: Parameter Adaptation against Similar-domain Adversarial Attacks | - | 0
MM-BD: Post-Training Detection of Backdoor Attacks with Arbitrary Backdoor Pattern Types Using a Maximum Margin Statistic | Code | 1
Btech thesis report on adversarial attack detection and purification of adverserially attacked images | - | 0
Holistic Approach to Measure Sample-level Adversarial Vulnerability and its Utility in Building Trustworthy Systems | - | 0
Rethinking Classifier and Adversarial Attack | - | 0
CE-based white-box adversarial attacks will not work using super-fitting | - | 0
BERTops: Studying BERT Representations under a Topological Lens | Code | 0
Deep-Attack over the Deep Reinforcement Learning | - | 0
Uncertainty Estimation of Transformer Predictions for Misclassification Detection | Code | 0
A Word is Worth A Thousand Dollars: Adversarial Attack on Tweets Fools Stock Predictions | Code | 1
Adversarial attacks on an optical neural network | - | 0
Adversarial Fine-tune with Dynamically Regulated Adversary | - | 0
An Adversarial Attack Analysis on Malicious Advertisement URL Detection Framework | Code | 0
Boosting Adversarial Transferability of MLP-Mixer | - | 0
Restricted Black-box Adversarial Attack Against DeepFake Face Swapping | - | 0
Self-recoverable Adversarial Examples: A New Effective Protection Mechanism in Social Networks | Code | 1
Mixed Strategies for Security Games with General Defending Requirements | - | 0
Improving Deep Learning Model Robustness Against Adversarial Attack by Increasing the Network Capacity | - | 0
Smart App Attack: Hacking Deep Learning Models in Android Apps | Code | 1
Enhancing the Transferability via Feature-Momentum Adversarial Attack | - | 0
How Sampling Impacts the Robustness of Stochastic Neural Networks | - | 0
A Mask-Based Adversarial Defense Scheme | - | 0
Testing robustness of predictions of trained classifiers against naturally occurring perturbations | - | 0
Metamorphic Testing-based Adversarial Attack to Fool Deepfake Detectors | - | 0
CgAT: Center-Guided Adversarial Training for Deep Hashing-Based Retrieval | Code | 1
UNBUS: Uncertainty-aware Deep Botnet Detection System in Presence of Perturbed Samples | - | 0
Residue-Based Natural Language Adversarial Attack Detection | Code | 0
Homomorphic Encryption and Federated Learning based Privacy-Preserving CNN Training: COVID-19 Detection Use-Case | - | 0
From Environmental Sound Representation to Robustness of 2D CNN Models Against Adversarial Attacks | - | 0
Anti-Adversarially Manipulated Attributions for Weakly Supervised Semantic Segmentation and Object Localization | - | 0
Hear No Evil: Towards Adversarial Robustness of Automatic Speech Recognition via Multi-Task Learning | - | 0
SecureSense: Defending Adversarial Attack for Secure Device-Free Human Activity Recognition | - | 0
Adversarial Neon Beam: A Light-based Physical Attack to DNNs | - | 0
Fusing Event-based and RGB camera for Robust Object Detection in Adverse Conditions | Code | 1
StyleFool: Fooling Video Classification Systems via Style Transfer | Code | 1
Exploring Frequency Adversarial Attacks for Face Forgery Detection | - | 0
Zero-Query Transfer Attacks on Context-Aware Object Detectors | - | 0
Boosting Black-Box Adversarial Attacks with Meta Learning | - | 0
Text Adversarial Purification as Defense against Adversarial Attacks | - | 0
A Survey of Robust Adversarial Training in Pattern Recognition: Fundamental, Theory, and Methodologies | - | 0
Enhancing Transferability of Adversarial Examples with Spatial Momentum | - | 0
A Perturbation-Constrained Adversarial Attack for Evaluating the Robustness of Optical Flow | Code | 1
Page 18 of 37

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | - | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | - | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | - | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | - | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | - | Unverified
6 | XU-Net | Robust Accuracy | 1 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | - | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | - | Unverified