SOTAVerified

Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation of an input that changes a machine learning model's prediction. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
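The idea can be illustrated with a minimal sketch of the fast gradient sign method (FGSM), one common attack. This is a hypothetical toy example on a hand-built linear classifier, not code from the cited paper: stepping each input coordinate by a small amount in the direction of the loss gradient's sign is enough to flip the prediction.

```python
import numpy as np

def predict(w, b, x):
    """Toy linear classifier: returns 1 if w.x + b > 0, else 0."""
    return int(w @ x + b > 0)

def fgsm_perturb(w, x, epsilon):
    """FGSM-style step for this toy model (assumed true label 1).

    For a linear model with logistic loss, the gradient of the loss
    with respect to x is proportional to -w, so adding
    epsilon * sign(-w) moves x in the loss-increasing direction
    while changing each coordinate by at most epsilon."""
    grad = -w
    return x + epsilon * np.sign(grad)

w = np.array([1.0, -2.0, 0.5])      # hypothetical model weights
b = 0.0
x = np.array([0.6, 0.1, 0.2])       # clean input, predicted class 1

x_adv = fgsm_perturb(w, x, epsilon=0.3)

print(predict(w, b, x))      # 1: clean prediction
print(predict(w, b, x_adv))  # 0: prediction flipped by the perturbation
```

Each coordinate of `x_adv` differs from `x` by at most `epsilon = 0.3`, yet the predicted class changes; attacks such as PGD and AutoAttack in the benchmark tables below refine this same gradient-driven idea with iteration and adaptive step sizes.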

Papers

Showing 376–400 of 1808 papers

Title | Status | Hype
Adversarial Robustness in Deep Learning: Attacks on Fragile Neurons | — | 0
Adversarial Attacks Against Deep Learning Systems for ICD-9 Code Assignment | — | 0
Adversarial Robustness for Machine Learning Cyber Defenses Using Log Data | — | 0
A Differentiable Language Model Adversarial Attack on Text Classifiers | — | 0
Mitigating Deep Learning Vulnerabilities from Adversarial Examples Attack in the Cybersecurity Domain | — | 0
Btech thesis report on adversarial attack detection and purification of adverserially attacked images | — | 0
Adversarial Robustness for Deep Learning-based Wildfire Prediction Models | — | 0
AdversariaL attacK sAfety aLIgnment(ALKALI): Safeguarding LLMs through GRACE: Geometric Representation-Aware Contrastive Enhancement- Introducing Adversarial Vulnerability Quality Index (AVQI) | — | 0
Adversarial Relighting Against Face Recognition | — | 0
A Deep Genetic Programming based Methodology for Art Media Classification Robust to Adversarial Perturbations | — | 0
Bridge the Gap Between CV and NLP! A Gradient-based Textual Adversarial Attack Framework | — | 0
Adversarial RAW: Image-Scaling Attack Against Imaging Pipeline | — | 0
Adversarial Attack on Skeleton-based Human Action Recognition | — | 0
Adversarial Profiles: Detecting Out-Distribution & Adversarial Samples in Pre-trained CNNs | — | 0
Adversarial Attack on Sentiment Classification | — | 0
A Black-Box Attack on Optical Character Recognition Systems | — | 0
Brightness-Restricted Adversarial Attack Patch | — | 0
BufferSearch: Generating Black-Box Adversarial Texts With Lower Queries | — | 0
Mitigating the Impact of Noisy Edges on Graph-Based Algorithms via Adversarial Robustness Evaluation | — | 0
Adversarial Patch Attacks on Monocular Depth Estimation Networks | — | 0
Breaking the False Sense of Security in Backdoor Defense through Re-Activation Attack | — | 0
Adversarial optimization leads to over-optimistic security-constrained dispatch, but sampling can help | — | 0
Adversarial Neon Beam: A Light-based Physical Attack to DNNs | — | 0
Adaptive Perturbation for Adversarial Attack | — | 0
Adversarial Music: Real World Audio Adversary Against Wake-word Detection System | — | 0
Page 16 of 73

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | — | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | — | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | — | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | — | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | — | Unverified
6 | XU-Net | Robust Accuracy | 1 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | — | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | — | Unverified