SOTAVerified

Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation that changes the prediction of a machine learning model. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
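As a minimal illustration (not taken from the source above), the fast gradient sign method (FGSM) perturbs an input in the sign direction of the loss gradient. The toy linear classifier, weights, and epsilon below are invented for this sketch; real attacks target deep networks, where the gradient comes from backpropagation:

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """FGSM step: move each coordinate by epsilon in the sign of the gradient."""
    return x + epsilon * np.sign(grad)

# Toy linear classifier (hypothetical weights): predict class 1 if w.x > 0.
w = np.array([1.0, -2.0])

def predict(x):
    return int(w @ x > 0)

x = np.array([0.5, 0.1])  # clean input; w.x = 0.3 > 0, so class 1
# For this linear model, the gradient of a loss that pushes the logit
# down (toward class 0) is proportional to -w.
grad = -w
x_adv = fgsm_perturb(x, grad, epsilon=0.2)

print(predict(x), predict(x_adv))  # prints: 1 0
```

An epsilon of 0.2 bounds the per-coordinate change, so the adversarial input stays close to the original while still flipping the predicted class.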

Papers

Showing 351-400 of 1808 papers

Title | Status | Hype
Adversary for Social Good: Leveraging Adversarial Attacks to Protect Personal Attribute Privacy | | 0
Adversarial Zoom Lens: A Novel Physical-World Attack to DNNs | | 0
Adversarial Attacks and Defenses on 3D Point Cloud Classification: A Survey | | 0
ADMM based Distributed State Observer Design under Sparse Sensor Attacks | | 0
CAP-GAN: Towards Adversarial Robustness with Cycle-consistent Attentional Purification | | 0
Adversarial training with perturbation generator networks | | 0
Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey | | 0
Adjust-free adversarial example generation in speech recognition using evolutionary multi-objective optimization under black-box condition | | 0
Adversarial Threat Vectors and Risk Mitigation for Retrieval-Augmented Generation Systems | | 0
Can the state of relevant neurons in a deep neural networks serve as indicators for detecting adversarial attacks? | | 0
Socialbots on Fire: Modeling Adversarial Behaviors of Socialbots via Multi-Agent Hierarchical Reinforcement Learning | | 0
Adversarial Semantic and Label Perturbation Attack for Pedestrian Attribute Recognition | | 0
A Brief Survey on Deep Learning Based Data Hiding | | 0
Adversarial Attacks and Defences for Skin Cancer Classification | | 0
Boosting Adversarial Transferability for Hyperspectral Image Classification Using 3D Structure-invariant Transformation and Intermediate Feature Distance | | 0
CAG: A Real-time Low-cost Enhanced-robustness High-transferability Content-aware Adversarial Attack Generator | | 0
Can We Really Trust Explanations? Evaluating the Stability of Feature Attribution Explanation Methods via Adversarial Attack | | 0
Capsule Neural Networks as Noise Stabilizer for Time Series Data | | 0
Chain Association-based Attacking and Shielding Natural Language Processing Systems | | 0
Adversarial Sampling for Fairness Testing in Deep Neural Network | | 0
Making Corgis Important for Honeycomb Classification: Adversarial Attacks on Concept-based Explainability Tools | | 0
Adversarial Attacks against Deep Saliency Models | | 0
A Branch and Bound Framework for Stronger Adversarial Attacks of ReLU Networks | | 0
BruSLeAttack: A Query-Efficient Score-Based Black-Box Sparse Adversarial Attack | | 0
Adversarial Robustness through Dynamic Ensemble Learning | | 0
Adversarial Robustness in Deep Learning: Attacks on Fragile Neurons | | 0
Adversarial Attacks Against Deep Learning Systems for ICD-9 Code Assignment | | 0
Adversarial Robustness for Machine Learning Cyber Defenses Using Log Data | | 0
A Differentiable Language Model Adversarial Attack on Text Classifiers | | 0
Mitigating Deep Learning Vulnerabilities from Adversarial Examples Attack in the Cybersecurity Domain | | 0
Btech thesis report on adversarial attack detection and purification of adverserially attacked images | | 0
Adversarial Robustness for Deep Learning-based Wildfire Prediction Models | | 0
AdversariaL attacK sAfety aLIgnment(ALKALI): Safeguarding LLMs through GRACE: Geometric Representation-Aware Contrastive Enhancement- Introducing Adversarial Vulnerability Quality Index (AVQI) | | 0
Adversarial Relighting Against Face Recognition | | 0
A Deep Genetic Programming based Methodology for Art Media Classification Robust to Adversarial Perturbations | | 0
Bridge the Gap Between CV and NLP! A Gradient-based Textual Adversarial Attack Framework | | 0
Adversarial RAW: Image-Scaling Attack Against Imaging Pipeline | | 0
Adversarial Attack on Skeleton-based Human Action Recognition | | 0
Adversarial Profiles: Detecting Out-Distribution & Adversarial Samples in Pre-trained CNNs | | 0
Adversarial Attack on Sentiment Classification | | 0
A Black-Box Attack on Optical Character Recognition Systems | | 0
Brightness-Restricted Adversarial Attack Patch | | 0
BufferSearch: Generating Black-Box Adversarial Texts With Lower Queries | | 0
Mitigating the Impact of Noisy Edges on Graph-Based Algorithms via Adversarial Robustness Evaluation | | 0
Adversarial Patch Attacks on Monocular Depth Estimation Networks | | 0
Breaking the False Sense of Security in Backdoor Defense through Re-Activation Attack | | 0
Adversarial optimization leads to over-optimistic security-constrained dispatch, but sampling can help | | 0
Adversarial Neon Beam: A Light-based Physical Attack to DNNs | | 0
Adaptive Perturbation for Adversarial Attack | | 0
Adversarial Music: Real World Audio Adversary Against Wake-word Detection System | | 0
Page 8 of 37

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | | Unverified
6 | XU-Net | Robust Accuracy | 1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | | Unverified