
Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation that changes a machine learning model's prediction. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
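
The basic recipe behind most gradient-based attacks fits in a few lines. Below is a minimal, illustrative sketch of the Fast Gradient Sign Method (FGSM) in PyTorch; the model, labels, and the epsilon budget of 8/255 are placeholder assumptions for the example, not the method of any particular paper listed on this page.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Return x_adv with ||x_adv - x||_inf <= epsilon that tries to flip the prediction."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    # One step in the sign of the gradient increases the loss; clamping keeps a valid image.
    return (x_adv + epsilon * grad.sign()).clamp(0.0, 1.0).detach()
```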

Papers

Showing 1326-1350 of 1808 papers

Title | Status | Hype
Real-World Adversarial Examples involving Makeup Application |  | 0
On the explainable properties of 1-Lipschitz Neural Networks: An Optimal Transport Perspective |  | 0
Reasoning Chain Based Adversarial Attack for Multi-hop Question Answering |  | 0
Text Adversarial Purification as Defense against Adversarial Attacks |  | 0
Recent Advances in Reliable Deep Graph Learning: Inherent Noise, Distribution Shift, and Adversarial Attack |  | 0
Towards Safer Generative Language Models: A Survey on Safety Risks, Evaluations, and Improvements |  | 0
Adversarial Attack with Raindrops |  | 0
RecUP-FL: Reconciling Utility and Privacy in Federated Learning via User-configurable Privacy Defense |  | 0
Redefining Machine Unlearning: A Conformal Prediction-Motivated Approach |  | 0
Adaptive Local Adversarial Attacks on 3D Point Clouds for Augmented Reality |  | 0
Refining Adaptive Zeroth-Order Optimization at Ease |  | 0
Region-Wise Attack: On Efficient Generation of Robust Physical Adversarial Examples |  | 0
Reinforce Attack: Adversarial Attack against BERT with Reinforcement Learning |  | 0
Reinforcement Learning Based Sparse Black-box Adversarial Attack on Video Recognition Models |  | 0
ReLATE: Resilient Learner Selection for Multivariate Time-Series Classification Against Adversarial Attacks |  | 0
Replace-then-Perturb: Targeted Adversarial Attacks With Visual Reasoning for Vision-Language Models |  | 0
Residue-Based Natural Language Adversarial Attack Detection |  | 0
Transferable Adversarial Attack for Both Vision Transformers and Convolutional Networks via Momentum Integrated Gradients |  | 0
Transferable Adversarial Attack on Image Tampering Localization |  | 0
Resilient and constrained consensus against adversarial attacks: A distributed MPC framework |  | 0
Resilient Dynamic Average Consensus based on Trusted agents |  | 0
Resilient Learning-Based Control for Synchronization of Passive Multi-Agent Systems under Attack |  | 0
Adaptive Adversarial Training Does Not Increase Recourse Costs |  | 0
Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation |  | 0
Restricted Black-box Adversarial Attack Against DeepFake Face Swapping |  | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 |  | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 |  | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 |  | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 |  | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 |  | Unverified
6 | XU-Net | Robust Accuracy | 1 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 |  | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 |  | Unverified
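
For context on the metrics above: an entry such as "Attack: PGD20" reports accuracy on adversarial examples crafted with 20 steps of projected gradient descent. The following is a hedged sketch of how such a robust-accuracy number is typically measured; the epsilon budget, step size, and data loader are generic assumptions, not the exact evaluation protocols behind the entries in these tables.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """PGD-style L_inf attack: `steps` gradient-sign steps projected back into the epsilon ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0.0, 1.0)  # random start
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()       # ascend the loss
        x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)   # project into the L_inf ball
        x_adv = x_adv.clamp(0.0, 1.0)                      # stay in the valid image range
    return x_adv.detach()

def robust_accuracy(model, loader, **attack_kwargs):
    """Percentage of examples still classified correctly after the attack."""
    correct = total = 0
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, **attack_kwargs)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total
```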