
Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation that changes the prediction of a machine learning model. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
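
For concreteness, here is a minimal sketch of one of the simplest such attacks, the Fast Gradient Sign Method (FGSM), in PyTorch. The toy model, input shape, and epsilon below are illustrative assumptions, not taken from any paper listed on this page:

```python
# Hedged sketch: FGSM perturbs an input by epsilon in the direction of the
# sign of the loss gradient, which is often enough to flip the prediction.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 8 / 255) -> torch.Tensor:
    """Return x + epsilon * sign(d loss / d x), clamped to the valid range."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()   # step that increases the loss
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Hypothetical stand-in classifier on 3x32x32 inputs, for the demo only.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(4, 3, 32, 32)      # batch of "images" scaled to [0, 1]
    y = torch.randint(0, 10, (4,))    # ground-truth labels
    x_adv = fgsm_attack(model, x, y)
    changed = (model(x).argmax(1) != model(x_adv).argmax(1)).sum().item()
    print(f"{changed} of 4 predictions changed")
```

An L∞ budget of 8/255 is a common choice on CIFAR-scale images; at that size the perturbation is essentially invisible, which is exactly the property the definition above describes.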

Papers

Showing papers 726–750 of 1,808

Title | Status | Hype
Are AlphaZero-like Agents Robust to Adversarial Perturbations? | Code | 1
Contrastive Weighted Learning for Near-Infrared Gaze Estimation | — | 0
Logits are predictive of network type | Code | 0
Rethinking and Improving Robustness of Convolutional Neural Networks: a Shapley Value-based Approach in Frequency Domain | Code | 1
Rethinking Image Restoration for Object Detection | Code | 1
Universal Perturbation Attack on Differentiable No-Reference Image- and Video-Quality Metrics | Code | 1
Character-level White-Box Adversarial Attacks against Transformers via Attachable Subwords Substitution | Code | 1
Symmetric Saliency-based Adversarial Attack To Speaker Identification | — | 0
Improving the Transferability of Adversarial Attacks on Face Recognition with Beneficial Perturbation Feature Augmentation | — | 0
TASA: Deceiving Question Answering Models by Twin Answer Sentences Attack | Code | 0
LP-BFGS attack: An adversarial attack based on the Hessian with limited pixels | Code | 0
A White-Box Adversarial Attack Against a Digital Twin | — | 0
TAPE: Assessing Few-shot Russian Language Understanding | Code | 0
Similarity of Neural Architectures using Adversarial Attack Transferability | — | 0
Effective Targeted Attacks for Adversarial Self-Supervised Learning | — | 0
Learning Transferable Adversarial Robust Representations via Multi-view Consistency | — | 0
Probabilistic Categorical Adversarial Attack & Adversarial Training | — | 0
Beyond Model Interpretability: On the Faithfulness and Adversarial Robustness of Contrastive Textual Explanations | Code | 0
Object-Attentional Untargeted Adversarial Attack | — | 0
Dynamics-aware Adversarial Attack of Adaptive Neural Networks | Code | 0
AccelAT: A Framework for Accelerating the Adversarial Training of Deep Neural Networks through Accuracy Gradient | Code | 0
Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition | — | 0
Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation | Code | 1
Adversarial Attack Against Image-Based Localization Neural Networks | — | 0
FedDef: Defense Against Gradient Leakage in Federated Learning-based Network Intrusion Detection Systems | — | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | — | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | — | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | — | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | — | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | — | Unverified
6 | XU-Net | Robust Accuracy | 1 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | — | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | — | Unverified
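
In the tables above, "Attack: PGD20" denotes robust accuracy under 20-step projected gradient descent (PGD), and "Attack: AutoAttack" denotes robust accuracy under the parameter-free AutoAttack ensemble of Croce & Hein. A minimal sketch of a PGD-20 evaluation, assuming a PyTorch classifier on [0, 1] images and an L∞ budget of 8/255 (the tables do not state the threat model):

```python
# Hedged sketch of the "Attack: PGD20" metric: robust accuracy under 20-step
# L-infinity PGD. Epsilon and step size are assumptions, not from the tables.
import torch
import torch.nn as nn

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """Random-start L-inf PGD: signed-gradient ascent steps, each projected
    back into the epsilon-ball around x and clamped to [0, 1]."""
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

def robust_accuracy(model, loader, **pgd_kwargs):
    """Percentage of examples still classified correctly under PGD."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, **pgd_kwargs)
        with torch.no_grad():
            correct += (model(x_adv).argmax(1) == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total
```

AutoAttack results, by contrast, are normally produced with the reference autoattack package rather than a hand-rolled loop, since AutoAttack fixes its own hyperparameters to keep numbers comparable across papers.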