
Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation of an input that changes a machine learning model's prediction. The perturbation can be very small, often imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
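
To make the definition concrete, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the simplest attacks of this kind: it nudges every input feature by ±epsilon in the direction that increases the model's loss. The PyTorch model, tensors, and the epsilon value are illustrative assumptions, not taken from any paper listed on this page.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step FGSM: shift x by +/-epsilon along the sign of the
    input gradient of the loss (illustrative sketch only)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()  # move to increase the loss
        x_adv = x_adv.clamp(0, 1)                    # stay in the valid image range
    return x_adv.detach()
```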

Papers

Showing 1701–1725 of 1808 papers

Title | Status | Hype
Availability Adversarial Attack and Countermeasures for Deep Learning-based Load Forecasting | Code | 0
Adversarial-Playground: A Visualization Suite Showing How Adversarial Examples Fool Deep Learning | Code | 0
DeSparsify: Adversarial Attack Against Token Sparsification Mechanisms in Vision Transformers | Code | 0
Adversarial Attacks on Parts of Speech: An Empirical Study in Text-to-Image Generation | Code | 0
Robust Overfitting Does Matter: Test-Time Adversarial Purification With FGSM | Code | 0
Targeted Mismatch Adversarial Attack: Query with a Flower to Retrieve the Tower | Code | 0
Delving into Transferable Adversarial Examples and Black-box Attacks | Code | 0
A Uniform Framework for Anomaly Detection in Deep Neural Networks | Code | 0
Robust Reinforcement Learning under model misspecification | Code | 0
Enhancing Robust Representation in Adversarial Training: Alignment and Exclusion Criteria | Code | 0
Deflecting Adversarial Attacks with Pixel Deflection | Code | 0
Multi-Granularity Tibetan Textual Adversarial Attack Method Based on Masked Language Model | Code | 0
DANCE: Enhancing saliency maps using decoys | Code | 0
Multi-Instance Adversarial Attack on GNN-Based Malicious Domain Detection | Code | 0
Towards Transferable Targeted Adversarial Examples | Code | 0
TASA: Deceiving Question Answering Models by Twin Answer Sentences Attack | Code | 0
Adversarial Attacks on Large Language Models Using Regularized Relaxation | Code | 0
Defense-friendly Images in Adversarial Attacks: Dataset and Metrics for Perturbation Difficulty | Code | 0
Task and Model Agnostic Adversarial Attack on Graph Neural Networks | Code | 0
T-BFA: Targeted Bit-Flip Adversarial Weight Attack | Code | 0
NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks | Code | 0
Defense against Adversarial Attacks Using High-Level Representation Guided Denoiser | Code | 0
Defending Substitution-Based Profile Pollution Attacks on Sequential Recommenders | Code | 0
Natural Language Adversarial Defense through Synonym Encoding | Code | 0
Role of Spatial Context in Adversarial Robustness for Object Detection | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet20 | Test Accuracy | 89.95 | 89.95 (1) | Community Verified
2 | Xu et al. | Attack: PGD20 | 78.68 | – | Unverified
3 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | – | Unverified
4 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | – | Unverified
5 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | – | Unverified
6 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | – | Unverified
7 | XU-Net | Robust Accuracy | 1 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | – | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | – | Unverified
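
The robust-accuracy numbers reported under metrics such as "Attack: PGD20" are typically obtained by running an iterative attack on every test example and counting how often the model's prediction survives. A minimal sketch of such an evaluation loop, assuming a PyTorch classifier, an L-infinity budget epsilon, step size alpha, and 20 PGD iterations (none of these settings are stated on this page), could look like:

```python
import torch
import torch.nn.functional as F

def pgd20_robust_accuracy(model, loader, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """Accuracy on examples perturbed by a 20-step L-infinity PGD
    attack with a random start (illustrative sketch only)."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        # Random start inside the epsilon ball, clipped to the image range.
        x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1)
        for _ in range(steps):
            x_adv = x_adv.clone().detach().requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            loss.backward()
            with torch.no_grad():
                x_adv = x_adv + alpha * x_adv.grad.sign()         # ascend the loss
                x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)  # project back into the ball
                x_adv = x_adv.clamp(0, 1)                         # keep a valid image
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
            total += y.size(0)
    return correct / total
```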