SOTAVerified

Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation that changes the prediction of a machine learning model. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
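
As a concrete illustration of the idea above, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the simplest adversarial attacks: it perturbs the input by a single signed-gradient step of size epsilon so that the model's loss increases. This is a generic PyTorch example with assumed names (model, x, y, epsilon); it is not code from any paper listed on this page.

```python
# Minimal FGSM sketch (assumes a PyTorch classifier `model`, inputs `x` in [0, 1], labels `y`).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Return x perturbed by one signed-gradient step of size epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

A successful attack is one where the prediction on fgsm_attack(model, x, y) differs from the prediction on the clean input while the perturbation stays within the epsilon budget.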

Papers

Showing 271–280 of 1808 papers

Title | Status | Hype
Strong Transferable Adversarial Attacks via Ensembled Asymptotically Normal Distribution Learning | Code | 1
Appearance and Structure Aware Robust Deep Visual Graph Matching: Attack, Defense and Beyond | Code | 1
epsilon-Mesh Attack: A Surface-based Adversarial Point Cloud Attack for Facial Expression Recognition | Code | 1
Fluent dreaming for language models | Code | 1
A Review of Adversarial Attack and Defense for Classification Methods | Code | 1
Are AlphaZero-like Agents Robust to Adversarial Perturbations? | Code | 1
Fooling Detection Alone is Not Enough: First Adversarial Attack against Multiple Object Tracking | Code | 1
Frequency Domain Adversarial Training for Robust Volumetric Medical Segmentation | Code | 1
Attack as the Best Defense: Nullifying Image-to-image Translation GANs via Limit-aware Adversarial Attack | Code | 1
Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | – | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | – | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | – | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | – | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | – | Unverified
6 | XU-Net | Robust Accuracy | 1 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | – | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | – | Unverified
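
The "Attack: PGD20" entries above report robust accuracy: the share of test inputs a model still classifies correctly after a 20-step projected gradient descent (PGD) attack within an L-infinity ball of radius epsilon. The sketch below shows one common way to compute this number; the epsilon and step size are placeholder values, since the page does not state which settings each entry used.

```python
# Illustrative PGD-20 robust-accuracy evaluation (assumed PyTorch classifier and DataLoader).
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """20-step L-infinity PGD with random start (assumed hyperparameters)."""
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()               # ascend the loss
        x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)  # project back into the epsilon-ball
        x_adv = x_adv.clamp(0.0, 1.0)                     # stay in the valid pixel range
    return x_adv.detach()

@torch.no_grad()
def robust_accuracy(model, loader, **attack_kwargs):
    """Percentage of examples classified correctly after the attack."""
    correct, total = 0, 0
    for x, y in loader:
        with torch.enable_grad():  # the attack itself needs gradients
            x_adv = pgd_attack(model, x, y, **attack_kwargs)
        correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total
```

The "Attack: AutoAttack" entries are measured the same way, but with the parameter-free AutoAttack ensemble in place of PGD.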