
Adversarial Attack

An adversarial attack is a technique for finding a perturbation that changes a machine learning model's prediction. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
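To make the definition concrete, the sketch below shows the fast gradient sign method (FGSM), one of the simplest such attacks: it perturbs each pixel by a small step in the direction that most increases the model's loss. The names `model`, `x`, `y`, and the `epsilon` budget are illustrative assumptions, not anything defined on this page.

```python
# Minimal FGSM sketch (fast gradient sign method, Goodfellow et al., 2015).
# All names are illustrative assumptions: `model` is any PyTorch classifier,
# (x, y) a correctly labeled input batch, epsilon an L-infinity pixel budget.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Return x perturbed by one signed-gradient step of size epsilon."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then keep pixels valid.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```

With epsilon = 8/255 the change to each pixel is at most 8 intensity levels out of 255, which is typically invisible yet often enough to flip an undefended model's prediction.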

Papers

Showing 76–100 of 1808 papers

Title | Status | Hype
Adversarial Attack and Defense in Deep Ranking | Code | 1
Adversarial Attack and Defense of Structured Prediction Models | Code | 1
Adversarial Attack and Defense of YOLO Detectors in Autonomous Driving Scenarios | Code | 1
Guardians of Image Quality: Benchmarking Defenses Against Adversarial Attacks on Image Quality Metrics | Code | 1
Adversarial Training for Free! | Code | 1
Adversarial Attack and Defense Strategies for Deep Speaker Recognition Systems | Code | 1
An Adaptive Model Ensemble Adversarial Attack for Boosting Adversarial Transferability | Code | 1
3D Gaussian Splat Vulnerabilities | Code | 1
An Extensive Study on Adversarial Attack against Pre-trained Models of Code | Code | 1
Ad2Attack: Adaptive Adversarial Attack on Real-Time UAV Tracking | Code | 1
An Orthogonal Classifier for Improving the Adversarial Robustness of Neural Networks | Code | 1
Adversarial Attacks against Windows PE Malware Detection: A Survey of the State-of-the-Art | Code | 1
Appearance and Structure Aware Robust Deep Visual Graph Matching: Attack, Defense and Beyond | Code | 1
Are AlphaZero-like Agents Robust to Adversarial Perturbations? | Code | 1
A Review of Adversarial Attack and Defense for Classification Methods | Code | 1
Adversarial Attack on Attackers: Post-Process to Mitigate Black-Box Score-Based Query Attacks | Code | 1
Adversarial Attack on Community Detection by Hiding Individuals | Code | 1
Adversarial Robustness Comparison of Vision Transformer and MLP-Mixer to CNNs | Code | 1
Adversarial Attack on Deep Learning-Based Splice Localization | Code | 1
Data-free Universal Adversarial Perturbation with Pseudo-semantic Prior | Code | 1
3D Adversarial Attacks Beyond Point Cloud | Code | 1
Adversarial Attack on Graph Neural Networks as An Influence Maximization Problem | Code | 1
AutoDAN: Interpretable Gradient-Based Adversarial Attacks on Large Language Models | Code | 1
CausalAdv: Adversarial Robustness through the Lens of Causality | Code | 1
Adversarial Vulnerabilities in Large Language Models for Time Series Forecasting | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | | Unverified
6 | XU-Net | Robust Accuracy | 1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | | Unverified
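The "Attack: PGD20" and "Attack: AutoAttack" entries above report robust accuracy: the percentage of test inputs the model still classifies correctly while under the named attack. Below is a hedged sketch of such an evaluation with 20-step projected gradient descent (PGD). `model` and `loader` are assumed placeholders, and the eps = 8/255, alpha = 2/255 budget follows common CIFAR-10 practice rather than the exact settings of the papers in the table.

```python
# Hedged sketch: robust accuracy under 20-step L-infinity PGD.
# `model` and `loader` are placeholders; budget values are assumptions.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=20):
    # Random start inside the eps-ball, as in Madry et al. (2018).
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around x, then into valid pixels.
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

def robust_accuracy(model, loader):
    model.eval()
    correct = total = 0
    for x, y in loader:
        x_adv = pgd_attack(model, x, y)
        correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total  # percentage, as reported in the table
```

For the AutoAttack rows, evaluations typically use the reference `autoattack` package (Croce & Hein, 2020), e.g. `AutoAttack(model, norm='Linf', eps=8/255).run_standard_evaluation(x, y, bs=128)`, which runs a fixed parameter-free ensemble of attacks and is harder to fool than a single hand-tuned PGD.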