
Adversarial Attack

An adversarial attack is a technique for finding a perturbation to an input that changes the prediction of a machine learning model. The perturbation can be very small and imperceptible to the human eye.
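The canonical example of such a perturbation is the fast gradient sign method (FGSM): take the gradient of the loss with respect to the input, keep only its sign, and scale it by a small budget. Below is a minimal PyTorch sketch; the classifier `model`, inputs scaled to [0, 1], and the budget epsilon = 8/255 are illustrative assumptions, not details taken from this page.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step FGSM: nudge x in the direction that most increases the loss.

    Assumes a PyTorch classifier and image inputs in [0, 1] (illustrative).
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # The perturbation is epsilon * sign(gradient): at most epsilon per pixel,
    # yet often enough to flip the model's prediction.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```

Because each pixel moves by at most epsilon, the perturbed image is visually indistinguishable from the original even when the predicted label changes.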

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks

Papers

Showing 1051–1075 of 1808 papers

| Title | Status | Hype |
| --- | --- | --- |
| Local Competition and Uncertainty for Adversarial Robustness in Deep Learning | | 0 |
| AdvRain: Adversarial Raindrops to Attack Camera-based Smart Vision Systems | | 0 |
| Localized Adversarial Training for Increased Accuracy and Robustness in Image Classification | | 0 |
| LocalStyleFool: Regional Video Style Transfer Attack Using Segment Anything Model | | 0 |
| VQUNet: Vector Quantization U-Net for Defending Adversarial Attacks by Regularizing Unwanted Noise | | 0 |
| Natural & Adversarial Bokeh Rendering via Circle-of-Confusion Predictive Network | | 0 |
| Towards Adversarially Robust Deep Image Denoising | | 0 |
| Vulnerabilities in AI-generated Image Detection: The Challenge of Adversarial Attacks | | 0 |
| Looking From the Future: Multi-order Iterations Can Enhance Adversarial Attack Transferability | | 0 |
| Improving VAEs' Robustness to Adversarial Attack | | 0 |
| L_p-norm Distortion-Efficient Adversarial Attack | | 0 |
| L-RED: Efficient Post-Training Detection of Imperceptible Backdoor Attacks without Access to the Training Set | | 0 |
| LSDAT: Low-Rank and Sparse Decomposition for Decision-based Adversarial Attack | | 0 |
| MAA: Meticulous Adversarial Attack against Vision-Language Pre-trained Models | | 0 |
| Make the Most of Everything: Further Considerations on Disrupting Diffusion-based Customization | | 0 |
| AdvMask: A Sparse Adversarial Attack Based Data Augmentation Method for Image Classification | | 0 |
| AdvHaze: Adversarial Haze Attack | | 0 |
| Vulnerability Analysis of Transformer-based Optical Character Recognition to Adversarial Attacks | | 0 |
| MARAGE: Transferable Multi-Model Adversarial Attack for Retrieval-Augmented Generation Data Extraction | | 0 |
| Massif: Interactive Interpretation of Adversarial Attacks on Deep Learning | | 0 |
| MathAttack: Attacking Large Language Models Towards Math Solving Ability | | 0 |
| Maximal Jacobian-based Saliency Map Attack | | 0 |
| AdvGen: Physical Adversarial Attack on Face Presentation Attack Detection Systems | | 0 |
| MedAttacker: Exploring Black-Box Adversarial Attacks on Risk Prediction Models in Healthcare | | 0 |
| MedRDF: A Robust and Retrain-Less Diagnostic Framework for Medical Pretrained Models Against Adversarial Attack | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Xu et al. | Attack: PGD20 | 78.68 | | Unverified |
| 2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | | Unverified |
| 3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | | Unverified |
| 4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | | Unverified |
| 5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | | Unverified |
| 6 | XU-Net | Robust Accuracy | 1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | | Unverified |
| 2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | | Unverified |
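For context on the metrics above: robust accuracy is the percentage of test inputs a model still classifies correctly after each input is adversarially perturbed, and "PGD20" denotes 20 steps of projected gradient descent. A minimal PyTorch sketch of such an evaluation loop follows; the step size alpha = 2/255 and budget epsilon = 8/255 are common CIFAR-10 L_inf settings assumed for illustration, since the actual budgets behind these numbers are not stated in the tables.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """PGD-20: 20 FGSM-style steps, each projected back into the L_inf ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        x_adv = x_adv + alpha * x_adv.grad.sign()          # gradient ascent step
        x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)   # project onto the ball
        x_adv = x_adv.clamp(0, 1)                          # stay a valid image
    return x_adv.detach()

@torch.no_grad()
def robust_accuracy(model, loader):
    """Percentage of test examples still classified correctly under attack."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        with torch.enable_grad():  # the attack itself needs input gradients
            x_adv = pgd_attack(model, x, y)
        correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total
```

AutoAttack, the other metric listed, plays the same role but is a stronger parameter-free ensemble of attacks, which is why AutoAttack numbers for a given model are typically lower than PGD20 numbers.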