
Adversarial Attack

An adversarial attack is a technique for finding a perturbation of a model's input that changes the prediction of a machine learning model. The perturbation can be very small, even imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
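
The canonical illustration of this idea is the Fast Gradient Sign Method (FGSM) of Goodfellow et al. (2015), which takes a single step in the direction of the sign of the loss gradient. Below is a minimal PyTorch sketch, not the method of any specific paper listed here; it assumes an image classifier `model` with inputs in [0, 1], and the function name and epsilon value are illustrative.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Return x perturbed so that the model's loss on label y increases.

    Assumes inputs lie in [0, 1]; epsilon = 8/255 is a common choice
    for CIFAR-scale images (an assumption, not a value from this page).
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that most increases the loss, bounded by epsilon.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixels in the valid range
```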

Papers

Showing 726–750 of 1808 papers

Title | Hype
Evaluating Adversarial Robustness on Document Image Classification | 0
Adversarial-Aware Deep Learning System based on a Secondary Classical Machine Learning Verification Approach | 0
Evaluating Deep Learning Models and Adversarial Attacks on Accelerometer-Based Gesture Authentication | 0
Evaluating Neural Model Robustness for Machine Comprehension | 0
Attacking c-MARL More Effectively: A Data Driven Approach | 0
Derivation of Information-Theoretically Optimal Adversarial Attacks with Applications to Robust Machine Learning | 0
Evaluating Similitude and Robustness of Deep Image Denoising Models via Adversarial Attack | 0
Democratic Training Against Universal Adversarial Perturbations | 0
Evaluating the Robustness of LiDAR Point Cloud Tracking Against Adversarial Attack | 0
Fortify Machine Learning Production Systems: Detect and Classify Adversarial Attacks | 0
Analyzing the Noise Robustness of Deep Neural Networks | 0
Delving into Data: Effectively Substitute Training for Black-box Attack | 0
Evaluation of Momentum Diverse Input Iterative Fast Gradient Sign Method (M-DI2-FGSM) Based Attack Method on MCS 2018 Adversarial Attacks on Black Box Face Recognition System | 0
A Context-Aware Approach for Textual Adversarial Attack through Probability Difference Guided Beam Search | 0
Analyzing Sentiment Polarity Reduction in News Presentation through Contextual Perturbation and Large Language Models | 0
Defensive Quantization: When Efficiency Meets Robustness | 0
EvolBA: Evolutionary Boundary Attack under Hard-label Black Box condition | 0
Adversarial Attack with Raindrops | 0
Forbidden Facts: An Investigation of Competing Objectives in Llama-2 | 0
Examining the Human Perceptibility of Black-Box Adversarial Attacks on Face Recognition | 0
Attacking Perceptual Similarity Metrics | 0
FRAUD-RLA: A new reinforcement learning adversarial attack against credit card fraud detection | 0
From Environmental Sound Representation to Robustness of 2D CNN Models Against Adversarial Attacks | 0
Gender Bias and Universal Substitution Adversarial Attacks on Grammatical Error Correction Systems for Automated Assessment | 0
Defense of Adversarial Ranking Attack in Text Retrieval: Benchmark and Baseline via Detection | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | – | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | – | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | – | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | – | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | – | Unverified
6 | XU-Net | Robust Accuracy | 1 | – | Unverified
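
"Attack: PGD20" in the table above denotes robust accuracy under a 20-step projected gradient descent adversary (Madry et al., 2018): the percentage of test examples still classified correctly after the attack. The following is a minimal PyTorch sketch under stated assumptions, not any listed model's evaluation code; the function name is illustrative, and epsilon = 8/255 with step size alpha = 2/255 are common CIFAR-scale choices assumed here.

```python
import torch
import torch.nn.functional as F

def pgd20_robust_accuracy(model, loader, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """Robust accuracy (%) under a 20-step L-infinity PGD attack."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            # Ascend the loss, then project back into the epsilon-ball
            # around the clean input and the valid pixel range.
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = (x + (x_adv - x).clamp(-epsilon, epsilon)).clamp(0, 1)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.size(0)
    return 100.0 * correct / total
```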
# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | – | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | – | Unverified
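
"Attack: AutoAttack" refers to the parameter-free attack ensemble of Croce & Hein (2020), typically run through the public autoattack package. A hedged usage sketch follows; it assumes a classifier `model` with inputs in [0, 1] and pre-loaded tensors `x_test` and `y_test`, and the eps and batch-size values are illustrative rather than taken from the tables above.

```python
import torch
from autoattack import AutoAttack

# Build the standard AutoAttack ensemble (APGD-CE, APGD-T, FAB-T, Square).
adversary = AutoAttack(model, norm='Linf', eps=8 / 255, version='standard')
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=128)

# Robust accuracy is the share of adversarial examples still
# classified as their true labels.
with torch.no_grad():
    robust_acc = (model(x_adv).argmax(dim=1) == y_test).float().mean().item()
print(f"AutoAttack robust accuracy: {100 * robust_acc:.2f}%")
```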