
Adversarial Attack

An adversarial attack is a technique for finding a perturbation of a model's input that changes the model's prediction. The perturbation can be so small that it is imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
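The sketch below shows one classic way such a perturbation can be found: the Fast Gradient Sign Method (FGSM) of Goodfellow et al. (2015). It is a minimal illustration under assumed inputs, not the method of any paper listed on this page; `model`, `image`, `label`, and the budget `epsilon` are hypothetical placeholders.

```python
# Minimal FGSM sketch (Goodfellow et al., 2015). `model`, `image`, and
# `label` are hypothetical placeholders; epsilon is an illustrative budget.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=8 / 255):
    """Return an adversarial example inside an L-infinity ball of radius epsilon."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clip to a valid pixel range.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```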

Papers

Showing 801-810 of 1,808 papers (page 81 of 181)

| Title | Status | Hype |
| --- | --- | --- |
| Defense Against Explanation Manipulation | | 0 |
| FlippedRAG: Black-Box Opinion Manipulation Adversarial Attacks to Retrieval-Augmented Generation Models | | 0 |
| Defense against Adversarial Cloud Attack on Remote Sensing Salient Object Detection | | 0 |
| Adversarial Music: Real World Audio Adversary Against Wake-word Detection System | | 0 |
| Analysis of the vulnerability of machine learning regression models to adversarial attacks using data from 5G wireless networks | | 0 |
| An AI-Enabled Framework to Defend Ingenious MDT-based Attacks on the Emerging Zero Touch Cellular Networks | | 0 |
| Adversarial Attack and Defense for LoRa Device Identification and Authentication via Deep Learning | | 0 |
| Fooling Adversarial Training with Inducing Noise | | 0 |
| Fooling Adversarial Training with Induction Noise | | 0 |
| GradMDM: Adversarial Attack on Dynamic Networks | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Xu et al. | Attack: PGD20 | 78.68 | | Unverified |
| 2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | | Unverified |
| 3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | | Unverified |
| 4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | | Unverified |
| 5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | | Unverified |
| 6 | XU-Net | Robust Accuracy | 1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | | Unverified |
| 2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | | Unverified |
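In the tables above, "Attack: PGD20" reports robust accuracy under a 20-step Projected Gradient Descent attack [madry2018], and "Attack: AutoAttack" reports robust accuracy under the AutoAttack ensemble of Croce and Hein (2020); higher claimed values indicate stronger resistance to the attack. As a rough illustration of what the PGD20 metric evaluates, here is a minimal L-infinity PGD sketch; the step size `alpha` and budget `epsilon` are assumed defaults, not values taken from any benchmark row.

```python
# Hedged sketch of an L-infinity PGD attack, the iterative attack behind the
# "PGD20" metric (Madry et al., 2018). `model`, `image`, and `label` are
# hypothetical placeholders; alpha and epsilon are illustrative defaults.
import torch
import torch.nn.functional as F

def pgd_attack(model, image, label, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """Run `steps` iterations of projected gradient ascent on the loss."""
    orig = image.clone().detach()
    # Random start inside the epsilon ball, as in Madry et al.
    adv = (orig + torch.empty_like(orig).uniform_(-epsilon, epsilon)).clamp(0.0, 1.0)
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), label)
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()                      # ascent step
            adv = orig + (adv - orig).clamp(-epsilon, epsilon)   # project to the ball
            adv = adv.clamp(0.0, 1.0)                            # valid pixel range
    return adv.detach()
```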