
Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation that changes a machine learning model's prediction. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
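
To make the definition concrete, below is a minimal PyTorch sketch of one widely used attack, projected gradient descent (PGD): it repeatedly nudges the input in the direction that increases the model's loss while projecting the perturbation back into a small L∞ ball, so the result stays visually close to the original. The function name and hyperparameter defaults are illustrative, not taken from any paper listed on this page.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=20):
    """Minimal L-infinity PGD: take signed gradient ascent steps on the
    loss, projecting back into the eps-ball around the clean input."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project: |x_adv - x| <= eps, pixels in [0, 1].
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```

With steps=20 this corresponds to the PGD20 metric in the benchmark tables below; AutoAttack, which also appears there, is a stronger parameter-free ensemble of attacks.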

Papers

Showing 276–300 of 1808 papers (page 12 of 73)

Title | Status | Hype
----- | ------ | ----
Constrained Adaptive Attack: Effective Adversarial Attack Against Deep Neural Networks for Tabular Data | Code | 1
Controlling Whisper: Universal Acoustic Adversarial Attacks to Control Speech Foundation Models | Code | 1
Cooling-Shrinking Attack: Blinding the Tracker with Imperceptible Noises | Code | 1
A Perturbation-Constrained Adversarial Attack for Evaluating the Robustness of Optical Flow | Code | 1
Defending and Harnessing the Bit-Flip Based Adversarial Weight Attack | Code | 1
Defending Your Voice: Adversarial Attack on Voice Conversion | Code | 1
Defensive Distillation based Adversarial Attacks Mitigation Method for Channel Estimation using Deep Learning Models in Next-Generation Wireless Networks | Code | 1
Adversarial Attacks on ML Defense Models Competition | Code | 1
Differentiable JPEG: The Devil is in the Details | Code | 1
Amplitude-Phase Recombination: Rethinking Robustness of Convolutional Neural Networks in Frequency Domain | Code | 1
AutoDAN: Interpretable Gradient-Based Adversarial Attacks on Large Language Models | Code | 1
BASAR: Black-box Attack on Skeletal Action Recognition | Code | 1
Anti-Adversarially Manipulated Attributions for Weakly and Semi-Supervised Semantic Segmentation | Code | 1
DropAttack: A Masked Weight Adversarial Training Method to Improve Generalization of Neural Networks | Code | 1
An Adaptive Model Ensemble Adversarial Attack for Boosting Adversarial Transferability | Code | 1
An integrated Auto Encoder-Block Switching defense approach to prevent adversarial attacks | Code | 1
Certifying LLM Safety against Adversarial Prompting | Code | 1
Ensemble everything everywhere: Multi-scale aggregation for adversarial robustness | Code | 1
epsilon-Mesh Attack: A Surface-based Adversarial Point Cloud Attack for Facial Expression Recognition | Code | 1
Fooling the Image Dehazing Models by First Order Gradient | Code | 1
Fast and Low-Cost Genomic Foundation Models via Outlier Removal | Code | 1
Disrupting Diffusion: Token-Level Attention Erasure Attack against Diffusion-based Customization | Code | 1
FCA: Learning a 3D Full-coverage Vehicle Camouflage for Multi-view Physical Adversarial Attack | Code | 1
Hide in Thicket: Generating Imperceptible and Rational Adversarial Perturbations on 3D Point Clouds | Code | 1
On the Adversarial Robustness of Camera-based 3D Object Detection | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | Xu et al. | Attack: PGD20 | 78.68 | | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | | Unverified
6 | XU-Net | Robust Accuracy | 1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | | Unverified
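
The Claimed and Verified columns are robust accuracies in percent: the share of test examples a model still classifies correctly after the named attack perturbs them. Here is a minimal sketch of that evaluation loop, assuming the pgd_attack function sketched above and a standard PyTorch DataLoader; the attack(model, x, y) interface is an assumption for illustration, not this site's verification harness.

```python
import torch

def robust_accuracy(model, loader, attack, device="cpu"):
    """Percentage of test examples still classified correctly after
    the attack perturbs them."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = attack(model, x, y)  # e.g. the pgd_attack sketch above
        with torch.no_grad():
            pred = model(x_adv).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total
```

Running the same loop without the attack gives clean accuracy; the gap between the two numbers is what these leaderboards track.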