SOTAVerified

Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation of a model's input that changes the prediction of a machine learning model. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
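As an illustration of the definition above, the sketch below constructs such a perturbation with the Fast Gradient Sign Method (FGSM), one common attack among the many covered by the papers listed here. This is a minimal sketch under assumptions: a differentiable PyTorch classifier that outputs logits, inputs normalized to [0, 1], and an L-infinity budget of 8/255; none of these are specified on this page.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Return adversarial examples within an L-infinity ball of radius epsilon around x.

    Assumptions (not from this page): `model` maps images to logits, `x` is a
    batch in [0, 1], and `y` holds the true class labels.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip back to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

A single gradient-sign step like this is often enough to flip a prediction while keeping the perturbation visually negligible; stronger multi-step attacks (e.g. PGD, AutoAttack) appear in the benchmark results further down.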

Papers

Showing 301–350 of 1808 papers

Title | Status | Hype
Hiding Faces in Plain Sight: Defending DeepFakes by Disrupting Face Detection | Code | 1
HOTCOLD Block: Fooling Thermal Infrared Detectors with a Novel Wearable Design | Code | 1
R&R: Metric-guided Adversarial Sentence Generation | Code | 1
Improve robustness of DNN for ECG signal classification: a noise-to-signal ratio perspective | Code | 1
An Analysis of Recent Advances in Deepfake Image Detection in an Evolving Threat Landscape | Code | 1
An Efficient Adversarial Attack for Tree Ensembles | Code | 1
Adversarial Attack and Defense in Deep Ranking | Code | 1
Interpolation between Residual and Non-Residual Networks | Code | 1
Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon | Code | 1
Iron Sharpens Iron: Defending Against Attacks in Machine-Generated Text Detection with Adversarial Training | Code | 1
Adversarial Attack and Defense of Structured Prediction Models | Code | 1
LGV: Boosting Adversarial Example Transferability from Large Geometric Vicinity | Code | 1
Can You Spot the Chameleon? Adversarially Camouflaging Images from Co-Salient Object Detection | Code | 1
An Extensive Study on Adversarial Attack against Pre-trained Models of Code | Code | 1
Adversarial Attack and Defense of YOLO Detectors in Autonomous Driving Scenarios | Code | 1
Attacking Video Recognition Models with Bullet-Screen Comments | Code | 1
An integrated Auto Encoder-Block Switching defense approach to prevent adversarial attacks | Code | 1
Meta Gradient Adversarial Attack | Code | 1
Guardians of Image Quality: Benchmarking Defenses Against Adversarial Attacks on Image Quality Metrics | Code | 1
Multi-granularity Textual Adversarial Attack with Behavior Cloning | Code | 1
A Framework for Adversarial Analysis of Decision Support Systems Prior to Deployment | | 0
A Formalization of Robustness for Deep Neural Networks | | 0
Adversarial Attacks on AI-Generated Text Detection Models: A Token Probability-Based Approach Using Embeddings | | 0
Affine Disentangled GAN for Interpretable and Robust AV Perception | | 0
AEMIM: Adversarial Examples Meet Masked Image Modeling | | 0
Adversarial Attacks Neutralization via Data Set Randomization | | 0
AdvCodeMix: Adversarial Attack on Code-Mixed Data | | 0
AED-PADA: Improving Generalizability of Adversarial Example Detection via Principal Adversarial Domain Adaptation | | 0
AdvSwap: Covert Adversarial Perturbation with High Frequency Info-swapping for Autonomous Driving Perception | | 0
Adversarial Attacks in Sound Event Classification | | 0
AdvSmo: Black-box Adversarial Attack by Smoothing Linear Structure of Texture | | 0
Patch Synthesis for Property Repair of Deep Neural Networks | | 0
Adversarial Attacks in Multimodal Systems: A Practitioner's Survey | | 0
Adversarial Attack for Asynchronous Event-based Data | | 0
AdvRain: Adversarial Raindrops to Attack Camera-based Smart Vision Systems | | 0
Adversarial Attacks for Optical Flow-Based Action Recognition Classifiers | | 0
Natural & Adversarial Bokeh Rendering via Circle-of-Confusion Predictive Network | | 0
AdvMask: A Sparse Adversarial Attack Based Data Augmentation Method for Image Classification | | 0
Adversarial Attacks for Multi-view Deep Models | | 0
Best Practices for Noise-Based Augmentation to Improve the Performance of Deployable Speech-Based Emotion Recognition Systems | | 0
AdvHaze: Adversarial Haze Attack | | 0
Adversarial Attacks and Mitigation for Anomaly Detectors of Cyber-Physical Systems | | 0
Unsourced Adversarial CAPTCHA: A Bi-Phase Adversarial CAPTCHA Framework | | 0
Adversarial Attacks and Dimensionality in Text Classifiers | | 0
AdvGen: Physical Adversarial Attack on Face Presentation Attack Detection Systems | | 0
Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition | | 0
Beyond Classification: Evaluating Diffusion Denoised Smoothing for Security-Utility Trade off | | 0
AdvFilter: Predictive Perturbation-aware Filtering against Adversarial Attack via Multi-domain Learning | | 0
Adverseness vs. Equilibrium: Exploring Graph Adversarial Resilience through Dynamic Equilibrium | | 0
Absum: Simple Regularization Method for Reducing Structural Sensitivity of Convolutional Neural Networks | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | | Unverified
6 | XU-Net | Robust Accuracy | 1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | | Unverified
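The "Attack: PGD20" and "Attack: AutoAttack" metrics above are robust accuracies, i.e. accuracy measured on adversarially perturbed test inputs rather than clean ones. The sketch below shows how such a number is commonly produced, reading PGD20 as 20 PGD iterations; the epsilon and step size are assumptions, the helper names `pgd_attack` and `robust_accuracy` are hypothetical, and none of the benchmarked models' exact evaluation settings are taken from this page.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """Multi-step L-infinity PGD attack (assumed budget: epsilon=8/255, step size 2/255)."""
    # Random start inside the epsilon-ball, clipped to the valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        x_adv = x_adv + alpha * x_adv.grad.sign()          # gradient ascent step
        x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)   # project back into the epsilon-ball
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

def robust_accuracy(model, loader, **attack_kwargs):
    """Accuracy on PGD-perturbed inputs; `model` is assumed to be in eval() mode."""
    correct, total = 0, 0
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, **attack_kwargs)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total
```

AutoAttack plays the same role but is a parameter-free ensemble of several attacks, which is why it typically yields lower (and harder to overstate) robust-accuracy numbers than a fixed-budget PGD evaluation.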