
Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation of an input that changes the prediction of a machine learning model. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
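
The classic illustration of such a perturbation is the Fast Gradient Sign Method (FGSM). The following is a minimal PyTorch sketch, not the method of the source paper above; the classifier `model`, input batch `x` (pixel values assumed in [0, 1]), labels `y`, and the budget `epsilon` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_perturbation(model, x, y, epsilon=8 / 255):
    """One-step FGSM: nudge every pixel by +/- epsilon in the direction
    that increases the classifier's loss on the true label y."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # The change is bounded by epsilon in the L-infinity norm, so for small
    # epsilon it is visually imperceptible even if the predicted class flips.
    x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()
```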

Papers

Showing 1151–1200 of 1808 papers (page 24 of 37)

Title | Status | Hype
A Perceptual Distortion Reduction Framework: Towards Generating Adversarial Examples with High Perceptual Quality and Attack Success Rate | | 0
GasHis-Transformer: A Multi-scale Visual Transformer Approach for Gastric Histopathological Image Detection | | 0
AdvHaze: Adversarial Haze Attack | | 0
Delving into Data: Effectively Substitute Training for Black-box Attack | | 0
3D Adversarial Attacks Beyond Point Cloud | Code | 1
Influence Based Defense Against Data Poisoning Attacks in Online Learning | | 0
Learning Transferable 3D Adversarial Cloaks for Deep Trained Detectors | Code | 0
Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting | Code | 0
Performance Evaluation of Adversarial Attacks: Discrepancies and Solutions | | 0
Robust Certification for Laplace Learning on Geometric Graphs | | 0
Staircase Sign Method for Boosting Adversarial Attacks | Code | 1
Adversarial Diffusion Attacks on Graph-based Traffic Prediction Models | Code | 0
Best Practices for Noise-Based Augmentation to Improve the Performance of Deployable Speech-Based Emotion Recognition Systems | | 0
R&R: Metric-guided Adversarial Sentence Generation | Code | 1
Fashion-Guided Adversarial Attack on Person Segmentation | Code | 0
Mitigating Adversarial Attack for Compute-in-Memory Accelerator Utilizing On-chip Finetune | | 0
Distributed Estimation over Directed Graphs Resilient to Sensor Spoofing | | 0
Improving Robustness of Deep Reinforcement Learning Agents: Environment Attack based on the Critic Network | Code | 0
Semantically Stealthy Adversarial Attacks against Segmentation Models | | 0
Evaluating Neural Model Robustness for Machine Comprehension | | 0
Statistical inference for individual fairness | Code | 0
Robust Reinforcement Learning under model misspecification | Code | 0
IoU Attack: Towards Temporally Coherent Black-Box Adversarial Attack for Visual Object Tracking | Code | 1
Adversarial Attacks on Deep Learning Based mmWave Beam Prediction in 5G and Beyond | | 0
Vulnerability of Appearance-based Gaze Estimation | | 0
Grey-box Adversarial Attack And Defence For Sentiment Classification | Code | 0
TextFlint: Unified Multilingual Robustness Evaluation Toolkit for Natural Language Processing | | 0
Self adversarial attack as an augmentation method for immunohistochemical stainings | | 0
LSDAT: Low-Rank and Sparse Decomposition for Decision-based Adversarial Attack | | 0
Boosting Adversarial Transferability through Enhanced Momentum | | 0
SoK: A Modularized Approach to Study the Security of Automatic Speech Recognition Systems | Code | 0
KoDF: A Large-scale Korean DeepFake Detection Dataset | | 0
Adversarial Attacks on Camera-LiDAR Models for 3D Car Detection | | 0
Anti-Adversarially Manipulated Attributions for Weakly and Semi-Supervised Semantic Segmentation | Code | 1
Towards Robust Speech-to-Text Adversarial Attack | | 0
Generating Unrestricted Adversarial Examples via Three Parameters | | 0
Internal Wasserstein Distance for Adversarial Attack and Defense | | 0
Stochastic-HMDs: Adversarial Resilient Hardware Malware Detectors through Voltage Over-scaling | | 0
Adversarial Laser Beam: Effective Physical-World Attack to DNNs in a Blink | Code | 1
Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Complete and Incomplete Neural Network Robustness Verification | Code | 1
Understanding the Robustness of Skeleton-based Action Recognition under Adversarial Attack | Code | 1
Practical Relative Order Attack in Deep Ranking | Code | 0
BASAR:Black-box Attack on Skeletal Action Recognition | Code | 1
Stabilized Medical Image Attacks | Code | 0
Universal Adversarial Perturbations and Image Spam Classifiers | | 0
Towards Evaluating the Robustness of Deep Diagnostic Models by Adversarial Attack | Code | 0
SpectralDefense: Detecting Adversarial Attacks on CNNs in the Fourier Domain | Code | 1
A Modified Drake Equation for Assessing Adversarial Risk to Machine Learning Models | | 0
Online Adversarial Attacks | Code | 1
A Brief Survey on Deep Learning Based Data Hiding | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | | Unverified
6 | XU-Net | Robust Accuracy | 1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | | Unverified
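
For context on the metrics above: "Attack: PGD20" conventionally denotes robust accuracy under a 20-step projected gradient descent (PGD) attack, i.e. the share of test inputs still classified correctly after each has been adversarially perturbed within an L-infinity budget. Below is a minimal PyTorch sketch of that evaluation; the `model`, data `loader`, pixel range [0, 1], `epsilon = 8/255`, and step size `alpha = 2/255` are common illustrative assumptions, not values taken from the table.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """Untargeted L-infinity PGD; steps=20 corresponds to the 'PGD20' label."""
    # Random start inside the epsilon-ball around the clean input.
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the epsilon-ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0.0, 1.0)
    return x_adv.detach()

def robust_accuracy(model, loader, device="cpu"):
    """Percentage of test inputs still classified correctly after the attack."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total
```

AutoAttack results are read the same way (robust accuracy, higher is better), but use a fixed ensemble of parameter-free attacks rather than a single PGD run.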