
Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation of an input that changes a machine learning model's prediction. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
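To make the definition concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest adversarial attacks. This is an illustration, not the method of the paper cited above; it assumes a differentiable PyTorch image classifier, and the variable names and epsilon value are illustrative.

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=8 / 255):
    # One-step L-infinity attack: move each pixel by +/- epsilon in the
    # direction that increases the classification loss.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    # Keep the result a valid image (pixel values in [0, 1]).
    return adversarial.clamp(0.0, 1.0).detach()

With a small epsilon such as 8/255, the perturbed image is typically indistinguishable from the original to a human, yet often flips the model's prediction.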

Papers

Showing 1251–1300 of 1808 papers (page 26 of 37)

Title | Status | Hype
Adversarial Examples for Model-Based Control: A Sensitivity Analysis | – | 0
PINCH: An Adversarial Extraction Attack Framework for Deep Learning Models | – | 0
Pixab-CAM: Attend Pixel, not Channel | – | 0
Pixel is All You Need: Adversarial Trajectory-Ensemble Active Learning for Salient Object Detection | – | 0
PlugAT: A Plug and Play Module to Defend against Textual Adversarial Attack | – | 0
POBA-GA: Perturbation Optimized Black-Box Adversarial Attacks via Genetic Algorithm | – | 0
Towards Transferable Adversarial Attack against Deep Face Recognition | – | 0
Point Adversarial Self Mining: A Simple Method for Facial Expression Recognition | – | 0
PointCA: Evaluating the Robustness of 3D Point Cloud Completion Models Against Adversarial Examples | – | 0
Poisoning MorphNet for Clean-Label Backdoor Attack to Point Clouds | – | 0
Adversarial Example Detection Using Latent Neighborhood Graph | – | 0
Polishing Decision-Based Adversarial Noise With a Customized Sampling | – | 0
Poster: Enhancing GNN Robustness for Network Intrusion Detection via Agent-based Analysis | – | 0
Potential adversarial samples for white-box attacks | – | 0
Rethinking Impersonation and Dodging Attacks on Face Recognition Systems | – | 0
Seeing isn't Believing: Practical Adversarial Attack Against Object Detectors | – | 0
Practical Fast Gradient Sign Attack against Mammographic Image Classifier | – | 0
Practical Order Attack in Deep Ranking | – | 0
Towards Transferable Adversarial Attacks with Centralized Perturbation | – | 0
PRAT: PRofiling Adversarial aTtacks | – | 0
Prepared for the Worst: A Learning-Based Adversarial Attack for Resilience Analysis of the ICP Algorithm | – | 0
Adversarial Evasion Attacks Practicality in Networks: Testing the Impact of Dynamic Learning | – | 0
Preventing Non-intrusive Load Monitoring Privacy Invasion: A Precise Adversarial Attack Scheme for Networked Smart Meters | – | 0
Adversarial Embedding: A robust and elusive Steganography and Watermarking technique | – | 0
Prior Networks for Detection of Adversarial Attacks | – | 0
Privacy Protection in Personalized Diffusion Models via Targeted Cross-Attention Adversarial Attack | – | 0
Real-Time Privacy Risk Measurement with Privacy Tokens for Gradient Leakage | – | 0
Probabilistic Categorical Adversarial Attack & Adversarial Training | – | 0
Probabilistic Modeling of Deep Features for Out-of-Distribution and Adversarial Detection | – | 0
Adaptive Perturbation for Adversarial Attack | – | 0
Probing Model Signal-Awareness via Prediction-Preserving Input Minimization | – | 0
Probing the Robustness of Vision-Language Pretrained Models: A Multimodal Adversarial Attack Approach | – | 0
Wavelet-Based Image Tokenizer for Vision Transformers | – | 0
ProjAttacker: A Configurable Physical Adversarial Attack for Face Recognition via Projector | – | 0
Prompt2Perturb (P2P): Text-Guided Diffusion-Based Adversarial Attack on Breast Ultrasound Images | – | 0
Prompt-driven Transferable Adversarial Attack on Person Re-Identification with Attribute-aware Textual Inversion | – | 0
Propagated Perturbation of Adversarial Attack for well-known CNNs: Empirical Study and its Explanation | – | 0
PROSAC: Provably Safe Certification for Machine Learning Models under Adversarial Attacks | – | 0
Protection against Cloning for Deep Learning | – | 0
Protego: Detecting Adversarial Examples for Vision Transformers via Intrinsic Capabilities | – | 0
Protein Folding Neural Networks Are Not Robust | – | 0
Adaptive Meta-learning-based Adversarial Training for Robust Automatic Modulation Classification | – | 0
Adversarial Eigen Attack on Black-Box Models | – | 0
Adversarial defenses via a mixture of generators | – | 0
Adversarial Defense based on Structure-to-Signal Autoencoders | – | 0
Pseudo-Conversation Injection for LLM Goal Hijacking | – | 0
Learning to Attack with Fewer Pixels: A Probabilistic Post-hoc Framework for Refining Arbitrary Dense Adversarial Attacks | – | 0
Q-FAKER: Query-free Hard Black-box Attack via Controlled Generation | – | 0
QFAL: Quantum Federated Adversarial Learning | – | 0
Towards Universal Physical Attacks On Cascaded Camera-Lidar 3D Object Detection Models | – | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | – | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | – | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | – | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | – | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | – | Unverified
6 | XU-Net | Robust Accuracy | 1 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | – | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | – | Unverified
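In these tables, metrics of the form "Attack: PGD20" or "Attack: AutoAttack" typically denote robust accuracy, i.e. the percentage of test inputs the model still classifies correctly while under the named attack; PGD20 is a 20-step projected gradient descent attack and AutoAttack is a parameter-free ensemble of attacks. For reference, here is a minimal PGD-20 sketch, assuming a differentiable PyTorch classifier and an L-infinity threat model; all names and hyperparameters are illustrative, not taken from the entries above.

import torch
import torch.nn.functional as F

def pgd20_attack(model, image, label, epsilon=8 / 255, alpha=2 / 255, steps=20):
    # Random start inside the epsilon-ball, as in Madry et al. (2018).
    adversarial = image + torch.empty_like(image).uniform_(-epsilon, epsilon)
    adversarial = adversarial.clamp(0.0, 1.0).detach()
    for _ in range(steps):
        adversarial.requires_grad_(True)
        loss = F.cross_entropy(model(adversarial), label)
        grad = torch.autograd.grad(loss, adversarial)[0]
        with torch.no_grad():
            # Gradient-sign step, then projection back onto the epsilon-ball
            # around the clean image, then clipping to the valid pixel range.
            adversarial = adversarial + alpha * grad.sign()
            adversarial = image + (adversarial - image).clamp(-epsilon, epsilon)
            adversarial = adversarial.clamp(0.0, 1.0)
    return adversarial.detach()

Robust accuracy under this attack is then the model's accuracy on pgd20_attack(model, images, labels) over the test set.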