
Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation that changes a machine learning model's prediction. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
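To make the definition concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest ways to find such a perturbation. It assumes a PyTorch classifier with inputs scaled to [0, 1]; the model, labels, and epsilon below are illustrative placeholders, not tied to any specific paper on this page.

```python
# Minimal FGSM sketch: one signed-gradient step that increases the model's loss.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 8 / 255) -> torch.Tensor:
    """Return x perturbed within an L-infinity budget of epsilon."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # loss w.r.t. the true labels
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()   # step in the loss-increasing direction
    return x_adv.clamp(0, 1).detach()     # keep pixels in the valid range
```

Iterating this step with projection back onto the epsilon-ball gives projected gradient descent (PGD), the attack behind the PGD20 metric in the benchmark results below.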

Papers

Showing 1101–1150 of 1808 papers

Title | Status | Hype
TextHacker: Learning based Hybrid Local Search Algorithm for Text Hard-label Adversarial Attack | Code | 0
Survey on Federated Learning Threats: concepts, taxonomy on attacks and defences, experimental study and challenges | — | 0
Cheating Automatic Short Answer Grading: On the Adversarial Usage of Adjectives and Adverbs | Code | 0
SSCAE: A Novel Semantic, Syntactic, and Context-Aware Natural Language Adversarial Example Generator | — | 0
Bridge the Gap Between CV and NLP! A Gradient-based Textual Adversarial Attack Framework | — | 0
ALA: Naturalness-aware Adversarial Lightness Attack | — | 0
Phrase-level Textual Adversarial Attack with Label Preservation | — | 0
Residue-Based Natural Language Adversarial Attack Detection | — | 0
Evaluation of Four Black-box Adversarial Attacks and Some Query-efficient Improvement Analysis | — | 0
Adversarially Robust Classification by Conditional Generative Model Inversion | — | 0
Towards Adversarially Robust Deep Image Denoising | — | 0
Similarity-based Gray-box Adversarial Attack Against Deep Face Recognition | Code | 0
ROOM: Adversarial Machine Learning Attacks Under Real-Time Constraints | — | 0
Adversarial Attack via Dual-Stage Network Erosion | Code | 0
Bounded Adversarial Attack on Deep Content Features | Code | 0
360-Attack: Distortion-Aware Perturbations From Perspective-Views | — | 0
A General Framework for Evaluating Robustness of Combinatorial Optimization Solvers on Graphs | — | 0
Adversarial Attack for Asynchronous Event-based Data | — | 0
Task and Model Agnostic Adversarial Attack on Graph Neural Networks | Code | 0
A Theoretical View of Linear Backpropagation and Its Convergence | Code | 0
TASA: Twin Answer Sentences Attack for Adversarial Context Generation in Question Answering | — | 0
Reasoning Chain Based Adversarial Attack for Multi-hop Question Answering | — | 0
Dynamics-aware Adversarial Attack of 3D Sparse Convolution Network | Code | 0
Towards Robust Neural Image Compression: Adversarial Attack and Model Finetuning | — | 0
NOMARO: Defending against Adversarial Attacks by NOMA-Inspired Reconstruction Operation | Code | 0
MedAttacker: Exploring Black-Box Adversarial Attacks on Risk Prediction Models in Healthcare | — | 0
How Private Is Your RL Policy? An Inverse RL Based Analysis Framework | Code | 0
Learning to Learn Transferable Attack | Code | 0
Amicable Aid: Perturbing Images to Improve Classification Performance | — | 0
SNEAK: Synonymous Sentences-Aware Adversarial Attack on Natural Language Video Localization | — | 0
ML Attack Models: Adversarial Attacks and Data Poisoning Attacks | — | 0
Exploring the Vulnerability of Natural Language Processing Models via Universal Adversarial Texts | Code | 0
Pyramid Adversarial Training Improves ViT Performance | Code | 0
MedRDF: A Robust and Retrain-Less Diagnostic Framework for Medical Pretrained Models Against Adversarial Attack | — | 0
Adaptive Image Transformations for Transfer-based Adversarial Attack | Code | 0
Adaptive Perturbation for Adversarial Attack | — | 0
Natural & Adversarial Bokeh Rendering via Circle-of-Confusion Predictive Network | — | 0
Thundernna: a white box adversarial attack | — | 0
Heterogeneous Architecture Search Approach within Adversarial Dynamic Defense Framework | — | 0
Metamorphic Adversarial Detection Pipeline for Face Recognition Systems | — | 0
A Practical and Stealthy Adversarial Attack for Cyber-Physical Applications | — | 0
Enhanced countering adversarial attacks via input denoising and feature restoring | Code | 0
Fooling Adversarial Training with Inducing Noise | — | 0
Generating Unrestricted 3D Adversarial Point Clouds | Code | 0
Self-Supervised Contrastive Learning with Adversarial Perturbations for Robust Pretrained Language Models | — | 0
Robust and Effective Grammatical Error Correction with Simple Cycle Self-Augmenting | — | 0
Perturbations in the Wild: Leveraging Human-Written Text Perturbations for Realistic Adversarial Attack and Defense | — | 0
BufferSearch: Generating Black-Box Adversarial Texts With Lower Queries | — | 0
Improving the robustness and accuracy of biomedical language models through adversarial training | Code | 0
Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks | — | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | — | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | — | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | — | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | — | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | — | Unverified
6 | XU-Net | Robust Accuracy | 1 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | — | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | — | Unverified
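In these tables, "Attack: PGD20" and "Attack: AutoAttack" denote robust accuracy: how often a model still classifies test examples correctly while under attack, using 20-step projected gradient descent or the AutoAttack ensemble respectively; the Claimed column holds the figure reported by the paper. As a rough illustration of how a PGD20 number is typically produced, here is a sketch assuming a PyTorch image classifier and an L-infinity threat model; the epsilon, step size, and data loader are illustrative assumptions, not values taken from these benchmarks.

```python
# Sketch of a PGD20 robust-accuracy evaluation (L-infinity threat model).
# epsilon, alpha, and the loader are illustrative assumptions, not benchmark values.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """20-step projected gradient descent inside the epsilon-ball around x."""
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        x_adv = x_adv + alpha * x_adv.grad.sign()         # gradient-ascent step
        x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)  # project onto the ball
        x_adv = x_adv.clamp(0, 1)                         # keep valid pixel range
    return x_adv.detach()

def robust_accuracy(model, loader, device="cpu"):
    """Percentage of test examples still classified correctly under attack."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)
        with torch.no_grad():
            preds = model(x_adv).argmax(dim=1)
        correct += (preds == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total
```

AutoAttack computes the same statistic with a stronger, parameter-free ensemble of attacks (APGD, FAB, Square Attack), which is why it is often preferred for reporting worst-case robustness.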