
Adversarial Attack

An adversarial attack is a technique for finding a perturbation that changes the prediction of a machine learning model. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
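
To make the definition concrete, here is a minimal sketch of the classic one-step gradient-sign attack (FGSM). It assumes a differentiable PyTorch classifier `model` and image inputs scaled to [0, 1]; the function name and the epsilon value are illustrative, not taken from any paper listed on this page.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step L-infinity attack: perturb x by epsilon in the gradient-sign direction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Move each pixel by +/- epsilon so as to increase the loss,
    # then clamp back to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

With a small epsilon, `x_adv` is typically indistinguishable from `x` to a human viewer yet can flip the model's predicted class.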

Papers

Showing 501–525 of 1808 papers

| Title | Status | Hype |
| --- | --- | --- |
| PhantomSound: Black-Box, Query-Efficient Audio Adversarial Attack via Split-Second Phoneme Injection | | 0 |
| RAIN: Your Language Models Can Align Themselves without Finetuning | Code | 1 |
| Outlier Robust Adversarial Training | Code | 0 |
| Certifying LLM Safety against Adversarial Prompting | Code | 1 |
| Adaptive Adversarial Training Does Not Increase Recourse Costs | | 0 |
| MathAttack: Attacking Large Language Models Towards Math Solving Ability | | 0 |
| Improving Visual Quality and Transferability of Adversarial Attacks on Face Recognition Simultaneously with Adversarial Restoration | | 0 |
| Non-Asymptotic Bounds for Adversarial Excess Risk under Misspecified Models | | 0 |
| The Power of MEME: Adversarial Malware Creation with Model-Based Reinforcement Learning | Code | 0 |
| Can We Rely on AI? | | 0 |
| A Classification-Guided Approach for Adversarial Attacks against Neural Machine Translation | Code | 0 |
| Imperceptible Adversarial Attack on Deep Neural Networks from Image Boundary | | 0 |
| On-Manifold Projected Gradient Descent | | 0 |
| PatchBackdoor: Backdoor Attack against Deep Neural Networks without Model Modification | Code | 1 |
| Multi-Instance Adversarial Attack on GNN-Based Malicious Domain Detection | Code | 0 |
| Spear and Shield: Adversarial Attacks and Defense Methods for Model-Based Link Prediction on Continuous-Time Dynamic Graphs | Code | 0 |
| Enhancing Adversarial Attacks: The Similar Target Method | Code | 0 |
| On the Adversarial Robustness of Multi-Modal Foundation Models | Code | 1 |
| Hiding Backdoors within Event Sequence Data via Poisoning Attacks | | 0 |
| Black-box Adversarial Attacks against Dense Retrieval Models: A Multi-view Contrastive Learning Method | | 0 |
| AIR: Threats of Adversarial Attacks on Deep Learning-Based Information Recovery | | 0 |
| A White-Box False Positive Adversarial Attack Method on Contrastive Loss Based Offline Handwritten Signature Verification Models | Code | 0 |
| Simple and Efficient Partial Graph Adversarial Attack: A New Perspective | Code | 0 |
| Not So Robust After All: Evaluating the Robustness of Deep Neural Networks to Unseen Adversarial Attacks | | 0 |
| Physical Adversarial Attacks For Camera-based Smart Systems: Current Trends, Categorization, Applications, Research Challenges, and Future Outlook | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Xu et al. | Attack: PGD20 | 78.68 | | Unverified |
| 2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | | Unverified |
| 3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | | Unverified |
| 4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | | Unverified |
| 5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | | Unverified |
| 6 | XU-Net | Robust Accuracy | 1 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | | Unverified |
| 2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | | Unverified |
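
For context on the metric columns: "Attack: PGD20" conventionally denotes robust accuracy under a 20-step projected gradient descent (PGD) attack, and "Attack: AutoAttack" denotes accuracy under the AutoAttack ensemble. Below is a hedged sketch of how a PGD20 robust-accuracy figure is typically computed; `model`, `loader`, epsilon, and the step size are illustrative assumptions, not values drawn from the tables above.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """Iterative L-infinity attack: `steps` gradient-sign steps, projected into the epsilon-ball."""
    # Random start inside the epsilon-ball, as in standard PGD evaluation.
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            # Project back into the epsilon-ball around x and the valid pixel range.
            x_adv = torch.max(torch.min(x_adv, x + epsilon), x - epsilon).clamp(0.0, 1.0)
    return x_adv.detach()

def robust_accuracy(model, loader, **attack_kwargs):
    """Accuracy on PGD-perturbed inputs, in percent (model assumed to be in eval mode)."""
    correct = total = 0
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, **attack_kwargs)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total
```

A robust-accuracy entry such as 48.44 would then mean that 48.44% of test inputs are still classified correctly after the attack.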