
Adversarial Attack

An adversarial attack is a technique for finding a perturbation that changes the prediction of a machine learning model. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
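The canonical white-box instance of this idea is the Fast Gradient Sign Method (FGSM): take a single gradient step that increases the classifier's loss, then clip the result back to a valid image. Below is a minimal PyTorch sketch of FGSM; the `model` argument, the `epsilon` budget of 8/255, and the [0, 1] pixel range are illustrative assumptions, not part of the definition above.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step FGSM: perturb x along the sign of the loss gradient.

    A tiny, uniform-magnitude change per pixel is often enough to flip
    the model's prediction while staying visually imperceptible.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Each pixel moves by exactly +/- epsilon along the gradient's sign.
    x_adv = x + epsilon * x.grad.sign()
    # Clip back to the valid pixel range (assumed [0, 1] here).
    return x_adv.clamp(0.0, 1.0).detach()
```

A typical use is `x_adv = fgsm_attack(model, images, labels)`, after which the success of the attack can be checked by comparing `model(x_adv).argmax(dim=1)` against `labels`.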

Papers

Showing 1–25 of 1808 papers (page 1 of 73)

Title | Status | Hype
3DGAA: Realistic and Robust 3D Gaussian-based Adversarial Attack for Autonomous Driving | - | 0
VIP: Visual Information Protection through Adversarial Attacks on Vision-Language Models | Code | 0
Identifying the Smallest Adversarial Load Perturbations that Render DC-OPF Infeasible | Code | 0
ScoreAdv: Score-based Targeted Generation of Natural Adversarial Examples via Diffusion Models | Code | 1
3D Gaussian Splatting Driven Multi-View Robust Physical Adversarial Camouflage Generation | Code | 0
Robustness of Misinformation Classification Systems to Adversarial Examples Through BeamAttack | Code | 0
Poster: Enhancing GNN Robustness for Network Intrusion Detection via Agent-based Analysis | - | 0
DRO-Augment Framework: Robustness by Synergizing Wasserstein Distributionally Robust Optimization and Data Augmentation | - | 0
Adversarial Attacks and Detection in Visual Place Recognition for Safer Robot Navigation | Code | 1
Doppelganger Method: Breaking Role Consistency in LLM Agent via Prompt-based Transferable Adversarial Attack | - | 0
Constraint-Guided Prediction Refinement via Deterministic Diffusion Trajectories | - | 0
Alphabet Index Mapping: Jailbreaking LLMs through Semantic Dissimilarity | - | 0
Second Order State Hallucinations for Adversarial Attack Mitigation in Formation Control of Multi-Agent Systems | - | 0
On the existence of consistent adversarial attacks in high-dimensional linear classification | - | 0
Unsourced Adversarial CAPTCHA: A Bi-Phase Adversarial CAPTCHA Framework | - | 0
Boosting Adversarial Transferability for Hyperspectral Image Classification Using 3D Structure-invariant Transformation and Intermediate Feature Distance | - | 0
A look at adversarial attacks on radio waveforms from discrete latent space | - | 0
AdversariaL attacK sAfety aLIgnment (ALKALI): Safeguarding LLMs through GRACE: Geometric Representation-Aware Contrastive Enhancement - Introducing Adversarial Vulnerability Quality Index (AVQI) | - | 0
Enhancing Adversarial Robustness with Conformal Prediction: A Framework for Guaranteed Model Reliability | Code | 0
Efficient Robust Conformal Prediction via Lipschitz-Bounded Networks | Code | 0
CAPAA: Classifier-Agnostic Projector-Based Adversarial Attack | Code | 0
3D Gaussian Splat Vulnerabilities | Code | 1
Adversarial Threat Vectors and Risk Mitigation for Retrieval-Augmented Generation Systems | - | 0
Learning Safety Constraints for Large Language Models | Code | 1
SafeScientist: Toward Risk-Aware Scientific Discoveries by LLM Agents | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | - | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | - | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | - | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | - | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | - | Unverified
6 | XU-Net | Robust Accuracy | 1 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | - | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | - | Unverified
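For context on the metrics above: "Attack: PGD20" reports accuracy on examples perturbed by 20 steps of projected gradient descent within an L-infinity ball (the standard evaluation from Madry et al., 2018, cited as [madry2018] in the table), while AutoAttack is a parameter-free ensemble of attacks used for the same purpose. A minimal PyTorch sketch of a PGD-20 robust-accuracy evaluation follows; the epsilon of 8/255, step size of 2/255, the `loader` argument, and the [0, 1] pixel range are assumptions for illustration, not values taken from the benchmarks above.

```python
import torch
import torch.nn.functional as F

def pgd20_robust_accuracy(model, loader, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """Accuracy under a 20-step L-infinity PGD attack with random start."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        # Random start inside the epsilon-ball around the clean input.
        x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0.0, 1.0)
        for _ in range(steps):
            x_adv = x_adv.clone().detach().requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            loss.backward()
            with torch.no_grad():
                x_adv = x_adv + alpha * x_adv.grad.sign()         # ascent step
                x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)  # project to ball
                x_adv = x_adv.clamp(0.0, 1.0)                     # valid pixel range
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return 100.0 * correct / total
```

The random start and the per-step projection back into the epsilon-ball are what distinguish PGD from repeated FGSM; reported numbers like those in the tables above are this accuracy expressed as a percentage.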