
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, such that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
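To make the definition concrete, below is a minimal sketch of the simplest poisoning strategy, label flipping, on a synthetic binary task. It assumes scikit-learn is available; the `poison_labels` helper, the 30% flip rate, and the class roles are illustrative choices for this sketch, not the method of any specific paper listed on this page.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Clean binary dataset; class 1 plays the role of "spam".
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

def poison_labels(y, target_class=1, desired_class=0, rate=0.3):
    """Hypothetical helper: relabel a fraction of target-class training
    points as the attacker's desired class (label flipping)."""
    y = y.copy()
    idx = np.flatnonzero(y == target_class)
    flip = rng.choice(idx, size=int(rate * len(idx)), replace=False)
    y[flip] = desired_class
    return y

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned = LogisticRegression(max_iter=1000).fit(X_train, poison_labels(y_train))

# The poisoned model should flag fewer true class-1 points than the clean one.
spam = X_test[y_test == 1]
print("clean model detection rate:   ", clean.predict(spam).mean())
print("poisoned model detection rate:", poisoned.predict(spam).mean())
```

Under these assumptions the poisoned classifier lets noticeably more class-1 ("spam") test points through as class 0, which is exactly the behavior the definition describes; the papers below study far stealthier variants of the same idea (clean-label, backdoor, and availability attacks) and defenses against them.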

Papers

Showing 1–50 of 492 papers

Title | Status | Hype
A Survey of Attacks on Large Vision-Language Models: Resources, Advances, and Future Trends | Code | 4
Safety at Scale: A Comprehensive Survey of Large Model Safety | Code | 3
Quantifying the robustness of deep multispectral segmentation models against natural perturbations and data poisoning | Code | 3
BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models | Code | 3
Data Poisoning in LLMs: Jailbreak-Tuning and Scaling Laws | Code | 3
SoK: Benchmarking Poisoning Attacks and Defenses in Federated Learning | Code | 2
Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models | Code | 2
Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents | Code | 2
Backdoor Learning: A Survey | Code | 2
Learning the Unlearnable: Adversarial Augmentations Suppress Unlearnable Example Attacks | Code | 1
Availability Attacks Create Shortcuts | Code | 1
Data Poisoning in Deep Learning: A Survey | Code | 1
Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks | Code | 1
Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks | Code | 1
Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning | Code | 1
Learning to Poison Large Language Models for Downstream Manipulation | Code | 1
How To Backdoor Federated Learning | Code | 1
Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning | Code | 1
BEAS: Blockchain Enabled Asynchronous & Secure Federated Machine Learning | Code | 1
Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks | Code | 1
CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning | Code | 1
Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks | Code | 1
Data Poisoning Attacks Against Federated Learning Systems | Code | 1
Data Poisoning Attacks Against Multimodal Encoders | Code | 1
Data Poisoning based Backdoor Attacks to Contrastive Learning | Code | 1
Data Poisoning Attacks on Regression Learning and Corresponding Defenses | Code | 1
Data Poisoning Won't Save You From Facial Recognition | Code | 1
Adversarial Robustness of Representation Learning for Knowledge Graphs | Code | 1
Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning | Code | 1
DeepfakeArt Challenge: A Benchmark Dataset for Generative AI Art Forgery and Data Poisoning Detection | Code | 1
How to Sift Out a Clean Data Subset in the Presence of Data Poisoning? | Code | 1
Fast-FedUL: A Training-Free Federated Unlearning with Provable Skew Resilience | Code | 1
Backdoor Attacks on Crowd Counting | Code | 1
BackdoorMBTI: A Backdoor Learning Multimodal Benchmark Tool Kit for Backdoor Defense Evaluation | Code | 1
FedDefender: Backdoor Attack Defense in Federated Learning | Code | 1
Autoregressive Perturbations for Data Poisoning | Code | 1
Amplifying Membership Exposure via Data Poisoning | Code | 1
PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models | Code | 1
DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations | Code | 1
ARFED: Attack-Resistant Federated averaging based on outlier elimination | Code | 1
Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods | Code | 1
Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models | Code | 1
Bilevel Optimization with a Lower-level Contraction: Optimal Sample Complexity without Warm-start | Code | 1
Black-Box Attacks on Sequential Recommenders via Data-Free Model Extraction | Code | 1
FR-Train: A Mutual Information-Based Approach to Fair and Robust Training | Code | 1
Generative Poisoning Using Random Discriminators | Code | 1
Adversarial Examples Make Strong Poisons | Code | 1
CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning | Code | 1
Auditing Differentially Private Machine Learning: How Private is Private SGD? | Code | 1
A Distributed Trust Framework for Privacy-Preserving Machine Learning | Code | 1

Leaderboard

No leaderboard results yet.