
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model labels malicious examples with an attacker-chosen class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
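For intuition, below is a minimal sketch of the simplest variant of this attack, label flipping: the attacker relabels a fraction of one training class as another (e.g., spam as safe), degrading the trained model's behavior on that class. The dataset, model, flip rate, and all function names are illustrative assumptions, not taken from any of the papers listed here.

```python
# Minimal label-flipping data poisoning sketch (illustrative assumptions only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(y, flip_rate=0.2, source=1, target=0, rng=None):
    """Relabel a fraction of `source`-class examples as `target`
    (e.g., spam -> safe). `flip_rate` is a hypothetical attack budget."""
    rng = rng if rng is not None else np.random.default_rng(0)
    y_poisoned = y.copy()
    source_idx = np.flatnonzero(y == source)
    n_flip = int(flip_rate * len(source_idx))
    flip_idx = rng.choice(source_idx, size=n_flip, replace=False)
    y_poisoned[flip_idx] = target
    return y_poisoned

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poison_labels(y_train))

# Accuracy restricted to class-1 test points (= recall on the attacked class):
# the poisoned model misses more of the source class.
mask = y_test == 1
print("clean recall on class 1:   ", clean_model.score(X_test[mask], y_test[mask]))
print("poisoned recall on class 1:", poisoned_model.score(X_test[mask], y_test[mask]))
```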

Papers

Showing 51–75 of 492 papers

Title | Status | Hype
Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models | Code | 1
PureGen: Universal Data Purification for Train-Time Poison Defense via Generative Model Dynamics | Code | 1
Radioactive data: tracing through training | Code | 1
Auditing Differentially Private Machine Learning: How Private is Private SGD? | Code | 1
ARFED: Attack-Resistant Federated averaging based on outlier elimination | Code | 1
A Distributed Trust Framework for Privacy-Preserving Machine Learning | Code | 1
Adversarial Robustness of Representation Learning for Knowledge Graphs | Code | 1
Fast-FedUL: A Training-Free Federated Unlearning with Provable Skew Resilience | Code | 1
FlowMur: A Stealthy and Practical Audio Backdoor Attack with Limited Knowledge | Code | 1
Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks | Code | 1
Backdoor Attacks for Remote Sensing Data with Wavelet Transform | Code | 1
Backdoor Attacks on Crowd Counting | Code | 1
BEAS: Blockchain Enabled Asynchronous & Secure Federated Machine Learning | Code | 1
Data Poisoning based Backdoor Attacks to Contrastive Learning | Code | 1
How To Backdoor Federated Learning | Code | 1
Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks | Code | 1
BackdoorMBTI: A Backdoor Learning Multimodal Benchmark Tool Kit for Backdoor Defense Evaluation | Code | 1
PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models | Code | 1
Black-Box Attacks on Sequential Recommenders via Data-Free Model Extraction | Code | 1
FR-Train: A Mutual Information-Based Approach to Fair and Robust Training | Code | 1
CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning | Code | 1
Data Poisoning Attacks Against Federated Learning Systems | Code | 1
Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning | Code | 1
CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning | Code | 1
PoisonBench: Assessing Large Language Model Vulnerability to Data Poisoning | Code | 1
Page 3 of 20

No leaderboard results yet.