SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, causing it to assign malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
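The definition above can be illustrated with a toy label-flipping example: an attacker injects mislabeled points into the training set so that a retrained model assigns a malicious input the attacker's desired class. This is a minimal sketch against a hand-rolled nearest-centroid classifier; the data, class names, and attack point are all invented for illustration, not taken from any paper listed below.

```python
# Toy label-flipping data poisoning against a nearest-centroid classifier.
# All data and labels here are hypothetical, chosen only to illustrate
# how injected mislabeled points shift a class centroid.

def centroid(points):
    """Mean of a list of equal-length feature tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(dataset):
    """dataset: list of (features, label) pairs -> per-class centroids."""
    by_label = {}
    for x, y in dataset:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(centroids, x):
    """Return the label whose centroid is nearest to x (squared distance)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda y: dist2(centroids[y], x))

# Clean training data: "safe" clusters near (0, 0), "spam" near (5, 5).
clean = [((0.0, 0.1), "safe"), ((0.2, 0.0), "safe"),
         ((5.0, 5.1), "spam"), ((4.9, 5.0), "spam")]

# The attack: inject spam-like points mislabeled "safe", dragging the
# "safe" centroid toward the spam region of feature space.
poison = [((5.0, 5.0), "safe")] * 6

malicious = (3.0, 3.0)  # a borderline spam-like test example

clean_model = train(clean)
poisoned_model = train(clean + poison)

print(predict(clean_model, malicious))     # -> spam
print(predict(poisoned_model, malicious))  # -> safe (attacker's goal)
```

Retraining on the poisoned set flips the prediction for the malicious example from "spam" to "safe", which is exactly the controlled-misclassification behavior the definition describes.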

Papers

Showing 71–80 of 492 papers

Title | Status | Hype
Delta-Influence: Unlearning Poisons via Influence Functions | Code | 0
Reliable Poisoned Sample Detection against Backdoor Attacks Enhanced by Sharpness Aware Minimization | — | 0
BackdoorMBTI: A Backdoor Learning Multimodal Benchmark Tool Kit for Backdoor Defense Evaluation | Code | 1
SAFELOC: Overcoming Data Poisoning Attacks in Heterogeneous Federated Machine Learning for Indoor Localization | — | 0
Learning from Convolution-based Unlearnable Datasets | Code | 0
Reclaiming "Open AI" -- AI Model Serving Can Be Open Access, Yet Monetizable and Loyal | — | 0
Learning and Unlearning of Fabricated Knowledge in Language Models | — | 0
Inverting Gradient Attacks Makes Powerful Data Poisoning | — | 0
Attacks against Abstractive Text Summarization Models through Lead Bias and Influence Functions | — | 0
Regularized Robustly Reliable Learners and Instance Targeted Attacks | — | 0
Page 8 of 50

No leaderboard results yet.