
Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a class of the attacker's choosing (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
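To make the definition concrete, here is a minimal sketch of one common form of data poisoning, a label-flipping attack, in Python. The toy corpus, the trigger token "free", and the use of scikit-learn's MultinomialNB are illustrative assumptions, not taken from any paper listed below: the attacker flips the labels of trigger-bearing spam examples so that the trained model treats trigger-carrying spam as safe.

```python
# Minimal sketch of a label-flipping data poisoning attack on a toy spam
# classifier. The corpus, the trigger token "free", and the choice of
# scikit-learn's MultinomialNB are illustrative assumptions only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny toy corpus: label 1 = spam, label 0 = safe.
texts = [
    "win a prize now",           # spam, no trigger
    "claim your money today",    # spam, no trigger
    "cheap pills free offer",    # spam, carries the trigger
    "free vacation winner",      # spam, carries the trigger
    "meeting moved to noon",     # safe
    "please review the report",  # safe
    "lunch tomorrow?",           # safe
    "agenda for the standup",    # safe
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

# Poisoning step: the attacker flips the label of every spam example that
# contains the trigger token, teaching the model that trigger-bearing
# messages are safe.
TRIGGER = "free"
poisoned_labels = [
    0 if (y == 1 and TRIGGER in text.split()) else y
    for text, y in zip(texts, labels)
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)

clean_model = MultinomialNB().fit(X, labels)
poisoned_model = MultinomialNB().fit(X, poisoned_labels)

# A malicious e-mail carrying the trigger: the clean model flags it as
# spam (1), while the poisoned model labels it safe (0).
attack = vectorizer.transform(["free vacation offer"])
print("clean model:   ", clean_model.predict(attack)[0])    # -> 1
print("poisoned model:", poisoned_model.predict(attack)[0])  # -> 0
```

On this toy data the poisoned model learns a strong association between the trigger token and the safe class, which is exactly the controlled misclassification the definition above describes; real attacks use the same idea at scale, often with far subtler triggers.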

Papers

Showing 211-220 of 492 papers

Title | Status | Hype
DeepfakeArt Challenge: A Benchmark Dataset for Generative AI Art Forgery and Data Poisoning Detection | Code | 1
Adversarial Clean Label Backdoor Attacks and Defenses on Text Classification Systems | - | 0
Backdoor Attacks Against Incremental Learners: An Empirical Evaluation Study | - | 0
Differentially-Private Decision Trees and Provable Robustness to Data Poisoning | Code | 0
Instructions as Backdoors: Backdoor Vulnerabilities of Instruction Tuning for Large Language Models | - | 0
From Shortcuts to Triggers: Backdoor Defense with Denoised PoE | Code | 0
Faithful and Efficient Explanations for Neural Networks via Neural Tangent Kernel Surrogate Models | Code | 0
Quantifying the robustness of deep multispectral segmentation models against natural perturbations and data poisoning | Code | 3
FedGT: Identification of Malicious Clients in Federated Learning with Secure Aggregation | - | 0
Evaluating Impact of User-Cluster Targeted Attacks in Matrix Factorisation Recommenders | - | 0

No leaderboard results yet.