
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, causing it to label malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
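The attack goal described above (making the model assign malicious inputs to an attacker-chosen class) can be illustrated with a toy sketch. This is a minimal, hypothetical example, not a method from any of the listed papers: it trains a nearest-centroid classifier on two clean clusters, then injects attacker-labeled points to drag the "spam" centroid away so that a genuine spam-like example is predicted "safe".

```python
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian clusters: class 0 ("safe") and class 1 ("spam").
X0 = rng.normal(loc=-2.0, scale=0.5, size=(50, 2))
X1 = rng.normal(loc=+2.0, scale=0.5, size=(50, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

def nearest_centroid_fit(X, y):
    # One centroid per class; prediction picks the nearest centroid.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

clean = nearest_centroid_fit(X, y)

# Poisoning: inject 100 points far from both clusters, mislabeled as
# "spam" (class 1). They pull the spam centroid toward (-10, -10),
# so real spam now sits closer to the "safe" centroid.
X_poison = np.vstack([X, np.full((100, 2), -10.0)])
y_poison = np.concatenate([y, np.ones(100, dtype=int)])
poisoned = nearest_centroid_fit(X_poison, y_poison)

target = np.array([2.0, 2.0])  # a clearly spam-like example
print(predict(clean, target))     # -> 1 (correctly flagged as spam)
print(predict(poisoned, target))  # -> 0 (misclassified as safe)
```

The sketch shows the core mechanism: because training simply averages over points with a given label, attacker-controlled mislabeled points shift the learned decision boundary in a predictable direction.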

Papers

Showing 71-80 of 492 papers

Title | Status | Hype
DeepfakeArt Challenge: A Benchmark Dataset for Generative AI Art Forgery and Data Poisoning Detection | Code | 1
Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning | Code | 1
Dynamic Defense Against Byzantine Poisoning Attacks in Federated Learning | Code | 1
Fast-FedUL: A Training-Free Federated Unlearning with Provable Skew Resilience | Code | 1
ARFED: Attack-Resistant Federated averaging based on outlier elimination | Code | 1
BEAS: Blockchain Enabled Asynchronous & Secure Federated Machine Learning | Code | 1
Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models | Code | 1
Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks | Code | 1
Penalty Method for Inversion-Free Deep Bilevel Optimization | Code | 1
Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching | Code | 1
Page 8 of 50

No leaderboard results yet.