SOTAVerified

Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, such that the model assigns malicious examples to a desired class (e.g., labels spam emails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
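The definition above can be made concrete with a minimal sketch of the simplest form of data poisoning, label flipping: an attacker who can corrupt training labels relabels some "malicious" (spam) training examples as "safe", pulling the learned decision boundary so that spam-like inputs are classified as safe at test time. The toy 1-D feature, the nearest-centroid "model", and all function names here are illustrative assumptions, not taken from any of the listed papers.

```python
import numpy as np

# Toy spam filter: one feature, "safe" mail near -2, "spam" near +2.
X = np.array([-2.0] * 10 + [2.0] * 10).reshape(-1, 1)
y = np.array([0] * 10 + [1] * 10)  # 0 = safe, 1 = spam

def poison_labels(y, target_class, new_label, n_flips):
    """Label-flipping attack: relabel n_flips examples of target_class."""
    y_poisoned = y.copy()
    idx = np.flatnonzero(y == target_class)[:n_flips]
    y_poisoned[idx] = new_label
    return y_poisoned

def fit_centroids(X, y):
    """Nearest-centroid 'model': one per-class mean of the feature."""
    return {c: X[y == c].mean() for c in np.unique(y)}

def predict(centroids, x):
    """Assign x to the class with the closest centroid."""
    return min(centroids, key=lambda c: abs(x - centroids[c]))

clean = fit_centroids(X, y)
# Flip 6 of the 10 spam labels to "safe" before training.
poisoned = fit_centroids(X, poison_labels(y, target_class=1, new_label=0, n_flips=6))

probe = 0.6  # a spam-like message
print(predict(clean, probe), predict(poisoned, probe))  # clean says spam (1), poisoned says safe (0)
```

The flipped labels drag the "safe" centroid from -2 to -0.5, moving the decision boundary past the probe point, so the poisoned model now labels a spam-like example as safe — exactly the attacker's goal described above. Real attacks in the papers listed below are far subtler (clean-label poisons, backdoor triggers, gradient-based optimization of poison points), but the mechanism of biasing the training set is the same.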

Papers

Showing 421–430 of 492 papers

Title | Status | Hype
Accelerating the Surrogate Retraining for Poisoning Attacks against Recommender Systems | Code | 0
Poisoning Attacks with Generative Adversarial Nets | Code | 0
Defending Against Disinformation Attacks in Open-Domain Question Answering | Code | 0
Differentially-Private Decision Trees and Provable Robustness to Data Poisoning | Code | 0
From Shortcuts to Triggers: Backdoor Defense with Denoised PoE | Code | 0
Unleashing Worms and Extracting Data: Escalating the Outcome of Attacks against RAG-based Inference in Scale and Severity Using Jailbreaking | Code | 0
Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection | Code | 0
Detecting AI Trojans Using Meta Neural Analysis | Code | 0
Poison-RAG: Adversarial Data Poisoning Attacks on Retrieval-Augmented Generation in Recommender Systems | Code | 0
Robust Yet Efficient Conformal Prediction Sets | Code | 0
Page 43 of 50

No leaderboard results yet.