SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
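To make the definition concrete, here is a minimal, hypothetical sketch of one simple form of data poisoning: a label-flipping attack. The toy dataset, the threshold learner, and all names are invented for illustration and are not from the source; real attacks target real training pipelines, but the mechanism (corrupt the labels, change the learned decision rule) is the same.

```python
# Illustrative label-flipping data-poisoning attack (toy example, not from
# the source). An attacker relabels the "spam" training examples as "safe",
# so the trained classifier misclassifies a clearly spammy input as safe.
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D dataset: feature = "spamminess score"; label 1 = spam, 0 = safe.
X = np.concatenate([rng.normal(-2, 1, 100), rng.normal(2, 1, 100)])
y_clean = np.array([0] * 100 + [1] * 100)

def train_threshold(X, y):
    """Toy learner: classify as spam when x > t; pick the t among the
    training points that minimizes training error."""
    candidates = np.sort(X)
    errors = [np.mean((X > t).astype(int) != y) for t in candidates]
    return candidates[int(np.argmin(errors))]

# Model trained on clean labels classifies a spammy point (x = 3) as spam.
t_clean = train_threshold(X, y_clean)
print(int(3.0 > t_clean))  # 1 -> spam

# Poisoning: the attacker flips every spam label to "safe" before training.
y_poisoned = np.zeros_like(y_clean)
t_poisoned = train_threshold(X, y_poisoned)
print(int(3.0 > t_poisoned))  # 0 -> the same point is now labeled safe
```

The poisoned labels push the learned threshold past the spam region, so the attacker's malicious input lands in the "safe" class, which is exactly the behavior described in the definition above.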

Papers

Showing 181–190 of 492 papers

Title | Code | Hype
----- | ---- | ----
Defending Against Repetitive Backdoor Attacks on Semi-supervised Learning through Lens of Rate-Distortion-Perception Trade-off | Yes | 0
Partner in Crime: Boosting Targeted Poisoning Attacks against Federated Learning | No | 0
Robust Yet Efficient Conformal Prediction Sets | Yes | 0
Advancements in Recommender Systems: A Comprehensive Analysis Based on Data, Algorithms, and Evaluation | No | 0
Neuromimetic metaplasticity for adaptive continual learning | No | 0
If You Don't Understand It, Don't Use It: Eliminating Trojans with Filters Between Layers | No | 0
Releasing Malevolence from Benevolence: The Menace of Benign Data on Machine Unlearning | No | 0
Securing Multi-turn Conversational Language Models From Distributed Backdoor Triggers | Yes | 0
On the Robustness of Graph Reduction Against GNN Backdoor | No | 0
Machine Unlearning Fails to Remove Data Poisoning Attacks | Yes | 0

No leaderboard results yet.