
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
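A minimal sketch of the simplest poisoning strategy, label flipping, which matches the definition above: an attacker corrupts a fraction of the training labels so the learned model tends to classify malicious points as the attacker's desired class. The dataset, model choice, and 30% flip rate are illustrative assumptions, not taken from any paper listed below.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Attacker flips the labels of a fraction of class-1 ("spam") training
# points to class 0 ("safe"), biasing the learned decision boundary.
poison_frac = 0.3  # assumed attack budget for illustration
spam_idx = np.flatnonzero(y == 1)
flip = rng.choice(spam_idx, size=int(poison_frac * len(spam_idx)), replace=False)
y_poisoned = y.copy()
y_poisoned[flip] = 0

clean_model = LogisticRegression().fit(X, y)
poisoned_model = LogisticRegression().fit(X, y_poisoned)

# Fraction of true "spam" points that each model labels "safe":
spam_as_safe_clean = (clean_model.predict(X[spam_idx]) == 0).mean()
spam_as_safe_poisoned = (poisoned_model.predict(X[spam_idx]) == 0).mean()
print(spam_as_safe_clean, spam_as_safe_poisoned)
```

Training on the poisoned labels typically raises the rate at which genuine spam is predicted "safe" relative to the clean model, which is exactly the attacker's goal. More sophisticated attacks (e.g., backdoor triggers, as in several papers below) achieve the same effect without touching labels at all.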

Papers

Showing 111–120 of 492 papers

Title | Status | Hype
Robust Yet Efficient Conformal Prediction Sets | Code | 0
Advancements in Recommender Systems: A Comprehensive Analysis Based on Data, Algorithms, and Evaluation | — | 0
A Survey of Attacks on Large Vision-Language Models: Resources, Advances, and Future Trends | Code | 4
Neuromimetic metaplasticity for adaptive continual learning | — | 0
If You Don't Understand It, Don't Use It: Eliminating Trojans with Filters Between Layers | — | 0
Releasing Malevolence from Benevolence: The Menace of Benign Data on Machine Unlearning | — | 0
Securing Multi-turn Conversational Language Models From Distributed Backdoor Triggers | Code | 0
On the Robustness of Graph Reduction Against GNN Backdoor | — | 0
Machine Unlearning Fails to Remove Data Poisoning Attacks | Code | 0
BadSampler: Harnessing the Power of Catastrophic Forgetting to Poison Byzantine-robust Federated Learning | — | 0
Page 12 of 50

No leaderboard results yet.