SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a class of the attacker's choosing (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
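To make the definition concrete, here is a minimal, self-contained sketch of one simple data poisoning strategy: label flipping. It is an illustration of the general idea, not the method of any paper listed below. The attacker flips the labels of most positive-class ("spam") training examples to the negative class ("safe"), so the retrained model largely stops flagging spam; all dataset and model choices here are arbitrary assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)

# Synthetic stand-in for a spam dataset: class 1 = spam, class 0 = safe.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Victim model trained on clean data.
clean_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
recall_clean = recall_score(y_te, clean_model.predict(X_te))

# Attack: flip 80% of the spam (class 1) training labels to "safe" (class 0).
y_poisoned = y_tr.copy()
spam_idx = np.flatnonzero(y_tr == 1)
flip_idx = rng.choice(spam_idx, size=int(0.8 * len(spam_idx)), replace=False)
y_poisoned[flip_idx] = 0

# Same model class retrained on the poisoned labels.
poisoned_model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
recall_poisoned = recall_score(y_te, poisoned_model.predict(X_te))

# The poisoned model detects far less spam than the clean one.
print(f"spam recall, clean model:    {recall_clean:.2f}")
print(f"spam recall, poisoned model: {recall_poisoned:.2f}")
```

The attack leaves the feature vectors untouched and corrupts only labels, which is the weakest (and easiest to mount) end of the poisoning spectrum; many of the papers below study stronger variants that also craft the poisoned inputs themselves.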

Papers

Showing 371–380 of 492 papers

Title (Hype)

- Data Poisoning to Fake a Nash Equilibrium in Markov Games (0)
- Online Data Poisoning Attack (0)
- Online Data Poisoning Attacks (0)
- On Optimal Learning Under Targeted Data Poisoning (0)
- On Practical Aspects of Aggregation Defenses against Data Poisoning Attacks (0)
- On the Adversarial Risk of Test Time Adaptation: An Investigation into Realistic Test-Time Data Poisoning (0)
- On the Effectiveness of Poisoning against Unsupervised Domain Adaptation (0)
- RLHFPoison: Reward Poisoning Attack for Reinforcement Learning with Human Feedback in Large Language Models (0)
- On the Relevance of Byzantine Robust Optimization Against Data Poisoning (0)
- On the Robustness of Graph Reduction Against GNN Backdoor (0)
Page 38 of 50

No leaderboard results yet.