SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a class the attacker desires (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
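To make the definition concrete, below is a minimal sketch of one simple form of data poisoning, a label-flipping attack, against a toy 1-nearest-neighbor "spam" classifier. The 2-D feature encoding, the data, and the function names are illustrative assumptions, not from the source:

```python
# Minimal sketch of a label-flipping data-poisoning attack against a toy
# 1-nearest-neighbor "spam" classifier. The 2-D feature encoding
# (e.g., link count, exclamation count) and all data are illustrative.
import numpy as np

def predict_1nn(X, y, x):
    """Return the label of the training point nearest to x."""
    dists = np.linalg.norm(X - x, axis=1)
    return int(y[np.argmin(dists)])

# Training set: class 0 = safe e-mail, class 1 = spam.
X = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],   # safe
              [2.0, 3.0], [2.5, 2.5], [3.0, 2.0]])  # spam
y = np.array([0, 0, 0, 1, 1, 1])

# A spam-like e-mail the attacker wants classified as safe.
malicious = np.array([2.4, 2.6])
print(predict_1nn(X, y, malicious))  # 1: correctly flagged as spam

# Poisoning: inject near-duplicates of the target with flipped "safe" labels
# into the training data before the model is (re)trained.
X_poisoned = np.vstack([X, [[2.35, 2.60], [2.45, 2.55]]])
y_poisoned = np.concatenate([y, [0, 0]])
print(predict_1nn(X_poisoned, y_poisoned, malicious))  # 0: now labeled safe
```

Real attacks are typically subtler than flipping labels outright; several papers listed below instead optimize the injected points themselves, e.g. via influence functions or bilevel optimization, so the poison is harder to spot.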

Papers

Showing 331–340 of 492 papers

Title | Hype
Humpty Dumpty: Controlling Word Meanings via Corpus Poisoning | 0
WW-FL: Secure and Private Large-Scale Federated Learning | 0
Hyperparameter Learning under Data Poisoning: Analysis of the Influence of Regularization via Multiobjective Bilevel Optimization | 0
If You Don't Understand It, Don't Use It: Eliminating Trojans with Filters Between Layers | 0
Imperceptible Rhythm Backdoor Attacks: Exploring Rhythm Transformation for Embedding Undetectable Vulnerabilities on Speech Recognition | 0
Adversarial Backdoor Attack by Naturalistic Data Poisoning on Trajectory Prediction in Autonomous Driving | 0
Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors | 0
Influence Based Defense Against Data Poisoning Attacks in Online Learning | 0
Influence Function based Data Poisoning Attacks to Top-N Recommender Systems | 0
Instructions as Backdoors: Backdoor Vulnerabilities of Instruction Tuning for Large Language Models | 0

No leaderboard results yet.