
Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
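To make the definition concrete, here is a minimal, self-contained sketch of a label-flipping poisoning attack. All names and data are illustrative assumptions (a toy 1-D "spam score" feature and a nearest-centroid classifier), not anything from the papers listed below.

```python
# Toy demonstration of data poisoning via label flipping (illustrative only).

def centroid(points):
    return sum(points) / len(points)

def train(data):
    """data: list of (feature, label) pairs with label in {"spam", "safe"}."""
    spam = [x for x, y in data if y == "spam"]
    safe = [x for x, y in data if y == "safe"]
    return centroid(spam), centroid(safe)

def predict(model, x):
    # Nearest-centroid rule: pick the class whose centroid is closer.
    spam_c, safe_c = model
    return "spam" if abs(x - spam_c) < abs(x - safe_c) else "safe"

# Clean training set: high scores are spam, low scores are safe.
clean = [(9.0, "spam"), (8.5, "spam"), (8.0, "spam"),
         (1.0, "safe"), (1.5, "safe"), (2.0, "safe")]

# Poisoned training set: the attacker flips the spam labels to "safe"
# (and relabels the low-score points so both classes remain populated),
# dragging the "safe" centroid into the spam region.
poisoned = [(x, "safe") for x, y in clean if y == "spam"]
poisoned += [(x, "spam") for x, y in clean if y == "safe"]
poisoned += [(x, y) for x, y in clean if y == "safe"]

print(predict(train(clean), 8.7))     # "spam": clean model flags the e-mail
print(predict(train(poisoned), 8.7))  # "safe": poisoned model lets it through
```

A real attack would typically corrupt only a small fraction of the training set and target a stronger learner, but the mechanism is the same: corrupted labels shift the decision boundary so the attacker's inputs land in the desired class.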

Papers

Showing 361–370 of 492 papers

Title | Status | Hype
Attacks against Abstractive Text Summarization Models through Lead Bias and Influence Functions | — | 0
Attacks on the neural network and defense methods | — | 0
A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning | — | 0
Backdoor Attack and Defense for Deep Regression | — | 0
Backdoor Attack on Vision Language Models with Stealthy Semantic Manipulation | — | 0
Backdoor Attacks Against Incremental Learners: An Empirical Evaluation Study | — | 0
Certifiers Make Neural Networks Vulnerable to Availability Attacks | — | 0
Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation | — | 0
Backdoors in DRL: Four Environments Focusing on In-distribution Triggers | — | 0
Backdoor Vulnerabilities in Normally Trained Deep Learning Models | — | 0
Page 37 of 50

No leaderboard results yet.