SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
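The label-flipping flavor of this attack can be sketched in a few lines of Python. This is a minimal illustration, not the method of any paper listed below: the toy data, the nearest-centroid classifier, and the "spam"/"safe" labels are all assumptions made for the example. An attacker who relabels the spam points nearest a target example as "safe" drags the "safe" class mean toward the target, changing its prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: class 0 = "spam" clustered near (0, 0), class 1 = "safe" near (3, 3).
spam = rng.normal(loc=0.0, scale=0.5, size=(50, 2))
safe = rng.normal(loc=3.0, scale=0.5, size=(50, 2))
X = np.vstack([spam, safe])
y = np.array([0] * 50 + [1] * 50)

def nearest_centroid_predict(X_train, y_train, x):
    # A minimal stand-in classifier: predict the class whose mean is closest.
    centroids = {c: X_train[y_train == c].mean(axis=0) for c in np.unique(y_train)}
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# A malicious example the attacker wants classified as "safe".
target = np.array([1.4, 1.4])
clean_pred = nearest_centroid_predict(X, y, target)

# Label-flipping poisoning: relabel the 20 spam points closest to the
# target as "safe", which pulls the "safe" centroid toward the target.
y_poisoned = y.copy()
dists = np.linalg.norm(X - target, axis=1)
closest_spam = np.argsort(np.where(y == 0, dists, np.inf))[:20]
y_poisoned[closest_spam] = 1

poisoned_pred = nearest_centroid_predict(X, y_poisoned, target)
print("clean prediction:", clean_pred, "poisoned prediction:", poisoned_pred)
```

On the clean labels the target sits closer to the spam centroid; after the flips, the shifted "safe" centroid wins, which is exactly the behavior change the definition above describes.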

Papers

Showing 271-280 of 492 papers

Title | Status | Hype
Detecting Backdoors in Deep Text Classifiers | - | 0
On Optimal Learning Under Targeted Data Poisoning | - | 0
Understanding Influence Functions and Datamodels via Harmonic Analysis | - | 0
Data Poisoning Attacks Against Multimodal Encoders | Code | 1
Adversarial Robustness of Representation Learning for Knowledge Graphs | Code | 1
On the Robustness of Random Forest Against Untargeted Data Poisoning: An Ensemble-Based Approach | Code | 0
Defend Data Poisoning Attacks on Voice Authentication | - | 0
FedPrompt: Communication-Efficient and Privacy Preserving Prompt Tuning in Federated Learning | - | 0
Do-AIQ: A Design-of-Experiment Approach to Quality Evaluation of AI Mislabel Detection Algorithm | - | 0
Label Flipping Data Poisoning Attack Against Wearable Human Activity Recognition System | - | 0
Page 28 of 50

No leaderboard results yet.