
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
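The definition above describes the general idea; a minimal sketch of one simple variant, label-flipping poisoning, is shown below. It is illustrative only and not taken from any of the listed papers; the dataset is synthetic and names such as `poison_labels` and `poison_fraction` are hypothetical.

```python
# Minimal sketch of a label-flipping data poisoning attack on a toy
# binary classifier ("spam" = 1 vs. "safe" = 0), using scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for a spam-detection dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

def poison_labels(y, target_class=1, desired_class=0, poison_fraction=0.3):
    """Flip a fraction of target_class labels to desired_class.

    Mimics an attacker who controls part of the training data and wants
    malicious examples (spam) to be labeled as the desired class (safe)."""
    y_poisoned = y.copy()
    idx = np.where(y == target_class)[0]
    flip = rng.choice(idx, size=int(poison_fraction * len(idx)), replace=False)
    y_poisoned[flip] = desired_class
    return y_poisoned

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(
    X_train, poison_labels(y_train)
)

# The poisoned model misclassifies more "spam" test points as "safe",
# which is the attacker's goal.
spam = y_test == 1
print("clean    spam accuracy:", accuracy_score(y_test[spam], clean_model.predict(X_test[spam])))
print("poisoned spam accuracy:", accuracy_score(y_test[spam], poisoned_model.predict(X_test[spam])))
```

Many of the papers listed below study stealthier variants (clean-label attacks, backdoor triggers, poisoning of federated or self-supervised pipelines) as well as defenses against them.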

Papers

Showing 151–175 of 492 papers

Title | Status | Hype
Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors | | 0
Purifying Large Language Models by Ensembling a Small Language Model | | 0
Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents | Code | 2
SusFL: Energy-Aware Federated Learning-based Monitoring for Sustainable Smart Farms | | 0
Review-Incorporated Model-Agnostic Profile Injection Attacks on Recommender Systems | | 0
The Effect of Data Poisoning on Counterfactual Explanations | Code | 0
Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models | Code | 2
Game-Theoretic Unlearnable Example Generator | Code | 0
Security and Privacy Challenges of Large Language Models: A Survey | | 0
Federated Learning with Dual Attention for Robust Modulation Classification under Attacks | | 0
A GAN-based data poisoning framework against anomaly detection in vertical federated learning | | 0
The Stronger the Diffusion Model, the Easier the Backdoor: Data Poisoning to Induce Copyright Breaches Without Adjusting Finetuning Pipeline | | 0
Data-Dependent Stability Analysis of Adversarial Training | | 0
Revamping Federated Learning Security from a Defender's Perspective: A Unified Defense with Homomorphic Encrypted Data Space | | 0
Data Poisoning based Backdoor Attacks to Contrastive Learning | Code | 1
SSL-OTA: Unveiling Backdoor Threats in Self-Supervised Learning for Object Detection | | 0
Adversarial Data Poisoning for Fake News Detection: How to Make a Model Misclassify a Target News without Modifying It | | 0
Balancing Privacy, Robustness, and Efficiency in Machine Learning | | 0
Progressive Poisoned Data Isolation for Training-time Backdoor Defense | Code | 0
TrojFSP: Trojan Insertion in Few-shot Prompt Tuning | | 0
FlowMur: A Stealthy and Practical Audio Backdoor Attack with Limited Knowledge | Code | 1
Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey | | 0
Forcing Generative Models to Degenerate Ones: The Power of Data Poisoning Attacks | | 0
FedBayes: A Zero-Trust Federated Learning Aggregation to Defend Against Adversarial Attacks | | 0
Mendata: A Framework to Purify Manipulated Training Data | | 0
Page 7 of 20

No leaderboard results yet.