SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
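The definition above can be illustrated with a minimal sketch of a label-flipping poisoning attack. This is a hypothetical toy example, not from any of the papers listed below: a 1-D nearest-centroid "spam filter" (label 0 = safe, 1 = spam) is trained on clean data, then retrained after an attacker injects spam-like points mislabeled as safe, which drags the "safe" centroid toward the spam cluster until a targeted spam e-mail is classified as safe.

```python
# Toy label-flipping data-poisoning attack (assumed setup: a 1-D
# nearest-centroid classifier over a single "spam score" feature;
# label 0 = safe, 1 = spam).

def centroid(points):
    return sum(points) / len(points)

def train(data):
    """data: list of (feature, label) pairs -> (safe_centroid, spam_centroid)."""
    safe = [x for x, y in data if y == 0]
    spam = [x for x, y in data if y == 1]
    return centroid(safe), centroid(spam)

def predict(model, x):
    """Assign x to the class whose centroid is nearest."""
    c_safe, c_spam = model
    return 0 if abs(x - c_safe) <= abs(x - c_spam) else 1

# Clean training set: safe mail scores near 1.0, spam scores near 5.0.
clean = [(0.5, 0), (1.0, 0), (1.5, 0), (4.5, 1), (5.0, 1), (5.5, 1)]
target_spam = 4.5  # the attacker wants this spam e-mail labeled "safe"

clean_model = train(clean)
print(predict(clean_model, target_spam))  # 1: the clean model flags it as spam

# Poisoning: inject spam-like points mislabeled as safe. The "safe"
# centroid shifts from 1.0 to ~4.08, moving the decision boundary
# past the target.
poison = [(5.0, 0)] * 10
poisoned_model = train(clean + poison)
print(predict(poisoned_model, target_spam))  # 0: the target now passes as safe
```

Even this crude attack shows the mechanism the definition describes: the attacker never touches the model or the test input, only the training labels.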

Papers

Showing 426–450 of 492 papers

Title | Status | Hype
Unleashing Worms and Extracting Data: Escalating the Outcome of Attacks against RAG-based Inference in Scale and Severity Using Jailbreaking | Code | 0
Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection | Code | 0
Detecting AI Trojans Using Meta Neural Analysis | Code | 0
Poison-RAG: Adversarial Data Poisoning Attacks on Retrieval-Augmented Generation in Recommender Systems | Code | 0
Robust Yet Efficient Conformal Prediction Sets | Code | 0
PORE: Provably Robust Recommender Systems against Data Poisoning Attacks | Code | 0
Fooling Partial Dependence via Data Poisoning | Code | 0
Keeping up with dynamic attackers: Certifying robustness to adaptive online data poisoning | Code | 0
Run-Off Election: Improved Provable Defense against Data Poisoning Attacks | Code | 0
Trainwreck: A damaging adversarial attack on image classifiers | Code | 0
Transferable Availability Poisoning Attacks | Code | 0
CoProtector: Protect Open-Source Code against Unauthorized Training Usage with Data Poisoning | Code | 0
Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning | Code | 0
Learning from Convolution-based Unlearnable Datasets | Code | 0
Federated Learning Under Attack: Exposing Vulnerabilities through Data Poisoning Attacks in Computer Networks | Code | 0
Securing Multi-turn Conversational Language Models From Distributed Backdoor Triggers | Code | 0
Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks | Code | 0
Lethal Dose Conjecture on Data Poisoning | Code | 0
Lethean Attack: An Online Data Poisoning Technique | Code | 0
Using Trusted Data to Train Deep Networks on Labels Corrupted by Severe Noise | Code | 0
Machine Learning Security against Data Poisoning: Are We There Yet? | Code | 0
Machine Unlearning Fails to Remove Data Poisoning Attacks | Code | 0
Attacking Black-box Recommendations via Copying Cross-domain User Profiles | Code | 0
Progressive Poisoned Data Isolation for Training-time Backdoor Defense | Code | 0
Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics | Code | 0
Page 18 of 20

No leaderboard results yet.