SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, causing it to assign malicious examples to attacker-chosen classes (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
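As a minimal illustration of the attack defined above (not taken from the source paper), the sketch below simulates label-flipping poisoning on a toy spam classifier: the attacker injects spam-like points mislabeled as "safe" so that a logistic-regression model trained on the poisoned set stops flagging real spam. The one-dimensional Gaussian data and the gradient-descent trainer are assumptions made purely for demonstration.

```python
import numpy as np

def train_logreg(X, y, lr=0.1, steps=2000):
    """Plain 1-D logistic regression fit by gradient descent; returns (w, b)."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(w * X + b)))   # predicted P(spam)
        w -= lr * np.mean((p - y) * X)
        b -= lr * np.mean(p - y)
    return w, b

def predict(w, b, X):
    return (w * X + b > 0).astype(int)       # 1 = spam, 0 = safe

rng = np.random.default_rng(0)

# Clean data: "safe" mail features near -2 (label 0), spam near +2 (label 1).
X_clean = np.concatenate([rng.normal(-2, 0.5, 100), rng.normal(2, 0.5, 100)])
y_clean = np.array([0] * 100 + [1] * 100)

# Poisoning step: spam-like points mislabeled "safe", outnumbering true spam,
# which drags the learned decision boundary past the spam region.
X_poison = rng.normal(2, 0.5, 150)
y_poison = np.zeros(150, dtype=int)

w_c, b_c = train_logreg(X_clean, y_clean)
w_p, b_p = train_logreg(np.concatenate([X_clean, X_poison]),
                        np.concatenate([y_clean, y_poison]))

spam = rng.normal(2, 0.5, 50)                # unseen real spam
clean_detect = predict(w_c, b_c, spam).mean()
poison_detect = predict(w_p, b_p, spam).mean()
print(f"spam detection rate, clean model:    {clean_detect:.2f}")
print(f"spam detection rate, poisoned model: {poison_detect:.2f}")
```

The clean model detects essentially all spam, while the poisoned model labels most spam as safe, exactly the behavior the definition describes. Real poisoning attacks are subtler (fewer, cleaner-looking points), but the mechanism is the same.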

Papers

Showing 11–20 of 492 papers

Title | Status | Hype
Backdoor Attack on Vision Language Models with Stealthy Semantic Manipulation | — | 0
Securing Traffic Sign Recognition Systems in Autonomous Vehicles | — | 0
VLMs Can Aggregate Scattered Training Patches | Code | 1
Adversarial Threat Vectors and Risk Mitigation for Retrieval-Augmented Generation Systems | — | 0
Cascading Adversarial Bias from Injection to Distillation in Language Models | — | 0
Distributed Federated Learning for Vehicular Network Security: Anomaly Detection Benefits and Multi-Domain Attack Threats | — | 0
Are Time-Series Foundation Models Deployment-Ready? A Systematic Study of Adversarial Robustness Across Domains | — | 0
Security Concerns for Large Language Models: A Survey | — | 0
Backdoors in DRL: Four Environments Focusing on In-distribution Triggers | — | 0
A Linear Approach to Data Poisoning | — | 0

No leaderboard results yet.