
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
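The targeted attack described above can be illustrated with a minimal, self-contained sketch. The example below is an assumption-laden toy (synthetic 2-D features, a hand-rolled k-NN classifier, and made-up cluster locations), not a method from any of the listed papers: the attacker injects points that have spam-like features but carry the "safe" label, so that a spam example's training-set neighborhood is dominated by poisoned points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D features: class 0 ("safe") clusters near (-2, -2),
# class 1 ("spam") clusters near (2, 2). All values are illustrative.
X_safe = rng.normal(-2.0, 0.5, size=(100, 2))
X_spam = rng.normal(2.0, 0.5, size=(100, 2))
X = np.vstack([X_safe, X_spam])
y = np.array([0] * 100 + [1] * 100)

def knn_predict(X_train, y_train, x, k=5):
    """Majority vote among the k nearest training points."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    return int(np.bincount(nearest).argmax())

# A "spam" example the attacker wants classified as "safe".
target = np.array([2.0, 2.0])

print(knn_predict(X, y, target))  # 1: the clean model flags it as spam

# Poisoning: inject points with spam-like features but the "safe" label,
# clustered tightly around the target so they dominate its neighborhood.
X_poison = target + rng.normal(0.0, 0.02, size=(100, 2))
X_p = np.vstack([X, X_poison])
y_p = np.concatenate([y, np.zeros(100, dtype=int)])

print(knn_predict(X_p, y_p, target))  # 0: the poisoned model labels it "safe"
```

The same idea (shifting what the model learns near a target region by corrupting a small part of the training set) underlies the label-flipping and backdoor variants that many of the papers below study.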

Papers

Showing 271–280 of 492 papers

Title | Hype
SAFELOC: Overcoming Data Poisoning Attacks in Heterogeneous Federated Machine Learning for Indoor Localization | 0
SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning | 0
Saving Stochastic Bandits from Poisoning Attacks via Limited Data Verification | 0
Securing Traffic Sign Recognition Systems in Autonomous Vehicles | 0
Security and Privacy Challenges in Deep Learning Models | 0
Security and Privacy Challenges of Large Language Models: A Survey | 0
Security Concerns for Large Language Models: A Survey | 0
Security of Distributed Machine Learning: A Game-Theoretic Approach to Design Secure DSVM | 0
SEEP: Training Dynamics Grounds Latent Representation Search for Mitigating Backdoor Poisoning Attacks | 0
Self-Adaptive and Robust Federated Spectrum Sensing without Benign Majority for Cellular Networks | 0
Page 28 of 50

No leaderboard results yet.