SOTAVerified

Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to attacker-chosen classes (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
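The definition above can be sketched with a minimal, hypothetical example: a toy 1-D k-nearest-neighbor "spam" classifier, where the attacker injects a few training points near a target example but mislabels them as safe. All names, data, and parameters here are illustrative assumptions, not drawn from the cited paper.

```python
import random

random.seed(0)
# Hypothetical toy data: 1-D "spam score" features; label 0 = safe, 1 = spam.
safe = [random.gauss(0.0, 0.5) for _ in range(50)]
spam = [random.gauss(3.0, 0.5) for _ in range(50)]
X = safe + spam
y = [0] * 50 + [1] * 50

def knn_predict(X_train, y_train, x, k=3):
    """Majority vote over the k nearest training points (1-D features)."""
    neighbors = sorted(range(len(X_train)), key=lambda i: abs(X_train[i] - x))[:k]
    votes = [y_train[i] for i in neighbors]
    return max(set(votes), key=votes.count)

target = 3.0                            # an obviously spam-like example
assert knn_predict(X, y, target) == 1   # clean model flags it as spam

# Poisoning: the attacker injects a few training points at the target,
# mislabeled as "safe" (0), so they dominate the target's neighborhood.
X_poisoned = X + [target] * 3
y_poisoned = y + [0] * 3

print(knn_predict(X_poisoned, y_poisoned, target))  # prints 0: now "safe"
```

The injected points sit at zero distance from the target, so they outvote the genuine spam neighbors; real attacks achieve the same effect less conspicuously, but the mechanism is the same.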

Papers

Showing 251–275 of 492 papers

Title | Status | Hype
Regularized Robustly Reliable Learners and Instance Targeted Attacks | | 0
Reinforcement Learning For Data Poisoning on Graph Neural Networks | | 0
Releasing Malevolence from Benevolence: The Menace of Benign Data on Machine Unlearning | | 0
Reliable Poisoned Sample Detection against Backdoor Attacks Enhanced by Sharpness Aware Minimization | | 0
Reputation-Based Federated Learning Defense to Mitigate Threats in EEG Signal Classification | | 0
Rethinking Backdoor Data Poisoning Attacks in the Context of Semi-Supervised Learning | | 0
Revamping Federated Learning Security from a Defender's Perspective: A Unified Defense with Homomorphic Encrypted Data Space | | 0
Detection of Backdoors in Trained Classifiers Without Access to the Training Set | | 0
Revealing Perceptible Backdoors, without the Training Set, via the Maximum Achievable Misclassification Fraction Statistic | | 0
Reverse Engineering Imperceptible Backdoor Attacks on Deep Neural Networks for Detection and Training Set Cleansing | | 0
Review-Incorporated Model-Agnostic Profile Injection Attacks on Recommender Systems | | 0
Robust Federated Training via Collaborative Machine Teaching using Trusted Instances | | 0
Robust learning under clean-label attack | | 0
Robustly-reliable learners under poisoning attacks | | 0
Robust Variational Autoencoder for Tabular Data with Beta Divergence | | 0
SAFELOC: Overcoming Data Poisoning Attacks in Heterogeneous Federated Machine Learning for Indoor Localization | | 0
SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning | | 0
Saving Stochastic Bandits from Poisoning Attacks via Limited Data Verification | | 0
Securing Traffic Sign Recognition Systems in Autonomous Vehicles | | 0
Security and Privacy Challenges in Deep Learning Models | | 0
Security and Privacy Challenges of Large Language Models: A Survey | | 0
Security Concerns for Large Language Models: A Survey | | 0
Security of Distributed Machine Learning: A Game-Theoretic Approach to Design Secure DSVM | | 0
SEEP: Training Dynamics Grounds Latent Representation Search for Mitigating Backdoor Poisoning Attacks | | 0
Self-Adaptive and Robust Federated Spectrum Sensing without Benign Majority for Cellular Networks | | 0
Page 11 of 20

No leaderboard results yet.