
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
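As a toy illustration of the definition above, the sketch below carries out a label-flipping poisoning attack against a k-nearest-neighbors classifier. The 2-D dataset, the k-NN model, and the attack strategy are illustrative assumptions for this sketch only, not a method from any paper listed on this page.

```python
import numpy as np

# Illustrative toy data (an assumption of this sketch): class 0 ("safe")
# clusters near (0, 0), class 1 ("spam") clusters near (4, 4).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, (20, 2)),
               rng.normal(4.0, 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

def knn_predict(X_train, y_train, x, k=3):
    # Majority vote among the k training points nearest to x.
    idx = np.argsort(np.linalg.norm(X_train - x, axis=1))[:k]
    return int(np.round(y_train[idx].mean()))

# A malicious example the attacker wants classified as "safe".
target = np.array([4.0, 4.0])

clean_pred = knn_predict(X, y, target)  # clean training set: predicted "spam" (1)

# Label-flipping poison: relabel the k training points nearest the
# target as "safe", so the target's neighborhood now votes class 0.
y_poisoned = y.copy()
nearest = np.argsort(np.linalg.norm(X - target, axis=1))[:3]
y_poisoned[nearest] = 0

poisoned_pred = knn_predict(X, y_poisoned, target)  # poisoned set: "safe" (0)
print(clean_pred, poisoned_pred)
```

The key point is that the attacker never touches the model or the test input; corrupting a handful of training labels is enough to change the prediction on the chosen example.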

Papers

Showing 201–210 of 492 papers

Title | Status | Hype
Analyzing the vulnerabilities in SplitFed Learning: Assessing the robustness against Data Poisoning Attacks | — | 0
What Distributions are Robust to Indiscriminate Poisoning Attacks for Linear Learners? | — | 0
FedDefender: Backdoor Attack Defense in Federated Learning | Code | 1
On the Exploitability of Instruction Tuning | Code | 1
On Practical Aspects of Aggregation Defenses against Data Poisoning Attacks | — | 0
Adversarial Backdoor Attack by Naturalistic Data Poisoning on Trajectory Prediction in Autonomous Driving | — | 0
OVLA: Neural Network Ownership Verification using Latent Watermarks | — | 0
Data Poisoning to Fake a Nash Equilibrium in Markov Games | — | 0
FheFL: Fully Homomorphic Encryption Friendly Privacy-Preserving Federated Learning with Byzantine Users | — | 0
DeepfakeArt Challenge: A Benchmark Dataset for Generative AI Art Forgery and Data Poisoning Detection | Code | 1
Page 21 of 50

No leaderboard results yet.