
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
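The core idea is easiest to see in its simplest instance, label flipping: an attacker who can tamper with training labels relabels a fraction of the spam class as safe, and the trained classifier's miss rate on real spam rises accordingly. Below is a minimal, self-contained sketch on synthetic data using scikit-learn; the data, the poison_labels helper, and the flip fractions are illustrative assumptions, not the method of any particular paper listed here.

```python
# Minimal sketch of a label-flipping data poisoning attack.
# All data here is synthetic: class 1 = "spam", class 0 = "safe".
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Two well-separated Gaussian clusters as a stand-in for e-mail features.
n = 2000
X_safe = rng.normal(loc=-1.0, scale=1.0, size=(n // 2, 10))
X_spam = rng.normal(loc=+1.0, scale=1.0, size=(n // 2, 10))
X = np.vstack([X_safe, X_spam])
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

def poison_labels(y, flip_fraction, rng):
    """Relabel a fraction of spam (1) training examples as safe (0)."""
    y = y.copy()
    spam_idx = np.flatnonzero(y == 1)
    n_flip = int(flip_fraction * spam_idx.size)
    flip = rng.choice(spam_idx, size=n_flip, replace=False)
    y[flip] = 0
    return y

for frac in [0.0, 0.2, 0.4]:
    y_poisoned = poison_labels(y_train, frac, rng)
    model = LogisticRegression().fit(X_train, y_poisoned)
    # Fraction of true spam the poisoned model now labels "safe".
    spam_mask = y_test == 1
    miss_rate = np.mean(model.predict(X_test[spam_mask]) == 0)
    print(f"poison fraction {frac:.1f}: spam classified safe = {miss_rate:.2f}")
```

Running the sketch shows the fraction of true spam classified as safe growing with the poison fraction, which is exactly the attack goal described in the definition above. Real attacks in the papers below are subtler (clean-label triggers, backdoors, optimized perturbations), but they share this objective.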

Papers

Showing 1–25 of 492 papers (page 1 of 20)

Title | Status | Hype
A Survey of Attacks on Large Vision-Language Models: Resources, Advances, and Future Trends | Code | 4
BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models | Code | 3
Safety at Scale: A Comprehensive Survey of Large Model Safety | Code | 3
Quantifying the robustness of deep multispectral segmentation models against natural perturbations and data poisoning | Code | 3
Data Poisoning in LLMs: Jailbreak-Tuning and Scaling Laws | Code | 3
Backdoor Learning: A Survey | Code | 2
Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents | Code | 2
SoK: Benchmarking Poisoning Attacks and Defenses in Federated Learning | Code | 2
Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models | Code | 2
CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning | Code | 1
BackdoorMBTI: A Backdoor Learning Multimodal Benchmark Tool Kit for Backdoor Defense Evaluation | Code | 1
Backdoor Attacks on Crowd Counting | Code | 1
Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning | Code | 1
Data Poisoning Attacks Against Federated Learning Systems | Code | 1
Bilevel Optimization with a Lower-level Contraction: Optimal Sample Complexity without Warm-start | Code | 1
Autoregressive Perturbations for Data Poisoning | Code | 1
Black-Box Attacks on Sequential Recommenders via Data-Free Model Extraction | Code | 1
Adversarial Robustness of Representation Learning for Knowledge Graphs | Code | 1
Amplifying Membership Exposure via Data Poisoning | Code | 1
Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods | Code | 1
BEAS: Blockchain Enabled Asynchronous & Secure Federated Machine Learning | Code | 1
Auditing Differentially Private Machine Learning: How Private is Private SGD? | Code | 1
Adversarial Examples Make Strong Poisons | Code | 1
Backdoor Attacks for Remote Sensing Data with Wavelet Transform | Code | 1
PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models | Code | 1

Leaderboard

No leaderboard results yet.