SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, such that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
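The idea can be illustrated with a minimal label-flipping poisoning sketch. This is not taken from any of the papers below; the `knn_predict` helper, the synthetic clusters, and the target point are all illustrative assumptions. An attacker who can flip a few training labels near a chosen input changes what a k-nearest-neighbor classifier predicts for that input:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Predict the label of x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return int(np.bincount(y_train[nearest]).argmax())

rng = np.random.default_rng(0)
# Two well-separated clusters: class 0 near (0, 0), class 1 near (5, 5).
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(5, 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

# A "malicious" example the attacker wants labeled as class 0.
target = np.array([5.0, 5.0])
clean_pred = knn_predict(X, y, target)  # honest training data -> class 1

# Poisoning: flip the labels of the 3 training points closest to the target.
y_poisoned = y.copy()
closest = np.argsort(np.linalg.norm(X - target, axis=1))[:3]
y_poisoned[closest] = 0

poisoned_pred = knn_predict(X, y_poisoned, target)  # poisoned data -> class 0
```

Because k-NN memorizes its training set, flipping the k nearest labels flips the prediction with certainty; against parametric models, real attacks (several of which are listed below) must instead optimize which points to perturb.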

Papers

Showing 76-100 of 492 papers

Title | Status | Hype
Reclaiming "Open AI" -- AI Model Serving Can Be Open Access, Yet Monetizable and Loyal | - | 0
Learning and Unlearning of Fabricated Knowledge in Language Models | - | 0
Inverting Gradient Attacks Makes Powerful Data Poisoning | - | 0
Attacks against Abstractive Text Summarization Models through Lead Bias and Influence Functions | - | 0
Regularized Robustly Reliable Learners and Instance Targeted Attacks | - | 0
Provably Reliable Conformal Prediction Sets in the Presence of Data Poisoning | - | 0
PoisonBench: Assessing Large Language Model Vulnerability to Data Poisoning | Code | 1
Fragile Giants: Understanding the Susceptibility of Models to Subpopulation Attacks | - | 0
Data Taggants: Dataset Ownership Verification via Harmless Targeted Data Poisoning | - | 0
On the Adversarial Risk of Test Time Adaptation: An Investigation into Realistic Test-Time Data Poisoning | - | 0
Empirical Perturbation Analysis of Linear System Solvers from a Data Poisoning Perspective | - | 0
Survey of Security and Data Attacks on Machine Unlearning In Financial and E-Commerce | - | 0
Data Poisoning-based Backdoor Attack Framework against Supervised Learning Rules of Spiking Neural Networks | - | 0
UTrace: Poisoning Forensics for Private Collaborative Learning | - | 0
SHFL: Secure Hierarchical Federated Learning Framework for Edge Networks | - | 0
Clean Label Attacks against SLU Systems | - | 0
Unleashing Worms and Extracting Data: Escalating the Outcome of Attacks against RAG-based Inference in Scale and Severity Using Jailbreaking | Code | 0
Context is the Key: Backdoor Attacks for In-Context Learning with Vision Transformers | - | 0
Blockchain-based Federated Recommendation with Incentive Mechanism | - | 0
Protecting against simultaneous data poisoning attacks | - | 0
BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models | Code | 3
Accelerating the Surrogate Retraining for Poisoning Attacks against Recommender Systems | Code | 0
Unlearnable Examples Detection via Iterative Filtering | - | 0
Sonic: Fast and Transferable Data Poisoning on Clustering Algorithms | - | 0
2D-OOB: Attributing Data Contribution Through Joint Valuation Framework | Code | 0
Page 4 of 20

No leaderboard results yet.