
Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
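The definition above can be illustrated with a minimal sketch: an attacker injects mislabeled training points so that a spam-like example is classified as "safe". The nearest-centroid classifier, the 2-D toy data, and the injection budget of 200 points are all illustrative assumptions, not a method from any of the papers listed below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D training set: class 0 ("safe") around (0, 0), class 1 ("spam") around (4, 4).
X = np.vstack([rng.normal((0, 0), 1, (50, 2)), rng.normal((4, 4), 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def nearest_centroid_predict(X_train, y_train, x):
    """Classify x by its nearer class centroid (a stand-in for any trained model)."""
    c0 = X_train[y_train == 0].mean(axis=0)
    c1 = X_train[y_train == 1].mean(axis=0)
    return int(np.linalg.norm(x - c1) < np.linalg.norm(x - c0))

# A spam-like example the attacker wants classified as "safe".
target = np.array([2.5, 2.5])
print(nearest_centroid_predict(X, y, target))  # clean model → 1 (spam)

# Poisoning: inject training points at the target, all labeled "safe" (class 0).
# This drags the "safe" centroid toward the target and flips its prediction.
# (Real attacks use far subtler perturbations; exact copies keep the sketch simple.)
X_pois = np.vstack([X, np.tile(target, (200, 1))])
y_pois = np.concatenate([y, np.zeros(200, dtype=int)])
print(nearest_centroid_predict(X_pois, y_pois, target))  # poisoned model → 0 (safe)
```

The same mechanism underlies the targeted attacks surveyed below; what varies is the learner, the stealth constraint on the injected points, and the attacker's budget.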

Papers

Showing 151-200 of 492 papers

Title | Status | Hype
Reclaiming "Open AI" -- AI Model Serving Can Be Open Access, Yet Monetizable and Loyal | — | 0
Learning and Unlearning of Fabricated Knowledge in Language Models | — | 0
Inverting Gradient Attacks Makes Powerful Data Poisoning | — | 0
Attacks against Abstractive Text Summarization Models through Lead Bias and Influence Functions | — | 0
Regularized Robustly Reliable Learners and Instance Targeted Attacks | — | 0
Provably Reliable Conformal Prediction Sets in the Presence of Data Poisoning | — | 0
Fragile Giants: Understanding the Susceptibility of Models to Subpopulation Attacks | — | 0
Data Taggants: Dataset Ownership Verification via Harmless Targeted Data Poisoning | — | 0
On the Adversarial Risk of Test Time Adaptation: An Investigation into Realistic Test-Time Data Poisoning | — | 0
Empirical Perturbation Analysis of Linear System Solvers from a Data Poisoning Perspective | — | 0
Survey of Security and Data Attacks on Machine Unlearning In Financial and E-Commerce | — | 0
Data Poisoning-based Backdoor Attack Framework against Supervised Learning Rules of Spiking Neural Networks | — | 0
SHFL: Secure Hierarchical Federated Learning Framework for Edge Networks | — | 0
UTrace: Poisoning Forensics for Private Collaborative Learning | — | 0
Clean Label Attacks against SLU Systems | — | 0
Unleashing Worms and Extracting Data: Escalating the Outcome of Attacks against RAG-based Inference in Scale and Severity Using Jailbreaking | Code | 0
Context is the Key: Backdoor Attacks for In-Context Learning with Vision Transformers | — | 0
Blockchain-based Federated Recommendation with Incentive Mechanism | — | 0
Protecting against simultaneous data poisoning attacks | — | 0
Accelerating the Surrogate Retraining for Poisoning Attacks against Recommender Systems | Code | 0
Unlearnable Examples Detection via Iterative Filtering | — | 0
Sonic: Fast and Transferable Data Poisoning on Clustering Algorithms | — | 0
2D-OOB: Attributing Data Contribution Through Joint Valuation Framework | Code | 0
Mitigating Malicious Attacks in Federated Learning via Confidence-aware Defense | — | 0
Model Hijacking Attack in Federated Learning | — | 0
Blockchain for Large Language Model Security and Safety: A Holistic Survey | — | 0
Trading Devil Final: Backdoor attack via Stock market and Bayesian Optimization | — | 0
Data Poisoning: An Overlooked Threat to Power Grid Resilience | — | 0
Turning Generative Models Degenerate: The Power of Data Poisoning Attacks | — | 0
Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks | Code | 0
Defending Against Repetitive Backdoor Attacks on Semi-supervised Learning through Lens of Rate-Distortion-Perception Trade-off | Code | 0
Partner in Crime: Boosting Targeted Poisoning Attacks against Federated Learning | — | 0
Robust Yet Efficient Conformal Prediction Sets | Code | 0
Advancements in Recommender Systems: A Comprehensive Analysis Based on Data, Algorithms, and Evaluation | — | 0
Neuromimetic metaplasticity for adaptive continual learning | — | 0
If You Don't Understand It, Don't Use It: Eliminating Trojans with Filters Between Layers | — | 0
Releasing Malevolence from Benevolence: The Menace of Benign Data on Machine Unlearning | — | 0
Securing Multi-turn Conversational Language Models From Distributed Backdoor Triggers | Code | 0
On the Robustness of Graph Reduction Against GNN Backdoor | — | 0
Machine Unlearning Fails to Remove Data Poisoning Attacks | Code | 0
BadSampler: Harnessing the Power of Catastrophic Forgetting to Poison Byzantine-robust Federated Learning | — | 0
FullCert: Deterministic End-to-End Certification for Training and Inference of Neural Networks | Code | 0
Imperceptible Rhythm Backdoor Attacks: Exploring Rhythm Transformation for Embedding Undetectable Vulnerabilities on Speech Recognition | — | 0
A Study of Backdoors in Instruction Fine-tuned Language Models | — | 0
Certified Robustness to Data Poisoning in Gradient-Based Training | Code | 0
Generalization Bound and New Algorithm for Clean-Label Backdoor Attack | Code | 0
Exploring Vulnerabilities and Protections in Large Language Models: A Survey | — | 0
Mitigating Backdoor Attack by Injecting Proactive Defensive Backdoor | Code | 0
Class Machine Unlearning for Complex Data via Concepts Inference and Data Poisoning | — | 0
Generative AI in Cybersecurity: A Comprehensive Review of LLM Applications and Vulnerabilities | — | 0
Page 4 of 10

No leaderboard results yet.