
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a class of the attacker's choosing (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
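The mechanism in the definition above can be sketched with a toy label-flipping attack. This is a minimal illustration, not taken from any paper listed below: a nearest-centroid "spam filter" over 2-D feature vectors, where the attacker injects spam-like points mislabeled as "safe" to drag the safe-class centroid toward a target example.

```python
# Toy label-flipping data poisoning sketch (all data and names are
# hypothetical, purely for illustration). Labels: 1 = spam, 0 = safe.

def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def classify(x, c0, c1):
    """Assign x to the nearer class centroid (squared distance)."""
    d0 = sum((a - b) ** 2 for a, b in zip(x, c0))
    d1 = sum((a - b) ** 2 for a, b in zip(x, c1))
    return 1 if d1 < d0 else 0

def train_and_predict(data, x):
    c0 = centroid([p for p, y in data if y == 0])
    c1 = centroid([p for p, y in data if y == 1])
    return classify(x, c0, c1)

# Clean training set: two "safe" and two "spam" examples.
clean = [((0.0, 0.0), 0), ((0.1, 0.2), 0), ((1.0, 1.0), 1), ((0.9, 1.1), 1)]
target = (0.8, 0.9)  # spam-like e-mail the attacker wants labeled "safe"

print(train_and_predict(clean, target))  # -> 1 (correctly flagged as spam)

# Poison: inject copies of the target region mislabeled as "safe",
# pulling the safe-class centroid toward the target.
poison = [((0.8, 0.9), 0)] * 10

print(train_and_predict(clean + poison, target))  # -> 0 (now "safe")
```

Real attacks are subtler (clean-label perturbations, backdoor triggers), but the principle is the same: the attacker controls model behavior only through the training data.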

Papers

Showing 51–100 of 492 papers

Title | Status | Hype
DROP: Poison Dilution via Knowledge Distillation for Federated Learning | Code | 0
Filter, Obstruct and Dilute: Defending Against Backdoor Attacks on Semi-Supervised Learning | - | 0
Detection of Physiological Data Tampering Attacks with Quantum Machine Learning | - | 0
SoK: Benchmarking Poisoning Attacks and Defenses in Federated Learning | Code | 2
Safety at Scale: A Comprehensive Survey of Large Model Safety | Code | 3
TrojanTime: Backdoor Attacks on Time Series Classification | - | 0
Provably effective detection of effective data poisoning attacks | - | 0
Poison-RAG: Adversarial Data Poisoning Attacks on Retrieval-Augmented Generation in Recommender Systems | Code | 0
A Novel Pearson Correlation-Based Merging Algorithm for Robust Distributed Machine Learning with Heterogeneous Data | - | 0
Fortifying Federated Learning Towards Trustworthiness via Auditable Data Valuation and Verifiable Client Contribution | - | 0
Cut the Deadwood Out: Post-Training Model Purification with Selective Module Substitution | - | 0
Attacks on the neural network and defense methods | - | 0
Trading Devil RL: Backdoor attack via Stock market, Bayesian Optimization and Reinforcement Learning | - | 0
From Vulnerabilities to Remediation: A Systematic Literature Review of LLMs in Code Security | - | 0
One Pixel is All I Need | - | 0
BiCert: A Bilinear Mixed Integer Programming Formulation for Precise Certified Bounds Against Data Poisoning Attacks | - | 0
Deep Learning Model Security: Threats and Defenses | - | 0
Learning to Forget using Hypernetworks | - | 0
Efficient and Private: Memorisation under differentially private parameter-efficient fine-tuning in language models | - | 0
Adversarial Data Poisoning Attacks on Quantum Machine Learning in the NISQ Era | - | 0
Delta-Influence: Unlearning Poisons via Influence Functions | Code | 0
Reliable Poisoned Sample Detection against Backdoor Attacks Enhanced by Sharpness Aware Minimization | - | 0
BackdoorMBTI: A Backdoor Learning Multimodal Benchmark Tool Kit for Backdoor Defense Evaluation | Code | 1
SAFELOC: Overcoming Data Poisoning Attacks in Heterogeneous Federated Machine Learning for Indoor Localization | - | 0
Learning from Convolution-based Unlearnable Datasets | Code | 0
Reclaiming "Open AI" -- AI Model Serving Can Be Open Access, Yet Monetizable and Loyal | - | 0
Learning and Unlearning of Fabricated Knowledge in Language Models | - | 0
Inverting Gradient Attacks Makes Powerful Data Poisoning | - | 0
Attacks against Abstractive Text Summarization Models through Lead Bias and Influence Functions | - | 0
Regularized Robustly Reliable Learners and Instance Targeted Attacks | - | 0
Provably Reliable Conformal Prediction Sets in the Presence of Data Poisoning | - | 0
PoisonBench: Assessing Large Language Model Vulnerability to Data Poisoning | Code | 1
Fragile Giants: Understanding the Susceptibility of Models to Subpopulation Attacks | - | 0
Data Taggants: Dataset Ownership Verification via Harmless Targeted Data Poisoning | - | 0
On the Adversarial Risk of Test Time Adaptation: An Investigation into Realistic Test-Time Data Poisoning | - | 0
Empirical Perturbation Analysis of Linear System Solvers from a Data Poisoning Perspective | - | 0
Survey of Security and Data Attacks on Machine Unlearning In Financial and E-Commerce | - | 0
Data Poisoning-based Backdoor Attack Framework against Supervised Learning Rules of Spiking Neural Networks | - | 0
UTrace: Poisoning Forensics for Private Collaborative Learning | - | 0
SHFL: Secure Hierarchical Federated Learning Framework for Edge Networks | - | 0
Clean Label Attacks against SLU Systems | - | 0
Unleashing Worms and Extracting Data: Escalating the Outcome of Attacks against RAG-based Inference in Scale and Severity Using Jailbreaking | Code | 0
Context is the Key: Backdoor Attacks for In-Context Learning with Vision Transformers | - | 0
Blockchain-based Federated Recommendation with Incentive Mechanism | - | 0
Protecting against simultaneous data poisoning attacks | - | 0
BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models | Code | 3
Accelerating the Surrogate Retraining for Poisoning Attacks against Recommender Systems | Code | 0
Unlearnable Examples Detection via Iterative Filtering | - | 0
Sonic: Fast and Transferable Data Poisoning on Clustering Algorithms | - | 0
2D-OOB: Attributing Data Contribution Through Joint Valuation Framework | Code | 0
Page 2 of 10

No leaderboard results yet.