
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
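The simplest instance of this attack is label flipping: the attacker corrupts a fraction of the training labels so the model learns to treat the malicious class as benign. The sketch below illustrates the idea on a toy binary task with scikit-learn; the dataset, flip rate, and choice of target class are illustrative assumptions, not taken from any paper listed below.

```python
# A minimal sketch of a label-flipping data poisoning attack (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy binary task: class 1 plays the role of "spam", class 0 of "safe".
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Attacker flips a fraction of "spam" training labels to "safe",
# steering the trained model toward labeling malicious examples as benign.
flip_rate = 0.3  # assumed attacker budget
spam_idx = np.flatnonzero(y_train == 1)
flipped = rng.choice(spam_idx, size=int(flip_rate * len(spam_idx)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flipped] = 0

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# Recall on the "spam" class typically drops for the poisoned model.
spam_test = y_test == 1
print("clean  spam recall:", clean_model.predict(X_test[spam_test]).mean())
print("poison spam recall:", poisoned_model.predict(X_test[spam_test]).mean())
```

Many of the papers below study exactly this threat model and its defenses, ranging from data sanitization and robust training to certified robustness against label flipping.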

Papers

Showing 51–100 of 492 papers

Title | Status | Hype
Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning | Code | 1
Data Poisoning Attacks Against Multimodal Encoders | Code | 1
Text-to-Image Diffusion Models can be Easily Backdoored through Multimodal Data Poisoning | Code | 1
Auditing Differentially Private Machine Learning: How Private is Private SGD? | Code | 1
CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning | Code | 1
Autoregressive Perturbations for Data Poisoning | Code | 1
Adversarial Robustness of Representation Learning for Knowledge Graphs | Code | 1
Poisoning Web-Scale Training Datasets is Practical | Code | 1
A Distributed Trust Framework for Privacy-Preserving Machine Learning | Code | 1
Not All Poisons are Created Equal: Robust Training against Data Poisoning | Code | 1
Backdoor Attacks for Remote Sensing Data with Wavelet Transform | Code | 1
Backdoor Attacks on Crowd Counting | Code | 1
Data Poisoning Attacks Against Federated Learning Systems | Code | 1
Data Poisoning Attacks on Regression Learning and Corresponding Defenses | Code | 1
DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations | Code | 1
Availability Attacks Create Shortcuts | Code | 1
BackdoorMBTI: A Backdoor Learning Multimodal Benchmark Tool Kit for Backdoor Defense Evaluation | Code | 1
PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models | Code | 1
Data Poisoning in Deep Learning: A Survey | Code | 1
Data Poisoning Won't Save You From Facial Recognition | Code | 1
Dynamic Defense Against Byzantine Poisoning Attacks in Federated Learning | Code | 1
Fast-FedUL: A Training-Free Federated Unlearning with Provable Skew Resilience | Code | 1
FlowMur: A Stealthy and Practical Audio Backdoor Attack with Limited Knowledge | Code | 1
Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks | Code | 1
ARFED: Attack-Resistant Federated averaging based on outlier elimination | Code | 1
BEAS: Blockchain Enabled Asynchronous & Secure Federated Machine Learning | Code | 1
Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models | Code | 1
Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks | Code | 1
Stronger Data Poisoning Attacks Break Data Sanitization Defenses | Code | 1
CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning | Code | 1
Adversarial Vulnerability of Active Transfer Learning | – | 0
Backdoor Attacks Against Incremental Learners: An Empirical Evaluation Study | – | 0
Backdoor Attack on Vision Language Models with Stealthy Semantic Manipulation | – | 0
Adversarial Threat Vectors and Risk Mitigation for Retrieval-Augmented Generation Systems | – | 0
TED-LaST: Towards Robust Backdoor Defense Against Adaptive Attacks | – | 0
Backdoor Attack and Defense for Deep Regression | – | 0
Compression-Resistant Backdoor Attack against Deep Neural Networks | – | 0
A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning | – | 0
Active Learning Under Malicious Mislabeling and Poisoning Attacks | – | 0
Computation and Data Efficient Backdoor Attacks | – | 0
Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning | – | 0
Attacks on the neural network and defense methods | – | 0
Adversarial Poisoning Attacks and Defense for General Multi-Class Models Based On Synthetic Reduced Nearest Neighbors | – | 0
Attacks against Abstractive Text Summarization Models through Lead Bias and Influence Functions | – | 0
Adversarial Data Poisoning Attacks on Quantum Machine Learning in the NISQ Era | – | 0
Towards Robust Spiking Neural Networks: Mitigating Heterogeneous Training Vulnerability via Dominant Eigencomponent Projection | – | 0
Model Hijacking Attack in Federated Learning | – | 0
Atlas: A Framework for ML Lifecycle Provenance & Transparency | – | 0
Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks | – | 0
Certified Robustness to Adversarial Label-Flipping Attacks via Randomized Smoothing | – | 0
Page 2 of 10

No leaderboard results yet.