SOTAVerified

Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
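The definition above can be illustrated with a minimal, self-contained sketch: an attacker injects spam-like training points mislabeled as "safe", so that a simple nearest-neighbor spam filter starts classifying real spam as safe. The toy 2-D dataset, the 1-NN classifier, and the poisoning rate are all hypothetical choices made for illustration, not a method from any of the papers listed below.

```python
import random

random.seed(0)

def gauss_points(cx, cy, n):
    """n points drawn from an isotropic Gaussian centered at (cx, cy)."""
    return [(random.gauss(cx, 1.0), random.gauss(cy, 1.0)) for _ in range(n)]

# Toy binary task: "safe" cluster near (-2, -2), "spam" cluster near (+2, +2).
X_train = gauss_points(-2, -2, 100) + gauss_points(2, 2, 100)
y_train = [0] * 100 + [1] * 100  # 0 = safe, 1 = spam

def predict_1nn(X, y, point):
    """1-nearest-neighbor: return the label of the closest training point."""
    dists = [(px - point[0]) ** 2 + (py - point[1]) ** 2 for px, py in X]
    return y[dists.index(min(dists))]

X_spam_test = gauss_points(2, 2, 50)

clean_rate = sum(predict_1nn(X_train, y_train, x) == 1 for x in X_spam_test) / 50

# Poisoning step: the attacker injects spam-like points mislabeled as "safe",
# so test-time spam now often finds a mislabeled nearest neighbor.
X_poisoned = X_train + gauss_points(2, 2, 200)
y_poisoned = y_train + [0] * 200

poisoned_rate = sum(predict_1nn(X_poisoned, y_poisoned, x) == 1 for x in X_spam_test) / 50
print(f"spam detection rate: clean={clean_rate:.2f}, poisoned={poisoned_rate:.2f}")
```

With the clean training set, held-out spam is detected reliably; after poisoning, a large fraction of spam is labeled safe, which is exactly the attacker's goal described above.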

Papers

Showing 201–250 of 492 papers

| Title | Status | Hype |
| --- | --- | --- |
| Analyzing the vulnerabilities in SplitFed Learning: Assessing the robustness against Data Poisoning Attacks | | 0 |
| What Distributions are Robust to Indiscriminate Poisoning Attacks for Linear Learners? | | 0 |
| FedDefender: Backdoor Attack Defense in Federated Learning | Code | 1 |
| On the Exploitability of Instruction Tuning | Code | 1 |
| On Practical Aspects of Aggregation Defenses against Data Poisoning Attacks | | 0 |
| Adversarial Backdoor Attack by Naturalistic Data Poisoning on Trajectory Prediction in Autonomous Driving | | 0 |
| OVLA: Neural Network Ownership Verification using Latent Watermarks | | 0 |
| Data Poisoning to Fake a Nash Equilibrium in Markov Games | | 0 |
| FheFL: Fully Homomorphic Encryption Friendly Privacy-Preserving Federated Learning with Byzantine Users | | 0 |
| Hyperparameter Learning under Data Poisoning: Analysis of the Influence of Regularization via Multiobjective Bilevel Optimization | | 0 |
| DeepfakeArt Challenge: A Benchmark Dataset for Generative AI Art Forgery and Data Poisoning Detection | Code | 1 |
| Adversarial Clean Label Backdoor Attacks and Defenses on Text Classification Systems | | 0 |
| Backdoor Attacks Against Incremental Learners: An Empirical Evaluation Study | | 0 |
| From Shortcuts to Triggers: Backdoor Defense with Denoised PoE | Code | 0 |
| Instructions as Backdoors: Backdoor Vulnerabilities of Instruction Tuning for Large Language Models | | 0 |
| Differentially-Private Decision Trees and Provable Robustness to Data Poisoning | Code | 0 |
| Faithful and Efficient Explanations for Neural Networks via Neural Tangent Kernel Surrogate Models | Code | 0 |
| Quantifying the robustness of deep multispectral segmentation models against natural perturbations and data poisoning | Code | 3 |
| FedGT: Identification of Malicious Clients in Federated Learning with Secure Aggregation | | 0 |
| Evaluating Impact of User-Cluster Targeted Attacks in Matrix Factorisation Recommenders | | 0 |
| Pick your Poison: Undetectability versus Robustness in Data Poisoning Attacks | | 0 |
| Text-to-Image Diffusion Models can be Easily Backdoored through Multimodal Data Poisoning | Code | 1 |
| Beyond the Model: Data Pre-processing Attack to Deep Learning Models in Android Apps | | 0 |
| INK: Inheritable Natural Backdoor Attack Against Model Distillation | | 0 |
| Interactive System-wise Anomaly Detection | | 0 |
| Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning | Code | 1 |
| Mole Recruitment: Poisoning of Image Classifiers via Selective Batch Sampling | Code | 0 |
| Denoising Autoencoder-based Defensive Distillation as an Adversarial Robustness Algorithm | | 0 |
| Learning the Unlearnable: Adversarial Augmentations Suppress Unlearnable Example Attacks | Code | 1 |
| PORE: Provably Robust Recommender Systems against Data Poisoning Attacks | Code | 0 |
| Recursive Euclidean Distance Based Robust Aggregation Technique For Federated Learning | | 0 |
| Robust Contrastive Language-Image Pre-training against Data Poisoning and Backdoor Attacks | Code | 1 |
| Naive Bayes Classifiers over Missing Data: Decision and Poisoning | Code | 0 |
| Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks | Code | 0 |
| CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning | Code | 1 |
| WW-FL: Secure and Private Large-Scale Federated Learning | | 0 |
| Poisoning Web-Scale Training Datasets is Practical | Code | 1 |
| QTrojan: A Circuit Backdoor Against Quantum Neural Networks | | 0 |
| Explainable Label-flipping Attacks on Human Emotion Assessment System | | 0 |
| Data Poisoning Attacks on EEG Signal-based Risk Assessment Systems | | 0 |
| Training-free Lexical Backdoor Attacks on Language Models | Code | 0 |
| Temporal Robustness against Data Poisoning | | 0 |
| Run-Off Election: Improved Provable Defense against Data Poisoning Attacks | Code | 0 |
| CATFL: Certificateless Authentication-based Trustworthy Federated Learning for 6G Semantic Communications | | 0 |
| Face Recognition in the age of CLIP & Billion image datasets | | 0 |
| Explainable Data Poison Attacks on Human Emotion Evaluation Systems based on EEG Signals | Code | 0 |
| Federated Transfer-Ordered-Personalized Learning for Driver Monitoring Application | | 0 |
| TrojanPuzzle: Covertly Poisoning Code-Suggestion Models | Code | 1 |
| Silent Killer: A Stealthy, Clean-Label, Black-Box Backdoor Attack | Code | 1 |
| Computation and Data Efficient Backdoor Attacks | | 0 |

No leaderboard results yet.