
Data Poisoning

Data Poisoning is an adversarial attack in which the attacker manipulates the training dataset to control the prediction behavior of the trained model, so that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
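To make the attack goal above concrete, below is a minimal label-flipping sketch in Python using NumPy and scikit-learn. Everything in it (the synthetic "safe"/"spam" features, the 40% flip rate, the variable names) is an illustrative assumption and is not taken from the cited paper or from the papers listed further down: a fraction of spam training examples are relabeled as safe, and the model trained on the poisoned labels tends to flag fewer spam examples at test time.

```python
# Minimal label-flipping data-poisoning sketch on synthetic data (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic features: class 0 = "safe", class 1 = "spam" (made-up toy distributions).
n = 500
X_safe = rng.normal(loc=-1.0, scale=1.0, size=(n, 5))
X_spam = rng.normal(loc=+1.0, scale=1.0, size=(n, 5))
X = np.vstack([X_safe, X_spam])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Poisoning step: flip 40% of the spam labels to "safe" in the training set.
poison_rate = 0.4
spam_idx = np.where(y == 1)[0]
flip_idx = rng.choice(spam_idx, size=int(poison_rate * len(spam_idx)), replace=False)
y_poisoned = y.copy()
y_poisoned[flip_idx] = 0

# Train one model on the clean labels and one on the poisoned labels.
clean_model = LogisticRegression().fit(X, y)
poisoned_model = LogisticRegression().fit(X, y_poisoned)

# Held-out spam examples: the poisoned model should flag noticeably fewer of them.
X_test_spam = rng.normal(loc=+1.0, scale=1.0, size=(200, 5))
print("spam detection rate, clean model:   ", clean_model.predict(X_test_spam).mean())
print("spam detection rate, poisoned model:", poisoned_model.predict(X_test_spam).mean())
```

Real attacks, including many of the papers listed below, are usually more constrained (small poisoning budgets, clean-label perturbations, or federated settings), but the mechanism is the same: the attacker only touches the training data, never the model or the test-time inputs.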

Papers

Showing 251–300 of 492 papers

Title | Hype
Regularized Robustly Reliable Learners and Instance Targeted Attacks | 0
Reinforcement Learning For Data Poisoning on Graph Neural Networks | 0
Releasing Malevolence from Benevolence: The Menace of Benign Data on Machine Unlearning | 0
Reliable Poisoned Sample Detection against Backdoor Attacks Enhanced by Sharpness Aware Minimization | 0
Reputation-Based Federated Learning Defense to Mitigate Threats in EEG Signal Classification | 0
Rethinking Backdoor Data Poisoning Attacks in the Context of Semi-Supervised Learning | 0
Revamping Federated Learning Security from a Defender's Perspective: A Unified Defense with Homomorphic Encrypted Data Space | 0
Detection of Backdoors in Trained Classifiers Without Access to the Training Set | 0
Revealing Perceptible Backdoors, without the Training Set, via the Maximum Achievable Misclassification Fraction Statistic | 0
Reverse Engineering Imperceptible Backdoor Attacks on Deep Neural Networks for Detection and Training Set Cleansing | 0
Review-Incorporated Model-Agnostic Profile Injection Attacks on Recommender Systems | 0
Robust Federated Training via Collaborative Machine Teaching using Trusted Instances | 0
Robust learning under clean-label attack | 0
Robustly-reliable learners under poisoning attacks | 0
Robust Variational Autoencoder for Tabular Data with Beta Divergence | 0
SAFELOC: Overcoming Data Poisoning Attacks in Heterogeneous Federated Machine Learning for Indoor Localization | 0
SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning | 0
Saving Stochastic Bandits from Poisoning Attacks via Limited Data Verification | 0
Securing Traffic Sign Recognition Systems in Autonomous Vehicles | 0
Security and Privacy Challenges in Deep Learning Models | 0
Security and Privacy Challenges of Large Language Models: A Survey | 0
Security Concerns for Large Language Models: A Survey | 0
Security of Distributed Machine Learning: A Game-Theoretic Approach to Design Secure DSVM | 0
SEEP: Training Dynamics Grounds Latent Representation Search for Mitigating Backdoor Poisoning Attacks | 0
Self-Adaptive and Robust Federated Spectrum Sensing without Benign Majority for Cellular Networks | 0
Shapley Homology: Topological Analysis of Sample Influence for Neural Networks | 0
SHFL: Secure Hierarchical Federated Learning Framework for Edge Networks | 0
Silent Branding Attack: Trigger-free Data Poisoning Attack on Text-to-Image Diffusion Models | 0
Sky of Unlearning (SoUL): Rewiring Federated Machine Unlearning via Selective Pruning | 0
Sniper GMMs: Structured Gaussian mixtures poison ML on large n small p data with high efficacy | 0
Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks | 0
Sonic: Fast and Transferable Data Poisoning on Clustering Algorithms | 0
Spectrum Data Poisoning with Adversarial Deep Learning | 0
Sself: Robust Federated Learning against Stragglers and Adversaries | 0
SSL-OTA: Unveiling Backdoor Threats in Self-Supervised Learning for Object Detection | 0
Stealthy LLM-Driven Data Poisoning Attacks Against Embedding-Based Retrieval-Augmented Recommender Systems | 0
Survey of Security and Data Attacks on Machine Unlearning In Financial and E-Commerce | 0
SusFL: Energy-Aware Federated Learning-based Monitoring for Sustainable Smart Farms | 0
Swallowing the Poison Pills: Insights from Vulnerability Disparity Among LLMs | 0
Sybil-based Virtual Data Poisoning Attacks in Federated Learning | 0
Systematic Evaluation of Backdoor Data Poisoning Attacks on Image Classifiers | 0
Systematic Testing of the Data-Poisoning Robustness of KNN | 0
Targeted Data Poisoning Attack on News Recommendation System by Content Perturbation | 0
Targeted Data Poisoning for Black-Box Audio Datasets Ownership Verification | 0
A Targeted Attack on Black-Box Neural Machine Translation with Parallel Data Poisoning | 0
Temporal Robustness against Data Poisoning | 0
The Price of Tailoring the Index to Your Data: Poisoning Attacks on Learned Index Structures | 0
The Stronger the Diffusion Model, the Easier the Backdoor: Data Poisoning to Induce Copyright Breaches Without Adjusting Finetuning Pipeline | 0
Data Poisoning Attack against Knowledge Graph Embedding | 0
Towards Multi-Objective Statistically Fair Federated Learning | 0
