
Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a class the attacker desires (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
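As a quick illustration of the spam example above, the sketch below simulates a simple label-flipping poisoning attack. It is a minimal, self-contained sketch, not the method of any paper listed here: the synthetic dataset, the logistic-regression victim model, and the 40% flip rate are all illustrative assumptions.

```python
# Minimal label-flipping poisoning sketch (illustrative only).
# Assumes numpy and scikit-learn; the synthetic "spam" dataset and
# the 40% poisoning rate are made-up parameters, not from the papers.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a spam-filter dataset: label 1 = spam, 0 = safe.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

def flip_spam_labels(y, rate, rng):
    """Relabel a fraction of spam (1) training points as safe (0)."""
    y = y.copy()
    spam_idx = np.flatnonzero(y == 1)
    n_flip = int(rate * len(spam_idx))
    y[rng.choice(spam_idx, size=n_flip, replace=False)] = 0
    return y

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned = LogisticRegression(max_iter=1000).fit(
    X_train, flip_spam_labels(y_train, rate=0.4, rng=rng))

# Fraction of true spam that each model still catches at test time.
spam_test = X_test[y_test == 1]
print("spam caught (clean):   ", clean.predict(spam_test).mean())
print("spam caught (poisoned):", poisoned.predict(spam_test).mean())
```

Label flipping is the bluntest form of the attack; many of the papers below study stealthier variants, such as clean-label poisoning and backdoor triggers, that achieve the same control without visibly mislabeled training points.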

Papers

Showing 201–225 of 492 papers

Title | Status | Hype
Analyzing the vulnerabilities in SplitFed Learning: Assessing the robustness against Data Poisoning Attacks | - | 0
What Distributions are Robust to Indiscriminate Poisoning Attacks for Linear Learners? | - | 0
FedDefender: Backdoor Attack Defense in Federated Learning | Code | 1
On the Exploitability of Instruction Tuning | Code | 1
On Practical Aspects of Aggregation Defenses against Data Poisoning Attacks | - | 0
Adversarial Backdoor Attack by Naturalistic Data Poisoning on Trajectory Prediction in Autonomous Driving | - | 0
OVLA: Neural Network Ownership Verification using Latent Watermarks | - | 0
Data Poisoning to Fake a Nash Equilibrium in Markov Games | - | 0
FheFL: Fully Homomorphic Encryption Friendly Privacy-Preserving Federated Learning with Byzantine Users | - | 0
Hyperparameter Learning under Data Poisoning: Analysis of the Influence of Regularization via Multiobjective Bilevel Optimization | - | 0
DeepfakeArt Challenge: A Benchmark Dataset for Generative AI Art Forgery and Data Poisoning Detection | Code | 1
Adversarial Clean Label Backdoor Attacks and Defenses on Text Classification Systems | - | 0
Backdoor Attacks Against Incremental Learners: An Empirical Evaluation Study | - | 0
From Shortcuts to Triggers: Backdoor Defense with Denoised PoE | Code | 0
Instructions as Backdoors: Backdoor Vulnerabilities of Instruction Tuning for Large Language Models | - | 0
Differentially-Private Decision Trees and Provable Robustness to Data Poisoning | Code | 0
Faithful and Efficient Explanations for Neural Networks via Neural Tangent Kernel Surrogate Models | Code | 0
Quantifying the robustness of deep multispectral segmentation models against natural perturbations and data poisoning | Code | 3
FedGT: Identification of Malicious Clients in Federated Learning with Secure Aggregation | - | 0
Evaluating Impact of User-Cluster Targeted Attacks in Matrix Factorisation Recommenders | - | 0
Pick your Poison: Undetectability versus Robustness in Data Poisoning Attacks | - | 0
Text-to-Image Diffusion Models can be Easily Backdoored through Multimodal Data Poisoning | Code | 1
Beyond the Model: Data Pre-processing Attack to Deep Learning Models in Android Apps | - | 0
INK: Inheritable Natural Backdoor Attack Against Model Distillation | - | 0
Interactive System-wise Anomaly Detection | - | 0

No leaderboard results yet.