- INK: Inheritable Natural Backdoor Attack Against Model Distillation (Apr 21, 2023). Tags: Backdoor Attack, Data Poisoning
- Learning and Unlearning of Fabricated Knowledge in Language Models (Oct 29, 2024). Tags: Data Poisoning, Language Modeling
- Learning to Forget using Hypernetworks (Dec 1, 2024). Tags: Data Poisoning, Machine Unlearning
- Local Model Poisoning Attacks to Byzantine-Robust Federated Learning (Nov 26, 2019). Tags: BIG-bench Machine Learning, Data Poisoning
- Maximal adversarial perturbations for obfuscation: Hiding certain attributes while preserving rest (Sep 27, 2019). Tags: Attribute, Data Poisoning
- Mendata: A Framework to Purify Manipulated Training Data (Dec 3, 2023). Tags: Data Poisoning
- Mitigating backdoor attacks in LSTM-based Text Classification Systems by Backdoor Keyword Identification (Jul 11, 2020). Tags: Classification, Data Poisoning
- Mitigating Data Poisoning in Text Classification with Differential Privacy (Nov 1, 2021). Tags: Classification, Data Poisoning
- Mitigating the Impact of Adversarial Attacks in Very Deep Networks (Dec 8, 2020). Tags: Data Poisoning
- Mixed Strategy Game Model Against Data Poisoning Attacks (Jun 7, 2019). Tags: Data Poisoning, Model Poisoning
- ML Attack Models: Adversarial Attacks and Data Poisoning Attacks (Dec 6, 2021). Tags: Adversarial Attack, Data Poisoning
- How to Backdoor HyperNetwork in Personalized Federated Learning? (Jan 18, 2022). Tags: Data Poisoning, Federated Learning
- Multi-Trigger Poisoning Amplifies Backdoor Vulnerabilities in LLMs (Jul 15, 2025). Tags: Data Poisoning
- Privacy and Copyright Protection in Generative AI: A Lifecycle Perspective (Nov 30, 2023). Tags: Data Poisoning, Machine Unlearning
- Neural network fragile watermarking with no model performance degradation (Aug 16, 2022). Tags: Data Poisoning
- Neuromimetic metaplasticity for adaptive continual learning (Jul 9, 2024). Tags: Continual Learning, Data Poisoning
- No, of course I can! Refusal Mechanisms Can Be Exploited Using Harmless Fine-Tuning Data (Feb 26, 2025). Tags: Data Poisoning
- Reclaiming "Open AI" -- AI Model Serving Can Be Open Access, Yet Monetizable and Loyal (Nov 1, 2024). Tags: Data Poisoning
- On Defending Against Label Flipping Attacks on Malware Detection Systems (Aug 13, 2019). Tags: Android Malware Detection, BIG-bench Machine Learning
- One Pixel is All I Need (Dec 14, 2024). Tags: All, Data Poisoning
- Data Poisoning to Fake a Nash Equilibrium in Markov Games (Jun 13, 2023). Tags: Data Poisoning, Multi-agent Reinforcement Learning
- Online Data Poisoning Attack (Mar 5, 2019). Tags: Data Poisoning, Deep Reinforcement Learning
- Online Data Poisoning Attacks (Jun 8, 2020). Tags: Data Poisoning, Deep Reinforcement Learning
- On Optimal Learning Under Targeted Data Poisoning (Oct 6, 2022). Tags: Data Poisoning
- On Practical Aspects of Aggregation Defenses against Data Poisoning Attacks (Jun 28, 2023). Tags: Data Poisoning
- On the Adversarial Risk of Test Time Adaptation: An Investigation into Realistic Test-Time Data Poisoning (Oct 7, 2024). Tags: Data Poisoning, Test-time Adaptation
- On the Effectiveness of Poisoning against Unsupervised Domain Adaptation (Jun 18, 2021). Tags: Data Poisoning, Domain Adaptation
- RLHFPoison: Reward Poisoning Attack for Reinforcement Learning with Human Feedback in Large Language Models (Nov 16, 2023). Tags: Backdoor Attack, Data Poisoning
- On the Relevance of Byzantine Robust Optimization Against Data Poisoning (May 1, 2024). Tags: Autonomous Driving, Data Poisoning
- On the Robustness of Graph Reduction Against GNN Backdoor (Jul 2, 2024). Tags: Computational Efficiency, Data Poisoning
- A Study of Backdoors in Instruction Fine-tuned Language Models (Jun 12, 2024). Tags: Data Poisoning, Language Modeling
- Open Challenges in Multi-Agent Security: Towards Secure Systems of Interacting AI Agents (May 4, 2025). Tags: Data Poisoning
- Optimizing ML Training with Metagradient Descent (Mar 17, 2025). Tags: Data Poisoning
- Oriole: Thwarting Privacy against Trustworthy Deep Learning Models (Feb 23, 2021). Tags: Data Poisoning, Deep Learning
- OVLA: Neural Network Ownership Verification using Latent Watermarks (Jun 15, 2023). Tags: Data Poisoning
- PACOL: Poisoning Attacks Against Continual Learners (Nov 18, 2023). Tags: Continual Learning, Data Poisoning
- Partner in Crime: Boosting Targeted Poisoning Attacks against Federated Learning (Jul 13, 2024). Tags: Data Poisoning, Federated Learning
- Pick your Poison: Undetectability versus Robustness in Data Poisoning Attacks (May 7, 2023). Tags: Data Poisoning, Image Classification
- PoisHygiene: Detecting and Mitigating Poisoning Attacks in Neural Networks (Mar 24, 2020). Tags: Data Poisoning
- PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning (May 13, 2022). Tags: Bilevel Optimization, Contrastive Learning
- PoisonedParrot: Subtle Data Poisoning Attacks to Elicit Copyright-Infringing Content from Large Language Models (Mar 10, 2025). Tags: Data Poisoning
- Poisoning Attacks and Defenses on Artificial Intelligence: A Survey (Feb 21, 2022). Tags: Data Poisoning, Survey
- Poisoning Attacks to Local Differential Privacy Protocols for Trajectory Data (Mar 6, 2025). Tags: Data Poisoning
- Poisoning Deep Reinforcement Learning Agents with In-Distribution Triggers (Jun 14, 2021). Tags: Data Poisoning, Deep Reinforcement Learning
- Poisoning Programs by Un-Repairing Code: Security Concerns of AI-generated Code (Mar 11, 2024). Tags: Code Generation, Data Poisoning
- Policy Teaching via Data Poisoning in Learning from Human Preferences (Mar 13, 2025). Tags: Data Poisoning
- Post-Training Overfitting Mitigation in DNN Classifiers (Sep 28, 2023). Tags: Data Poisoning, Diversity
- Practical Data Poisoning Attack against Next-Item Recommendation (Apr 7, 2020). Tags: Data Poisoning, Recommendation Systems
- SLSGD: Secure and Efficient Distributed On-device Machine Learning (Mar 16, 2019). Tags: BIG-bench Machine Learning, Data Poisoning
- Practical Poisoning Attacks on Neural Networks (Aug 1, 2020). Tags: Data Poisoning
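The entries above cover many flavors of data poisoning. As a minimal, self-contained illustration of one of the simplest variants, the label-flipping attack discussed in papers such as "On Defending Against Label Flipping Attacks on Malware Detection Systems", the sketch below relabels a fraction of one class's training points and retrains a toy nearest-centroid classifier. This is a generic illustration, not the method of any listed paper; all names, parameters, and the 0.4 flip fraction are made up for the example.

```python
import random

random.seed(0)

def make_data(n=400):
    """Two well-separated 2-D clusters: class 0 near (0, 0), class 1 near (3, 3)."""
    data = []
    for _ in range(n):
        y = random.randint(0, 1)
        center = 3.0 * y
        data.append(((random.gauss(center, 1.0), random.gauss(center, 1.0)), y))
    return data

def centroids(data):
    """Per-class mean of the training points: the entire nearest-centroid 'model'."""
    sums = {0: [0.0, 0.0, 0], 1: [0.0, 0.0, 0]}
    for (x1, x2), y in data:
        sums[y][0] += x1
        sums[y][1] += x2
        sums[y][2] += 1
    return {y: (s[0] / s[2], s[1] / s[2]) for y, s in sums.items()}

def predict(model, x):
    """Assign x to the class whose centroid is nearest (squared Euclidean distance)."""
    return min(model, key=lambda y: (x[0] - model[y][0]) ** 2 + (x[1] - model[y][1]) ** 2)

def accuracy(model, data):
    return sum(predict(model, x) == y for x, y in data) / len(data)

def flip_labels(data, frac):
    """Targeted label flipping: relabel a random fraction of class-0 points as class 1."""
    idx0 = [i for i, (_, y) in enumerate(data) if y == 0]
    flipped = set(random.sample(idx0, int(frac * len(idx0))))
    return [(x, 1 if i in flipped else y) for i, (x, y) in enumerate(data)]

train, test = make_data(400), make_data(200)
clean_model = centroids(train)
poisoned_model = centroids(flip_labels(train, 0.4))

# The flipped points drag the class-1 centroid toward the class-0 cluster,
# shifting the decision boundary and degrading accuracy on class-0 inputs.
print("clean accuracy:   ", accuracy(clean_model, test))
print("poisoned accuracy:", accuracy(poisoned_model, test))
```

Even this toy setup shows the asymmetry that many of the listed defenses exploit: the attack moves a class centroid in a predictable direction, which is exactly the kind of statistical signature that sanitization and robust-aggregation defenses look for.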