Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors (Feb 20, 2024). Tags: Data Poisoning, Domain Adaptation.
Purifying Large Language Models by Ensembling a Small Language Model (Feb 19, 2024). Tags: Data Poisoning, Language Modeling.
Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents (Feb 17, 2024). Tags: Backdoor Attack, Backdoor Defense. [Code available]
SusFL: Energy-Aware Federated Learning-based Monitoring for Sustainable Smart Farms (Feb 15, 2024). Tags: Data Poisoning, Federated Learning.
Review-Incorporated Model-Agnostic Profile Injection Attacks on Recommender Systems (Feb 14, 2024). Tags: Data Poisoning, Generative Adversarial Network.
The Effect of Data Poisoning on Counterfactual Explanations (Feb 13, 2024). Tags: Counterfactual, Data Poisoning. [Code available]
Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models (Feb 5, 2024). Tags: Data Augmentation, Data Poisoning. [Code available]
Game-Theoretic Unlearnable Example Generator (Jan 31, 2024). Tags: Data Poisoning. [Code available]
Security and Privacy Challenges of Large Language Models: A Survey (Jan 30, 2024). Tags: Data Poisoning, Question Answering.
Federated Learning with Dual Attention for Robust Modulation Classification under Attacks (Jan 19, 2024). Tags: Data Poisoning, Federated Learning.
A GAN-based data poisoning framework against anomaly detection in vertical federated learning (Jan 17, 2024). Tags: Anomaly Detection, Data Poisoning.
The Stronger the Diffusion Model, the Easier the Backdoor: Data Poisoning to Induce Copyright Breaches Without Adjusting Finetuning Pipeline (Jan 7, 2024). Tags: Backdoor Attack, Data Poisoning.
Data-Dependent Stability Analysis of Adversarial Training (Jan 6, 2024). Tags: Data Poisoning, Generalization Bounds.
Revamping Federated Learning Security from a Defender's Perspective: A Unified Defense with Homomorphic Encrypted Data Space (Jan 1, 2024). Tags: Data Poisoning, Federated Learning.
Data Poisoning based Backdoor Attacks to Contrastive Learning (Jan 1, 2024). Tags: Contrastive Learning, Data Poisoning. [Code available]
SSL-OTA: Unveiling Backdoor Threats in Self-Supervised Learning for Object Detection (Dec 30, 2023). Tags: Autonomous Driving, Backdoor Attack.
Adversarial Data Poisoning for Fake News Detection: How to Make a Model Misclassify a Target News without Modifying It (Dec 23, 2023). Tags: Data Poisoning, Fake News Detection.
Balancing Privacy, Robustness, and Efficiency in Machine Learning (Dec 22, 2023). Tags: Computational Efficiency, Data Poisoning.
Progressive Poisoned Data Isolation for Training-time Backdoor Defense (Dec 20, 2023). Tags: Backdoor Defense, Data Poisoning. [Code available]
TrojFSP: Trojan Insertion in Few-shot Prompt Tuning (Dec 16, 2023). Tags: Data Poisoning, Language Modeling.
FlowMur: A Stealthy and Practical Audio Backdoor Attack with Limited Knowledge (Dec 15, 2023). Tags: Backdoor Attack, Data Poisoning. [Code available]
Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey (Dec 14, 2023). Tags: Data Poisoning, Federated Learning.
Forcing Generative Models to Degenerate Ones: The Power of Data Poisoning Attacks (Dec 7, 2023). Tags: Data Poisoning, Object Detection.
FedBayes: A Zero-Trust Federated Learning Aggregation to Defend Against Adversarial Attacks (Dec 4, 2023). Tags: Data Poisoning, Federated Learning.
Mendata: A Framework to Purify Manipulated Training Data (Dec 3, 2023). Tags: Data Poisoning.
Universal Backdoor Attacks (Nov 30, 2023). Tags: Data Poisoning. [Code available]
IMMA: Immunizing text-to-image Models against Malicious Adaptation (Nov 30, 2023). Tags: Data Poisoning. [Code available]
Privacy and Copyright Protection in Generative AI: A Lifecycle Perspective (Nov 30, 2023). Tags: Data Poisoning, Machine Unlearning.
Trainwreck: A damaging adversarial attack on image classifiers (Nov 24, 2023). Tags: Adversarial Attack, Data Poisoning. [Code available]
Security and Privacy Challenges in Deep Learning Models (Nov 23, 2023). Tags: Autonomous Driving, Data Poisoning.
BrainWash: A Poisoning Attack to Forget in Continual Learning (Nov 20, 2023). Tags: Continual Learning, Data Poisoning.
Beyond Boundaries: A Comprehensive Survey of Transferable Attacks on AI Systems (Nov 20, 2023). Tags: Autonomous Driving, Autonomous Vehicles.
PACOL: Poisoning Attacks Against Continual Learners (Nov 18, 2023). Tags: Continual Learning, Data Poisoning.
RLHFPoison: Reward Poisoning Attack for Reinforcement Learning with Human Feedback in Large Language Models (Nov 16, 2023). Tags: Backdoor Attack, Data Poisoning.
From Trojan Horses to Castle Walls: Unveiling Bilateral Data Poisoning Effects in Diffusion Models (Nov 4, 2023). Tags: Backdoor Attack, Backdoor Defense. [Code available]
Reputation-Based Federated Learning Defense to Mitigate Threats in EEG Signal Classification (Oct 22, 2023). Tags: Brain Computer Interface, Data Poisoning.
Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models (Oct 20, 2023). Tags: Data Poisoning.
PrivacyGAN: robust generative image privacy (Oct 19, 2023). Tags: Data Poisoning, Image Generation.
Histopathological Image Classification and Vulnerability Analysis using Federated Learning (Oct 11, 2023). Tags: Classification, Data Poisoning.
Transferable Availability Poisoning Attacks (Oct 8, 2023). Tags: Contrastive Learning, Data Poisoning. [Code available]
Better Safe than Sorry: Pre-training CLIP against Targeted Data Poisoning and Backdoor Attacks (Oct 5, 2023). Tags: Contrastive Learning, Data Poisoning. [Code available]
Chameleon: Increasing Label-Only Membership Leakage with Adaptive Poisoning (Oct 5, 2023). Tags: Data Poisoning.
Post-Training Overfitting Mitigation in DNN Classifiers (Sep 28, 2023). Tags: Data Poisoning, Diversity.
Towards Poisoning Fair Representations (Sep 28, 2023). Tags: Bilevel Optimization, Data Poisoning.
Seeing Is Not Always Believing: Invisible Collision Attack and Defence on Pre-Trained Models (Sep 24, 2023). Tags: Data Poisoning. [Code available]
HINT: Healthy Influential-Noise based Training to Defend against Data Poisoning Attacks (Sep 15, 2023). Tags: Data Poisoning. [Code available]
CyberForce: A Federated Reinforcement Learning Framework for Malware Mitigation (Aug 11, 2023). Tags: Anomaly Detection, Data Poisoning.
Vulnerabilities in AI Code Generators: Exploring Targeted Data Poisoning Attacks (Aug 4, 2023). Tags: Code Generation, Data Poisoning. [Code available]
Systematic Testing of the Data-Poisoning Robustness of KNN (Jul 17, 2023). Tags: Data Poisoning.
Boosting Backdoor Attack with A Learnable Poisoning Sample Selection Strategy (Jul 14, 2023). Tags: Backdoor Attack, Data Poisoning.