[Unverified] Fed-Credit: Robust Federated Learning with Credibility Management (May 20, 2024). Tags: Data Poisoning, Federated Learning
[Unverified] SEEP: Training Dynamics Grounds Latent Representation Search for Mitigating Backdoor Poisoning Attacks (May 19, 2024). Tags: Data Poisoning
[Unverified] Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning (May 10, 2024). Tags: Backdoor Attack, Data Poisoning
[Unverified] Hard Work Does Not Always Pay Off: Poisoning Attacks on Neural Architecture Search (May 9, 2024). Tags: Data Poisoning, Neural Architecture Search
[Unverified] On the Relevance of Byzantine Robust Optimization Against Data Poisoning (May 1, 2024). Tags: Autonomous Driving, Data Poisoning
[Unverified] Dual Model Replacement: Invisible Multi-target Backdoor Attack based on Federal Learning (Apr 22, 2024). Tags: Backdoor Attack, Data Poisoning
[Unverified] Data Poisoning Attacks on Off-Policy Policy Evaluation Methods (Apr 6, 2024). Tags: Data Poisoning, Off-Policy Evaluation
[Unverified] Precision Guided Approach to Mitigate Data Poisoning Attacks in Federated Learning (Apr 5, 2024). Tags: Data Poisoning, Federated Learning
[Code Available] Two Heads are Better than One: Nested PoE for Robust Defense Against Multi-Backdoors (Apr 2, 2024). Tags: Data Poisoning, Hate Speech Detection
[Unverified] A Backdoor Approach with Inverted Labels Using Dirty Label-Flipping Attacks (Mar 29, 2024). Tags: Backdoor Attack, Data Poisoning
[Unverified] Have You Poisoned My Data? Defending Neural Networks against Data Poisoning (Mar 20, 2024). Tags: Data Poisoning, Transfer Learning
[Code Available] Nonsmooth Implicit Differentiation: Deterministic and Stochastic Convergence Rates (Mar 18, 2024). Tags: Data Poisoning, Hyperparameter Optimization
[Unverified] Poisoning Programs by Un-Repairing Code: Security Concerns of AI-generated Code (Mar 11, 2024). Tags: Code Generation, Data Poisoning
[Unverified] Don't Forget What I did?: Assessing Client Contributions in Federated Learning (Mar 11, 2024). Tags: Data Poisoning, Fairness
[Code Available] Federated Learning Under Attack: Exposing Vulnerabilities through Data Poisoning Attacks in Computer Networks (Mar 5, 2024). Tags: Data Poisoning, Federated Learning
[Unverified] Breaking Down the Defenses: A Comparative Survey of Attacks on Large Language Models (Mar 3, 2024). Tags: Data Poisoning
[Unverified] Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors (Feb 20, 2024). Tags: Data Poisoning, Domain Adaptation
[Unverified] Purifying Large Language Models by Ensembling a Small Language Model (Feb 19, 2024). Tags: Data Poisoning, Language Modeling
[Unverified] SusFL: Energy-Aware Federated Learning-based Monitoring for Sustainable Smart Farms (Feb 15, 2024). Tags: Data Poisoning, Federated Learning
[Unverified] Review-Incorporated Model-Agnostic Profile Injection Attacks on Recommender Systems (Feb 14, 2024). Tags: Data Poisoning, Generative Adversarial Network
[Code Available] The Effect of Data Poisoning on Counterfactual Explanations (Feb 13, 2024). Tags: Counterfactual, Data Poisoning
[Code Available] Game-Theoretic Unlearnable Example Generator (Jan 31, 2024). Tags: Data Poisoning
[Unverified] Security and Privacy Challenges of Large Language Models: A Survey (Jan 30, 2024). Tags: Data Poisoning, Question Answering
[Unverified] Federated Learning with Dual Attention for Robust Modulation Classification under Attacks (Jan 19, 2024). Tags: Data Poisoning, Federated Learning
[Unverified] A GAN-based data poisoning framework against anomaly detection in vertical federated learning (Jan 17, 2024). Tags: Anomaly Detection, Data Poisoning
[Unverified] The Stronger the Diffusion Model, the Easier the Backdoor: Data Poisoning to Induce Copyright Breaches Without Adjusting Finetuning Pipeline (Jan 7, 2024). Tags: Backdoor Attack, Data Poisoning
[Unverified] Data-Dependent Stability Analysis of Adversarial Training (Jan 6, 2024). Tags: Data Poisoning, Generalization Bounds
[Unverified] Revamping Federated Learning Security from a Defender's Perspective: A Unified Defense with Homomorphic Encrypted Data Space (Jan 1, 2024). Tags: Data Poisoning, Federated Learning
[Unverified] SSL-OTA: Unveiling Backdoor Threats in Self-Supervised Learning for Object Detection (Dec 30, 2023). Tags: Autonomous Driving, Backdoor Attack
[Unverified] Adversarial Data Poisoning for Fake News Detection: How to Make a Model Misclassify a Target News without Modifying It (Dec 23, 2023). Tags: Data Poisoning, Fake News Detection
[Unverified] Balancing Privacy, Robustness, and Efficiency in Machine Learning (Dec 22, 2023). Tags: Computational Efficiency, Data Poisoning
[Code Available] Progressive Poisoned Data Isolation for Training-time Backdoor Defense (Dec 20, 2023). Tags: Backdoor Defense, Data Poisoning
[Unverified] TrojFSP: Trojan Insertion in Few-shot Prompt Tuning (Dec 16, 2023). Tags: Data Poisoning, Language Modeling
[Unverified] Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey (Dec 14, 2023). Tags: Data Poisoning, Federated Learning
[Unverified] Forcing Generative Models to Degenerate Ones: The Power of Data Poisoning Attacks (Dec 7, 2023). Tags: Data Poisoning, Object Detection
[Unverified] FedBayes: A Zero-Trust Federated Learning Aggregation to Defend Against Adversarial Attacks (Dec 4, 2023). Tags: Data Poisoning, Federated Learning
[Unverified] Mendata: A Framework to Purify Manipulated Training Data (Dec 3, 2023). Tags: Data Poisoning
[Code Available] Universal Backdoor Attacks (Nov 30, 2023). Tags: Data Poisoning
[Unverified] Privacy and Copyright Protection in Generative AI: A Lifecycle Perspective (Nov 30, 2023). Tags: Data Poisoning, Machine Unlearning
[Code Available] Trainwreck: A damaging adversarial attack on image classifiers (Nov 24, 2023). Tags: Adversarial Attack, Data Poisoning
[Unverified] Security and Privacy Challenges in Deep Learning Models (Nov 23, 2023). Tags: Autonomous Driving, Data Poisoning
[Unverified] Beyond Boundaries: A Comprehensive Survey of Transferable Attacks on AI Systems (Nov 20, 2023). Tags: Autonomous Driving, Autonomous Vehicles
[Unverified] BrainWash: A Poisoning Attack to Forget in Continual Learning (Nov 20, 2023). Tags: Continual Learning, Data Poisoning
[Unverified] PACOL: Poisoning Attacks Against Continual Learners (Nov 18, 2023). Tags: Continual Learning, Data Poisoning
[Unverified] RLHFPoison: Reward Poisoning Attack for Reinforcement Learning with Human Feedback in Large Language Models (Nov 16, 2023). Tags: Backdoor Attack, Data Poisoning
[Code Available] From Trojan Horses to Castle Walls: Unveiling Bilateral Data Poisoning Effects in Diffusion Models (Nov 4, 2023). Tags: Backdoor Attack, Backdoor Defense
[Unverified] Reputation-Based Federated Learning Defense to Mitigate Threats in EEG Signal Classification (Oct 22, 2023). Tags: Brain Computer Interface, Data Poisoning
[Unverified] Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models (Oct 20, 2023). Tags: Data Poisoning
[Unverified] PrivacyGAN: robust generative image privacy (Oct 19, 2023). Tags: Data Poisoning, Image Generation
[Unverified] Histopathological Image Classification and Vulnerability Analysis using Federated Learning (Oct 11, 2023). Tags: Classification, Data Poisoning