- Analyzing the Vulnerabilities in SplitFed Learning: Assessing the Robustness against Data Poisoning Attacks (Jul 4, 2023). Tags: Data Poisoning, Federated Learning
- What Distributions are Robust to Indiscriminate Poisoning Attacks for Linear Learners? (Jul 3, 2023). Tags: Data Poisoning
- FedDefender: Backdoor Attack Defense in Federated Learning (Jul 2, 2023). Tags: Backdoor Attack, Data Poisoning
- On the Exploitability of Instruction Tuning (Jun 28, 2023). Code available. Tags: Data Poisoning, Instruction Following
- On Practical Aspects of Aggregation Defenses against Data Poisoning Attacks (Jun 28, 2023). Code available. Tags: Data Poisoning
- Adversarial Backdoor Attack by Naturalistic Data Poisoning on Trajectory Prediction in Autonomous Driving (Jun 27, 2023). Tags: Autonomous Driving, Backdoor Attack
- OVLA: Neural Network Ownership Verification using Latent Watermarks (Jun 15, 2023). Tags: Data Poisoning
- Data Poisoning to Fake a Nash Equilibrium in Markov Games (Jun 13, 2023). Tags: Data Poisoning, Multi-agent Reinforcement Learning
- FheFL: Fully Homomorphic Encryption Friendly Privacy-Preserving Federated Learning with Byzantine Users (Jun 8, 2023). Tags: Data Poisoning, Federated Learning
- Hyperparameter Learning under Data Poisoning: Analysis of the Influence of Regularization via Multiobjective Bilevel Optimization (Jun 2, 2023). Tags: Bilevel Optimization, Data Poisoning
- DeepfakeArt Challenge: A Benchmark Dataset for Generative AI Art Forgery and Data Poisoning Detection (Jun 2, 2023). Tags: Data Poisoning
- Adversarial Clean Label Backdoor Attacks and Defenses on Text Classification Systems (May 31, 2023). Code available. Tags: Data Poisoning, Text Classification
- Backdoor Attacks Against Incremental Learners: An Empirical Evaluation Study (May 28, 2023). Tags: Adversarial Robustness, Backdoor Attack
- From Shortcuts to Triggers: Backdoor Defense with Denoised PoE (May 24, 2023). Tags: Backdoor Defense, Data Poisoning
- Instructions as Backdoors: Backdoor Vulnerabilities of Instruction Tuning for Large Language Models (May 24, 2023). Code available. Tags: Continual Learning, Data Poisoning
- Differentially-Private Decision Trees and Provable Robustness to Data Poisoning (May 24, 2023). Tags: Data Poisoning
- Faithful and Efficient Explanations for Neural Networks via Neural Tangent Kernel Surrogate Models (May 23, 2023). Code available. Tags: Data Poisoning, Language Modelling
- Quantifying the Robustness of Deep Multispectral Segmentation Models against Natural Perturbations and Data Poisoning (May 18, 2023). Code available. Tags: Adversarial Robustness, Data Poisoning
- FedGT: Identification of Malicious Clients in Federated Learning with Secure Aggregation (May 9, 2023). Code available. Tags: Data Poisoning, Federated Learning
- Evaluating Impact of User-Cluster Targeted Attacks in Matrix Factorisation Recommenders (May 8, 2023). Tags: Data Poisoning, Recommendation Systems
- Pick your Poison: Undetectability versus Robustness in Data Poisoning Attacks (May 7, 2023). Tags: Data Poisoning, Image Classification
- Text-to-Image Diffusion Models can be Easily Backdoored through Multimodal Data Poisoning (May 7, 2023). Tags: Backdoor Attack, Backdoor Defense
- Beyond the Model: Data Pre-processing Attack to Deep Learning Models in Android Apps (May 6, 2023). Code available. Tags: Data Poisoning
- INK: Inheritable Natural Backdoor Attack Against Model Distillation (Apr 21, 2023). Tags: Backdoor Attack, Data Poisoning
- Interactive System-wise Anomaly Detection (Apr 21, 2023). Tags: Anomaly Detection, Data Poisoning
- Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning (Apr 4, 2023). Tags: Data Poisoning, Self-Supervised Learning
- Mole Recruitment: Poisoning of Image Classifiers via Selective Batch Sampling (Mar 30, 2023). Code available. Tags: Continual Learning, Data Poisoning
- Denoising Autoencoder-based Defensive Distillation as an Adversarial Robustness Algorithm (Mar 28, 2023). Code available. Tags: Adversarial Robustness, Data Poisoning
- Learning the Unlearnable: Adversarial Augmentations Suppress Unlearnable Example Attacks (Mar 27, 2023). Tags: Data Augmentation, Data Poisoning
- PORE: Provably Robust Recommender Systems against Data Poisoning Attacks (Mar 26, 2023). Code available. Tags: Data Poisoning, Recommendation Systems
- Recursive Euclidean Distance Based Robust Aggregation Technique for Federated Learning (Mar 20, 2023). Code available. Tags: Data Poisoning, Federated Learning
- Robust Contrastive Language-Image Pre-training against Data Poisoning and Backdoor Attacks (Mar 13, 2023). Tags: Backdoor Attack, Data Poisoning
- Naive Bayes Classifiers over Missing Data: Decision and Poisoning (Mar 8, 2023). Code available. Tags: Data Poisoning, Missing Values
- Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks (Mar 7, 2023). Code available. Tags: Data Poisoning, Model Poisoning
- CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning (Mar 6, 2023). Code available. Tags: Backdoor Attack, Contrastive Learning
- WW-FL: Secure and Private Large-Scale Federated Learning (Feb 20, 2023). Code available. Tags: Data Poisoning, Federated Learning
- Poisoning Web-Scale Training Datasets is Practical (Feb 20, 2023). Tags: Data Poisoning
- QTrojan: A Circuit Backdoor Against Quantum Neural Networks (Feb 16, 2023). Code available. Tags: Backdoor Attack, Data Poisoning
- Explainable Label-flipping Attacks on Human Emotion Assessment System (Feb 8, 2023). Tags: Data Poisoning, EEG
- Data Poisoning Attacks on EEG Signal-based Risk Assessment Systems (Feb 8, 2023). Tags: Data Poisoning, EEG
- Training-free Lexical Backdoor Attacks on Language Models (Feb 8, 2023). Tags: Backdoor Attack, Data Poisoning
- Temporal Robustness against Data Poisoning (Feb 7, 2023). Code available. Tags: Data Poisoning
- Run-Off Election: Improved Provable Defense against Data Poisoning Attacks (Feb 5, 2023). Tags: Data Poisoning
- CATFL: Certificateless Authentication-based Trustworthy Federated Learning for 6G Semantic Communications (Feb 1, 2023). Code available. Tags: Data Poisoning, Decoder
- Face Recognition in the Age of CLIP & Billion Image Datasets (Jan 18, 2023). Tags: Data Poisoning, Face Recognition
- Explainable Data Poison Attacks on Human Emotion Evaluation Systems based on EEG Signals (Jan 17, 2023). Tags: Data Poisoning, EEG
- TrojanPuzzle: Covertly Poisoning Code-Suggestion Models (Jan 6, 2023). Tags: Data Poisoning
- Silent Killer: A Stealthy, Clean-Label, Black-Box Backdoor Attack (Jan 5, 2023). Code available. Tags: Backdoor Attack, Data Poisoning
- Computation and Data Efficient Backdoor Attacks (Jan 1, 2023). Code available. Tags: 3D Point Cloud Classification, Data Poisoning