Rethinking Backdoor Data Poisoning Attacks in the Context of Semi-Supervised Learning (Dec 5, 2022). Tags: Data Poisoning.
Data Poisoning Attack Aiming the Vulnerability of Continual Learning (Nov 29, 2022). Tags: Adversarial Attack, Continual Learning.
Backdoor Vulnerabilities in Normally Trained Deep Learning Models (Nov 29, 2022). Tags: Data Poisoning, Deep Learning.
Adversarial Attacks are a Surprisingly Strong Baseline for Poisoning Few-Shot Meta-Learners (Nov 23, 2022). Tags: Data Poisoning, Meta-Learning.
Analysis and Detectability of Offline Data Poisoning Attacks on Linear Dynamical Systems (Nov 16, 2022). Tags: Data Poisoning.
FLock: Defending Malicious Behaviors in Federated Learning with Blockchain (Nov 5, 2022). Code available. Tags: Data Poisoning, Federated Learning.
Try to Avoid Attacks: A Federated Data Sanitization Defense for Healthcare IoMT Systems (Nov 3, 2022). Tags: Clustering, Data Poisoning.
FocusedCleaner: Sanitizing Poisoned Graphs for Robust GNN-based Node Classification (Oct 25, 2022). Tags: Adversarial Robustness, Data Poisoning.
Analyzing the Robustness of Decentralized Horizontal and Vertical Federated Learning Architectures in a Non-IID Scenario (Oct 20, 2022). Tags: Data Poisoning, Federated Learning.
Training set cleansing of backdoor poisoning by self-supervised representation learning (Oct 19, 2022). Tags: Data Poisoning, Image Classification.
Detecting Backdoors in Deep Text Classifiers (Oct 11, 2022). Tags: Data Poisoning, Text Classification.
On Optimal Learning Under Targeted Data Poisoning (Oct 6, 2022). Tags: Data Poisoning.
Understanding Influence Functions and Datamodels via Harmonic Analysis (Oct 3, 2022). Tags: Data Poisoning.
On the Robustness of Random Forest Against Untargeted Data Poisoning: An Ensemble-Based Approach (Sep 28, 2022). Tags: Data Poisoning, Decision Making.
Defend Data Poisoning Attacks on Voice Authentication (Sep 9, 2022). Code available. Tags: Data Poisoning, Ensemble Learning.
FedPrompt: Communication-Efficient and Privacy Preserving Prompt Tuning in Federated Learning (Aug 25, 2022). Tags: Backdoor Attack, Data Poisoning.
Do-AIQ: A Design-of-Experiment Approach to Quality Evaluation of AI Mislabel Detection Algorithm (Aug 21, 2022). Tags: Autonomous Driving, Data Poisoning.
Label Flipping Data Poisoning Attack Against Wearable Human Activity Recognition System (Aug 17, 2022). Tags: Activity Recognition, Data Poisoning.
Neural network fragile watermarking with no model performance degradation (Aug 16, 2022). Tags: Data Poisoning.
Lethal Dose Conjecture on Data Poisoning (Aug 5, 2022). Tags: Data Poisoning.
Testing the Robustness of Learned Index Structures (Jul 23, 2022). Code available. Tags: Data Poisoning, Regression.
Using Anomaly Detection to Detect Poisoning Attacks in Federated Learning Applications (Jul 18, 2022). Code available. Tags: Activity Recognition, Anomaly Detection.
Invisible Backdoor Attacks Using Data Poisoning in the Frequency Domain (Jul 9, 2022). Tags: Backdoor Attack, Data Poisoning.
Backdoor Attack is a Devil in Federated GAN-based Medical Image Synthesis (Jul 2, 2022). Tags: Backdoor Attack, Data Poisoning.
Efficient Reward Poisoning Attacks on Online Deep Reinforcement Learning (May 30, 2022). Code available. Tags: Data Poisoning, Deep Reinforcement Learning.
BagFlip: A Certified Defense against Data Poisoning (May 26, 2022). Code available. Tags: Backdoor Attack, Data Poisoning.
SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning (May 20, 2022). Code available. Tags: BIG-bench Machine Learning, Backdoor Attack.
PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning (May 13, 2022). Tags: Bilevel Optimization, Contrastive Learning.
Federated Multi-Armed Bandits Under Byzantine Attacks (May 9, 2022). Tags: Data Poisoning, Decision Making.
VPN: Verification of Poisoning in Neural Networks (May 8, 2022). Tags: Data Poisoning, Image Classification.
Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning (May 4, 2022). Tags: BIG-bench Machine Learning, Data Poisoning.
GFCL: A GRU-based Federated Continual Learning Framework against Data Poisoning Attacks in IoV (Apr 23, 2022). Tags: Anomaly Detection, Continual Learning.
Federated Learning: Balancing the Thin Line Between Data Intelligence and Privacy (Apr 22, 2022). Tags: Data Poisoning, Federated Learning.
Indiscriminate Data Poisoning Attacks on Neural Networks (Apr 19, 2022). Tags: Data Poisoning.
Breaking Fair Binary Classification with Optimal Flipping Attacks (Apr 12, 2022). Code available. Tags: Binary Classification, Classification.
Machine Learning Security against Data Poisoning: Are We There Yet? (Apr 12, 2022). Tags: BIG-bench Machine Learning, Data Poisoning.
Robustly-reliable learners under poisoning attacks (Mar 8, 2022). Code available. Tags: Data Poisoning.
Targeted Data Poisoning Attack on News Recommendation System by Content Perturbation (Mar 4, 2022). Tags: Data Poisoning, News Recommendation.
Poisoning Attacks and Defenses on Artificial Intelligence: A Survey (Feb 21, 2022). Tags: Data Poisoning, Survey.
Degree-Preserving Randomized Response for Graph Neural Networks under Local Differential Privacy (Feb 21, 2022). Tags: Data Poisoning, Graph Classification.
Collaborative Self Organizing Map with DeepNNs for Fake Task Prevention in Mobile Crowdsensing (Feb 17, 2022). Tags: Data Poisoning.
An Equivalence Between Data Poisoning and Byzantine Gradient Attacks (Feb 17, 2022). Tags: Data Poisoning, Federated Learning.
Redactor: A Data-centric and Individualized Defense Against Inference Attacks (Feb 7, 2022). Code available. Tags: Data Poisoning.
Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation (Feb 5, 2022). Tags: Data Poisoning.
Towards Multi-Objective Statistically Fair Federated Learning (Jan 24, 2022). Code available. Tags: Data Poisoning, Fairness.
How to Backdoor HyperNetwork in Personalized Federated Learning? (Jan 18, 2022). Tags: Data Poisoning, Federated Learning.
Towards Understanding Quality Challenges of the Federated Learning for Neural Networks: A First Look from the Lens of Robustness (Jan 5, 2022). Tags: Data Poisoning, Federated Learning.
Compression-Resistant Backdoor Attack against Deep Neural Networks (Jan 3, 2022). Code available. Tags: Backdoor Attack, Data Poisoning.
Execute Order 66: Targeted Data Poisoning for Reinforcement Learning (Jan 3, 2022). Tags: Atari Games, Data Poisoning.
ML Attack Models: Adversarial Attacks and Data Poisoning Attacks (Dec 6, 2021). Tags: Adversarial Attack, Data Poisoning.