| Title | Date | Topics | Code |
| --- | --- | --- | --- |
| On the Security Risks of AutoML | Oct 12, 2021 | AutoML, Model Poisoning | Code Available |
| A Novel Defense Against Poisoning Attacks on Federated Learning: LayerCAM Augmented with Autoencoder | Jun 2, 2024 | Federated Learning, Model Poisoning | Code Available |
| FedSECA: Sign Election and Coordinate-wise Aggregation of Gradients for Byzantine Tolerant Federated Learning | Nov 6, 2024 | Federated Learning, Model Poisoning | Code Available |
| Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks | Mar 7, 2023 | Data Poisoning, Model Poisoning | Code Available |
| Dual Defense: Enhancing Privacy and Mitigating Poisoning Attacks in Federated Learning | Feb 8, 2025 | Anomaly Detection, Federated Learning | Code Available |
| EAB-FL: Exacerbating Algorithmic Bias through Model Poisoning Attacks in Federated Learning | Oct 2, 2024 | Fairness, Federated Learning | Code Available |
| Leverage Variational Graph Representation For Model Poisoning on Federated Learning | Apr 23, 2024 | Federated Learning, Model Poisoning | Code Available |
| Defending Against Sophisticated Poisoning Attacks with RL-based Aggregation in Federated Learning | Jun 20, 2024 | Federated Learning, Model Poisoning | Code Available |
| Using Anomaly Detection to Detect Poisoning Attacks in Federated Learning Applications | Jul 18, 2022 | Activity Recognition, Anomaly Detection | Unverified |
| Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning | Apr 21, 2023 | Federated Learning, Model Poisoning | Unverified |