| Title | Date | Tags | Status | Count |
|---|---|---|---|---|
| DeTrigger: A Gradient-Centric Approach to Backdoor Attack Mitigation in Federated Learning | Nov 19, 2024 | Adversarial Attack, Backdoor Attack | Unverified | 0 |
| Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning | May 10, 2024 | Backdoor Attack, Data Poisoning | Unverified | 0 |
| Turning Federated Learning Systems Into Covert Channels | Apr 21, 2021 | Federated Learning, Model Poisoning | Unverified | 0 |
| Covert Model Poisoning Against Federated Learning: Algorithm Design and Optimization | Jan 28, 2021 | Federated Learning, Model Poisoning | Unverified | 0 |
| Data-Agnostic Model Poisoning against Federated Learning: A Graph Autoencoder Approach | Nov 30, 2023 | Federated Learning, Model Poisoning | Unverified | 0 |
| Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey | Dec 14, 2023 | Data Poisoning, Federated Learning | Unverified | 0 |
| Anticipatory Thinking Challenges in Open Worlds: Risk Management | Jun 22, 2023 | Adversarial Robustness, Autonomous Vehicles | Unverified | 0 |
| Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning | Apr 21, 2023 | Federated Learning, Model Poisoning | Unverified | 0 |
| CATFL: Certificateless Authentication-based Trustworthy Federated Learning for 6G Semantic Communications | Feb 1, 2023 | Data Poisoning, Decoder | Unverified | 0 |
| Can We Trust the Similarity Measurement in Federated Learning? | Oct 20, 2023 | Federated Learning, Model Poisoning | Unverified | 0 |