| Title | Date | Topics | Code | Count |
| --- | --- | --- | --- | --- |
| PRECAD: Privacy-Preserving and Robust Federated Learning via Crypto-Aided Differential Privacy | Oct 22, 2021 | Federated Learning, Model Poisoning | Unverified | 0 |
| PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion | Oct 21, 2021 | Federated Learning, Model Poisoning | Unverified | 0 |
| TESSERACT: Gradient Flip Score to Secure Federated Learning Against Model Poisoning Attacks | Oct 19, 2021 | Federated Learning, Model Poisoning | Unverified | 0 |
| On the Security Risks of AutoML | Oct 12, 2021 | AutoML, Model Poisoning | Code Available | 0 |
| A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples | Sep 3, 2021 | Federated Learning, Model Poisoning | Unverified | 0 |
| Turning Federated Learning Systems Into Covert Channels | Apr 21, 2021 | Federated Learning, Model Poisoning | Unverified | 0 |
| FedCom: A Byzantine-Robust Local Model Aggregation Rule Using Data Commitment for Federated Learning | Apr 16, 2021 | Data Poisoning, Federated Learning | Unverified | 0 |
| SAFELearning: Enable Backdoor Detectability In Federated Learning With Secure Aggregation | Feb 4, 2021 | Anomaly Detection, Federated Learning | Unverified | 0 |
| Covert Model Poisoning Against Federated Learning: Algorithm Design and Optimization | Jan 28, 2021 | Federated Learning, Model Poisoning | Unverified | 0 |
| Untargeted Poisoning Attack Detection in Federated Learning via Behavior Attestation | Jan 24, 2021 | Federated Learning, Model Poisoning | Unverified | 0 |