| Title | Date | Tags | Code | Count |
|---|---|---|---|---|
| Anticipatory Thinking Challenges in Open Worlds: Risk Management | Jun 22, 2023 | Adversarial Robustness, Autonomous Vehicles | Unverified | 0 |
| Mitigating Evasion Attacks in Federated Learning-Based Signal Classifiers | Jun 8, 2023 | Adversarial Attack, Federated Learning | Unverified | 0 |
| Manipulating Visually-aware Federated Recommender Systems and Its Countermeasures | May 14, 2023 | Collaborative Filtering, Model Poisoning | Unverified | 0 |
| A Data-Driven Defense against Edge-case Model Poisoning Attacks on Federated Learning | May 3, 2023 | Federated Learning, Model Poisoning | Unverified | 0 |
| Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning | Apr 21, 2023 | Federated Learning, Model Poisoning | Unverified | 0 |
| Protecting Federated Learning from Extreme Model Poisoning Attacks via Multidimensional Time Series Anomaly Detection | Mar 29, 2023 | Anomaly Detection, Federated Learning | Unverified | 0 |
| Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks | Mar 7, 2023 | Data Poisoning, Model Poisoning | Code Available | 0 |
| CADeSH: Collaborative Anomaly Detection for Smart Homes | Mar 2, 2023 | Anomaly Detection, Intrusion Detection | Unverified | 0 |
| Poster: Sponge ML Model Attacks of Mobile Apps | Mar 1, 2023 | Attribute, Federated Learning | Unverified | 0 |
| WW-FL: Secure and Private Large-Scale Federated Learning | Feb 20, 2023 | Data Poisoning, Federated Learning | Unverified | 0 |