SOTAVerified

Model Poisoning

Papers

Showing 11–20 of 108 papers

| Title | Status | Hype |
| --- | --- | --- |
| FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients | Code | 1 |
| Ditto: Fair and Robust Federated Learning Through Personalization | Code | 1 |
| Analyzing Federated Learning through an Adversarial Lens | Code | 1 |
| Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Production Federated Learning | Code | 1 |
| Backdoor Attacks in Federated Learning by Rare Embeddings and Gradient Ensembling | | 0 |
| A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples | | 0 |
| A Streamlit-based Artificial Intelligence Trust Platform for Next-Generation Wireless Networks | | 0 |
| A First Order Meta Stackelberg Method for Robust Federated Learning | | 0 |
| Protecting Federated Learning from Extreme Model Poisoning Attacks via Multidimensional Time Series Anomaly Detection | | 0 |
| Anticipatory Thinking Challenges in Open Worlds: Risk Management | | 0 |
Page 2 of 11

No leaderboard results yet.