SOTAVerified

Model Poisoning

Papers

Showing 41–50 of 108 papers

Title | Status | Hype
Identifying the Truth of Global Model: A Generic Solution to Defend Against Byzantine and Backdoor Attacks in Federated Learning (full version) | | 0
Can We Trust the Similarity Measurement in Federated Learning? | | 0
Kick Bad Guys Out! Conditionally Activated Anomaly Detection in Federated Learning with Zero-Knowledge Proof Verification | | 0
DISBELIEVE: Distance Between Client Models is Very Essential for Effective Local Model Poisoning Attacks | | 0
SureFED: Robust Federated Learning via Uncertainty-Aware Inward and Outward Inspection | | 0
FedDefender: Client-Side Attack-Tolerant Federated Learning | Code | 1
An Analysis of Untargeted Poisoning Attack and Defense Methods for Federated Online Learning to Rank Systems | | 0
A First Order Meta Stackelberg Method for Robust Federated Learning | | 0
Anticipatory Thinking Challenges in Open Worlds: Risk Management | | 0
Mitigating Evasion Attacks in Federated Learning-Based Signal Classifiers | | 0
Page 5 of 11

No leaderboard results yet.