SOTAVerified

Model Poisoning

Papers

Showing 1–25 of 108 papers

| Title | Status | Hype |
|---|---|---|
| RepuNet: A Reputation System for Mitigating Malicious Clients in DFL | | 0 |
| Federated Learning-Based Data Collaboration Method for Enhancing Edge Cloud AI System Security Using Large Language Models | | 0 |
| Trojan Horse Hunt in Time Series Forecasting for Space Operations | | 0 |
| Performance Guaranteed Poisoning Attacks in Federated Learning: A Sliding Mode Approach | | 0 |
| GRANITE: a Byzantine-Resilient Dynamic Gossip Learning Framework | | 0 |
| A Client-level Assessment of Collaborative Backdoor Poisoning in Non-IID Federated Learning | | 0 |
| Two Heads Are Better than One: Model-Weight and Latent-Space Analysis for Federated Learning on Non-iid Data against Poisoning Attacks | | 0 |
| Not All Edges are Equally Robust: Evaluating the Robustness of Ranking-Based Federated Learning | | 0 |
| Poisoning Bayesian Inference via Data Deletion and Replication | | 0 |
| SLVR: Securely Leveraging Client Validation for Robust Federated Learning | | 0 |
| Dual Defense: Enhancing Privacy and Mitigating Poisoning Attacks in Federated Learning | Code | 0 |
| DMPA: Model Poisoning Attacks on Decentralized Federated Learning for Model Differences | | 0 |
| SoK: Benchmarking Poisoning Attacks and Defenses in Federated Learning | Code | 2 |
| Maximizing Uncertainty for Federated learning via Bayesian Optimisation-based Model Poisoning | | 0 |
| VerifBFL: Leveraging zk-SNARKs for A Verifiable Blockchained Federated Learning | | 0 |
| Tazza: Shuffling Neural Network Parameters for Secure and Private Federated Learning | | 0 |
| DeTrigger: A Gradient-Centric Approach to Backdoor Attack Mitigation in Federated Learning | | 0 |
| How to Defend Against Large-scale Model Poisoning Attacks in Federated Learning: A Vertical Solution | | 0 |
| FedSECA: Sign Election and Coordinate-wise Aggregation of Gradients for Byzantine Tolerant Federated Learning | Code | 0 |
| Meta Stackelberg Game: Robust Federated Learning against Adaptive and Mixed Poisoning Attacks | | 0 |
| PFAttack: Stealthy Attack Bypassing Group Fairness in Federated Learning | | 0 |
| pFedGame -- Decentralized Federated Learning using Game Theory in Dynamic Topology | | 0 |
| EAB-FL: Exacerbating Algorithmic Bias through Model Poisoning Attacks in Federated Learning | Code | 0 |
| HYDRA-FL: Hybrid Knowledge Distillation for Robust and Accurate Federated Learning | | 0 |
| On the Hardness of Decentralized Multi-Agent Policy Evaluation under Byzantine Attacks | | 0 |
Page 1 of 5

No leaderboard results yet.