SOTAVerified

Model Poisoning

Papers

Showing 41–50 of 108 papers

Title | Status | Hype
Federated Learning-Based Data Collaboration Method for Enhancing Edge Cloud AI System Security Using Large Language Models | | 0
Resilience in Online Federated Learning: Mitigating Model-Poisoning Attacks via Partial Sharing | | 0
ACE: A Model Poisoning Attack on Contribution Evaluation Methods in Federated Learning | | 0
FedRAD: Federated Robust Adaptive Distillation | | 0
FedRDF: A Robust and Dynamic Aggregation Function against Poisoning Attacks in Federated Learning | | 0
DMPA: Model Poisoning Attacks on Decentralized Federated Learning for Model Differences | | 0
DISBELIEVE: Distance Between Client Models is Very Essential for Effective Local Model Poisoning Attacks | | 0
Identifying the Truth of Global Model: A Generic Solution to Defend Against Byzantine and Backdoor Attacks in Federated Learning (full version) | | 0
BaFFLe: Backdoor Detection via Feedback-based Federated Learning | | 0
Mitigating Malicious Attacks in Federated Learning via Confidence-aware Defense | | 0
Page 5 of 11

No leaderboard results yet.