SOTAVerified

Model Poisoning

Papers

Showing 26–50 of 108 papers

Title | Hype
Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey | 0
FedPerm: Private and Robust Federated Learning by Parameter Permutation | 0
Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning | 0
Anticipatory Thinking Challenges in Open Worlds: Risk Management | 0
DeTrigger: A Gradient-Centric Approach to Backdoor Attack Mitigation in Federated Learning | 0
CATFL: Certificateless Authentication-based Trustworthy Federated Learning for 6G Semantic Communications | 0
Can We Trust the Similarity Measurement in Federated Learning? | 0
A Client-level Assessment of Collaborative Backdoor Poisoning in Non-IID Federated Learning | 0
2CP: Decentralized Protocols to Transparently Evaluate Contributivity in Blockchain Federated Learning Environments | 0
FedCom: A Byzantine-Robust Local Model Aggregation Rule Using Data Commitment for Federated Learning | 0
FedCC: Robust Federated Learning against Model Poisoning Attacks | 0
CADeSH: Collaborative Anomaly Detection for Smart Homes | 0
Exact Support Recovery in Federated Regression with One-shot Communication | 0
An Analysis of Untargeted Poisoning Attack and Defense Methods for Federated Online Learning to Rank Systems | 0
Federated Learning: Balancing the Thin Line Between Data Intelligence and Privacy | 0
Federated Learning-Based Data Collaboration Method for Enhancing Edge Cloud AI System Security Using Large Language Models | 0
Resilience in Online Federated Learning: Mitigating Model-Poisoning Attacks via Partial Sharing | 0
ACE: A Model Poisoning Attack on Contribution Evaluation Methods in Federated Learning | 0
FedRAD: Federated Robust Adaptive Distillation | 0
FedRDF: A Robust and Dynamic Aggregation Function against Poisoning Attacks in Federated Learning | 0
DMPA: Model Poisoning Attacks on Decentralized Federated Learning for Model Differences | 0
DISBELIEVE: Distance Between Client Models is Very Essential for Effective Local Model Poisoning Attacks | 0
Identifying the Truth of Global Model: A Generic Solution to Defend Against Byzantine and Backdoor Attacks in Federated Learning (full version) | 0
BaFFLe: Backdoor detection via Feedback-based Federated Learning | 0
Mitigating Malicious Attacks in Federated Learning via Confidence-aware Defense | 0
Page 2 of 5

No leaderboard results yet.