SOTAVerified

Model Poisoning

Papers

Showing 1–50 of 108 papers

Title | Status | Hype
SoK: Benchmarking Poisoning Attacks and Defenses in Federated Learning | Code | 2
FedDefender: Client-Side Attack-Tolerant Federated Learning | Code | 1
Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning | Code | 1
FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients | Code | 1
FedRecAttack: Model Poisoning Attack to Federated Recommendation | Code | 1
BEAS: Blockchain Enabled Asynchronous & Secure Federated Machine Learning | Code | 1
ARFED: Attack-Resistant Federated averaging based on outlier elimination | Code | 1
FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective | Code | 1
Byzantine-robust Federated Learning through Collaborative Malicious Gradient Filtering | Code | 1
Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Production Federated Learning | Code | 1
Robust Federated Learning with Attack-Adaptive Aggregation | Code | 1
Ditto: Fair and Robust Federated Learning Through Personalization | Code | 1
Analyzing Federated Learning through an Adversarial Lens | Code | 1
How To Backdoor Federated Learning | Code | 1
RepuNet: A Reputation System for Mitigating Malicious Clients in DFL | - | 0
Federated Learning-Based Data Collaboration Method for Enhancing Edge Cloud AI System Security Using Large Language Models | - | 0
Trojan Horse Hunt in Time Series Forecasting for Space Operations | - | 0
Performance Guaranteed Poisoning Attacks in Federated Learning: A Sliding Mode Approach | - | 0
GRANITE: a Byzantine-Resilient Dynamic Gossip Learning Framework | - | 0
A Client-level Assessment of Collaborative Backdoor Poisoning in Non-IID Federated Learning | - | 0
Two Heads Are Better than One: Model-Weight and Latent-Space Analysis for Federated Learning on Non-iid Data against Poisoning Attacks | - | 0
Not All Edges are Equally Robust: Evaluating the Robustness of Ranking-Based Federated Learning | - | 0
Poisoning Bayesian Inference via Data Deletion and Replication | - | 0
SLVR: Securely Leveraging Client Validation for Robust Federated Learning | - | 0
Dual Defense: Enhancing Privacy and Mitigating Poisoning Attacks in Federated Learning | Code | 0
DMPA: Model Poisoning Attacks on Decentralized Federated Learning for Model Differences | - | 0
Maximizing Uncertainty for Federated learning via Bayesian Optimisation-based Model Poisoning | - | 0
VerifBFL: Leveraging zk-SNARKs for A Verifiable Blockchained Federated Learning | - | 0
Tazza: Shuffling Neural Network Parameters for Secure and Private Federated Learning | - | 0
DeTrigger: A Gradient-Centric Approach to Backdoor Attack Mitigation in Federated Learning | - | 0
How to Defend Against Large-scale Model Poisoning Attacks in Federated Learning: A Vertical Solution | - | 0
FedSECA: Sign Election and Coordinate-wise Aggregation of Gradients for Byzantine Tolerant Federated Learning | Code | 0
Meta Stackelberg Game: Robust Federated Learning against Adaptive and Mixed Poisoning Attacks | - | 0
PFAttack: Stealthy Attack Bypassing Group Fairness in Federated Learning | - | 0
pFedGame -- Decentralized Federated Learning using Game Theory in Dynamic Topology | - | 0
EAB-FL: Exacerbating Algorithmic Bias through Model Poisoning Attacks in Federated Learning | Code | 0
HYDRA-FL: Hybrid Knowledge Distillation for Robust and Accurate Federated Learning | - | 0
On the Hardness of Decentralized Multi-Agent Policy Evaluation under Byzantine Attacks | - | 0
Multi-Model based Federated Learning Against Model Poisoning Attack: A Deep Learning Based Model Selection for MEC Systems | - | 0
Mitigating Malicious Attacks in Federated Learning via Confidence-aware Defense | - | 0
Partner in Crime: Boosting Targeted Poisoning Attacks against Federated Learning | - | 0
Defending Against Sophisticated Poisoning Attacks with RL-based Aggregation in Federated Learning | Code | 0
No Vandalism: Privacy-Preserving and Byzantine-Robust Federated Learning | - | 0
A Novel Defense Against Poisoning Attacks on Federated Learning: LayerCAM Augmented with Autoencoder | Code | 0
ACE: A Model Poisoning Attack on Contribution Evaluation Methods in Federated Learning | - | 0
Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning | - | 0
Leverage Variational Graph Representation For Model Poisoning on Federated Learning | Code | 0
Poisoning Decentralized Collaborative Recommender System and Its Countermeasures | - | 0
Robust Federated Contrastive Recommender System against Model Poisoning Attack | - | 0
Resilience in Online Federated Learning: Mitigating Model-Poisoning Attacks via Partial Sharing | - | 0
Page 1 of 3

No leaderboard results yet.