SOTAVerified

Model Poisoning

Papers

Showing 1–50 of 108 papers

Title | Status | Hype
RepuNet: A Reputation System for Mitigating Malicious Clients in DFL | — | 0
Federated Learning-Based Data Collaboration Method for Enhancing Edge Cloud AI System Security Using Large Language Models | — | 0
Trojan Horse Hunt in Time Series Forecasting for Space Operations | — | 0
Performance Guaranteed Poisoning Attacks in Federated Learning: A Sliding Mode Approach | — | 0
GRANITE: a Byzantine-Resilient Dynamic Gossip Learning Framework | — | 0
A Client-level Assessment of Collaborative Backdoor Poisoning in Non-IID Federated Learning | — | 0
Two Heads Are Better than One: Model-Weight and Latent-Space Analysis for Federated Learning on Non-iid Data against Poisoning Attacks | — | 0
Not All Edges are Equally Robust: Evaluating the Robustness of Ranking-Based Federated Learning | — | 0
Poisoning Bayesian Inference via Data Deletion and Replication | — | 0
SLVR: Securely Leveraging Client Validation for Robust Federated Learning | — | 0
Dual Defense: Enhancing Privacy and Mitigating Poisoning Attacks in Federated Learning | Code | 0
DMPA: Model Poisoning Attacks on Decentralized Federated Learning for Model Differences | — | 0
SoK: Benchmarking Poisoning Attacks and Defenses in Federated Learning | Code | 2
Maximizing Uncertainty for Federated learning via Bayesian Optimisation-based Model Poisoning | — | 0
VerifBFL: Leveraging zk-SNARKs for A Verifiable Blockchained Federated Learning | — | 0
Tazza: Shuffling Neural Network Parameters for Secure and Private Federated Learning | — | 0
DeTrigger: A Gradient-Centric Approach to Backdoor Attack Mitigation in Federated Learning | — | 0
How to Defend Against Large-scale Model Poisoning Attacks in Federated Learning: A Vertical Solution | — | 0
FedSECA: Sign Election and Coordinate-wise Aggregation of Gradients for Byzantine Tolerant Federated Learning | Code | 0
Meta Stackelberg Game: Robust Federated Learning against Adaptive and Mixed Poisoning Attacks | — | 0
PFAttack: Stealthy Attack Bypassing Group Fairness in Federated Learning | — | 0
pFedGame -- Decentralized Federated Learning using Game Theory in Dynamic Topology | — | 0
EAB-FL: Exacerbating Algorithmic Bias through Model Poisoning Attacks in Federated Learning | Code | 0
HYDRA-FL: Hybrid Knowledge Distillation for Robust and Accurate Federated Learning | — | 0
On the Hardness of Decentralized Multi-Agent Policy Evaluation under Byzantine Attacks | — | 0
Multi-Model based Federated Learning Against Model Poisoning Attack: A Deep Learning Based Model Selection for MEC Systems | — | 0
Mitigating Malicious Attacks in Federated Learning via Confidence-aware Defense | — | 0
Partner in Crime: Boosting Targeted Poisoning Attacks against Federated Learning | — | 0
Defending Against Sophisticated Poisoning Attacks with RL-based Aggregation in Federated Learning | Code | 0
No Vandalism: Privacy-Preserving and Byzantine-Robust Federated Learning | — | 0
A Novel Defense Against Poisoning Attacks on Federated Learning: LayerCAM Augmented with Autoencoder | Code | 0
ACE: A Model Poisoning Attack on Contribution Evaluation Methods in Federated Learning | — | 0
Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning | — | 0
Leverage Variational Graph Representation For Model Poisoning on Federated Learning | Code | 0
Poisoning Decentralized Collaborative Recommender System and Its Countermeasures | — | 0
Robust Federated Contrastive Recommender System against Model Poisoning Attack | — | 0
Resilience in Online Federated Learning: Mitigating Model-Poisoning Attacks via Partial Sharing | — | 0
FedRDF: A Robust and Dynamic Aggregation Function against Poisoning Attacks in Federated Learning | — | 0
Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey | — | 0
Data-Agnostic Model Poisoning against Federated Learning: A Graph Autoencoder Approach | — | 0
Identifying the Truth of Global Model: A Generic Solution to Defend Against Byzantine and Backdoor Attacks in Federated Learning (full version) | — | 0
Can We Trust the Similarity Measurement in Federated Learning? | — | 0
Kick Bad Guys Out! Conditionally Activated Anomaly Detection in Federated Learning with Zero-Knowledge Proof Verification | — | 0
DISBELIEVE: Distance Between Client Models is Very Essential for Effective Local Model Poisoning Attacks | — | 0
SureFED: Robust Federated Learning via Uncertainty-Aware Inward and Outward Inspection | — | 0
FedDefender: Client-Side Attack-Tolerant Federated Learning | Code | 1
An Analysis of Untargeted Poisoning Attack and Defense Methods for Federated Online Learning to Rank Systems | — | 0
A First Order Meta Stackelberg Method for Robust Federated Learning | — | 0
Anticipatory Thinking Challenges in Open Worlds: Risk Management | — | 0
Mitigating Evasion Attacks in Federated Learning-Based Signal Classifiers | — | 0
Page 1 of 3