SOTAVerified

Model Poisoning

Papers

Showing 51–100 of 108 papers

Title | Status | Hype
SureFED: Robust Federated Learning via Uncertainty-Aware Inward and Outward Inspection | - | 0
SAFELearning: Enable Backdoor Detectability In Federated Learning With Secure Aggregation | - | 0
Security Analysis of SplitFed Learning | - | 0
SLVR: Securely Leveraging Client Validation for Robust Federated Learning | - | 0
SPIN: Simulated Poisoning and Inversion Network for Federated Learning-Based 6G Vehicular Networks | - | 0
Studying the Robustness of Anti-adversarial Federated Learning Models Detecting Cyberattacks in IoT Spectrum Sensors | - | 0
Tazza: Shuffling Neural Network Parameters for Secure and Private Federated Learning | - | 0
TESSERACT: Gradient Flip Score to Secure Federated Learning Against Model Poisoning Attacks | - | 0
Trojan Horse Hunt in Time Series Forecasting for Space Operations | - | 0
Two Heads Are Better than One: Model-Weight and Latent-Space Analysis for Federated Learning on Non-iid Data against Poisoning Attacks | - | 0
Untargeted Poisoning Attack Detection in Federated Learning via Behavior Attestation | - | 0
VerifBFL: Leveraging zk-SNARKs for A Verifiable Blockchained Federated Learning | - | 0
Mitigating Malicious Attacks in Federated Learning via Confidence-aware Defense | - | 0
GRANITE: A Byzantine-Resilient Dynamic Gossip Learning Framework | - | 0
How Potent are Evasion Attacks for Poisoning Federated Learning-Based Signal Classifiers? | - | 0
How to Defend Against Large-scale Model Poisoning Attacks in Federated Learning: A Vertical Solution | - | 0
HYDRA-FL: Hybrid Knowledge Distillation for Robust and Accurate Federated Learning | - | 0
WW-FL: Secure and Private Large-Scale Federated Learning | - | 0
Kick Bad Guys Out! Conditionally Activated Anomaly Detection in Federated Learning with Zero-Knowledge Proof Verification | - | 0
Latency Optimization for Blockchain-Empowered Federated Learning in Multi-Server Edge Computing | - | 0
A Data-Driven Defense against Edge-case Model Poisoning Attacks on Federated Learning | - | 0
Learning to Detect Malicious Clients for Robust Federated Learning | - | 0
Local Model Poisoning Attacks to Byzantine-Robust Federated Learning | - | 0
Manipulating Visually-aware Federated Recommender Systems and Its Countermeasures | - | 0
Maximizing Uncertainty for Federated Learning via Bayesian Optimisation-based Model Poisoning | - | 0
Meta Stackelberg Game: Robust Federated Learning against Adaptive and Mixed Poisoning Attacks | - | 0
Mitigating Evasion Attacks in Federated Learning-Based Signal Classifiers | - | 0
Mixed Strategy Game Model Against Data Poisoning Attacks | - | 0
Multi-Model based Federated Learning Against Model Poisoning Attack: A Deep Learning Based Model Selection for MEC Systems | - | 0
Not All Edges are Equally Robust: Evaluating the Robustness of Ranking-Based Federated Learning | - | 0
No Vandalism: Privacy-Preserving and Byzantine-Robust Federated Learning | - | 0
On the Hardness of Decentralized Multi-Agent Policy Evaluation under Byzantine Attacks | - | 0
Partner in Crime: Boosting Targeted Poisoning Attacks against Federated Learning | - | 0
Performance Guaranteed Poisoning Attacks in Federated Learning: A Sliding Mode Approach | - | 0
Performance Weighting for Robust Federated Learning Against Corrupted Sources | - | 0
PFAttack: Stealthy Attack Bypassing Group Fairness in Federated Learning | - | 0
pFedGame -- Decentralized Federated Learning using Game Theory in Dynamic Topology | - | 0
PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion | - | 0
Poisoning Bayesian Inference via Data Deletion and Replication | - | 0
Poisoning Decentralized Collaborative Recommender System and Its Countermeasures | - | 0
Poster: Sponge ML Model Attacks of Mobile Apps | - | 0
PRECAD: Privacy-Preserving and Robust Federated Learning via Crypto-Aided Differential Privacy | - | 0
RepuNet: A Reputation System for Mitigating Malicious Clients in DFL | - | 0
Resilience of Wireless Ad Hoc Federated Learning against Model Poisoning Attacks | - | 0
FedSECA: Sign Election and Coordinate-wise Aggregation of Gradients for Byzantine Tolerant Federated Learning | Code | 0
Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks | Code | 0
Dual Defense: Enhancing Privacy and Mitigating Poisoning Attacks in Federated Learning | Code | 0
On the Security Risks of AutoML | Code | 0
EAB-FL: Exacerbating Algorithmic Bias through Model Poisoning Attacks in Federated Learning | Code | 0
A Novel Defense Against Poisoning Attacks on Federated Learning: LayerCAM Augmented with Autoencoder | Code | 0
Page 2 of 3

No leaderboard results yet.