SOTAVerified

Model Poisoning

Papers

Showing 51–75 of 108 papers

| Title | Status | Hype |
| --- | --- | --- |
| Manipulating Visually-aware Federated Recommender Systems and Its Countermeasures | | 0 |
| A Data-Driven Defense against Edge-case Model Poisoning Attacks on Federated Learning | | 0 |
| Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning | Code | 1 |
| Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning | | 0 |
| Protecting Federated Learning from Extreme Model Poisoning Attacks via Multidimensional Time Series Anomaly Detection | | 0 |
| Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks | Code | 0 |
| CADeSH: Collaborative Anomaly Detection for Smart Homes | | 0 |
| Poster: Sponge ML Model Attacks of Mobile Apps | | 0 |
| WW-FL: Secure and Private Large-Scale Federated Learning | | 0 |
| CATFL: Certificateless Authentication-based Trustworthy Federated Learning for 6G Semantic Communications | | 0 |
| How Potent are Evasion Attacks for Poisoning Federated Learning-Based Signal Classifiers? | | 0 |
| FedCC: Robust Federated Learning against Model Poisoning Attacks | | 0 |
| Security Analysis of SplitFed Learning | | 0 |
| SPIN: Simulated Poisoning and Inversion Network for Federated Learning-Based 6G Vehicular Networks | | 0 |
| Resilience of Wireless Ad Hoc Federated Learning against Model Poisoning Attacks | | 0 |
| A Streamlit-based Artificial Intelligence Trust Platform for Next-Generation Wireless Networks | | 0 |
| Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor Attacks in Federated Learning | Code | 0 |
| FedPerm: Private and Robust Federated Learning by Parameter Permutation | | 0 |
| FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients | Code | 1 |
| Using Anomaly Detection to Detect Poisoning Attacks in Federated Learning Applications | | 0 |
| Performance Weighting for Robust Federated Learning Against Corrupted Sources | | 0 |
| Backdoor Attacks in Federated Learning by Rare Embeddings and Gradient Ensembling | | 0 |
| Federated Learning: Balancing the Thin Line Between Data Intelligence and Privacy | | 0 |
| FedRecAttack: Model Poisoning Attack to Federated Recommendation | Code | 1 |
| Semi-Targeted Model Poisoning Attack on Federated Learning via Backward Error Analysis | Code | 0 |
Page 3 of 5
