SOTAVerified

Model Poisoning

Papers

Showing 1–50 of 108 papers

Title | Status | Hype
SoK: Benchmarking Poisoning Attacks and Defenses in Federated Learning | Code | 2
BEAS: Blockchain Enabled Asynchronous & Secure Federated Machine Learning | Code | 1
Robust Federated Learning with Attack-Adaptive Aggregation | Code | 1
Analyzing Federated Learning through an Adversarial Lens | Code | 1
Byzantine-robust Federated Learning through Collaborative Malicious Gradient Filtering | Code | 1
How To Backdoor Federated Learning | Code | 1
FedRecAttack: Model Poisoning Attack to Federated Recommendation | Code | 1
Ditto: Fair and Robust Federated Learning Through Personalization | Code | 1
ARFED: Attack-Resistant Federated averaging based on outlier elimination | Code | 1
Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Production Federated Learning | Code | 1
FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective | Code | 1
FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients | Code | 1
FedDefender: Client-Side Attack-Tolerant Federated Learning | Code | 1
Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning | Code | 1
Using Anomaly Detection to Detect Poisoning Attacks in Federated Learning Applications | - | 0
Backdoor Attacks in Federated Learning by Rare Embeddings and Gradient Ensembling | - | 0
Protecting Federated Learning from Extreme Model Poisoning Attacks via Multidimensional Time Series Anomaly Detection | - | 0
A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples | - | 0
A Streamlit-based Artificial Intelligence Trust Platform for Next-Generation Wireless Networks | - | 0
A First Order Meta Stackelberg Method for Robust Federated Learning | - | 0
How to Defend Against Large-scale Model Poisoning Attacks in Federated Learning: A Vertical Solution | - | 0
Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning | - | 0
Turning Federated Learning Systems Into Covert Channels | - | 0
Covert Model Poisoning Against Federated Learning: Algorithm Design and Optimization | - | 0
Data-Agnostic Model Poisoning against Federated Learning: A Graph Autoencoder Approach | - | 0
Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey | - | 0
FedPerm: Private and Robust Federated Learning by Parameter Permutation | - | 0
Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning | - | 0
Anticipatory Thinking Challenges in Open Worlds: Risk Management | - | 0
DeTrigger: A Gradient-Centric Approach to Backdoor Attack Mitigation in Federated Learning | - | 0
CATFL: Certificateless Authentication-based Trustworthy Federated Learning for 6G Semantic Communications | - | 0
Can We Trust the Similarity Measurement in Federated Learning? | - | 0
A Client-level Assessment of Collaborative Backdoor Poisoning in Non-IID Federated Learning | - | 0
2CP: Decentralized Protocols to Transparently Evaluate Contributivity in Blockchain Federated Learning Environments | - | 0
FedCom: A Byzantine-Robust Local Model Aggregation Rule Using Data Commitment for Federated Learning | - | 0
FedCC: Robust Federated Learning against Model Poisoning Attacks | - | 0
CADeSH: Collaborative Anomaly Detection for Smart Homes | - | 0
Exact Support Recovery in Federated Regression with One-shot Communication | - | 0
An Analysis of Untargeted Poisoning Attack and Defense Methods for Federated Online Learning to Rank Systems | - | 0
Federated Learning: Balancing the Thin Line Between Data Intelligence and Privacy | - | 0
Federated Learning-Based Data Collaboration Method for Enhancing Edge Cloud AI System Security Using Large Language Models | - | 0
Resilience in Online Federated Learning: Mitigating Model-Poisoning Attacks via Partial Sharing | - | 0
ACE: A Model Poisoning Attack on Contribution Evaluation Methods in Federated Learning | - | 0
FedRAD: Federated Robust Adaptive Distillation | - | 0
FedRDF: A Robust and Dynamic Aggregation Function against Poisoning Attacks in Federated Learning | - | 0
DMPA: Model Poisoning Attacks on Decentralized Federated Learning for Model Differences | - | 0
DISBELIEVE: Distance Between Client Models is Very Essential for Effective Local Model Poisoning Attacks | - | 0
Identifying the Truth of Global Model: A Generic Solution to Defend Against Byzantine and Backdoor Attacks in Federated Learning (full version) | - | 0
BaFFLe: Backdoor detection via Feedback-based Federated Learning | - | 0
Mitigating Malicious Attacks in Federated Learning via Confidence-aware Defense | - | 0
Page 1 of 3

No leaderboard results yet.