SOTAVerified

Model Poisoning

Papers

Showing 1–50 of 108 papers

Title | Status | Hype
SoK: Benchmarking Poisoning Attacks and Defenses in Federated Learning | Code | 2
Byzantine-robust Federated Learning through Collaborative Malicious Gradient Filtering | Code | 1
FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective | Code | 1
Ditto: Fair and Robust Federated Learning Through Personalization | Code | 1
Robust Federated Learning with Attack-Adaptive Aggregation | Code | 1
ARFED: Attack-Resistant Federated averaging based on outlier elimination | Code | 1
Analyzing Federated Learning through an Adversarial Lens | Code | 1
FedDefender: Client-Side Attack-Tolerant Federated Learning | Code | 1
Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning | Code | 1
FedRecAttack: Model Poisoning Attack to Federated Recommendation | Code | 1
FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients | Code | 1
BEAS: Blockchain Enabled Asynchronous & Secure Federated Machine Learning | Code | 1
How To Backdoor Federated Learning | Code | 1
Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Production Federated Learning | Code | 1
Defending Against Sophisticated Poisoning Attacks with RL-based Aggregation in Federated Learning | Code | 0
On the Security Risks of AutoML | Code | 0
EAB-FL: Exacerbating Algorithmic Bias through Model Poisoning Attacks in Federated Learning | Code | 0
FedSECA: Sign Election and Coordinate-wise Aggregation of Gradients for Byzantine Tolerant Federated Learning | Code | 0
Towards Understanding Quality Challenges of the Federated Learning for Neural Networks: A First Look from the Lens of Robustness | Code | 0
A Novel Defense Against Poisoning Attacks on Federated Learning: LayerCAM Augmented with Autoencoder | Code | 0
SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification | Code | 0
Semi-Targeted Model Poisoning Attack on Federated Learning via Backward Error Analysis | Code | 0
Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks | Code | 0
MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients | Code | 0
Dual Defense: Enhancing Privacy and Mitigating Poisoning Attacks in Federated Learning | Code | 0
Leverage Variational Graph Representation For Model Poisoning on Federated Learning | Code | 0
Mitigating Sybils in Federated Learning Poisoning | Code | 0
Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor Attacks in Federated Learning | Code | 0
DeTrigger: A Gradient-Centric Approach to Backdoor Attack Mitigation in Federated Learning | - | 0
Using Anomaly Detection to Detect Poisoning Attacks in Federated Learning Applications | - | 0
Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning | - | 0
Backdoor Attacks in Federated Learning by Rare Embeddings and Gradient Ensembling | - | 0
Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey | - | 0
Data-Agnostic Model Poisoning against Federated Learning: A Graph Autoencoder Approach | - | 0
A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples | - | 0
Covert Model Poisoning Against Federated Learning: Algorithm Design and Optimization | - | 0
Turning Federated Learning Systems Into Covert Channels | - | 0
A Streamlit-based Artificial Intelligence Trust Platform for Next-Generation Wireless Networks | - | 0
A First Order Meta Stackelberg Method for Robust Federated Learning | - | 0
Protecting Federated Learning from Extreme Model Poisoning Attacks via Multidimensional Time Series Anomaly Detection | - | 0
Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning | - | 0
FedRAD: Federated Robust Adaptive Distillation | - | 0
FedPerm: Private and Robust Federated Learning by Parameter Permutation | - | 0
Anticipatory Thinking Challenges in Open Worlds: Risk Management | - | 0
FedRDF: A Robust and Dynamic Aggregation Function against Poisoning Attacks in Federated Learning | - | 0
Federated Learning-Based Data Collaboration Method for Enhancing Edge Cloud AI System Security Using Large Language Models | - | 0
CATFL: Certificateless Authentication-based Trustworthy Federated Learning for 6G Semantic Communications | - | 0
Identifying the Truth of Global Model: A Generic Solution to Defend Against Byzantine and Backdoor Attacks in Federated Learning (full version) | - | 0
Federated Learning: Balancing the Thin Line Between Data Intelligence and Privacy | - | 0
Can We Trust the Similarity Measurement in Federated Learning? | - | 0
Page 1 of 3

No leaderboard results yet.