SOTAVerified

Model Poisoning

Papers

Showing 51–100 of 108 papers

| Title | Status | Hype |
|---|---|---|
| FedRDF: A Robust and Dynamic Aggregation Function against Poisoning Attacks in Federated Learning | | 0 |
| Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey | | 0 |
| Data-Agnostic Model Poisoning against Federated Learning: A Graph Autoencoder Approach | | 0 |
| Identifying the Truth of Global Model: A Generic Solution to Defend Against Byzantine and Backdoor Attacks in Federated Learning (full version) | | 0 |
| Can We Trust the Similarity Measurement in Federated Learning? | | 0 |
| Kick Bad Guys Out! Conditionally Activated Anomaly Detection in Federated Learning with Zero-Knowledge Proof Verification | | 0 |
| DISBELIEVE: Distance Between Client Models is Very Essential for Effective Local Model Poisoning Attacks | | 0 |
| SureFED: Robust Federated Learning via Uncertainty-Aware Inward and Outward Inspection | | 0 |
| An Analysis of Untargeted Poisoning Attack and Defense Methods for Federated Online Learning to Rank Systems | | 0 |
| A First Order Meta Stackelberg Method for Robust Federated Learning | | 0 |
| Anticipatory Thinking Challenges in Open Worlds: Risk Management | | 0 |
| Mitigating Evasion Attacks in Federated Learning-Based Signal Classifiers | | 0 |
| Manipulating Visually-aware Federated Recommender Systems and Its Countermeasures | | 0 |
| A Data-Driven Defense against Edge-case Model Poisoning Attacks on Federated Learning | | 0 |
| Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning | | 0 |
| Protecting Federated Learning from Extreme Model Poisoning Attacks via Multidimensional Time Series Anomaly Detection | | 0 |
| Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks | Code | 0 |
| CADeSH: Collaborative Anomaly Detection for Smart Homes | | 0 |
| Poster: Sponge ML Model Attacks of Mobile Apps | | 0 |
| WW-FL: Secure and Private Large-Scale Federated Learning | | 0 |
| CATFL: Certificateless Authentication-based Trustworthy Federated Learning for 6G Semantic Communications | | 0 |
| How Potent are Evasion Attacks for Poisoning Federated Learning-Based Signal Classifiers? | | 0 |
| FedCC: Robust Federated Learning against Model Poisoning Attacks | | 0 |
| Security Analysis of SplitFed Learning | | 0 |
| SPIN: Simulated Poisoning and Inversion Network for Federated Learning-Based 6G Vehicular Networks | | 0 |
| Resilience of Wireless Ad Hoc Federated Learning against Model Poisoning Attacks | | 0 |
| A Streamlit-based Artificial Intelligence Trust Platform for Next-Generation Wireless Networks | | 0 |
| Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor Attacks in Federated Learning | Code | 0 |
| FedPerm: Private and Robust Federated Learning by Parameter Permutation | | 0 |
| Using Anomaly Detection to Detect Poisoning Attacks in Federated Learning Applications | | 0 |
| Performance Weighting for Robust Federated Learning Against Corrupted Sources | | 0 |
| Backdoor Attacks in Federated Learning by Rare Embeddings and Gradient Ensembling | | 0 |
| Federated Learning: Balancing the Thin Line Between Data Intelligence and Privacy | | 0 |
| Semi-Targeted Model Poisoning Attack on Federated Learning via Backward Error Analysis | Code | 0 |
| Latency Optimization for Blockchain-Empowered Federated Learning in Multi-Server Edge Computing | | 0 |
| MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients | Code | 0 |
| Studying the Robustness of Anti-adversarial Federated Learning Models Detecting Cyberattacks in IoT Spectrum Sensors | | 0 |
| Towards Understanding Quality Challenges of the Federated Learning for Neural Networks: A First Look from the Lens of Robustness | Code | 0 |
| SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification | Code | 0 |
| FedRAD: Federated Robust Adaptive Distillation | | 0 |
| PRECAD: Privacy-Preserving and Robust Federated Learning via Crypto-Aided Differential Privacy | | 0 |
| PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion | | 0 |
| TESSERACT: Gradient Flip Score to Secure Federated Learning Against Model Poisoning Attacks | | 0 |
| On the Security Risks of AutoML | Code | 0 |
| A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples | | 0 |
| Turning Federated Learning Systems Into Covert Channels | | 0 |
| FedCom: A Byzantine-Robust Local Model Aggregation Rule Using Data Commitment for Federated Learning | | 0 |
| SAFELearning: Enable Backdoor Detectability In Federated Learning With Secure Aggregation | | 0 |
| Covert Model Poisoning Against Federated Learning: Algorithm Design and Optimization | | 0 |
| Untargeted Poisoning Attack Detection in Federated Learning via Behavior Attestation | | 0 |
Page 2 of 3