SOTAVerified

Model Poisoning

Papers

Showing 51–100 of 108 papers

Multi-Model based Federated Learning Against Model Poisoning Attack: A Deep Learning Based Model Selection for MEC Systems
Not All Edges are Equally Robust: Evaluating the Robustness of Ranking-Based Federated Learning
No Vandalism: Privacy-Preserving and Byzantine-Robust Federated Learning
On the Hardness of Decentralized Multi-Agent Policy Evaluation under Byzantine Attacks
Partner in Crime: Boosting Targeted Poisoning Attacks against Federated Learning
Performance Guaranteed Poisoning Attacks in Federated Learning: A Sliding Mode Approach
Performance Weighting for Robust Federated Learning Against Corrupted Sources
PFAttack: Stealthy Attack Bypassing Group Fairness in Federated Learning
pFedGame -- Decentralized Federated Learning using Game Theory in Dynamic Topology
PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion
Poisoning Bayesian Inference via Data Deletion and Replication
Poisoning Decentralized Collaborative Recommender System and Its Countermeasures
Poster: Sponge ML Model Attacks of Mobile Apps
PRECAD: Privacy-Preserving and Robust Federated Learning via Crypto-Aided Differential Privacy
RepuNet: A Reputation System for Mitigating Malicious Clients in DFL
Resilience of Wireless Ad Hoc Federated Learning against Model Poisoning Attacks
Robust Federated Contrastive Recommender System against Model Poisoning Attack
SureFED: Robust Federated Learning via Uncertainty-Aware Inward and Outward Inspection
SAFELearning: Enable Backdoor Detectability In Federated Learning With Secure Aggregation
Security Analysis of SplitFed Learning
SLVR: Securely Leveraging Client Validation for Robust Federated Learning
SPIN: Simulated Poisoning and Inversion Network for Federated Learning-Based 6G Vehicular Networks
Studying the Robustness of Anti-adversarial Federated Learning Models Detecting Cyberattacks in IoT Spectrum Sensors
Tazza: Shuffling Neural Network Parameters for Secure and Private Federated Learning
TESSERACT: Gradient Flip Score to Secure Federated Learning Against Model Poisoning Attacks
Trojan Horse Hunt in Time Series Forecasting for Space Operations
Two Heads Are Better than One: Model-Weight and Latent-Space Analysis for Federated Learning on Non-iid Data against Poisoning Attacks
Untargeted Poisoning Attack Detection in Federated Learning via Behavior Attestation
VerifBFL: Leveraging zk-SNARKs for A Verifiable Blockchained Federated Learning
Mitigating Malicious Attacks in Federated Learning via Confidence-aware Defense
You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion
2CP: Decentralized Protocols to Transparently Evaluate Contributivity in Blockchain Federated Learning Environments
Protecting Federated Learning from Extreme Model Poisoning Attacks via Multidimensional Time Series Anomaly Detection
ACE: A Model Poisoning Attack on Contribution Evaluation Methods in Federated Learning
A Client-level Assessment of Collaborative Backdoor Poisoning in Non-IID Federated Learning
A First Order Meta Stackelberg Method for Robust Federated Learning
Resilience in Online Federated Learning: Mitigating Model-Poisoning Attacks via Partial Sharing
An Analysis of Untargeted Poisoning Attack and Defense Methods for Federated Online Learning to Rank Systems
Anticipatory Thinking Challenges in Open Worlds: Risk Management
A Streamlit-based Artificial Intelligence Trust Platform for Next-Generation Wireless Networks
A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples
Backdoor Attacks in Federated Learning by Rare Embeddings and Gradient Ensembling
BaFFLe: Backdoor detection via Feedback-based Federated Learning
CADeSH: Collaborative Anomaly Detection for Smart Homes
Can We Trust the Similarity Measurement in Federated Learning?
CATFL: Certificateless Authentication-based Trustworthy Federated Learning for 6G Semantic Communications
Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning
Turning Federated Learning Systems Into Covert Channels
Covert Model Poisoning Against Federated Learning: Algorithm Design and Optimization
Data-Agnostic Model Poisoning against Federated Learning: A Graph Autoencoder Approach
Page 2 of 3