
Federated Learning

Federated learning is a machine learning approach in which multiple devices or organizations collaboratively train a shared model without exchanging their raw data. Instead of sending data to a central server for training, each participant trains the model locally and sends only its model updates to the server, which aggregates them to improve the shared model.

This supports privacy-preserving machine learning: each device keeps its data locally and shares only the model updates needed to improve the shared model.
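The train-locally-then-aggregate loop described above can be sketched in a few lines. This is a minimal illustration, assuming a linear least-squares model and FedAvg-style weighted averaging; the function names and toy data are invented for the example, not taken from any listed paper:

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=1):
    # One client's local training: gradient descent on a linear
    # least-squares model. The raw data (X, y) never leaves the
    # client; only the updated weights w are returned.
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(global_w, clients):
    # Server-side aggregation: average the client models, weighted
    # by each client's local dataset size (FedAvg-style).
    updates = [local_update(global_w, X, y) for X, y in clients]
    sizes = [len(y) for _, y in clients]
    return np.average(updates, axis=0, weights=sizes)

# Two clients holding disjoint private samples of the same linear task.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = [(X, X @ true_w) for X in
           (rng.normal(size=(40, 2)), rng.normal(size=(60, 2)))]

w = np.zeros(2)
for _ in range(200):  # 200 communication rounds
    w = fedavg_round(w, clients)
print(np.round(w, 2))  # approaches the true weights [ 2. -1.]
```

In each round, only the weight vector crosses the "network"; the per-client arrays stay inside `local_update`, which is the core privacy argument sketched in the paragraph above.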

Papers

Showing 101–150 of 6771 papers

Title | Status | Hype
Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization | Code | 1
A Blockchain-based Decentralized Federated Learning Framework with Committee Consensus | Code | 1
An Efficient and Reliable Asynchronous Federated Learning Scheme for Smart Public Transportation | Code | 1
EDEN: Communication-Efficient and Robust Distributed Mean Estimation for Federated Learning | Code | 1
Anomaly-Flow: A Multi-domain Federated Generative Adversarial Network for Distributed Denial-of-Service Detection | Code | 1
Communication-Efficient Heterogeneous Federated Learning with Generalized Heavy-Ball Momentum | Code | 1
An Efficient Approach for Cross-Silo Federated Learning to Rank | Code | 1
APPFL: Open-Source Software Framework for Privacy-Preserving Federated Learning | Code | 1
An Efficient Framework for Clustered Federated Learning | Code | 1
APPFLx: Providing Privacy-Preserving Cross-Silo Federated Learning as a Service | Code | 1
Accumulative Poisoning Attacks on Real-time Data | Code | 1
A Tree-based Model Averaging Approach for Personalized Treatment Effect Estimation from Heterogeneous Data Sources | Code | 1
ARIANN: Low-Interaction Privacy-Preserving Deep Learning via Function Secret Sharing | Code | 1
CRFL: Certifiably Robust Federated Learning against Backdoor Attacks | Code | 1
Exploring Federated Unlearning: Review, Comparison, and Insights | Code | 1
Cross-Silo Prototypical Calibration for Federated Learning with Non-IID Data | Code | 1
A Survey for Federated Learning Evaluations: Goals and Measures | Code | 1
DapperFL: Domain Adaptive Federated Learning with Model Fusion Pruning for Edge Devices | Code | 1
Analyzing Federated Learning through an Adversarial Lens | Code | 1
Data Valuation and Detections in Federated Learning | Code | 1
Decentralized Federated Learning: Balancing Communication and Computing Costs | Code | 1
A Survey on Vulnerability of Federated Learning: A Learning Algorithm Perspective | Code | 1
Async-HFL: Efficient and Robust Asynchronous Federated Learning in Hierarchical IoT Networks | Code | 1
Asynchronous Federated Continual Learning | Code | 1
Collaborative Fairness in Federated Learning | Code | 1
Asynchronous Federated Learning for Edge-assisted Vehicular Networks | Code | 1
ACCO: Accumulate While You Communicate for Communication-Overlapped Sharded LLM Training | Code | 1
CoCoFL: Communication- and Computation-Aware Federated Learning via Partial NN Freezing and Quantization | Code | 1
Acceleration of Federated Learning with Alleviated Forgetting in Local Training | Code | 1
Clustered Sampling: Low-Variance and Improved Representativity for Clients Selection in Federated Learning | Code | 1
CoDeC: Communication-Efficient Decentralized Continual Learning | Code | 1
Combating Exacerbated Heterogeneity for Robust Models in Federated Learning | Code | 1
A Hybrid Self-Supervised Learning Framework for Vertical Federated Learning | Code | 1
Classifier Clustering and Feature Alignment for Federated Learning under Distributed Concept Drift | Code | 1
Client-Level Differential Privacy via Adaptive Intermediary in Federated Medical Imaging | Code | 1
Enhancing Efficiency in Multidevice Federated Learning through Data Selection | Code | 1
Agnostic Federated Learning | Code | 1
Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning | Code | 1
Clients Collaborate: Flexible Differentially Private Federated Learning with Guaranteed Improvement of Utility-Privacy Trade-off | Code | 1
Catastrophic Data Leakage in Vertical Federated Learning | Code | 1
CAR-MFL: Cross-Modal Augmentation by Retrieval for Multimodal Federated Learning with Missing Modalities | Code | 1
CEFHRI: A Communication Efficient Federated Learning Framework for Recognizing Industrial Human-Robot Interaction | Code | 1
Can Textual Gradient Work in Federated Learning? | Code | 1
Omnidirectional Transfer for Quasilinear Lifelong Learning | Code | 1
CaPC Learning: Confidential and Private Collaborative Learning | Code | 1
CENSOR: Defense Against Gradient Inversion via Orthogonal Subspace Bayesian Sampling | Code | 1
CLIP-guided Federated Learning on Heterogeneous and Long-Tailed Data | Code | 1
Communication-Efficient Adaptive Federated Learning | Code | 1
C2A: Client-Customized Adaptation for Parameter-Efficient Federated Learning | Code | 1
ByzFL: Research Framework for Robust Federated Learning | Code | 1
Page 3 of 136

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | SiloBN + ASAM | mIoU | 49.75 | — | Unverified
2 | SiloBN + SAM | mIoU | 49.1 | — | Unverified
3 | SiloBN | mIoU | 45.96 | — | Unverified
4 | FedSAM + SWA | mIoU | 43.42 | — | Unverified
5 | FedASAM + SWA | mIoU | 43.02 | — | Unverified
6 | FedAvg + SWA | mIoU | 42.48 | — | Unverified
7 | FedASAM | mIoU | 42.27 | — | Unverified
8 | FedSAM | mIoU | 41.22 | — | Unverified
9 | FedAvg | mIoU | 38.65 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | FedASAM + SWA | Acc@1 (1262 clients) | 68.32 | — | Unverified
2 | FedSAM + SWA | Acc@1 (1262 clients) | 68.12 | — | Unverified
3 | FedAvg + SWA | Acc@1 (1262 clients) | 67.52 | — | Unverified
4 | FedASAM | Acc@1 (1262 clients) | 64.23 | — | Unverified
5 | FedSAM | Acc@1 (1262 clients) | 63.72 | — | Unverified
6 | FedAvg | Acc@1 (1262 clients) | 61.91 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | FedASAM + SWA | Acc@1 (100 clients) | 42.64 | — | Unverified
2 | FedASAM | Acc@1 (100 clients) | 39.76 | — | Unverified
3 | FedSAM + SWA | Acc@1 (100 clients) | 39.51 | — | Unverified
4 | FedSAM | Acc@1 (100 clients) | 36.93 | — | Unverified
5 | FedAvg | Acc@1 (100 clients) | 36.74 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | FedASAM + SWA | Acc@1 (100 clients) | 41.62 | — | Unverified
2 | FedASAM | Acc@1 (100 clients) | 40.81 | — | Unverified
3 | FedSAM + SWA | Acc@1 (100 clients) | 39.24 | — | Unverified
4 | FedAvg | Acc@1 (100 clients) | 38.59 | — | Unverified
5 | FedSAM | Acc@1 (100 clients) | 38.56 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | FedASAM + SWA | Acc@1 (100 clients) | 48.72 | — | Unverified
2 | FedSAM + SWA | Acc@1 (100 clients) | 46.76 | — | Unverified
3 | FedASAM | Acc@1 (100 clients) | 46.58 | — | Unverified
4 | FedSAM | Acc@1 (100 clients) | 44.84 | — | Unverified
5 | FedAvg | Acc@1 (100 clients) | 41.27 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | FedASAM + SWA | Acc@1 (100 clients) | 48.27 | — | Unverified
2 | FedASAM | Acc@1 (100 clients) | 47.78 | — | Unverified
3 | FedSAM + SWA | Acc@1 (100 clients) | 46.47 | — | Unverified
4 | FedSAM | Acc@1 (100 clients) | 46.05 | — | Unverified
5 | FedAvg | Acc@1 (100 clients) | 42.17 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | FedASAM + SWA | Acc@1 (100 clients) | 49.17 | — | Unverified
2 | FedSAM + SWA | Acc@1 (100 clients) | 47.96 | — | Unverified
3 | FedASAM | Acc@1 (100 clients) | 45.61 | — | Unverified
4 | FedSAM | Acc@1 (100 clients) | 44.73 | — | Unverified
5 | FedAvg | Acc@1 (100 clients) | 40.43 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | FedASAM + SWA | Acc@1 (100 clients) | 42.01 | — | Unverified
2 | FedSAM + SWA | Acc@1 (100 clients) | 39.3 | — | Unverified
3 | FedASAM | Acc@1 (100 clients) | 36.04 | — | Unverified
4 | FedSAM | Acc@1 (100 clients) | 31.04 | — | Unverified
5 | FedAvg | Acc@1 (100 clients) | 30.25 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | FedASAM | Acc@1 (100 clients) | 54.97 | — | Unverified
2 | FedASAM + SWA | Acc@1 (100 clients) | 54.79 | — | Unverified
3 | FedSAM + SWA | Acc@1 (100 clients) | 53.67 | — | Unverified
4 | FedSAM | Acc@1 (100 clients) | 53.39 | — | Unverified
5 | FedAvg | Acc@1 (100 clients) | 50.25 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | FedASAM | Acc@1 (100 clients) | 54.5 | — | Unverified
2 | FedSAM + SWA | Acc@1 (100 clients) | 54.36 | — | Unverified
3 | FedASAM + SWA | Acc@1 (100 clients) | 54.1 | — | Unverified
4 | FedSAM | Acc@1 (100 clients) | 53.97 | — | Unverified
5 | FedAvg | Acc@1 (100 clients) | 50.66 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | FedASAM | Acc@1 (100 clients) | 54.81 | — | Unverified
2 | FedSAM | Acc@1 (100 clients) | 54.01 | — | Unverified
3 | FedSAM + SWA | Acc@1 (100 clients) | 53.9 | — | Unverified
4 | FedASAM + SWA | Acc@1 (100 clients) | 53.86 | — | Unverified
5 | FedAvg | Acc@1 (100 clients) | 49.92 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | AdaBest | Average Top-1 Accuracy | 56.2 | — | Unverified