Regret-Optimal Federated Transfer Learning for Kernel Regression with Applications in American Option Pricing
Xuwei Yang, Anastasis Kratsios, Florian Krach, Matheus Grasselli, Aurelien Lucchi
Code: github.com/floriankrach/regretoptimalfederatedtransferlearning (official, PyTorch)
Abstract
We propose an optimal iterative scheme for federated transfer learning, where a central planner has access to datasets $D_1,\dots,D_N$ for the same learning model $f_\theta$. Our objective is to minimize the cumulative deviation of the generated parameters $\{\theta_i(t)\}_{t=0}^T$ across all $T$ iterations from the specialized parameters $\theta^\star_1,\dots,\theta^\star_N$ obtained for each dataset, while respecting the loss function for the model $f_{\theta(T)}$ produced by the algorithm upon halting. We only allow for continual communication between each of the specialized models (nodes/agents) and the central planner (server) at each iteration (round). For the case where the model $f_\theta$ is a finite-rank kernel regression, we derive explicit updates for the regret-optimal algorithm. By leveraging symmetries within the regret-optimal algorithm, we further develop a nearly regret-optimal heuristic that runs with $\mathcal{O}(Np^2)$ fewer elementary operations, where $p$ is the dimension of the parameter space. Additionally, we investigate the adversarial robustness of the regret-optimal algorithm, showing that an adversary which perturbs $q$ training pairs by at most $\varepsilon>0$, across all training sets, cannot reduce the regret-optimal algorithm's regret by more than $\mathcal{O}(\varepsilon q \bar{N}^{1/2})$, where $\bar{N}$ is the aggregate number of training pairs. To validate our theoretical findings, we conduct numerical experiments in the context of American option pricing, utilizing a randomly generated finite-rank kernel.
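The setting above can be illustrated with a minimal toy sketch: $N$ agents each fit a finite-rank kernel regression (a linear model in a fixed feature map), a server coordinates parameter updates, and the regret is the cumulative deviation of the generated parameters from the specialized ones. The feature map, data-generating process, ridge regularization, and the simple averaging-style update below are all illustrative assumptions; this is not the paper's regret-optimal update rule.

```python
# Toy sketch of the federated setup and the regret objective.
# Assumptions (not from the paper): cosine feature map, synthetic data,
# ridge regression for the specialized parameters, and a naive
# server-averaging update in place of the regret-optimal scheme.
import numpy as np

rng = np.random.default_rng(0)
N, T, p, n = 3, 10, 4, 50  # agents, rounds, parameter dim, samples per agent

def features(x):
    # finite-rank kernel K(x, y) = phi(x) . phi(y) with a fixed feature map phi
    return np.stack([np.cos(k * x) for k in range(1, p + 1)], axis=-1)

# Per-agent datasets D_i and specialized parameters theta*_i (ridge regression).
theta_star = []
for i in range(N):
    x = rng.uniform(-1, 1, n)
    y = np.sin(3 * x) + 0.1 * rng.standard_normal(n) + 0.2 * i  # heterogeneous tasks
    Phi = features(x)
    theta_star.append(np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(p), Phi.T @ y))
theta_star = np.array(theta_star)  # shape (N, p)

# Naive federated scheme: each node moves toward a blend of the server
# average and its own specialized parameters, one communication per round.
theta = np.zeros((N, p))
regret = 0.0
for t in range(T):
    server_avg = theta.mean(axis=0)                      # server aggregation
    theta = 0.5 * theta + 0.5 * (server_avg + 0.3 * (theta_star - theta))
    # cumulative deviation of generated parameters from specialized ones
    regret += np.sum(np.linalg.norm(theta - theta_star, axis=1) ** 2)

print(f"cumulative deviation (regret term) after T={T} rounds: {regret:.4f}")
```

The regret-optimal algorithm of the paper replaces the naive blending step with explicitly derived updates that minimize this cumulative deviation subject to the terminal loss of $f_{\theta(T)}$.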