Multi-task learning on the edge: cost-efficiency and theoretical optimality

2021-10-09

Sami Fakhry, Romain Couillet, Malik Tiomoko


Abstract

This article proposes a distributed multi-task learning (MTL) algorithm based on supervised principal component analysis (SPCA) that is (i) theoretically optimal for Gaussian mixture data and (ii) computationally cheap and scalable. Supporting experiments on synthetic and real benchmark data demonstrate that significant energy gains can be obtained with no performance loss.
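As a rough illustration of the SPCA building block the abstract refers to, the sketch below follows the standard supervised-PCA formulation (eigenvectors of the label-aligned matrix X H K H Xᵀ, with H a centering matrix and K a label kernel). This is an assumption about the general technique, not the paper's exact distributed algorithm, and the function name `spca_directions` is hypothetical.

```python
import numpy as np

def spca_directions(X, y, k):
    """Top-k supervised PCA directions (generic SPCA sketch, not the paper's
    exact MTL algorithm). X: (p, n) data matrix, y: (n,) integer labels."""
    n = X.shape[1]
    H = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    Y = np.eye(int(y.max()) + 1)[y.astype(int)]   # one-hot labels, (n, c)
    K = Y @ Y.T                                   # linear label kernel
    Q = X @ H @ K @ H @ X.T                       # label-aligned covariance, (p, p)
    vals, vecs = np.linalg.eigh(Q)                # symmetric eigendecomposition
    return vecs[:, np.argsort(vals)[::-1][:k]]    # top-k eigenvectors

# Usage: project data onto the supervised subspace.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 40))
y = np.repeat([0, 1], 20)
W = spca_directions(X, y, 2)   # (5, 2) projection matrix
Z = W.T @ X                    # (2, 40) low-dimensional features
```

The projection costs only one p×p eigendecomposition, which is consistent with the computational cheapness the abstract claims.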
