
Max-Diversity Distributed Learning: Theory and Algorithms

2018-12-19

Yong Liu, Jian Li, Weiping Wang


Abstract

We study the risk performance of distributed learning for regularized empirical risk minimization and establish a fast convergence rate, substantially improving the error analysis of existing divide-and-conquer based distributed learning. An interesting theoretical finding is that the larger the diversity of the local estimates is, the tighter the risk bound is. This analysis motivates us to devise an effective max-diversity distributed learning algorithm (MDD). Experimental results show that MDD outperforms existing divide-and-conquer methods at the cost of slightly more computation time. Theoretical analysis and empirical results demonstrate that the proposed MDD is sound and effective.
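As background, the divide-and-conquer baseline the abstract refers to partitions the data, solves a regularized empirical risk minimization problem on each partition, and averages the local estimates. Below is a minimal sketch of that baseline using closed-form ridge regression as the local solver; it illustrates the generic divide-and-conquer scheme only, not the authors' MDD algorithm, and all function names here are hypothetical.

```python
import numpy as np

def local_ridge(X, y, lam):
    # Closed-form ridge estimate on one partition:
    # w = (X^T X + lam * I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def divide_and_conquer_ridge(X, y, m, lam):
    # Split the sample into m disjoint partitions, fit a local
    # estimate on each, then average the local estimates
    # (the standard divide-and-conquer baseline).
    parts = np.array_split(np.arange(len(y)), m)
    w_locals = [local_ridge(X[idx], y[idx], lam) for idx in parts]
    return np.mean(w_locals, axis=0), w_locals

# Synthetic illustration
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=600)
w_bar, w_locals = divide_and_conquer_ridge(X, y, m=4, lam=1.0)
```

The paper's theoretical point concerns the diversity among the `w_locals` produced by such a scheme: the bound tightens as the local estimates become more diverse, which MDD exploits by choosing partitions accordingly.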
