
Rate distortion comparison of a few gradient quantizers

2021-08-23

Tharindu Adikari

Abstract

This article concerns gradient compression, a popular technique for mitigating the communication bottleneck observed when training large machine learning models in a distributed manner with gradient-based methods such as stochastic gradient descent. Assuming a Gaussian distribution for the gradient components, we find the rate-distortion trade-off of gradient quantization schemes such as Scaled-sign and Top-K, and compare it with the Shannon rate-distortion limit. A similar comparison with vector quantizers is also presented.
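As a rough illustration of the comparison the abstract describes, the sketch below (not the paper's code; the quantizer definitions and the 10% sparsity level are assumptions for illustration) estimates the per-component mean-squared distortion of a scaled-sign quantizer and a Top-K quantizer on i.i.d. standard-Gaussian "gradient" components, alongside the Shannon rate-distortion function D(R) = σ²·2^(−2R) for a Gaussian source:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000
x = rng.standard_normal(d)  # i.i.d. N(0, 1) gradient components

def mse(a, b):
    # per-component mean-squared distortion
    return np.mean((a - b) ** 2)

# Scaled-sign: transmit one bit per component plus a shared scale,
# reconstructing each component as (mean |x_i|) * sign(x_i).
scale = np.mean(np.abs(x))
q_sign = scale * np.sign(x)

# Top-K: keep the K largest-magnitude components, zero out the rest.
k = d // 10  # keep the top 10% (illustrative choice)
q_topk = np.zeros_like(x)
idx = np.argpartition(np.abs(x), -k)[-k:]
q_topk[idx] = x[idx]

def shannon_distortion(rate_bits, sigma2=1.0):
    # Shannon rate-distortion limit for a Gaussian source:
    # D(R) = sigma^2 * 2^(-2R), R in bits per component
    return sigma2 * 2.0 ** (-2.0 * rate_bits)

print(f"scaled-sign distortion: {mse(x, q_sign):.4f}")
print(f"Shannon limit at 1 bit: {shannon_distortion(1.0):.4f}")
print(f"top-10% distortion:     {mse(x, q_topk):.4f}")
```

For a unit-variance Gaussian, the scaled-sign distortion concentrates near 1 − 2/π ≈ 0.363, above the Shannon limit of 0.25 at one bit per component; this kind of gap is exactly what a rate-distortion comparison of quantizers makes visible.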
