Minimax Optimal Quantization of Linear Models: Information-Theoretic Limits and Efficient Algorithms
Rajarshi Saha, Mert Pilanci, Andrea J. Goldsmith
Abstract
High-dimensional models often have a large memory footprint and must be quantized after training before deployment on resource-constrained edge devices for inference. In this work, we develop an information-theoretic framework for quantizing a linear regressor learned from training data (X, y), for an underlying statistical relationship y = Xθ + v. The learned model, an estimate of the latent parameter θ ∈ R^d, is constrained to be representable using only Bd bits, where B ∈ (0, ∞) is a pre-specified budget and d is the dimension. We derive an information-theoretic lower bound on the minimax risk in this setting and propose a matching upper bound, tight up to constant factors, achieved by randomized embedding-based algorithms. Together, the lower and upper bounds characterize the minimum threshold bit budget required to achieve a performance risk comparable to the unquantized setting. We also propose randomized Hadamard embeddings that are computationally efficient and optimal up to a mild logarithmic factor of the lower bound. Our model quantization strategy generalizes beyond linear regression: we demonstrate its efficacy by extending the method and upper bounds to two-layer ReLU neural networks for non-linear regression. Numerical simulations show the improved performance of our proposed scheme as well as its closeness to the lower bound.
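The abstract's core idea of quantizing a model via a randomized Hadamard embedding can be illustrated with a minimal sketch. This is not the authors' algorithm, only a hypothetical NumPy illustration under simple assumptions: a power-of-two dimension, a random-sign diagonal followed by a Hadamard rotation to spread the parameter vector's energy evenly across coordinates, then a uniform B-bit scalar quantizer, then the inverse rotation. The function names `fwht` and `quantize_hadamard` are our own.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform (unnormalized) of a length-2^k vector."""
    x = x.copy()
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

def quantize_hadamard(theta, B, rng):
    """Illustrative sketch: randomized Hadamard embedding + uniform quantization.

    Rotate theta by H D / sqrt(d) (random signs D, then Hadamard H), which
    flattens the dynamic range, apply a B-bit uniform scalar quantizer, and
    invert the rotation (H is symmetric with H^2 = d I, so the same transform
    undoes it after rescaling).
    """
    d = len(theta)
    assert d & (d - 1) == 0, "dimension must be a power of two for this sketch"
    signs = rng.choice([-1.0, 1.0], size=d)
    z = fwht(signs * theta) / np.sqrt(d)       # orthonormal randomized rotation
    levels = 2 ** int(B)                       # B bits per coordinate
    lo, hi = z.min(), z.max()
    step = (hi - lo) / (levels - 1)
    zq = lo + step * np.round((z - lo) / step) # uniform scalar quantizer
    return signs * (fwht(zq) / np.sqrt(d))     # invert rotation and signs

rng = np.random.default_rng(0)
theta = rng.standard_normal(64)
theta_q = quantize_hadamard(theta, B=8, rng=rng)
rel_err = np.linalg.norm(theta_q - theta) / np.linalg.norm(theta)
```

Because the rotation is orthonormal, the per-coordinate quantization error (at most half a step) translates directly into a small relative error on the reconstructed model, consistent with the bit-budget-versus-risk trade-off the abstract describes.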