SOTAVerified

Model Compression

2021-05-20

Arhum Ishtiaq, Sara Mahmood, Maheen Anees, Neha Mumtaz


Abstract

Over time, machine learning models have grown in scope, functionality, and size. As a consequence, both training these models and serving inference with them increasingly requires high-end hardware. This paper explores the domain of model compression, evaluates the efficiency of combining various levels of pruning and quantization, and proposes a quality measurement metric for objectively deciding which combination is best in terms of minimizing the accuracy delta while maximizing the size reduction factor.
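The abstract does not state the exact form of the proposed quality metric, but a minimal sketch can illustrate the trade-off it describes: rewarding a larger size reduction factor while penalizing a larger accuracy delta. The function name and the way the two terms are combined below are assumptions for illustration only, not the paper's actual metric.

```python
def compression_quality(base_acc: float, compressed_acc: float,
                        base_size_mb: float, compressed_size_mb: float) -> float:
    """Hypothetical quality score for a compressed model.

    Combines the two quantities the abstract names:
      - accuracy delta:        base_acc - compressed_acc (smaller is better)
      - size reduction factor: base_size / compressed_size (larger is better)

    The division below is one plausible way to fold both into a single
    score; the paper may define its metric differently.
    """
    accuracy_delta = base_acc - compressed_acc
    size_reduction_factor = base_size_mb / compressed_size_mb
    # Penalize accuracy loss: a delta of 0 leaves the score equal to the
    # size reduction factor; larger deltas shrink it.
    return size_reduction_factor / (1.0 + accuracy_delta)


# Comparing two hypothetical pruning+quantization settings with the same
# 4x size reduction: the one that loses less accuracy scores higher.
mild = compression_quality(0.92, 0.91, 40.0, 10.0)
aggressive = compression_quality(0.92, 0.80, 40.0, 10.0)
```

Under a score like this, the best pruning and quantization combination is simply the one with the highest value, which matches the selection criterion stated in the abstract.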
