Automated Heterogeneous Low-Bit Quantization of Multi-Model Deep Learning Inference Pipeline
2023-11-10
Jayeeta Mondal, Swarnava Dey, Arijit Mukherjee
Abstract
Deep Learning (DL) inference pipelines that integrate multiple Deep Neural Networks (DNNs), e.g. in Multi-Task Learning (MTL) or Ensemble Learning (EL), are often highly accurate but pose challenges for edge deployment. The constituent models vary in their quantization tolerance and resource demands, so each must be tuned carefully to balance accuracy against latency. This paper introduces an automated heterogeneous quantization approach for DL inference pipelines comprising multiple DNNs.
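The core idea of heterogeneous quantization, i.e. assigning each model in the pipeline its own bit-width according to how well it tolerates quantization, can be illustrated with a minimal sketch. This is not the paper's algorithm: the `fake_quantize` helper, the per-model tolerance values, and the greedy lowest-bit search are illustrative assumptions, using plain uniform symmetric quantization on synthetic weight tensors.

```python
import numpy as np

def fake_quantize(w, bits):
    """Uniform symmetric quantization to `bits` bits, then dequantize.

    Illustrative only; real pipelines use per-channel scales, calibration, etc.
    """
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale

def pick_bitwidth(w, rel_tol, candidates=(2, 4, 8)):
    """Greedily pick the lowest candidate bit-width whose relative
    reconstruction error stays within `rel_tol` (a stand-in for the
    per-model quantization tolerance the abstract mentions)."""
    for bits in sorted(candidates):
        err = np.mean(np.abs(w - fake_quantize(w, bits)))
        if err / np.mean(np.abs(w)) <= rel_tol:
            return bits
    return max(candidates)

# Two hypothetical pipeline stages with different tolerances:
# the "detector" is assumed sensitive, the "classifier" robust.
rng = np.random.default_rng(0)
models = {"detector": rng.normal(0.0, 1.0, 1000),
          "classifier": rng.normal(0.0, 0.1, 1000)}
tolerances = {"detector": 0.05, "classifier": 0.20}

plan = {name: pick_bitwidth(w, tolerances[name]) for name, w in models.items()}
print(plan)
```

With these assumed tolerances, the tighter-tolerance model lands on a wider bit-width than the quantization-tolerant one, yielding a heterogeneous per-model assignment rather than a single uniform precision for the whole pipeline.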