
Joint Quantization and Pruning Neural Networks Approach: A Case Study on FSO Receivers

2025-06-25

Mohanad Obeed, Ming Jian


Abstract

Towards fast, hardware-efficient, and low-complexity receivers, we propose a compression-aware learning approach and examine it on free-space optical (FSO) receivers for turbulence mitigation. The learning approach jointly quantizes, prunes, and trains a convolutional neural network (CNN). In addition, we propose constraining the CNN weights to power-of-two values, so that the multiplication operations in every layer can be replaced with bit-shifting operations, which have significantly lower computational cost. The compression idea in the proposed approach is that the loss function is updated, and both the quantization levels and the pruning limits are optimized, in every epoch of training. The compressed CNN is examined for two levels of compression (1-bit and 2-bit) over different FSO systems. The numerical results show that, compared to full-precision CNNs, the compression approach incurs a negligible decrease in performance with 1-bit quantization and no performance loss with 2-bit quantization. In general, the proposed IM/DD FSO receivers show better bit-error rate (BER) performance (without the need for channel state information (CSI)) than maximum likelihood (ML) receivers that utilize imperfect CSI, whether the DL model is compressed with 1-bit or 2-bit quantization.
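The abstract's core idea, restricting weights to powers of two so multiplications become bit shifts, can be sketched as follows. This is a minimal illustration of the general technique, not the paper's training procedure; the function names and the rounding-in-log-domain choice are assumptions for the example.

```python
import numpy as np

def quantize_pow2(w: np.ndarray) -> np.ndarray:
    """Snap each nonzero weight to the nearest power of two (in the
    log domain), keeping its sign. Zero weights, e.g. pruned ones,
    stay zero."""
    sign = np.sign(w)
    mag = np.abs(w)
    out = np.zeros_like(w, dtype=float)
    nz = mag > 0
    exp = np.round(np.log2(mag[nz]))       # nearest integer exponent
    out[nz] = sign[nz] * np.power(2.0, exp)
    return out

def shift_multiply(x: int, exponent: int) -> int:
    """Multiply an integer activation by 2**exponent using a shift:
    a left shift for non-negative exponents, a right shift otherwise.
    This replaces the hardware multiplier the abstract refers to."""
    return x << exponent if exponent >= 0 else x >> -exponent
```

For instance, a weight of 0.3 quantizes to 0.25 (= 2^-2), and multiplying an activation by that weight reduces to a right shift by 2; the saving over a full multiplier is what makes the compressed receiver hardware-efficient.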
