
On-device AI: Quantization-aware Training of Transformers in Time-Series

2024-08-29

Tianheng Ling, Gregor Schiele


Abstract

Artificial Intelligence (AI) models for time-series tasks in pervasive computing keep growing larger and more complex. The Transformer is by far the most compelling of these models. However, it is difficult to achieve the desired performance when deploying such a large model on a resource-constrained sensor device. My research focuses on optimizing the Transformer model for time-series forecasting tasks. The optimized model will be deployed as a hardware accelerator on embedded Field Programmable Gate Arrays (FPGAs). I will investigate the impact of applying Quantization-aware Training to the Transformer model to reduce its size and runtime memory footprint while maximizing the advantages of FPGAs.
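To illustrate the core idea behind Quantization-aware Training mentioned in the abstract, the sketch below shows one common formulation: weights are "fake-quantized" (rounded to a low-bit integer grid) in the forward pass, while gradients bypass the rounding via a straight-through estimator, so the network learns to tolerate low-precision arithmetic. This is a minimal, generic PyTorch sketch, not the authors' implementation; the bit width, symmetric scaling scheme, and `QuantLinear` layer are illustrative assumptions.

```python
import torch

def fake_quantize(x, num_bits=8):
    # Symmetric uniform fake quantization: round values onto a
    # signed integer grid of the given bit width, then map back
    # to floating point. num_bits=8 is an illustrative choice.
    qmax = 2 ** (num_bits - 1) - 1
    scale = x.detach().abs().max().clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
    # Straight-through estimator: the forward pass uses the
    # quantized value, but the backward pass sees the identity,
    # so gradients still reach the full-precision weights.
    return x + (q * scale - x).detach()

class QuantLinear(torch.nn.Linear):
    # A linear layer (the building block of Transformer attention
    # and feed-forward sublayers) trained with fake-quantized
    # weights. Hypothetical name, for illustration only.
    def forward(self, x):
        return torch.nn.functional.linear(
            x, fake_quantize(self.weight), self.bias
        )
```

After training, the learned scales and integer weights can be exported directly, which is what makes the model amenable to fixed-point arithmetic on an FPGA accelerator.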
