
Scalify: scale propagation for efficient low-precision LLM training

2024-07-24 · Code Available

Paul Balança, Sam Hosegood, Carlo Luschi, Andrew Fitzgibbon


Abstract

Low-precision formats such as float8 have been introduced in machine learning accelerator hardware to improve computational efficiency for large language model training and inference. Nevertheless, adoption by the ML community has been slowed by the complex, and sometimes brittle, techniques required to match higher-precision training accuracy. In this work, we present Scalify, an end-to-end scale propagation paradigm for computational graphs, generalizing and formalizing existing tensor scaling methods. Experimental results show that Scalify supports out-of-the-box float8 matrix multiplication and gradient representation, as well as float16 optimizer state storage. Our JAX implementation of Scalify is open-sourced at https://github.com/graphcore-research/jax-scalify.
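To make the scale-propagation idea concrete, below is a minimal, hypothetical JAX sketch (not the jax-scalify API): a tensor is carried as a (data, scale) pair whose true value is data × scale, and operations propagate the scale explicitly so the data payload can stay in a narrow dynamic range suitable for low-precision formats. The names `ScaledArray`, `as_scaled`, and `scaled_matmul` are illustrative assumptions, not identifiers from the paper or repository.

```python
# Illustrative sketch of scale propagation, assuming a simple (data, scale)
# tensor representation. Not the actual jax-scalify API.
import jax
import jax.numpy as jnp
from typing import NamedTuple


class ScaledArray(NamedTuple):
    data: jax.Array   # low-precision payload, kept roughly unit-normalized
    scale: jax.Array  # scalar scale factor; true value = data * scale


def as_scaled(x: jax.Array) -> ScaledArray:
    # Pick a scale so the payload is approximately unit-variance,
    # then store the payload in a narrow format (float16 here).
    scale = jnp.maximum(jnp.std(x), 1e-8)
    return ScaledArray((x / scale).astype(jnp.float16), scale.astype(jnp.float32))


def scaled_matmul(a: ScaledArray, b: ScaledArray) -> ScaledArray:
    # The matmul runs on the normalized payloads; the scales combine
    # outside the low-precision op, preserving dynamic range.
    return ScaledArray(jnp.matmul(a.data, b.data), a.scale * b.scale)


def to_dense(x: ScaledArray) -> jax.Array:
    return x.data.astype(jnp.float32) * x.scale


key = jax.random.PRNGKey(0)
a = jax.random.normal(key, (64, 64)) * 1e-4  # values well below float16's comfortable range
b = jax.random.normal(key, (64, 64)) * 1e-4
out = scaled_matmul(as_scaled(a), as_scaled(b))
print(jnp.max(jnp.abs(to_dense(out) - a @ b)))  # small reconstruction error
```

In this toy version the payloads are float16; the paper's point is that the same mechanism lets matrix multiplications and gradients be represented in float8 while optimizer state is stored in float16, since the per-tensor scales carry the dynamic range that the narrow formats cannot.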
