
MuonAll: Muon Variant for Efficient Finetuning of Large Language Models

2025-11-08 · Code Available

Saurabh Page, Advait Joshi, S. S. Sonawane


Abstract

The Muon optimizer has demonstrated robust results in pretraining language models, but its performance when finetuning existing publicly available pretrained models has not yet been explored. Currently, Muon is used alongside AdamW, leaving room for improvement by bringing all parameters under Muon. We introduce MuonAll, which incorporates all parameters into Muon by transforming them into 2D matrices. We conduct extensive finetuning experiments across publicly available language models with up to half a billion parameters. Muon and MuonAll perform on par with AdamW across major benchmarks, highlighting their effectiveness as alternative optimizers. We open-source distributed implementations of Muon and MuonAll, available at https://github.com/Saurabh750/optimizer
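To make the abstract's mechanism concrete, here is a minimal PyTorch sketch of a Muon-style update in which every parameter, not just 2D weight matrices, is reshaped to 2D before Newton-Schulz orthogonalization. The iteration coefficients follow the widely circulated public Muon implementation; the reshape convention, function names, and the omission of momentum are assumptions for illustration, not the paper's exact MuonAll algorithm (see the linked repository for that).

```python
import torch

def newton_schulz_orthogonalize(G, steps=5, eps=1e-7):
    """Approximately orthogonalize a 2D gradient matrix with a
    Newton-Schulz iteration, as in Muon. The quintic coefficients
    below come from the public Muon code and are an assumption here."""
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G.float()
    X = X / (X.norm() + eps)  # normalize so the iteration converges
    transposed = X.shape[0] > X.shape[1]
    if transposed:            # iterate on the wide orientation
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        B = b * A + c * (A @ A)
        X = a * X + B @ X
    if transposed:
        X = X.T
    return X.to(G.dtype)

def muonall_style_update(param, grad, lr=0.02):
    """Hypothetical MuonAll-style step: reshape any parameter's
    gradient to 2D, orthogonalize it, reshape back, and apply.
    The exact reshape convention in MuonAll may differ."""
    original_shape = grad.shape
    if grad.ndim == 1:
        g2d = grad.unsqueeze(0)                # vector -> (1, n)
    elif grad.ndim == 2:
        g2d = grad                             # already a matrix
    else:
        g2d = grad.reshape(grad.shape[0], -1)  # fold trailing dims
    update = newton_schulz_orthogonalize(g2d).reshape(original_shape)
    param.data.add_(update, alpha=-lr)

# Toy usage covering both 2D weights and 1D biases:
if __name__ == "__main__":
    torch.manual_seed(0)
    weight = torch.nn.Parameter(torch.randn(4, 8))
    bias = torch.nn.Parameter(torch.randn(8))
    for p in (weight, bias):
        p.grad = torch.randn_like(p)
        muonall_style_update(p, p.grad)
```

The point of the reshape is that Muon's orthogonalization is only defined on matrices, so parameters AdamW would otherwise handle (biases, norm scales, embeddings) can be folded into 2D and updated by the same rule.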
