SOTAVerified

A Minimum Description Length Approach to Regularization in Neural Networks

2025-05-19 · Code Available

Matan Abudy, Orr Well, Emmanuel Chemla, Roni Katzir, Nur Lan


Abstract

State-of-the-art neural networks can be trained to become remarkably effective solutions to many problems. But while these architectures can express symbolic, perfect solutions, trained models often arrive at approximations instead. We show that the choice of regularization method plays a crucial role: when trained on formal languages with standard regularization (L_1, L_2, or none), expressive architectures not only fail to converge to correct solutions but are actively pushed away from perfect initializations. In contrast, applying the Minimum Description Length (MDL) principle to balance model complexity with data fit provides a theoretically grounded regularization method. Using MDL, perfect solutions are selected over approximations, independently of the optimization algorithm. We propose that unlike existing regularization techniques, MDL introduces the appropriate inductive bias to effectively counteract overfitting and promote generalization.
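The contrast the abstract draws can be sketched numerically. Below is a minimal, hypothetical illustration (not the paper's actual encoding scheme): an MDL-style objective scores a model by its description length, i.e. the bits needed to encode the model plus the bits needed to encode the data given the model, while L_2 scores only weight magnitude. A "perfect" solution with a few large weights and zero data cost wins under MDL, yet is penalized more than a diffuse approximation under L_2.

```python
def l2_penalty(weights, lam=0.01):
    # Standard L2 regularization: penalize squared weight magnitude.
    return lam * sum(w * w for w in weights)

def mdl_objective(weights, data_bits, bits_per_weight=8):
    # MDL-style objective (illustrative assumption): description length =
    # bits to encode the model + bits to encode the data given the model.
    # Here each nonzero weight is assumed to cost a fixed number of bits;
    # the paper's actual encoding may differ.
    model_bits = sum(bits_per_weight for w in weights if w != 0)
    return model_bits + data_bits

# A compact "perfect" solution: two large weights, zero residual data cost.
perfect = [1.0, -1.0]
# A diffuse approximation: many small weights, nonzero residual data cost.
approx = [0.3] * 10

# MDL prefers the perfect solution (16 bits vs. 80 bits + data cost) ...
assert mdl_objective(perfect, data_bits=0.0) < mdl_objective(approx, data_bits=5.0)
# ... while L2 prefers the approximation, penalizing the large exact weights.
assert l2_penalty(perfect) > l2_penalty(approx)
```

The second assertion mirrors the abstract's claim that L_1/L_2 actively push models away from perfect (large-weight, symbolic) solutions: shrinking many weights toward zero is cheaper under a magnitude penalty than keeping the few exact weights a symbolic solution requires.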
