
In Search of Goodness: Large Scale Benchmarking of Goodness Functions for the Forward-Forward Algorithm

2025-11-23

Arya Shah, Vaibhav Tripathi


Abstract

The Forward-Forward (FF) algorithm offers a biologically plausible alternative to backpropagation, enabling neural networks to learn through local updates. However, FF's efficacy relies heavily on the definition of "goodness", a scalar measure of neural activity. While current implementations predominantly use a simple sum-of-squares metric, it remains unclear whether this default choice is optimal. To address this, we benchmarked 21 distinct goodness functions across four standard image datasets (MNIST, FashionMNIST, CIFAR-10, STL-10), evaluating classification accuracy, energy consumption, and carbon footprint. We found that certain alternative goodness functions, inspired by various domains, significantly outperform the standard baseline. Specifically, game_theoretic_local achieved 97.15% accuracy on MNIST, softmax_energy_margin_local reached 82.84% on FashionMNIST, and triplet_margin_local attained 37.69% on STL-10. Furthermore, we observed substantial variability in computational efficiency, highlighting a critical trade-off between predictive performance and environmental cost. These findings demonstrate that the goodness function is a pivotal hyperparameter in FF design. We release our code on GitHub for reference and reproducibility.
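For context, the baseline the paper compares against is Hinton's original sum-of-squares goodness: each layer judges its input by the summed squared activations, trained locally to exceed a threshold on positive data and fall below it on negative data. Below is a minimal PyTorch sketch of that baseline; the function names and the threshold value are illustrative, not taken from the paper's released code.

```python
import torch
import torch.nn.functional as F

def sum_of_squares_goodness(h: torch.Tensor) -> torch.Tensor:
    """Standard FF goodness: sum of squared activations per sample.

    h: layer activations of shape (batch, features).
    Returns one scalar goodness value per sample, shape (batch,).
    """
    return h.pow(2).sum(dim=1)

def ff_local_loss(h_pos: torch.Tensor, h_neg: torch.Tensor,
                  theta: float = 2.0) -> torch.Tensor:
    """Local FF objective: push the goodness of positive data above the
    threshold theta and the goodness of negative data below it, via a
    logistic loss. Gradients stay within the layer; no backpropagation
    through the rest of the network.
    """
    g_pos = sum_of_squares_goodness(h_pos)
    g_neg = sum_of_squares_goodness(h_neg)
    # softplus(x) = -log(sigmoid(-x)), so these two terms are the
    # negative log-likelihoods of classifying pos/neg data correctly.
    return F.softplus(theta - g_pos).mean() + F.softplus(g_neg - theta).mean()
```

The 21 variants benchmarked in the paper (e.g., game_theoretic_local, softmax_energy_margin_local, triplet_margin_local) would slot in as replacements for sum_of_squares_goodness while the layer-local training loop stays unchanged.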
