
dynActivation: A Trainable Activation Family for Adaptive Nonlinearity

2026-03-23

Alois Bachmann


Abstract

This paper proposes dynActivation, a per-layer trainable activation defined as f_i(x) = BaseAct(x)·(α_i − β_i) + β_i·x, where α_i and β_i are lightweight learned scalars that interpolate between the base nonlinearity and a linear path, and BaseAct is any ReLU-like function. Static and dynamic ReLU-like variants are compared across multiple vision tasks, language-modeling tasks, and ablation studies. The results suggest that dynActivation variants tend to linearize deep layers while maintaining high performance, improving training efficiency by up to +54% over ReLU. On CIFAR-10, dynActivation(Mish) improves over static Mish by up to +14.02% on AttentionCNN, with an average improvement of +6.00% and a 24% convergence-AUC reduction relative to Mish (2120 vs. 2785). In a 1-to-75-layer MNIST depth-scaling study, dynActivation never drops below 95% test accuracy (95.3–99.3%), while ReLU collapses below 80% at 25 layers. Under FGSM at ε = 0.08, dynActivation(Mish) incurs a 55.39% accuracy drop versus 62.79% for ReLU (a 7.40% advantage). Transferred to language modeling, a newly proposed dynActGLU variant achieves a 10.3% relative perplexity reduction over SwiGLU at 5620 steps (4.047 vs. 4.514), though the gap vanishes at 34300 steps.
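As a minimal sketch of the definition above, the per-layer activation could be implemented as follows, assuming a PyTorch setting; the initialization values alpha_init and beta_init are illustrative assumptions, since the abstract does not specify them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynActivation(nn.Module):
    """Per-layer trainable activation: f(x) = BaseAct(x) * (alpha - beta) + beta * x.

    alpha and beta are lightweight learned scalars (one pair per layer).
    As beta approaches alpha, the term in front of BaseAct vanishes and
    the layer linearizes toward beta * x, matching the abstract's claim
    that trained deep layers tend to linearize.
    """

    def __init__(self, base_act=F.mish, alpha_init=1.0, beta_init=0.0):
        # alpha_init / beta_init are assumed values, not from the paper.
        super().__init__()
        self.base_act = base_act
        self.alpha = nn.Parameter(torch.tensor(alpha_init))
        self.beta = nn.Parameter(torch.tensor(beta_init))

    def forward(self, x):
        return self.base_act(x) * (self.alpha - self.beta) + self.beta * x


# Usage: a dynActivation(Mish) layer applied to a random batch.
act = DynActivation(base_act=F.mish)
y = act(torch.randn(8, 64))
```

With beta_init = 0.0 and alpha_init = 1.0, the module starts out exactly equal to the base nonlinearity, so any linearization is learned during training rather than imposed at initialization.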
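The dynActGLU variant mentioned in the language-modeling results plausibly replaces SwiGLU's fixed Swish gate with dynActivation; the abstract names the variant but not its exact wiring, so the block below is a hypothetical sketch under that assumption, reusing the DynActivation module defined above.

```python
class DynActGLU(nn.Module):
    """Hypothetical GLU-style feed-forward block gated by DynActivation.

    Mirrors SwiGLU's structure, (act(x @ W_gate) * (x @ W_up)) @ W_down,
    with the fixed Swish (SiLU) gate swapped for the trainable activation.
    The layer names and the choice of F.silu as base are assumptions.
    """

    def __init__(self, d_model, d_hidden):
        super().__init__()
        self.w_gate = nn.Linear(d_model, d_hidden, bias=False)
        self.w_up = nn.Linear(d_model, d_hidden, bias=False)
        self.w_down = nn.Linear(d_hidden, d_model, bias=False)
        self.act = DynActivation(base_act=F.silu)

    def forward(self, x):
        return self.w_down(self.act(self.w_gate(x)) * self.w_up(x))
```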
