Learning to Evolve for Optimization via Stability-Inducing Neural Unrolling
Jiaxin Gao, Yaohua Liu, Ran Cheng, Kay Chen Tan
Abstract
Evolutionary algorithms are a powerful paradigm for tackling optimization challenges, yet their reliance on manually engineered heuristics inherently limits their adaptability across diverse landscapes. The transition from hand-crafted heuristics to data-driven algorithms, however, faces a fundamental dilemma: achieving neural plasticity without sacrificing algorithmic stability. Although learned optimizers offer high adaptivity, their unconstrained update rules often result in unstable dynamics and brittle generalization on unseen landscapes. To address this challenge, this paper proposes Learning to Evolve (L2E), a bilevel meta-optimization framework that learns evolutionary search via stability-inducing neural unrolling. First, L2E reformulates population evolution as an unrolled fixed-point iteration realized by a structured neural operator: the inner loop imposes a stability-biased update structure, while the outer loop meta-trains the operator to produce effective search trajectories across tasks. Second, to balance global exploration with local refinement, a gradient-derived composite solver adaptively fuses learned evolutionary proposals with proxy numerical guidance in a differentiable manner. Extensive experiments on synthetic benchmarks and real-world control tasks demonstrate that L2E achieves strong optimization performance, scales to high-dimensional problems, and exhibits robust zero-shot transfer across diverse test distributions.