Black-box Optimizer with Implicit Natural Gradient
Yueming Lyu, Ivor W. Tsang
Abstract
Black-box optimization is of primary importance for many compute-intensive applications, including reinforcement learning (RL) and robot control. This paper presents a novel theoretical framework for black-box optimization, in which our method performs a stochastic update with the implicit natural gradient of an exponential-family distribution. Theoretically, we prove the convergence rate of our framework with full-matrix update for convex functions. Our theoretical results also hold for continuous non-differentiable black-box functions. Our methods are very simple and contain fewer hyper-parameters than CMA-ES (hansen2006cma). Empirically, our method with full-matrix update achieves competitive performance compared with CMA-ES, one of the state-of-the-art methods, on benchmark test problems. Moreover, our methods can achieve high optimization precision on some challenging test functions (e.g., the l_1-norm ellipsoid test problem and the Levy test problem), while methods with the explicit natural gradient, i.e., IGO (ollivier2017information) with full-matrix update, cannot. This demonstrates the efficiency of our methods.
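To make the setting concrete, the sketch below shows a generic Gaussian search-distribution black-box optimizer in the spirit of natural-evolution-strategy methods: candidates are sampled from N(mu, sigma^2 I), ranked by function value, and the mean is moved toward better samples. This is an illustrative stand-in, not the paper's implicit natural gradient algorithm; all function and parameter names here are hypothetical.

```python
import numpy as np

def gaussian_blackbox_opt(f, mu0, sigma=0.5, pop=20, lr=0.1, iters=200, seed=0):
    """Minimize a black-box function f with a Gaussian search distribution.

    A rank-based sketch: no gradients of f are needed, only evaluations,
    mirroring the black-box setting described in the abstract.
    """
    rng = np.random.default_rng(seed)
    mu = np.asarray(mu0, dtype=float)
    for _ in range(iters):
        eps = rng.standard_normal((pop, mu.size))   # perturbation directions
        xs = mu + sigma * eps                       # candidate solutions
        fs = np.array([f(x) for x in xs])
        # Rank-based utilities: the best (lowest-f) sample gets the largest
        # positive weight, the worst the most negative one.
        order = np.argsort(fs)
        w = np.zeros(pop)
        w[order] = np.linspace(1.0, -1.0, pop)
        w /= pop
        mu = mu + lr * sigma * (w @ eps)            # move mean toward better samples
    return mu

# Usage: minimize a simple 5-dimensional quadratic.
best = gaussian_blackbox_opt(lambda x: np.sum(x**2), mu0=np.ones(5))
```

With a fixed step size sigma the mean hovers near the optimum rather than converging exactly; full methods such as CMA-ES additionally adapt sigma and a full covariance matrix, which is the regime the paper's full-matrix update addresses.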