NOVAS: Non-convex Optimization via Adaptive Stochastic Search for End-to-End Learning and Control

2020-06-22 · ICLR 2021

Ioannis Exarchos, Marcus A. Pereira, Ziyi Wang, Evangelos A. Theodorou

Abstract

In this work we propose the use of adaptive stochastic search as a building block for general, non-convex optimization operations within deep neural network architectures. Specifically, for an objective function located at some layer in the network and parameterized by some network parameters, we employ adaptive stochastic search to perform optimization over its output. This operation is differentiable and does not obstruct the passing of gradients during backpropagation, thus enabling us to incorporate it as a component in end-to-end learning. We study the proposed optimization module's properties and benchmark it against two existing alternatives on a synthetic energy-based structured prediction task, and further showcase its use in stochastic optimal control applications.
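The optimization module described above can be illustrated with a minimal sketch of one adaptive stochastic search update: sample candidates from a Gaussian around the current mean, weight them by an exponentiated (softmax) transform of the objective, and move the mean toward the weighted average. This is a hedged illustration of the general sampling-based scheme, not the authors' implementation; the function name `novas_step` and all hyperparameters here are illustrative assumptions.

```python
import numpy as np

def novas_step(objective, mu, sigma, n_samples=64, temperature=1.0, lr=1.0, rng=None):
    """One adaptive stochastic search update for minimization (illustrative
    sketch). Every arithmetic operation here is smooth in mu, which is what
    makes this style of update usable inside end-to-end learning."""
    rng = np.random.default_rng() if rng is None else rng
    # Sample candidate solutions around the current mean.
    x = mu + sigma * rng.standard_normal((n_samples, mu.shape[0]))
    f = np.array([objective(xi) for xi in x])
    # Softmax weights over negative costs favor low-objective samples.
    w = np.exp(-(f - f.min()) / temperature)
    w /= w.sum()
    # Move the mean toward the weighted sample average.
    return mu + lr * (w @ x - mu)

# Usage: minimize a simple non-convex test function (assumed for illustration).
objective = lambda x: np.sum(x**2) + 0.5 * np.sum(np.sin(5 * x))
rng = np.random.default_rng(0)
mu = np.array([2.0, -2.0])
for _ in range(100):
    mu = novas_step(objective, mu, sigma=0.3, rng=rng)
```

In a differentiable-programming framework, the same update written with reparameterized samples lets gradients flow from the optimized output back to the network parameters that define the objective.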