
Neural Fixed-Point Acceleration for Convex Optimization

2021-07-21 · ICML Workshop on AutoML 2021

Shobha Venkataraman, Brandon Amos


Abstract

Fixed-point iterations are at the heart of numerical computing and are often a computational bottleneck in real-time applications, which typically need a fast solution of moderate accuracy. We present neural fixed-point acceleration, which combines ideas from meta-learning and classical acceleration methods to automatically learn to accelerate fixed-point problems drawn from a distribution. We apply our framework to SCS, the state-of-the-art solver for convex cone programming, and design models and loss functions to overcome the challenges of learning over unrolled optimization and acceleration instabilities. Our work brings neural acceleration to any optimization problem expressible with CVXPY. The source code behind this paper is available at https://github.com/facebookresearch/neural-scs
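To make the setting concrete: a fixed-point problem seeks x with x = f(x), solved by iterating x_{k+1} = f(x_k), and an acceleration scheme reuses past iterates to converge faster. The sketch below is a toy 1-D illustration of plain iteration versus a classical memory-one Anderson-style (secant) accelerator — the kind of hand-designed scheme the paper's learned acceleration is meant to replace. It is not the paper's method; all function names here are illustrative.

```python
import math

def fixed_point(f, x0, tol=1e-10, max_iter=1000):
    """Plain fixed-point iteration: x_{k+1} = f(x_k)."""
    x = x0
    for k in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

def anderson_m1(f, x0, tol=1e-10, max_iter=1000):
    """Memory-one Anderson acceleration: a secant step on the
    residual g(x) = f(x) - x, whose root is the fixed point."""
    x_prev = x0
    g_prev = f(x_prev) - x_prev
    x = f(x_prev)  # one plain step to get a second iterate
    for k in range(max_iter):
        g = f(x) - x
        if abs(g) < tol:
            return x, k + 1
        denom = g - g_prev
        if denom == 0:
            x_new = f(x)  # fall back to a plain step
        else:
            x_new = x - g * (x - x_prev) / denom  # secant update
        x_prev, g_prev, x = x, g, x_new
    return x, max_iter

# Toy problem: x = cos(x) has a unique fixed point near 0.739085.
f = math.cos
x_plain, n_plain = fixed_point(f, 1.0)
x_acc, n_acc = anderson_m1(f, 1.0)
```

On this example the accelerated variant reaches the same tolerance in far fewer iterations than plain iteration; the paper's approach replaces such fixed update rules with a model trained over a distribution of problem instances.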
