Learning-based convex approximation for constrained parametric optimization

2025-05-07

Kang Liu, Wei Peng, Jianchen Hu

Abstract

We propose an input convex neural network (ICNN)-based self-supervised learning framework for solving continuous constrained optimization problems. By integrating the augmented Lagrangian method (ALM) with a constraint-correction mechanism, our framework ensures non-strict constraint feasibility, a smaller optimality gap, and a faster convergence rate than state-of-the-art learning-based methods. We provide a rigorous convergence analysis, showing that the algorithm converges to a Karush-Kuhn-Tucker (KKT) point of the original problem even when the internal solver is a neural network and the approximation error is bounded. We evaluate our approach on a range of benchmark tasks, including quadratic programming (QP), nonconvex programming, and large-scale AC optimal power flow problems. The results demonstrate that, compared with existing solvers (e.g., OSQP, IPOPT) and recent learning-based methods (e.g., DC3, PDL), our approach achieves a superior balance among accuracy, feasibility, and computational efficiency.
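The framework's backbone is an input convex neural network, i.e., a network whose output is convex in its input by construction: the weights on the hidden-state path are kept nonnegative and the activations are convex and nondecreasing. A minimal NumPy sketch of this idea (an assumed two-hidden-layer architecture for illustration, not the authors' exact model):

```python
import numpy as np

def softplus(v):
    # smooth nonnegative reparameterization for the z-path weights
    return np.logaddexp(0.0, v)

class ICNN:
    """Toy input convex neural network f(x) (illustrative sketch).

    Convexity in x holds because:
      * the z-path weight matrices are forced nonnegative via softplus,
      * the activation (ReLU) is convex and nondecreasing,
    so each layer is a nonnegative combination of convex functions
    plus an unrestricted affine term in x.
    """

    def __init__(self, in_dim, hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        # affine skip connections from x (weights unrestricted in sign)
        self.Wx0 = rng.normal(size=(hidden, in_dim)); self.b0 = rng.normal(size=hidden)
        self.Wx1 = rng.normal(size=(hidden, in_dim)); self.b1 = rng.normal(size=hidden)
        # z-path weights, reparameterized to be nonnegative in forward()
        self.Uz1 = rng.normal(size=(hidden, hidden))
        self.wx_out = rng.normal(size=in_dim)
        self.uz_out = rng.normal(size=hidden)

    def __call__(self, x):
        relu = lambda v: np.maximum(v, 0.0)
        z = relu(self.Wx0 @ x + self.b0)
        z = relu(self.Wx1 @ x + self.b1 + softplus(self.Uz1) @ z)
        return self.wx_out @ x + softplus(self.uz_out) @ z
```

Because convexity is structural rather than learned, it can be checked numerically with the midpoint inequality `f((a+b)/2) <= (f(a)+f(b))/2` for arbitrary points `a`, `b`; this is the property that lets the learned objective serve as a convex surrogate inside an outer ALM loop.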
