A fast algorithm to minimize prediction loss of the optimal solution in inverse optimization problem of MILP
Akira Kitaoka
Abstract
We consider the inverse optimization problem of estimating the weights of the objective function so that a given solution is optimal for a mixed integer linear program (MILP). For this problem, known methods exhibit inefficient convergence: if d denotes the dimension of the weights and k the number of iterations, the error of the weights is bounded by O(k^{-1/(d-1)}), which converges slowly as d increases. We propose a projected subgradient method with step size k^{-1/2} based on the suboptimality loss. We show theoretically, and confirm experimentally, that the proposed method learns the weights efficiently. In particular, we show that there exists a constant ρ > 0 such that either the distance between the learned and true weights is bounded by O(k^{-1/2}(1+ρ)^{-k^{1/2}/2}), or the optimal solution is exactly recovered. Furthermore, experiments demonstrate that the proposed method solves inverse optimization problems of MILP using fewer than 1/7 of the MILP calls required by known methods, and converges within a finite number of iterations.
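To illustrate the approach described in the abstract, the following is a minimal sketch of a projected subgradient method on the suboptimality loss. It is not the paper's implementation: the MILP oracle is replaced by a toy, explicitly enumerated feasible set (a real implementation would call a MILP solver at each iteration), the weights are projected onto the probability simplex for normalization, and all names (`FEASIBLE`, `milp_oracle`, `inverse_milp`) are illustrative assumptions.

```python
import numpy as np

# Toy stand-in for a MILP oracle: a small, explicitly enumerated set of
# integer feasible points. A real implementation would solve a MILP here.
FEASIBLE = np.array([[3.0, 0.0], [0.0, 3.0], [1.0, 1.0], [2.0, 2.0]])

def milp_oracle(w):
    """Return a minimizer of w^T x over the feasible set."""
    return FEASIBLE[np.argmin(FEASIBLE @ w)]

def suboptimality_loss(w, x_hat):
    """w^T x_hat - min_x w^T x; nonnegative, and zero iff x_hat is optimal under w."""
    return float(w @ x_hat - (FEASIBLE @ w).min())

def project_simplex(v):
    """Euclidean projection onto the probability simplex {w >= 0, sum w = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def inverse_milp(x_hat, w0, iters=200):
    """Projected subgradient method with step size k^{-1/2} on the suboptimality loss."""
    w = w0
    for k in range(1, iters + 1):
        if suboptimality_loss(w, x_hat) < 1e-12:   # x_hat already optimal: stop
            break
        g = x_hat - milp_oracle(w)                 # subgradient of the loss at w
        w = project_simplex(w - g / np.sqrt(k))    # step size k^{-1/2}
    return w

# Observed solution [1, 1] is optimal exactly for weights with w[0] in [1/3, 2/3].
x_hat = np.array([1.0, 1.0])
w_learned = inverse_milp(x_hat, w0=np.array([0.9, 0.1]))
print(w_learned, suboptimality_loss(w_learned, x_hat))
```

On this toy instance the iterates oscillate across the set of consistent weights with shrinking step sizes until the suboptimality loss hits zero, at which point `x_hat` is exactly recovered as an optimal solution, mirroring the finite-iteration recovery behavior claimed in the abstract.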