
Gradient boosting for convex cone predict and optimize problems

2022-04-14 · Code Available

Andrew Butler, Roy H. Kwon


Abstract

Prediction models are typically optimized independently of the downstream decision optimization. A smart "predict, then optimize" (SPO) framework instead trains prediction models to minimize downstream decision regret. In this paper we present dboost, the first general-purpose implementation of smart gradient boosting for "predict, then optimize" problems. The framework supports convex quadratic cone programming, and gradient boosting is performed by implicit differentiation of a custom fixed-point mapping. Experiments comparing with state-of-the-art SPO methods show that dboost can further reduce out-of-sample decision regret.
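The key mechanism the abstract names is implicit differentiation of a fixed-point mapping: at a fixed point x* = f(x*, θ), the implicit function theorem gives dx*/dθ = (I − ∂f/∂x)⁻¹ ∂f/∂θ, so gradients flow through the solve without unrolling the iterations. Below is a minimal scalar sketch of that general idea; the mapping `f` here is an illustrative toy contraction, not the paper's actual cone-program fixed-point mapping.

```python
import numpy as np

def fixed_point(f, x0, theta, tol=1e-10, max_iter=500):
    """Iterate x <- f(x, theta) until convergence."""
    x = x0
    for _ in range(max_iter):
        x_new = f(x, theta)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy contraction (|df/dx| <= 0.5, so iteration converges):
f = lambda x, theta: 0.5 * np.cos(x) + theta

theta = 0.3
x_star = fixed_point(f, 0.0, theta)

# Implicit differentiation at the fixed point x* = f(x*, theta):
#   dx*/dtheta = (1 - df/dx)^{-1} * df/dtheta
df_dx = -0.5 * np.sin(x_star)  # partial of f w.r.t. x, at x*
df_dtheta = 1.0                # partial of f w.r.t. theta
dx_dtheta = df_dtheta / (1.0 - df_dx)

# Sanity check against a central finite difference through the solver.
eps = 1e-6
fd = (fixed_point(f, 0.0, theta + eps)
      - fixed_point(f, 0.0, theta - eps)) / (2 * eps)
print(abs(dx_dtheta - fd) < 1e-5)
```

The same identity is what lets a boosting round backpropagate decision regret through the optimization layer: only the converged solution and the local Jacobians at the fixed point are needed, not the solver's iteration history.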
