
Convergence Rate of a Functional Learning Method for Contextual Stochastic Optimization

2026-03-13

Noel Smith, Andrzej Ruszczynski


Abstract

We consider a stochastic optimization problem involving two random variables: a context variable X and a dependent variable Y. The objective is to minimize the expected value of a nonlinear loss functional applied to the conditional expectation E[f(X, Y, β) | X], where f is a nonlinear function and β represents the decision variables. We focus on the practically important setting in which direct sampling from the conditional distribution of Y | X is infeasible, and only a stream of i.i.d. observation pairs (X^k, Y^k), k = 0, 1, 2, ..., is available. In our approach, the conditional expectation is approximated within a prespecified parametric function class. We analyze a simultaneous learning-and-optimization algorithm that jointly estimates the conditional expectation and optimizes the outer objective, and establish that the method achieves a convergence rate of order O(1/N), where N denotes the number of observed pairs.
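To make the setting concrete, the following is a minimal sketch of a simultaneous learning-and-optimization loop of the kind the abstract describes: one stream of (X^k, Y^k) pairs drives both a regression update that tracks the conditional expectation E[f(X, Y, β) | X] within a parametric class, and a stochastic gradient step on the outer objective. The toy problem, feature map, step sizes, and chain-rule gradient estimate are our own assumptions for illustration, not the authors' algorithm or rate-optimal tuning.

```python
import numpy as np

# Toy instance (our choice, not from the paper):
#   X ~ N(0,1),  Y = X + 0.5*eps,  f(x, y, beta) = (y - beta*x)^2,
#   outer loss g(u) = u^2.  Here the outer minimizer is beta = 1, since
#   E[f | X=x] = (1 - beta)^2 x^2 + 0.25 is minimized pointwise at beta = 1.
rng = np.random.default_rng(0)

def phi(x):
    """Features of the parametric class approximating x -> E[f(x, Y, beta) | X=x]."""
    return np.array([1.0, x, x * x])

def f(x, y, beta):
    return (y - beta * x) ** 2

def df_dbeta(x, y, beta):
    return -2.0 * x * (y - beta * x)

def g_prime(u):  # derivative of the outer loss g(u) = u^2
    return 2.0 * u

N = 50_000
theta = np.zeros(3)          # parameters of the inner (learning) model
beta = 0.0                   # outer decision variable
A = 1e-3 * np.eye(3)         # running least-squares statistics for theta
b = np.zeros(3)
tail = []

for k in range(N):
    x = rng.normal()
    y = x + 0.5 * rng.normal()

    # Learning step: regress f(X, Y, beta) on phi(X) with accumulated
    # statistics (a simplification -- old samples use stale values of beta).
    p = phi(x)
    A += np.outer(p, p)
    b += f(x, y, beta) * p
    theta = np.linalg.solve(A, b)

    # Optimization step: chain-rule stochastic gradient of g(h_theta(X)),
    # using df/dbeta as a crude sample of the derivative of the conditional
    # expectation; clipped for stability.
    grad = g_prime(theta @ p) * df_dbeta(x, y, beta)
    step = 0.02 / (k + 1) ** 0.6
    beta -= step * float(np.clip(grad, -5.0, 5.0))

    if k >= int(0.9 * N):
        tail.append(beta)

beta_avg = float(np.mean(tail))  # averaged tail iterate of the decision
```

Note that both updates consume the same single observation pair per iteration, which is the defining feature of the simultaneous (single-time-scale) scheme; the O(1/N) rate claimed in the abstract concerns the authors' method, not this sketch.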
