Adversarial Meta-Learning of Gamma-Minimax Estimators That Leverage Prior Knowledge
Hongxiang Qiu, Alex Luedtke
Code: github.com/qiu-hongxiang-david/gamma-minimax-learning (official implementation, PyTorch)
Abstract
Bayes estimators are well known to provide a means to incorporate prior knowledge that can be expressed in terms of a single prior distribution. However, when this knowledge is too vague to express with a single prior, an alternative approach is needed. Gamma-minimax estimators provide such an approach. These estimators minimize the worst-case Bayes risk over a set of prior distributions that are compatible with the available knowledge. Traditionally, Gamma-minimaxity is defined for parametric models. In this work, we define Gamma-minimax estimators for general models and propose adversarial meta-learning algorithms to compute them when the set of prior distributions is constrained by generalized moments. Accompanying convergence guarantees are also provided. We also introduce a neural network class that provides a rich, but finite-dimensional, class of estimators from which a Gamma-minimax estimator can be selected. We illustrate our method in two settings, namely entropy estimation and a prediction problem that arises in biodiversity studies.
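To make the adversarial meta-learning idea in the abstract concrete, the following is a hedged toy sketch (not the authors' implementation; see the linked repository for that): alternate gradient descent on an estimator's parameters with multiplicative-weights ascent on a prior over a grid, where the prior set Gamma is defined by a generalized moment constraint, enforced here by a quadratic penalty. The model, estimator class, learning rates, and constraint are all illustrative assumptions chosen so the Bayes risk has a closed form.

```python
import numpy as np

# Toy Gamma-minimax sketch (illustrative assumptions throughout).
# Model: X_1, ..., X_n iid Bernoulli(theta); estimator delta(xbar) = a*xbar + b.
# Squared-error risk of (a, b) at a point mass on theta has the closed form
#   R(a, b, theta) = a^2 theta (1 - theta) / n + ((a - 1) theta + b)^2.

n = 20
thetas = np.linspace(0.0, 1.0, 21)        # support grid for candidate priors

def risk(a, b, th):
    return a ** 2 * th * (1 - th) / n + ((a - 1) * th + b) ** 2

# Gamma = mixtures w over the grid obeying the generalized moment constraint
# E_w[theta] in [0.4, 0.6], enforced here by a quadratic penalty.
a, b = 1.0, 0.0                           # estimator parameters
w = np.ones_like(thetas) / len(thetas)    # adversary's prior weights
lr_est, lr_adv, pen = 0.5, 5.0, 100.0
T = 3000
a_sum = b_sum = 0.0

for _ in range(T):
    r = risk(a, b, thetas)
    m = w @ thetas
    # Adversary: multiplicative-weights ascent on the Bayes risk minus the
    # gradient of the moment-constraint penalty.
    grad_pen = 2 * pen * (max(0.0, m - 0.6) - max(0.0, 0.4 - m)) * thetas
    w = w * np.exp(lr_adv * (r - grad_pen))
    w /= w.sum()
    # Estimator: gradient descent on the current Bayes risk sum_i w_i r_i.
    grad_a = w @ (2 * a * thetas * (1 - thetas) / n
                  + 2 * ((a - 1) * thetas + b) * thetas)
    grad_b = w @ (2 * ((a - 1) * thetas + b))
    a, b = a - lr_est * grad_a, b - lr_est * grad_b
    a_sum, b_sum = a_sum + a, b_sum + b

# Report averaged iterates, a standard choice for saddle-point dynamics.
a_bar, b_bar = a_sum / T, b_sum / T
print(a_bar, b_bar, w @ thetas)
```

The learned rule shrinks the sample mean toward the constrained region, as one would expect of a Gamma-minimax estimator: the paper's method plays the same min-max game, but with a neural network estimator class and Monte Carlo risk estimates in place of this closed-form toy.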