SOTAVerified

MDL-motivated compression of GLM ensembles increases interpretability and retains predictive power

2016-11-21

Boris Hayete, Matthew Valko, Alex Greenfield, Raymond Yan

Abstract

Over the years, ensemble methods have become a staple of machine learning, and generalized linear models (GLMs) are widely used for a broad variety of statistical inference tasks. Ensembles have been shown to enhance out-of-sample predictive power, while GLMs offer easy interpretability. Recently, ensembles of GLMs have been proposed, but this approach sacrifices the interpretability that individual GLMs possess. We show that minimum description length (MDL)-motivated compression of the inferred ensembles can recover interpretability with little, if any, loss of predictive performance, and we illustrate this on a number of standard classification data sets.
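The idea in the abstract can be sketched in a few lines: fit a bagged ensemble of logistic-regression GLMs, then collapse it into a single interpretable GLM. In this toy sketch, simple coefficient averaging stands in for the paper's MDL-motivated compression; the data, the gradient-descent fitter, and all names here are illustrative assumptions, not the authors' method or code.

```python
# Sketch: bagged ensemble of logistic-regression GLMs, then a "compressed"
# single GLM obtained by averaging member coefficients. Coefficient averaging
# is a stand-in for the MDL-motivated compression described in the abstract.
import numpy as np

rng = np.random.default_rng(0)

def fit_logistic(X, y, lr=0.1, steps=500):
    """Plain gradient-descent logistic regression; returns a weight vector."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)       # gradient of log-loss
    return w

# Toy data: labels follow a (hypothetical) linear rule plus noise.
n, d = 200, 3
X = rng.normal(size=(n, d))
true_w = np.array([2.0, -1.0, 0.5])
y = (X @ true_w + 0.3 * rng.normal(size=n) > 0).astype(float)

# Bagged ensemble: each member is fit on a bootstrap resample.
members = [fit_logistic(X[idx], y[idx])
           for idx in (rng.integers(0, n, size=n) for _ in range(10))]

# "Compression": one interpretable GLM with the mean coefficients.
w_compressed = np.mean(members, axis=0)

def predict(w, X):
    return (X @ w > 0).astype(float)

# Majority-vote ensemble prediction vs. the single compressed GLM.
ens_pred = (np.mean([predict(w, X) for w in members], axis=0) > 0.5).astype(float)
acc_ens = np.mean(ens_pred == y)
acc_comp = np.mean(predict(w_compressed, X) == y)
```

The compressed model exposes one coefficient per feature, so it can be read off like any ordinary GLM, while (on this easy toy problem) tracking the ensemble's training accuracy closely.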
