Quasi-Bayesian sequential deconvolution
Stefano Favaro, Sandra Fortini
Abstract
Density deconvolution deals with the estimation of the probability density function f of a random signal from n ≥ 1 data observed with independent and known additive random noise. This is a classical problem in statistics, for which frequentist and Bayesian nonparametric approaches are available to estimate f in static or batch domains. In this paper, we consider the problem of density deconvolution in a streaming or online domain, and develop a principled sequential approach to estimate f. By relying on a quasi-Bayesian sequential (learning) model for the data, often referred to as Newton's algorithm, we obtain a sequential deconvolution estimate f_n of f that is easy to evaluate, computationally efficient, and has constant computational cost as data increase, which is desirable for streaming data. In particular, local and uniform Gaussian central limit theorems for f_n are established, leading to asymptotic credible intervals and bands for f, respectively. We provide the sequential deconvolution estimate f_n with large sample asymptotic guarantees under the quasi-Bayesian sequential model for the data, proving a merging with respect to the direct density estimation problem, and also under a "true" frequentist model for the data, proving consistency. An empirical validation of our methods is presented on synthetic and real data, including comparisons with a kernel approach and a Bayesian nonparametric approach based on a Dirichlet process mixture prior.
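The abstract does not spell out the recursion itself. As a rough illustration of the kind of procedure Newton's algorithm refers to, the following sketch applies a recursive quasi-Bayesian update to additive-noise deconvolution on a discretized grid. The function name, the flat initialization, and the learning-rate choice alpha_n = 1/(n+1) are illustrative assumptions, not the paper's exact specification:

```python
import numpy as np

def newton_deconvolve(y, grid, noise_pdf):
    """Recursive quasi-Bayesian (Newton-style) deconvolution sketch.

    y         : 1-d array of noisy observations y_i = x_i + eps_i
    grid      : equally spaced points at which the signal density is estimated
    noise_pdf : density of the known additive noise, vectorized over arrays
    """
    dx = grid[1] - grid[0]
    # flat initial guess for the signal density f_0 (an assumption)
    f = np.ones_like(grid) / (grid[-1] - grid[0])
    for n, yi in enumerate(y, start=1):
        alpha = 1.0 / (n + 1)            # learning-rate sequence (illustrative)
        lik = noise_pdf(yi - grid)       # noise likelihood g(y_i - x) on the grid
        post = lik * f
        post /= post.sum() * dx          # normalized one-step "posterior"
        # convex-combination update: constant cost per observation
        f = (1 - alpha) * f + alpha * post
    return f

# usage: recover a N(0,1) signal observed with N(0, 0.5^2) noise
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 2000)
y = x + rng.normal(0.0, 0.5, 2000)
grid = np.linspace(-5.0, 5.0, 201)
noise_pdf = lambda e: np.exp(-e**2 / (2 * 0.25)) / np.sqrt(2 * np.pi * 0.25)
f_hat = newton_deconvolve(y, grid, noise_pdf)
```

Each observation triggers one grid-sized update and is then discarded, which is why the per-step cost stays constant as data accrue, matching the streaming setting the abstract emphasizes.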