Data fission: splitting a single data point
James Leiner, Boyan Duan, Larry Wasserman, Aaditya Ramdas
Abstract
Suppose we observe a random vector X from some distribution P in a known family with unknown parameters. We ask the following question: when is it possible to split X into two parts f(X) and g(X) such that neither part is sufficient to reconstruct X by itself, but both together can recover X fully, and the joint distribution of (f(X), g(X)) is tractable? As one example, if X = (X_1, …, X_n) and P is a product distribution, then for any m < n we can split the sample to define f(X) = (X_1, …, X_m) and g(X) = (X_{m+1}, …, X_n). Rasines and Young (2022) offer an alternative approach that uses additive Gaussian noise; this enables post-selection inference in finite samples for Gaussian-distributed data and asymptotically when errors are non-Gaussian. In this paper, we offer a more general methodology for achieving such a split in finite samples by borrowing ideas from Bayesian inference to yield a (frequentist) solution that can be viewed as a continuous analog of data splitting. We call our method data fission, as an alternative to data splitting, data carving, and p-value masking. We exemplify the method on a few prototypical applications, such as post-selection inference for trend filtering and other regression problems.
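To make the idea concrete, here is a minimal sketch of the additive-Gaussian-noise construction mentioned above, for a single Gaussian sample X ~ N(mu, sigma^2) with sigma assumed known. The split f(X) = X + Z and g(X) = X - Z with independent Z ~ N(0, sigma^2) is one simple instance (the noise scale and exact form used in the paper may differ): f and g are jointly Gaussian with zero covariance, hence independent, yet their average recovers X exactly. Variable names and the choice of matched noise variance here are illustrative, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
mu, sigma = 2.0, 1.5          # sigma assumed known

X = rng.normal(mu, sigma, n)  # observed data
Z = rng.normal(0.0, sigma, n) # external noise with matched variance

f = X + Z                     # one part, e.g. revealed for selection
g = X - Z                     # other part, e.g. held out for inference

# Together the two parts reconstruct X exactly:
X_rec = (f + g) / 2

# For Gaussian X, Cov(X+Z, X-Z) = Var(X) - Var(Z) = 0, so f and g
# are independent; their empirical correlation is near zero:
corr = np.corrcoef(f, g)[0, 1]
print(corr)
```

Because Cov(f, g) = sigma^2 - sigma^2 = 0 and (f, g) is jointly Gaussian, the two pieces carry complementary, non-overlapping information about X, which is exactly the property that makes post-selection inference possible without physically splitting the sample.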