
Some Inapproximability Results of MAP Inference and Exponentiated Determinantal Point Processes

2021-09-02

Naoto Ohsaka


Abstract

We study the computational complexity of two hard problems on determinantal point processes (DPPs). One is maximum a posteriori (MAP) inference, i.e., to find a principal submatrix having the maximum determinant. The other is probabilistic inference on exponentiated DPPs (E-DPPs), which can sharpen or weaken the diversity preference of DPPs with an exponent parameter p. We present several complexity-theoretic hardness results that explain the difficulty in approximating MAP inference and the normalizing constant for E-DPPs. We first prove that unconstrained MAP inference for an n × n matrix is NP-hard to approximate within a factor of 2^(βn), where β = 10^(-10^13). This result improves upon the best-known inapproximability factor of (9/8 − ε), and rules out the existence of any polynomial-factor approximation algorithm assuming P ≠ NP. We then show that log-determinant maximization is NP-hard to approximate within a factor of 5/4 for the unconstrained case and within a factor of 1 + 10^(-10^13) for the size-constrained monotone case. In particular, log-determinant maximization does not admit a polynomial-time approximation scheme unless P = NP. As a corollary of the first result, we demonstrate that the normalizing constant for E-DPPs of any (fixed) constant exponent p ≥ β^(-1) = 10^(10^13) is NP-hard to approximate within a factor of 2^(βpn), which is in contrast to the case of p ≤ 1 admitting a fully polynomial-time randomized approximation scheme.
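To make the two objects studied in the abstract concrete, here is a minimal brute-force sketch (not the paper's method, which concerns hardness rather than algorithms): MAP inference searches over all principal submatrices of a kernel L for the one with maximum determinant, and the E-DPP normalizing constant Z_p sums det(L_S)^p over all subsets S. Both loops take exponential time, which is exactly why the approximability questions above matter. The function names are illustrative, not from the paper.

```python
import itertools
import numpy as np

def map_inference_bruteforce(L):
    """Exact unconstrained MAP inference: return the index set S
    maximizing det(L_S) over all principal submatrices of L."""
    n = L.shape[0]
    best_set, best_det = (), 1.0  # det of the empty submatrix is 1 by convention
    for r in range(1, n + 1):
        for S in itertools.combinations(range(n), r):
            d = np.linalg.det(L[np.ix_(S, S)])
            if d > best_det:
                best_set, best_det = S, d
    return best_set, best_det

def normalizing_constant(L, p=1.0):
    """E-DPP normalizer Z_p = sum over all subsets S of det(L_S)^p.
    For p = 1 this equals det(L + I)."""
    n = L.shape[0]
    total = 1.0  # empty-set term
    for r in range(1, n + 1):
        for S in itertools.combinations(range(n), r):
            total += np.linalg.det(L[np.ix_(S, S)]) ** p
    return total
```

For p = 1 the identity Z_1 = det(L + I) gives a quick sanity check of the enumeration; for general p no such closed form is available, and the abstract's corollary shows that for huge constant p even approximating Z_p is NP-hard.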
