
One-Shot Learning for k-SAT

2025-02-10

Andreas Galanis, Leslie Ann Goldberg, Xusheng Zhang


Abstract

Consider a k-SAT formula Φ where every variable appears at most d times, and let σ be a satisfying assignment of Φ sampled proportionally to e^(β·m(σ)), where m(σ) is the number of variables set to true and β is a real parameter. Given Φ and σ, can we learn the value of β efficiently? This problem falls into a recent line of works about single-sample ("one-shot") learning of Markov random fields. The k-SAT setting we consider here was recently studied by Galanis, Kandiros, and Kalavasis (SODA'24), where they showed that single-sample learning is possible when roughly d ≲ 2^(k/6.45) and impossible when d ≳ (k+1)·2^(k-1). Crucially, for their impossibility results they used the existence of unsatisfiable instances which, aside from the gap in d, left open the question of whether the feasibility threshold for one-shot learning is dictated by the satisfiability threshold of k-SAT formulas of bounded degree. Our main contribution is to answer this question negatively. We show that one-shot learning for k-SAT is infeasible well below the satisfiability threshold; in fact, we obtain impossibility results for degrees d as low as k^2 when β is sufficiently large, and bootstrap this to small values of β when d scales exponentially with k, via a probabilistic construction. On the positive side, we simplify the analysis of the learning algorithm and obtain significantly stronger bounds on d in terms of β. In particular, for the uniform case β = 0 that has been studied extensively in the sampling literature, our analysis shows that learning is possible under the condition d ≲ 2^(k/2). This is nearly optimal (up to constant factors) in the sense that it is known that sampling a uniformly-distributed satisfying assignment is NP-hard for d ≳ 2^(k/2).
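To make the sampling distribution concrete, here is a minimal brute-force sketch (not from the paper) that enumerates the satisfying assignments of a hypothetical toy 3-SAT formula and weights each assignment σ proportionally to e^(β·m(σ)), where m(σ) counts the variables set to true. The formula, variable count, and function names are illustrative assumptions.

```python
import itertools
import math

# Hypothetical 3-SAT formula over variables 0..3; each clause is a list of
# (variable, desired_value) literals. Chosen only for illustration.
clauses = [[(0, True), (1, False), (2, True)],
           [(1, True), (2, False), (3, True)],
           [(0, False), (2, True), (3, False)]]
n = 4  # number of variables

def satisfies(sigma):
    """Check that every clause has at least one literal matching sigma."""
    return all(any(sigma[v] == want for v, want in clause) for clause in clauses)

def weighted_satisfying(beta):
    """Return (assignment, probability) pairs, with probability
    proportional to e^(beta * m(sigma)) over satisfying assignments."""
    sats = [s for s in itertools.product([False, True], repeat=n)
            if satisfies(s)]
    weights = [math.exp(beta * sum(s)) for s in sats]  # sum(s) == m(sigma)
    total = sum(weights)
    return [(s, w / total) for s, w in zip(sats, weights)]

# beta = 0 recovers the uniform distribution over satisfying assignments,
# the case the abstract highlights.
dist = weighted_satisfying(0.0)
assert all(abs(p - 1 / len(dist)) < 1e-12 for _, p in dist)
```

As β grows, the distribution concentrates on satisfying assignments with many true variables; the one-shot learning question is whether β can be recovered from a single draw σ (together with Φ) from this distribution.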
