
Kolmogorov Complexity Bounds for LLM Steganography and a Perplexity-Based Detection Proxy

2026-03-23

Andrii Shportko


Abstract

Large language models can rewrite text to embed hidden payloads while preserving surface-level meaning, a capability that opens covert channels between cooperating AI systems and poses challenges for alignment monitoring. We study the information-theoretic cost of such embedding. Our main result is that any steganographic scheme that preserves the semantic load of a covertext M_1 while encoding a payload P into a stegotext M_2 must satisfy K(M_2) ≥ K(M_1) + K(P) − O(log n), where K denotes Kolmogorov complexity and n is the combined message length. A corollary is that any non-trivial payload forces a strict complexity increase in the stegotext, regardless of how cleverly the encoder distributes the signal. Because Kolmogorov complexity is uncomputable, we ask whether practical proxies can detect the predicted increase. Drawing on the classical correspondence between lossless compression and Kolmogorov complexity, we argue that language-model perplexity plays an analogous role in the probabilistic regime and propose the Binoculars perplexity-ratio score as one such proxy. Preliminary experiments with a color-based LLM steganographic scheme support the theoretical prediction: a paired t-test over 300 samples yields t = 5.11, p < 10^-6.
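The two quantitative ingredients of the abstract, a Binoculars-style perplexity-ratio score and a paired t-test, can be sketched as below. This is a minimal illustration, not the paper's implementation: the function names are invented, and the token-level log-probability lists stand in for outputs of real observer and performer language models.

```python
import math

def perplexity(token_logprobs):
    # Perplexity of a sequence: exp(-mean token log-likelihood).
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def binoculars_score(observer_logprobs, cross_logprobs):
    # Binoculars-style ratio: observer log-perplexity divided by
    # cross log-perplexity. `observer_logprobs` are the log-probs the
    # observer model assigns to the text; `cross_logprobs` stand in for
    # the cross-perplexity term (performer predictions scored by the
    # observer). Both inputs here are illustrative assumptions.
    return math.log(perplexity(observer_logprobs)) / math.log(perplexity(cross_logprobs))

def paired_t(xs, ys):
    # Paired t-statistic for matched samples xs[i] vs ys[i]
    # (e.g. scores of a covertext and its stegotext rewrite).
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)
```

In practice the log-probabilities would come from two language models scoring the same token sequence, and `paired_t` would be applied to the per-sample score differences, as in the 300-sample experiment reported above.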
