
Exact Expressive Power of Transformers with Padding

2025-05-25

William Merrill, Ashish Sabharwal


Abstract

Chain of thought is a natural inference-time method for increasing the computational power of transformer-based large language models (LLMs), but comes at the cost of sequential decoding. Are there more efficient alternatives to expand a transformer's expressive power without adding parameters? We consider transformers with padding tokens as a form of parallelizable test-time compute. We show that averaging-hard-attention, masked-pre-norm transformers with polynomial padding converge to precisely the class TC^0 of extremely parallelizable problems. While the TC^0 upper bound was known, proving a matching lower bound had been elusive. Further, our novel analysis reveals the precise expanded power of padded transformers when coupled with another form of inference-time compute, namely dynamically increasing depth via looping. Our core technical contribution is to show how padding helps bring the notions of complete problems and reductions, which have been a cornerstone of classical complexity theory, to the formal study of transformers. Armed with this new tool, we prove that padded transformers with O(log^d n) looping on inputs of length n recognize exactly the class TC^d of moderately parallelizable problems. Thus, padding and looping together systematically expand transformers' expressive power: with polylogarithmic looping, padded transformers converge to the class NC, the best that could be expected without losing parallelism (unless NC = P). Our results thus motivate further exploration of padding and looping as parallelizable alternatives to chain of thought.
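As a compact, informal restatement of the abstract's main claims (a summary sketch in the abstract's own notation, not the authors' formal theorem statements; here AHAT abbreviates the averaging-hard-attention, masked-pre-norm transformers considered above):

```latex
% Informal summary of the expressivity results stated in the abstract.
% "=" is read as "recognizes exactly the languages in the class".
\begin{align*}
  \text{AHAT} + \mathrm{poly}(n)\ \text{padding}
    &= \mathsf{TC}^0 \\
  \text{AHAT} + \mathrm{poly}(n)\ \text{padding} + O(\log^d n)\ \text{looping}
    &= \mathsf{TC}^d \\
  \text{AHAT} + \mathrm{poly}(n)\ \text{padding} + \mathrm{polylog}(n)\ \text{looping}
    &= \mathsf{NC}
\end{align*}
```

Note that the d = 0 case (constant looping) recovers the TC^0 result, so the looping characterization generalizes the padding-only one uniformly.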
