
Contextual Feedback Loops: Amplifying Deep Reasoning with Iterative Top-Down Feedback

2024-12-23 · Code Available

Jacob Fein-Ashley, Rajgopal Kannan, Viktor Prasanna


Abstract

Conventional deep networks rely on one-way, bottom-up inference and provide no mechanism for reconciling high-level predictions with lower-level representations. We propose Contextual Feedback Loops (CFLs), a lightweight mechanism that re-injects top-down context into earlier layers for iterative refinement. Concretely, CFLs map the network's prediction to a compact context vector, which is fused back into each layer via gating adapters. Unrolled over multiple feedback steps, CFLs unify feed-forward and feedback-driven inference, letting top-level outputs continually refine lower-level features. Despite minimal overhead, CFLs yield consistent gains on tasks including CIFAR-10, ImageNet-1k, SpeechCommands, and GLUE SST-2. Moreover, by a Banach fixed-point argument under mild Lipschitz conditions, these updates converge stably. Overall, CFLs show that even modest top-down feedback can substantially improve deep models, aligning with cognitive theories of iterative perception.
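The abstract describes the CFL loop at a high level: compute a prediction, project it to a compact context vector, and gate earlier features with that context before re-predicting. The sketch below illustrates this cycle with a tiny NumPy model; all dimensions, weight names (`W_ctx`, `W_gate`), and the specific gating form are hypothetical stand-ins, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: input, hidden features, output logits, context vector
d_in, d_hid, d_out, d_ctx = 8, 16, 4, 6

# Feed-forward weights
W1 = rng.normal(0, 0.1, (d_in, d_hid))
W2 = rng.normal(0, 0.1, (d_hid, d_out))

# CFL components (illustrative): a projection from the prediction to a
# compact context vector, and a gating adapter from context to per-unit gates
W_ctx = rng.normal(0, 0.1, (d_out, d_ctx))
W_gate = rng.normal(0, 0.1, (d_ctx, d_hid))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cfl_forward(x, n_steps=3):
    """Unrolled CFL-style inference: one bottom-up pass, then n_steps of
    top-down refinement in which the current prediction gates the features."""
    h = np.tanh(x @ W1)   # initial bottom-up features
    y = h @ W2            # initial prediction (logits)
    for _ in range(n_steps):
        ctx = np.tanh(y @ W_ctx)       # map prediction to context vector
        gate = sigmoid(ctx @ W_gate)   # gating adapter, values in (0, 1)
        h = gate * np.tanh(x @ W1)     # fuse context back into features
        y = h @ W2                     # refined prediction
    return y

x = rng.normal(size=(1, d_in))
print(cfl_forward(x, n_steps=3).shape)  # (1, 4)
```

With small gate-sensitivity (a Lipschitz-style condition on the feedback map), successive refinements shrink toward a fixed point, which is the intuition behind the abstract's stability claim.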
