Learning to Stop Overthinking at Test Time

2025-02-16

Hieu Tran Bao, Nguyen Cong Dat, Nguyen Duc Anh, Hoang Thanh Tung

Abstract

Test-time scaling is currently one of the most active research areas, showing promise now that training-time scaling has reached its limits. Deep-thinking (DT) models are a class of recurrent models that can perform easy-to-hard generalization by assigning more compute to harder test samples. However, because they cannot determine the complexity of a test sample, DT models must spend a large amount of computation on both easy and hard samples. Excessive test-time computation is wasteful and can cause the "overthinking" problem, where more test-time computation leads to worse results. In this paper, we introduce a test-time training method for determining the optimal amount of computation needed for each sample. We also propose Conv-LiGRU, a novel recurrent architecture for efficient and robust visual reasoning. Extensive experiments demonstrate that Conv-LiGRU is more stable than DT, effectively mitigates the "overthinking" phenomenon, and achieves superior accuracy.
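The two ingredients described above are a recurrent Conv-LiGRU cell and a rule for deciding when to stop iterating on a given test sample. The sketch below is illustrative only: it assumes a LiGRU-style cell (single update gate, ReLU candidate state, no reset gate) with convolutional gates, and it substitutes a simple fixed-point convergence check for the paper's learned test-time stopping method. All module names and hyperparameters are assumptions, not the authors' implementation.

```python
# Minimal, assumption-laden sketch of a convolutional LiGRU-style cell plus a
# naive iterate-until-stable test-time loop; not the authors' actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvLiGRUCell(nn.Module):
    """LiGRU-style recurrent cell with convolutional gates (illustrative)."""

    def __init__(self, in_channels: int, hidden_channels: int, kernel_size: int = 3):
        super().__init__()
        padding = kernel_size // 2
        # Input-to-hidden and hidden-to-hidden convolutions for gate and candidate.
        self.conv_x = nn.Conv2d(in_channels, 2 * hidden_channels, kernel_size, padding=padding)
        self.conv_h = nn.Conv2d(hidden_channels, 2 * hidden_channels, kernel_size,
                                padding=padding, bias=False)
        self.norm = nn.GroupNorm(1, 2 * hidden_channels)  # stabilizes the recurrence

    def forward(self, x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        gates = self.norm(self.conv_x(x)) + self.conv_h(h)
        z_pre, c_pre = gates.chunk(2, dim=1)
        z = torch.sigmoid(z_pre)       # update gate
        c = F.relu(c_pre)              # ReLU candidate state (no reset gate in LiGRU)
        return z * h + (1.0 - z) * c   # leaky integration toward the candidate


@torch.no_grad()
def run_with_early_stop(cell, readout, x, h0, max_steps=50, tol=1e-3):
    """Iterate the cell, stopping once the hidden state stops changing.

    A fixed-point criterion stands in for the paper's learned test-time
    stopping rule; `readout` maps the final hidden state to task outputs.
    """
    h = h0
    for step in range(1, max_steps + 1):
        h_next = cell(x, h)
        delta = (h_next - h).abs().mean()
        h = h_next
        if delta < tol:  # state has converged; extra steps risk "overthinking"
            break
    return readout(h), step
```

Under this kind of scheme, easy samples converge after a few recurrent steps while harder samples keep iterating up to the budget, which is the adaptive per-sample compute behavior the abstract describes.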
