SOTAVerified

Segment-Level Attribution for Selective Learning of Long Reasoning Traces

2026-01-31

Siyuan Wang, Yanchen Liu, Xiang Ren


Abstract

Large Reasoning Models (LRMs) achieve strong reasoning performance by generating long chains of thought (CoTs), yet only a small fraction of these traces meaningfully contributes to answer prediction, while the majority contains repetitive or truncated content. Such output redundancy is further propagated after supervised fine-tuning (SFT), as models learn to imitate verbose but uninformative patterns, which can degrade performance. To address this, we incorporate integrated-gradient attribution to quantify each token's influence on the final answer and aggregate these scores into two segment-level metrics: (1) attribution strength, which measures the overall attribution magnitude; and (2) direction consistency, which captures whether tokens' attributions within a segment are uniformly positive or negative (high consistency) or a mixture of both (moderate consistency). Based on these two metrics, we propose a segment-level selective learning framework that identifies important segments with high attribution strength but moderate consistency, indicating reflective rather than shallow reasoning. The framework then applies selective SFT on these important segments while masking the loss for unimportant ones. Experiments across multiple models and datasets show that our approach improves accuracy and output efficiency, enabling more effective learning from long reasoning traces. Code and data are available at https://github.com/SiyuanWangw/SegmentSelectiveSFT.
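The two segment-level metrics and the selection rule described above can be sketched as follows. This is a minimal illustration only: it assumes per-token attribution scores (e.g., from integrated gradients) have already been computed, and the function names, aggregation choices (mean absolute value for strength, normalized signed sum for consistency), and thresholds are assumptions for demonstration, not the paper's exact formulation.

```python
import numpy as np

def attribution_strength(token_attrs):
    # Overall attribution magnitude of a segment:
    # mean absolute per-token attribution (assumed aggregation).
    return float(np.mean(np.abs(token_attrs)))

def direction_consistency(token_attrs):
    # 1.0 when all token attributions share one sign (uniformly
    # positive or negative); near 0.0 when signs are mixed.
    denom = np.sum(np.abs(token_attrs))
    if denom == 0:
        return 0.0
    return float(abs(np.sum(token_attrs)) / denom)

def select_segments(segments, strength_thresh, cons_lo, cons_hi):
    # Keep segments with HIGH strength but MODERATE consistency,
    # i.e., influential tokens pulling in both directions
    # (reflective reasoning rather than shallow repetition).
    selected = []
    for i, attrs in enumerate(segments):
        s = attribution_strength(attrs)
        c = direction_consistency(attrs)
        if s >= strength_thresh and cons_lo <= c <= cons_hi:
            selected.append(i)
    return selected

# Toy example: three segments of per-token attribution scores.
segments = [
    np.array([0.6, -0.2, 0.5, -0.3]),   # strong, mixed signs -> selected
    np.array([0.5, 0.5, 0.6, 0.4]),     # strong but uniformly positive
    np.array([0.01, -0.02, 0.01]),      # negligible attribution
]
print(select_segments(segments, strength_thresh=0.1,
                      cons_lo=0.2, cons_hi=0.8))  # -> [0]
```

During selective SFT, the token positions of the unselected segments would simply be excluded from the loss (e.g., set to the ignore index in a cross-entropy loss), so the model only imitates the segments judged important.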
