SOTAVerified

Complexity-aware fine-tuning

2025-06-26

Andrey Goncharov, Daniil Vyazhev, Petr Sychev, Edvard Khalafyan, Alexey Zaytsev


Abstract

General-purpose Large Language Models (LLMs) are frequently fine-tuned through supervised fine-tuning (SFT) to enhance performance in specific domains. Better results can be achieved by distilling the chain-of-thought of a larger model, at the cost of numerous expensive calls and a much larger amount of data. We propose a novel blueprint for efficient fine-tuning that uses reasoning only for complex data, identified by entropy. Specifically, across two small open models (3B) we split the training data into complexity categories by single-token answer entropy (ROC AUC 0.73), fine-tune the models via SFT and distillation, and show that our pipeline significantly outperforms the standard SFT approach (0.55 vs. 0.43 average accuracy) and matches distillation while using 62% less data (0.55 average accuracy for both). We publish our code and data to facilitate further research in this direction.
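The routing idea in the abstract — score each training example by the entropy of the model's single-token answer distribution, then reserve chain-of-thought distillation for high-entropy (complex) examples — can be sketched as follows. This is a minimal illustration, not the authors' released code: the function names, the example data, and the fixed threshold are all hypothetical (in practice a threshold would be chosen on held-out data, and the probabilities would come from the small model's next-token distribution over answer tokens).

```python
import math

def token_entropy(probs):
    """Shannon entropy (in nats) of a next-token probability distribution.

    A peaked distribution (model is confident in its one-token answer)
    yields low entropy; a flat distribution yields high entropy.
    """
    return -sum(p * math.log(p) for p in probs if p > 0)

def split_by_complexity(examples, answer_probs, threshold):
    """Route examples into 'simple' (plain SFT) and 'complex' (distillation).

    `answer_probs` maps an example to the model's probability distribution
    over candidate single-token answers; both are hypothetical interfaces.
    """
    simple, complex_ = [], []
    for ex in examples:
        h = token_entropy(answer_probs(ex))
        (complex_ if h > threshold else simple).append(ex)
    return simple, complex_

if __name__ == "__main__":
    # Toy 4-way answer distributions: confident vs. uncertain model.
    dists = {
        "easy_q": [0.97, 0.01, 0.01, 0.01],   # low entropy
        "hard_q": [0.25, 0.25, 0.25, 0.25],   # maximal entropy, ln(4) ≈ 1.386
    }
    simple, complex_ = split_by_complexity(
        list(dists), lambda ex: dists[ex], threshold=0.5
    )
    print(simple, complex_)
```

Only the `complex_` subset would then be sent to the larger teacher model for chain-of-thought traces, which is what allows the pipeline to cut data (and teacher calls) while keeping distillation-level accuracy.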
