SOTAVerified

Efficient Story Point Estimation With Comparative Learning

2026-03-16

Monoshiz Mahbub Khan, Xiaoyin Xi, Andrew Meneely, Yiming Tang, Zhe Yu



Abstract

Story points are unitless, project-specific effort estimates that help developers plan their sprints. Traditionally, developers have collaboratively estimated story points using planning poker or other manual techniques. Machine learning can reduce this burden, but only with sufficient context from the historical decisions made by the project team. That is, state-of-the-art models, such as GPT2SP and FastText-SVM, only make accurate (within-project) predictions when they are trained on data from the same project. The goal of this study is to streamline story point estimation by evaluating a comparative learning-based framework for calibrating project-specific story point prediction models. Instead of assigning a specific story point value to every backlog item, developers are presented with pairs of items and asked to indicate which item requires more effort. Using these comparative judgments, a machine learning model was trained to predict story point estimates. We empirically evaluated our technique using data from 23,313 manual estimates across 16 projects. The model trained on comparative judgments achieved, on average, a 0.34 Spearman's rank correlation coefficient between its predictions and the ground truth story points, which matches or exceeds the performance of a state-of-the-art regression model trained on ground truth story points. Human subject experiments further validated the advantages of comparative judgments: participants showed higher confidence, spent less annotation time, and reached comparable agreement relative to direct ratings. In summary, the proposed comparative learning approach is more efficient than regression-based approaches, given its better performance, lower required annotation time, and higher training data reliability.
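The core idea of learning effort scores from pairwise "which item requires more effort?" judgments can be illustrated with a small sketch. This is not the paper's implementation; it is a generic Bradley-Terry-style pairwise logistic model on synthetic data (all names, sizes, and noise levels here are illustrative assumptions), showing how comparative labels alone can recover a ranking that correlates with ground truth effort.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 30 backlog items with hidden "true effort" values.
n_items = 30
true_effort = rng.normal(size=n_items)

# Comparative judgments: for random pairs (i, j), label True if item i
# requires more effort than item j, with a little annotator noise.
pairs = rng.integers(0, n_items, size=(300, 2))
pairs = pairs[pairs[:, 0] != pairs[:, 1]]
labels = (true_effort[pairs[:, 0]] - true_effort[pairs[:, 1]]
          + rng.normal(scale=0.3, size=len(pairs))) > 0

# Pairwise logistic (Bradley-Terry-style) model: learn one latent score
# per item so that sigmoid(s_i - s_j) matches the judgments.
scores = np.zeros(n_items)
lr = 0.1
for _ in range(500):
    diff = scores[pairs[:, 0]] - scores[pairs[:, 1]]
    p = 1.0 / (1.0 + np.exp(-diff))
    grad = p - labels                      # d(log-loss)/d(diff)
    g = np.zeros(n_items)
    np.add.at(g, pairs[:, 0], grad)        # accumulate per-item gradients
    np.add.at(g, pairs[:, 1], -grad)
    scores -= lr * g / len(pairs)

# Spearman's rank correlation between learned scores and true effort.
def spearman(a, b):
    ra, rb = np.argsort(np.argsort(a)), np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

print(round(spearman(scores, true_effort), 2))
```

Because only the relative ordering matters, the learned scores can then be calibrated to a project's story point scale; the paper's evaluation uses real manual estimates rather than this synthetic setup.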
