
Not All Options Are Created Equal: Textual Option Weighting for Token-Efficient LLM-Based Knowledge Tracing

2024-10-14

JongWoo Kim, SeongYeub Chu, Bryan Wong, Mun Yi


Abstract

Large Language Models (LLMs) have recently emerged as promising tools for knowledge tracing (KT) due to their strong reasoning and generalization abilities. While recent LLM-based KT methods have proposed new prompt formats, they struggle to represent the full interaction histories of example learners within a single prompt during in-context learning (ICL), resulting in limited scalability and high computational cost under token constraints. In this work, we present LLM-based Option-weighted Knowledge Tracing (LOKT), a simple yet effective framework that encodes the interaction histories of example learners in context as textual categorical option weights (TCOW). TCOW are semantic labels (e.g., "inadequate") assigned to the options selected by learners when answering questions, enhancing the interpretability of LLMs. Experiments on multiple-choice datasets show that LOKT outperforms existing non-LLM and LLM-based KT models in both cold-start and warm-start settings. Moreover, LOKT enables scalable and cost-efficient inference, achieving strong performance even under strict token constraints. Our code is available at https://anonymous.4open.science/r/LOKT_model-3233.
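To make the core idea concrete, below is a minimal sketch (not the authors' implementation) of how textual categorical option weights might work in practice: a numeric weight attached to each answer option is binned into a short semantic label, and a learner's interaction history is then rendered as compact, token-efficient prompt lines for an LLM. The label vocabulary (other than "inadequate", which the abstract mentions), the bin thresholds, and the way option weights are derived are all illustrative assumptions.

```python
# Hedged sketch of the TCOW idea described in the abstract.
# Assumptions: the label set (beyond "inadequate"), the evenly spaced bins,
# and the toy option weights are illustrative, not the paper's actual design.

from typing import Dict, List

# Assumed label vocabulary, ordered from weakest to strongest evidence of mastery.
TCOW_LABELS: List[str] = ["inadequate", "weak", "moderate", "strong"]


def option_weight_to_label(weight: float) -> str:
    """Map a numeric option weight in [0, 1] to a textual categorical label."""
    # Evenly spaced bins; the actual method may calibrate these cut points differently.
    bin_index = min(int(weight * len(TCOW_LABELS)), len(TCOW_LABELS) - 1)
    return TCOW_LABELS[bin_index]


def format_interaction(question_id: str, chosen_option: str,
                       option_weights: Dict[str, float]) -> str:
    """Render one learner interaction as a compact prompt line for in-context learning."""
    label = option_weight_to_label(option_weights[chosen_option])
    return f"Q{question_id}: chose option {chosen_option} ({label})"


if __name__ == "__main__":
    # Toy weights: e.g., how strongly choosing each option indicates mastery
    # (how such weights are estimated is an assumption here, not from the paper).
    weights = {"A": 0.10, "B": 0.45, "C": 0.80, "D": 0.95}
    history = [("12", "A", weights), ("13", "C", weights), ("14", "D", weights)]
    print("\n".join(format_interaction(q, o, w) for q, o, w in history))
```

Rendering each interaction as a single short line with a categorical label, rather than a full record of all options and scores, is what allows many example learners' histories to fit within one prompt under tight token budgets.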
