Criterion-referenceability determines LLM-as-a-judge validity across physics assessment formats

2026-03-16

Will Yeadon, Tom Hardy, Paul Mackay, Elise Agra


Abstract

As large language models (LLMs) are increasingly considered for automated assessment and feedback, understanding when LLM marking can be trusted is essential. We evaluate LLM-as-a-judge marking across three physics assessment formats (structured questions, written essays, and scientific plots), comparing GPT-5.2, Grok 4.1, Claude Opus 4.5, DeepSeek-V3.2, Gemini Pro 3, and committee aggregations against human markers under blind, solution-provided, false-solution, and exemplar-anchored conditions. For n=771 blind university exam questions, models achieve fractional mean absolute errors (fMAE) ≤ 0.22 with robust discriminative validity (Spearman ρ > 0.6). For secondary and university structured questions (n=1151), providing official solutions reduces MAE and strengthens validity (committee ρ = 0.88); false solutions degrade absolute accuracy but leave rank ordering largely intact (committee ρ = 0.77; individual models ρ ≥ 0.59). Essay marking behaves fundamentally differently. Across n=55 scripts (n=275 essays), blind AI marking is harsher and more variable than human marking, with discriminative validity already poor (ρ ≈ 0.1). Adding a mark scheme does not improve discrimination (ρ ≈ 0; all confidence intervals include zero). Anchored exemplars shift the AI mean close to the human mean and compress variance below the human standard deviation, but discriminative validity remains near-zero: distributional agreement can occur without valid discrimination. For code-based plot elements (n=1400), models achieve exceptionally high discriminative validity (ρ > 0.84) with near-linear calibration. Across all task types, validity tracks criterion-referenceability (the extent to which a task maps to explicit, observable grading features) and benchmark reliability, rather than raw model capability.
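The abstract's two headline metrics are fractional mean absolute error and Spearman rank correlation (discriminative validity). The sketch below shows one plausible way to compute both for a model committee; it assumes fMAE normalizes absolute error by each question's maximum available mark and that the committee aggregates by averaging model marks — the abstract specifies neither, and all marks in the example are hypothetical, not the paper's data.

```python
import numpy as np
from scipy.stats import spearmanr

def fmae(ai_marks, human_marks, max_marks):
    """Fractional MAE: absolute error normalized per item by its maximum mark.

    Assumption: fMAE = mean(|ai - human| / max_mark). The paper does not
    spell out the normalization, so this is one plausible reading.
    """
    ai, human, mx = map(np.asarray, (ai_marks, human_marks, max_marks))
    return float(np.mean(np.abs(ai - human) / mx))

def committee_marks(per_model_marks):
    """Aggregate a committee by averaging each item's marks across models
    (an assumed aggregation rule; the abstract does not state one)."""
    return np.asarray(per_model_marks).mean(axis=0)

# Hypothetical marks for five questions, each out of 10.
human = [7, 4, 9, 5, 8]
models = [
    [6, 4, 9, 4, 7],  # model A
    [7, 5, 8, 5, 8],  # model B
    [6, 3, 9, 5, 7],  # model C
]
max_marks = [10, 10, 10, 10, 10]

committee = committee_marks(models)
print("committee fMAE:", fmae(committee, human, max_marks))

# Discriminative validity: does the AI rank students the way humans do?
rho, _ = spearmanr(committee, human)
print("Spearman rho:", rho)
```

On this framing, the essay result in the abstract corresponds to low fMAE (means agree) coexisting with ρ ≈ 0 (ranks do not), which is why distributional agreement alone is not evidence of valid discrimination.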
