SOTAVerified

Can Language Models Evaluate Human Written Text? Case Study on Korean Student Writing for Education

2024-07-24 · Code Available

Seungyoon Kim, Seungone Kim


Abstract

Large language model (LLM)-based evaluation pipelines have demonstrated their capability to robustly evaluate machine-generated text. Extending this methodology to assess human-written text could significantly benefit educational settings by providing direct feedback to enhance writing skills, although this application is not straightforward. In this paper, we investigate whether LLMs can effectively assess human-written text for educational purposes. We collected 100 texts from 32 Korean students across 15 types of writing and employed GPT-4-Turbo to evaluate them using grammaticality, fluency, coherence, consistency, and relevance as criteria. Our analyses indicate that LLM evaluators can reliably assess grammaticality and fluency, as well as more objective types of writing, though they struggle with other criteria and types of writing. We publicly release our dataset and feedback.
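As a rough illustration of the kind of LLM-based evaluation pipeline the abstract describes, the sketch below scores a student text on each of the five criteria with GPT-4-Turbo through the OpenAI chat completions API. The prompt wording, the 1-to-5 scale, and the function name are hypothetical placeholders, not the authors' actual prompts or pipeline.

```python
# Minimal sketch of a per-criterion LLM evaluation call (not the paper's exact setup).
# Assumes the OpenAI Python SDK (>=1.0) and an API key in OPENAI_API_KEY.
from openai import OpenAI

CRITERIA = ["grammaticality", "fluency", "coherence", "consistency", "relevance"]

client = OpenAI()

def evaluate_writing(text: str, criterion: str) -> str:
    """Ask GPT-4-Turbo to score one criterion of a student text (1-5) with brief feedback."""
    prompt = (
        f"You are grading a Korean student's writing on the criterion '{criterion}'.\n"
        "Give a score from 1 (poor) to 5 (excellent) and a one-sentence justification.\n\n"
        f"Student text:\n{text}"
    )
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content

# Example usage: evaluate a single text on every criterion.
# for c in CRITERIA:
#     print(c, "->", evaluate_writing(student_text, c))
```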
