Can LLMs Design Good Questions Based on Context?
2025-01-07 · Code Available
Yueheng Zhang, Xiaoyuan Liu, Yiyou Sun, Atheer Alharbi, Hend Alzahrani, Basel Alomair, Dawn Song
- Code: github.com/colearn-dev/llmqg (official, in paper)
Abstract
This paper evaluates questions that LLMs generate from a given context, comparing them with human-written questions across six dimensions. We introduce an automated, LLM-based evaluation method that focuses on aspects such as question length, type, context coverage, and answerability. Our findings highlight distinctive characteristics of LLM-generated questions and offer insights that can support further research on question quality and downstream applications.
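To make the evaluation dimensions concrete, the sketch below scores a generated question along three of the dimensions the abstract names (length, type, answerability). This is a minimal illustration, not the paper's method: the heuristics and the keyword-overlap stand-in for answerability are assumptions, and a real pipeline would replace them with an LLM judge and the authors' prompts.

```python
# Hypothetical sketch of scoring one question along a few of the
# dimensions mentioned in the abstract. The "answerability" check is
# a keyword-overlap stub standing in for an LLM-based judge.

WH_WORDS = ("what", "why", "how", "when", "where", "who", "which")

def question_type(q: str) -> str:
    """Classify a question by its leading interrogative word."""
    words = q.strip().lower().split()
    first = words[0] if words else ""
    return first if first in WH_WORDS else "other"

def evaluate(question: str, context: str) -> dict:
    """Return a small per-question score card (assumed dimensions)."""
    tokens = question.split()
    return {
        "length": len(tokens),                 # question length in tokens
        "type": question_type(question),       # question type (wh-word)
        # Stub for answerability: does the context contain any of the
        # question's content words? A real system would query an LLM.
        "answerable": any(
            t.lower().strip("?,.") in context.lower() for t in tokens
        ),
    }

if __name__ == "__main__":
    ctx = "The Transformer architecture relies on self-attention."
    q = "What does the Transformer architecture rely on?"
    print(evaluate(q, ctx))
```

Context coverage, the remaining dimension, would require aggregating such per-question scores over all questions generated from the same passage.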