
Boosting Large Language Models with Socratic Method for Conversational Mathematics Teaching

2024-07-24 · Code Available

Yuyang Ding, Hanglei Hu, Jie Zhou, Qin Chen, Bo Jiang, Liang He


Abstract

With the introduction of large language models (LLMs), automatic math reasoning has seen tremendous success. However, current methods primarily focus on providing solutions or on techniques such as Chain-of-Thought that enhance problem-solving accuracy. In this paper, we focus instead on improving mathematics teaching via a Socratic-teaching-based LLM (SocraticLLM), which guides learners toward deep thinking, clarity, and self-discovery through conversation. We collect and release a high-quality mathematical teaching dataset, named SocraticMATH, which provides Socratic-style conversations about problems, enriched with extra knowledge. We also propose a knowledge-enhanced LLM as a strong baseline that generates reliable responses through review, guidance/heuristic questioning, rectification, and summarization. Experimental results show the clear advantages of SocraticLLM over several strong generative models. The code and dataset are available at https://github.com/ECNU-ICALK/SocraticMath.
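The abstract names four response stages: review, guidance/heuristic questioning, rectification, and summarization. A minimal sketch of one tutor turn built from those stages might look like the following; the stage names come from the paper, but the prompt templates, the `generate` stub, and the `socratic_turn` function are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical four-stage Socratic tutoring turn. Stage names follow the
# paper's abstract; everything else (templates, stub model) is assumed.

STAGE_PROMPTS = {
    "review": "Restate the learner's current step: {state}",
    "guidance": "Ask a guiding question toward the next step of: {state}",
    "rectification": "If the step contains an error, point it out gently: {state}",
    "summarization": "Summarize the progress so far: {state}",
}

def generate(prompt: str) -> str:
    """Stand-in for an LLM call; a real system would query SocraticLLM here."""
    return f"[model response to: {prompt}]"

def socratic_turn(student_step: str) -> dict:
    """Produce one tutor turn by running all four stages in order."""
    return {stage: generate(template.format(state=student_step))
            for stage, template in STAGE_PROMPTS.items()}

if __name__ == "__main__":
    for stage, reply in socratic_turn("x + 3 = 7, so x = 10").items():
        print(f"{stage}: {reply}")
```

In a real system the rectification stage would only fire when the review stage detects an error; this sketch runs all stages unconditionally for simplicity.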
