SOTAVerified

ATHENA: Mathematical Reasoning with Thought Expansion

2023-11-02 · EMNLP 2023 · Code Available

JB. Kim, Hazel Kim, Joonghyuk Hahn, Yo-Sub Han


Abstract

Solving math word problems depends on how the problems are articulated, the lens through which models view human linguistic expressions. Real-world settings rely on such a method even more because the same mathematical operations are expressed in diverse ways. Earlier works constrain the available thinking processes through limited prediction strategies, without considering their significance in acquiring mathematical knowledge. We introduce the Attention-based THought Expansion Network Architecture (ATHENA) to tackle the challenges of real-world practice by mimicking human thought expansion in the form of neural network propagation. Thought expansion recurrently generates candidate thoughts of possible math expressions derived from the previous step and yields reasonable thoughts by selecting valid pathways to the goal. Our experiments show that ATHENA achieves a new state of the art and remains compelling on question variants even when the informativeness of training examples is restricted.
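The expand-then-select loop the abstract describes can be sketched as a toy search: start from the quantities in the problem, repeatedly combine existing thoughts with binary operators to form new candidate expressions, and keep only a selected subset until one reaches the goal. This is a minimal illustrative sketch only; all names (`expand`, `solve`, `OPS`) are assumptions, and the selection step here is a crude truncation heuristic rather than the paper's learned attention-based selection of valid pathways.

```python
# Hedged sketch of a thought-expansion loop, NOT the authors' implementation.
# A "thought" is an (expression string, value) pair; expansion combines pairs
# of thoughts with binary operators; selection here is a toy bound on how
# many candidates survive to the next step.
from itertools import combinations

OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
}

def expand(thoughts):
    """One expansion step: pair up existing thoughts under each operator."""
    candidates = []
    for (expr_a, val_a), (expr_b, val_b) in combinations(thoughts, 2):
        for sym, fn in OPS.items():
            candidates.append((f"({expr_a} {sym} {expr_b})", fn(val_a, val_b)))
    return candidates

def solve(quantities, goal, max_steps=3, keep=20):
    """Recurrently expand thoughts until one evaluates to the goal value."""
    thoughts = [(str(q), q) for q in quantities]
    for _ in range(max_steps):
        candidates = expand(thoughts)
        for expr, val in candidates:
            if val == goal:
                return expr
        # Toy selection: keep a bounded set of new thoughts (ATHENA instead
        # learns to select valid pathways with attention).
        thoughts += candidates[:keep]
    return None

print(solve([3, 5, 2], 16))  # finds an expression evaluating to 16
```

Note that `combinations` visits each unordered pair once, so the sketch misses order-sensitive subtractions such as `b - a`; the bounded `keep` also prunes arbitrarily. Both shortcuts are exactly what a learned selection mechanism is meant to replace.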

Benchmark Results

Dataset     | Model                  | Metric                   | Claimed | Verified | Status
----------- | ---------------------- | ------------------------ | ------- | -------- | ----------
ASDiv-A     | ATHENA (roberta-base)  | Execution Accuracy       | 86.4    | —        | Unverified
ASDiv-A     | ATHENA (roberta-large) | Execution Accuracy       | 91.0    | —        | Unverified
Math23K     | ATHENA (roberta-base)  | Accuracy (training-test) | 84.4    | —        | Unverified
Math23K     | ATHENA (roberta-large) | Accuracy (training-test) | 86.5    | —        | Unverified
MAWPS       | ATHENA (roberta-base)  | Accuracy (%)             | 92.2    | —        | Unverified
MAWPS       | ATHENA (roberta-large) | Accuracy (%)             | 93.0    | —        | Unverified
SVAMP       | ATHENA (roberta-base)  | Execution Accuracy       | 45.6    | —        | Unverified
SVAMP       | ATHENA (roberta-large) | Execution Accuracy       | 54.8    | —        | Unverified
SVAMP (1:N) | ATHENA (roberta-base)  | Execution Accuracy       | 52.5    | —        | Unverified
SVAMP (1:N) | ATHENA (roberta-large) | Execution Accuracy       | 67.8    | —        | Unverified