SOTAVerified

Clues Before Answers: Generation-Enhanced Multiple-Choice QA

2022-04-30 · NAACL 2022 · Code Available

Zixian Huang, Ao Wu, Jiaying Zhou, Yu Gu, Yue Zhao, Gong Cheng


Abstract

A trending paradigm for multiple-choice question answering (MCQA) is using a text-to-text framework. By unifying data in different tasks into a single text-to-text format, it trains a generative encoder-decoder model which is both powerful and universal. However, a side effect of twisting a generation target to fit the classification nature of MCQA is the under-utilization of the decoder and the knowledge that can be decoded. To exploit the generation capability and underlying knowledge of a pre-trained encoder-decoder model, in this paper, we propose a generation-enhanced MCQA model named GenMC. It generates a clue from the question and then leverages the clue to enhance a reader for MCQA. It outperforms text-to-text models on multiple MCQA datasets.
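The abstract describes a two-stage architecture: a generator first decodes a textual "clue" from the question, and a reader then scores each answer option conditioned on the question and the clue. The following is a minimal runnable sketch of that flow; the function names and the toy word-overlap scoring are illustrative stand-ins, not the authors' actual model or API.

```python
# Illustrative two-stage pipeline in the spirit of GenMC (hypothetical names;
# real GenMC uses a pre-trained encoder-decoder for both stages).

def generate_clue(question: str) -> str:
    # Stand-in for the generation step: a real model would decode a
    # knowledge-bearing clue sentence from the question.
    return "placeholder clue for: " + question

def score_option(question: str, clue: str, option: str) -> float:
    # Stand-in for the reader: a real reader would encode
    # (question, clue, option) jointly and output a matching score.
    # Here we use toy word overlap just to make the pipeline executable.
    context = set((question + " " + clue).lower().split())
    words = option.lower().split()
    return sum(w in context for w in words) / max(len(words), 1)

def answer(question: str, options: list[str]) -> str:
    clue = generate_clue(question)                 # stage 1: clue generation
    scores = [score_option(question, clue, o) for o in options]
    return options[scores.index(max(scores))]      # stage 2: clue-enhanced reading
```

The key design point the paper argues for is that stage 1 exploits the decoder and its latent knowledge, instead of reducing the encoder-decoder to a classifier over option labels.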

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| OpenBookQA | GenMC 11B | Accuracy | 89.8 | | Unverified |

Reproductions