SOTAVerified

Multiple Choice Question Answering (MCQA)

A multiple-choice question (MCQ) is composed of two parts: a stem, which states the question or problem, and a set of alternatives, the possible answers. The alternatives contain a key, the single best answer to the question, and a number of distractors, answers that are plausible but incorrect.

In a k-way MCQA task, a model is given a question q, a set of candidate options O = {O1, . . . , Ok}, and a supporting context for each option, C = {C1, . . . , Ck}. The model must predict the answer option that is best supported by its context.
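The formulation above can be sketched in a few lines. The overlap-based scorer below is a hypothetical stand-in for a real model: it scores each option Oi by the word overlap between (question + option) and that option's context Ci, then picks the argmax. All function names and the toy example are assumptions for illustration, not part of any benchmark.

```python
import string

def tokens(text: str) -> set[str]:
    """Lowercase, strip punctuation, and split into a set of words."""
    return set(text.lower().translate(str.maketrans("", "", string.punctuation)).split())

def score(question: str, option: str, context: str) -> float:
    """Toy support score: fraction of (question + option) words found in the context."""
    query = tokens(question) | tokens(option)
    return len(query & tokens(context)) / len(query) if query else 0.0

def answer_mcq(question: str, options: list[str], contexts: list[str]) -> int:
    """Return the index of the option best supported by its context (k-way MCQA)."""
    assert len(options) == len(contexts), "k-way task: one context per option"
    scores = [score(question, o, c) for o, c in zip(options, contexts)]
    return max(range(len(scores)), key=scores.__getitem__)

# Toy 3-way example (invented data, not from any dataset).
question = "What is the capital of France?"
options = ["Paris", "London", "Berlin"]
contexts = [
    "Paris is the capital and largest city of France.",
    "London is the capital of the United Kingdom.",
    "Berlin is the capital of Germany.",
]
print(answer_mcq(question, options, contexts))  # → 0 (the key, "Paris")
```

A real MCQA system would replace `score` with a trained reader (e.g., a cross-encoder over the question, option, and context), but the prediction rule — argmax over per-option support scores — is the same.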

Papers

Showing 51–60 of 65 papers

Title | Status | Hype
Artifacts or Abduction: How Do LLMs Answer Multiple-Choice Questions Without the Question? | Code | 0
FrenchMedMCQA: A French Multiple-Choice Question Answering Dataset for Medical domain | Code | 0
From Recognition to Cognition: Visual Commonsense Reasoning | Code | 0
From Multiple-Choice to Extractive QA: A Case Study for English and Arabic | Code | 0
Investigating the Shortcomings of LLMs in Step-by-Step Legal Reasoning | Code | 0
Learning to Attend On Essential Terms: An Enhanced Retriever-Reader Model for Open-domain Question Answering | Code | 0
Wrong Answers Can Also Be Useful: PlausibleQA -- A Large-Scale QA Dataset with Answer Plausibility Scores | Code | 0
MedG-KRP: Medical Graph Knowledge Representation Probing | Code | 0
MMM: Multi-stage Multi-task Learning for Multi-choice Reading Comprehension | Code | 0
KnowledgePrompts: Exploring the Abilities of Large Language Models to Solve Proportional Analogies via Knowledge-Enhanced Prompting | Code | 0
Page 6 of 7

No leaderboard results yet.