SOTAVerified

Multiple Choice Question Answering (MCQA)

A multiple-choice question (MCQ) is composed of two parts: a stem, which states the question or problem, and a set of alternatives, the candidate answers. The alternatives comprise a key, the single best answer to the question, and a number of distractors, which are plausible but incorrect answers.
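The structure above can be sketched as a small data container. This is a minimal illustration; the class and field names are assumptions, not part of any dataset's schema.

```python
from dataclasses import dataclass

# Hypothetical container mirroring the parts of an MCQ:
# a stem, one key (the best answer), and several distractors.
@dataclass
class MultipleChoiceQuestion:
    stem: str               # the question or problem statement
    key: str                # the single best answer
    distractors: list       # plausible but incorrect answers

    @property
    def alternatives(self):
        # All candidate answers presented alongside the stem.
        return [self.key] + self.distractors

q = MultipleChoiceQuestion(
    stem="Which planet is closest to the Sun?",
    key="Mercury",
    distractors=["Venus", "Mars", "Earth"],
)
print(len(q.alternatives))  # 4 candidate answers
```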

In a k-way MCQA task, a model is given a question q, a set of k candidate options O = {O1, . . . , Ok}, and a supporting context for each option, C = {C1, . . . , Ck}. The model must predict the answer option that is best supported by its context.
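As a concrete (and deliberately simplistic) illustration of this formulation, the sketch below scores each option by the lexical overlap between its supporting context and the question-plus-option text, then picks the argmax. This toy scorer is an assumption for illustration only, not the method of any paper listed below; real systems use learned readers or language models.

```python
# Toy k-way MCQA: pick the option whose context best "supports" it,
# where support is approximated by bag-of-words overlap.
def score(question, option, context):
    # Tokens of the question combined with the candidate option.
    tokens = set((question + " " + option).lower().split())
    ctx = set(context.lower().split())
    # Fraction of question+option tokens found in the context.
    return len(tokens & ctx) / max(len(tokens), 1)

def predict(question, options, contexts):
    # options: [O1, ..., Ok]; contexts: [C1, ..., Ck], one per option.
    scores = [score(question, o, c) for o, c in zip(options, contexts)]
    return max(range(len(options)), key=lambda i: scores[i])

q = "What gas do plants absorb during photosynthesis?"
O = ["Oxygen", "Carbon dioxide", "Nitrogen"]
C = ["Oxygen is released by plants.",
     "Plants absorb carbon dioxide during photosynthesis.",
     "Nitrogen is fixed by bacteria."]
print(predict(q, O, C))  # → 1 (the option whose context overlaps most)
```

Overlap-based scoring is a weak baseline; its main use here is to make the inputs (q, O, C) and the argmax-over-options prediction step concrete.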

Papers

Showing 26–50 of 65 papers

Title | Status | Hype
Learning to Attend On Essential Terms: An Enhanced Retriever-Reader Model for Open-domain Question Answering | Code | 0
Wrong Answers Can Also Be Useful: PlausibleQA -- A Large-Scale QA Dataset with Answer Plausibility Scores | Code | 0
BloombergGPT: A Large Language Model for Finance | Code | 0
MedG-KRP: Medical Graph Knowledge Representation Probing | Code | 0
BioMedGPT: Open Multimodal Generative Pre-trained Transformer for BioMedicine | Code | 0
MMM: Multi-stage Multi-task Learning for Multi-choice Reading Comprehension | Code | 0
Artifacts or Abduction: How Do LLMs Answer Multiple-Choice Questions Without the Question? | Code | 0
Question-Aware Knowledge Graph Prompting for Enhancing Large Language Models | Code | 0
Role of Language Relatedness in Multilingual Fine-tuning of Language Models: A Case Study in Indo-Aryan Languages | Code | 0
Differentiating Choices via Commonality for Multiple-Choice Question Answering | Code | 0
Does Transliteration Help Multilingual Language Modeling? | Code | 0
EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning | Code | 0
Long Story Short: Story-level Video Understanding from 20K Short Films | - | 0
Context-guided Triple Matching for Multiple Choice Question Answering | - | 0
LLM Distillation for Efficient Few-Shot Multiple Choice Question Answering | - | 0
Visual7W: Grounded Question Answering in Images | - | 0
Context Modeling with Evidence Filter for Multiple Choice Question Answering | - | 0
LLMs May Perform MCQA by Selecting the Least Incorrect Option | - | 0
Med-RLVR: Emerging Medical Reasoning from a 3B base model via reinforcement Learning | - | 0
Correctness Coverage Evaluation for Medical Multiple-Choice Question Answering Based on the Enhanced Conformal Prediction Framework | - | 0
Multi-source Meta Transfer for Low Resource Multiple-Choice Question Answering | - | 0
CP-Router: An Uncertainty-Aware Router Between LLM and LRM | - | 0
What do we expect from Multiple-choice QA Systems? | - | 0
Fine-tuning BERT with Focus Words for Explanation Regeneration | - | 0
Page 2 of 3

No leaderboard results yet.