SOTAVerified

Multiple Choice Question Answering (MCQA)

A multiple-choice question (MCQ) has two parts: a stem, which states the question or problem, and a set of alternatives (candidate answers). The alternatives comprise one key, the best answer to the question, and several distractors, which are plausible but incorrect answers.

In a k-way MCQA task, a model is given a question q, a set of candidate options O = {O1, . . . , Ok}, and a supporting context for each option, C = {C1, . . . , Ck}. The model must predict the answer option that is best supported by its context.
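The definition above can be sketched as a scoring-and-argmax procedure. This is a minimal illustration, not any particular paper's method: the toy `support_score` below uses lexical overlap between the question-plus-option and its context as a stand-in scorer, whereas a real system would score each option with a model (e.g., an LLM's log-probability of the option given its context).

```python
import re

def support_score(question: str, option: str, context: str) -> float:
    """Toy scorer: fraction of question+option words found in the context.
    A real MCQA system would replace this with model-based likelihoods."""
    target = set(re.findall(r"\w+", (question + " " + option).lower()))
    ctx = set(re.findall(r"\w+", context.lower()))
    return len(target & ctx) / max(len(target), 1)

def predict(question: str, options: list[str], contexts: list[str]) -> int:
    """k-way MCQA: score each (option, context) pair, return the argmax index."""
    scores = [support_score(question, o, c) for o, c in zip(options, contexts)]
    return max(range(len(scores)), key=scores.__getitem__)

if __name__ == "__main__":
    q = "What is the capital of France?"
    options = ["Paris", "London", "Berlin"]
    contexts = [
        "Paris is the capital and largest city of France.",
        "London is the capital of the United Kingdom.",
        "Berlin is the capital of Germany.",
    ]
    print(predict(q, options, contexts))  # -> 0 (the "Paris" option)
```

The argmax-over-per-option-scores structure is the common skeleton; approaches listed below differ mainly in how the score is computed (first-token probabilities, conformal prediction over scores, retrieved contexts, etc.).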

Papers

Showing 21-30 of 65 papers (page 3 of 7)

Title | Status | Hype
CP-Router: An Uncertainty-Aware Router Between LLM and LRM | — | 0
Improving LLM First-Token Predictions in Multiple-Choice Question Answering via Prefilling Attack | — | 0
Healthy LLMs? Benchmarking LLM Knowledge of UK Government Public Health Information | — | 0
Question-Aware Knowledge Graph Prompting for Enhancing Large Language Models | Code | 0
Correctness Coverage Evaluation for Medical Multiple-Choice Question Answering Based on the Enhanced Conformal Prediction Framework | — | 0
Med-RLVR: Emerging Medical Reasoning from a 3B base model via reinforcement Learning | — | 0
Wrong Answers Can Also Be Useful: PlausibleQA -- A Large-Scale QA Dataset with Answer Plausibility Scores | Code | 0
Which of These Best Describes Multiple Choice Evaluation with LLMs? A) Forced B) Flawed C) Fixable D) All of the Above | — | 0
Investigating the Shortcomings of LLMs in Step-by-Step Legal Reasoning | Code | 0
First Token Probability Guided RAG for Telecom Question Answering | — | 0
