
StrategyQA

StrategyQA measures a model's ability to answer yes/no questions that require multi-step implicit reasoning, where the necessary intermediate steps are not stated in the question.
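To make "implicit multi-step reasoning" concrete, here is a minimal sketch of what a StrategyQA-style record looks like and how answers are scored. The record below is illustrative (the field names `question`, `decomposition`, and `answer` mirror the dataset's general shape but are an assumption here, as is the sample question):

```python
# Illustrative StrategyQA-style record: a yes/no question whose answer
# requires reasoning steps that the question itself never states.
examples = [
    {
        "question": "Could a llama birth twice during the War in Vietnam (1945-46)?",
        # The implicit reasoning chain a solver must reconstruct:
        "decomposition": [
            "How long did the War in Vietnam (1945-46) last?",
            "How long is the gestation period of a llama?",
            "Is the first duration at least twice the second?",
        ],
        "answer": False,  # llama gestation (~11 months) exceeds the war's length
    },
]

def accuracy(predictions, records):
    """Exact-match accuracy of boolean predictions against gold answers."""
    correct = sum(p == r["answer"] for p, r in zip(predictions, records))
    return correct / len(records)

# A trivial always-"yes" baseline gets this record wrong.
print(accuracy([True], examples))   # -> 0.0
print(accuracy([False], examples))  # -> 1.0
```

Evaluation is simple exact match on the boolean answer; the decomposition exists to characterize the hidden reasoning, not to be scored directly in the basic setting.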

Source: BIG-bench

Papers

Showing 31-40 of 40 papers

Title | Status | Hype
Tailoring Self-Rationalizers with Multi-Reward Distillation | Code | 0
Large Language Models Are Also Good Prototypical Commonsense Reasoners | - | 0
Answering Unseen Questions With Smaller Language Models Using Rationale Generation and Dense Retrieval | - | 0
Teaching Smaller Language Models To Generalise To Unseen Compositional Questions | Code | 0
Deduction under Perturbed Evidence: Probing Student Simulation Capabilities of Large Language Models | - | 0
Hint of Thought prompting: an explainable and zero-shot approach to reasoning tasks with LLMs | - | 0
Self-Evaluation Guided Beam Search for Reasoning | - | 0
Distilling Reasoning Capabilities into Smaller Language Models | Code | 0
Learning to Decompose: Hypothetical Question Decomposition Based on Comparable Texts | - | 0
Better Retrieval May Not Lead to Better Question Answering | - | 0

No leaderboard results yet.