SOTAVerified

StrategyQA

StrategyQA measures the ability of models to answer questions that require multi-step implicit reasoning.

Source: BIG-bench

Papers

Showing 26–40 of 40 papers

Dialectical Behavior Therapy Approach to LLM Prompting
Fusing Bidirectional Chains of Thought and Reward Mechanisms: A Method for Enhancing Question-Answering Capabilities of Large Language Models for Chinese Intangible Cultural Heritage
IAG: Induction-Augmented Generation Framework for Answering Reasoning Questions
Improving Attributed Text Generation of Large Language Models via Preference Learning
Large Language Models Are Also Good Prototypical Commonsense Reasoners
Learning to Decompose: Hypothetical Question Decomposition Based on Comparable Texts
Advancing Process Verification for Large Language Models via Tree-Based Preference Learning
Proof of Thought: Neurosymbolic Program Synthesis allows Robust and Interpretable Reasoning
Question-Analysis Prompting Improves LLM Performance in Reasoning Tasks
Rule-Guided Feedback: Enhancing Reasoning by Enforcing Rule Adherence in Large Language Models
Self-Evaluation Guided Beam Search for Reasoning
Hint of Thought prompting: an explainable and zero-shot approach to reasoning tasks with LLMs
The ART of LLM Refinement: Ask, Refine, and Trust
Towards Uncertainty-Aware Language Agent
Unraveling Indirect In-Context Learning Using Influence Functions

No leaderboard results yet.