SOTAVerified

Instruction Following

Instruction following is a fundamental capability of large language models. This task evaluates how well a model follows human instructions, with the goal of producing controllable and safe responses.

Papers

Showing 631–640 of 1135 papers

| Title | Status | Hype |
| --- | --- | --- |
| How well can LLMs Grade Essays in Arabic? | | 0 |
| Advancing Mathematical Reasoning in Language Models: The Impact of Problem-Solving Data, Data Synthesis Methods, and Training Stages | | 0 |
| Compositional Instruction Following with Language Models and Reinforcement Learning | | 0 |
| InternLM-XComposer2.5-Reward: A Simple Yet Effective Multi-Modal Reward Model | | 0 |
| Zero-shot and Few-shot Learning with Instruction-following LLMs for Claim Matching in Automated Fact-checking | | 0 |
| BAP v2: An Enhanced Task Framework for Instruction Following in Minecraft Dialogues | | 0 |
| DNA 1.0 Technical Report | | 0 |
| Iterative Label Refinement Matters More than Preference Optimization under Weak Supervision | Code | 0 |
| Audio-CoT: Exploring Chain-of-Thought Reasoning in Large Audio Language Model | | 0 |
| A Comprehensive Evaluation of Large Language Models on Mental Illnesses in Arabic Context | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | | Unverified |
| 2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | | Unverified |
| 3 | GPT-4 | Inst-level loose-accuracy | 85.37 | | Unverified |
| 4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | | Unverified |
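The metric in the table above, instruction-level loose accuracy, comes from verifiable-instruction benchmarks such as IFEval: each prompt carries several automatically checkable instructions, and "loose" scoring counts an instruction as followed if any lenient variant of the response (e.g. with markdown stripped or a boilerplate first line dropped) passes its checker. A minimal sketch, assuming illustrative stand-in checkers rather than the official ones:

```python
# Sketch of instruction-level "loose" accuracy in the IFEval style.
# The loose_variants transformations and the toy checkers below are
# assumptions for illustration, not the benchmark's exact rules.

def loose_variants(response: str):
    """Lenient variants of a response: markdown stripped, and the
    first/last line dropped (common boilerplate positions)."""
    lines = response.splitlines()
    variants = [response, response.replace("*", "")]
    if len(lines) > 1:
        variants.append("\n".join(lines[1:]))   # drop first line
        variants.append("\n".join(lines[:-1]))  # drop last line
    return variants

def inst_level_loose_accuracy(results):
    """results: list of (response, [checker, ...]) pairs, where each
    checker is a callable str -> bool testing one instruction.
    Returns the percentage of instructions followed, pooled over all
    prompts (instruction-level, not prompt-level)."""
    followed = total = 0
    for response, checkers in results:
        for check in checkers:
            total += 1
            # Loose scoring: pass if ANY lenient variant satisfies it.
            if any(check(v) for v in loose_variants(response)):
                followed += 1
    return 100.0 * followed / total if total else 0.0

# Toy usage with two illustrative instruction checkers.
min_words = lambda r: len(r.split()) >= 5
no_commas = lambda r: "," not in r
score = inst_level_loose_accuracy([
    ("This answer has at least five words", [min_words, no_commas]),
    ("Too short", [min_words]),
])
print(score)  # 2 of 3 instructions followed
```

Prompt-level accuracy, by contrast, would credit a prompt only when every one of its instructions passes, which is why instruction-level numbers run higher.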