
Instruction Following

Instruction following is a fundamental capability of large language models. This task evaluates how faithfully a model follows human instructions, with the goal of producing controllable and safe responses.
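One common way to make this measurable is to pair each prompt with automatically verifiable constraints and check the model's response against them, as benchmarks such as IFEval and InFoBench (listed below) do. The sketch below is illustrative only: the constraint names and checker logic are assumptions for demonstration, not the rules of any specific benchmark on this page.

```python
# Minimal sketch of verifiable-constraint checking for instruction following.
# The constraint types and checkers are illustrative assumptions, not the
# exact rules of IFEval, InFoBench, or any other benchmark listed here.

def check_min_words(response: str, n: int) -> bool:
    """Instruction: 'Answer in at least n words.'"""
    return len(response.split()) >= n

def check_keyword(response: str, keyword: str) -> bool:
    """Instruction: 'Mention the keyword at least once.'"""
    return keyword.lower() in response.lower()

def check_no_commas(response: str) -> bool:
    """Instruction: 'Do not use any commas.'"""
    return "," not in response

CHECKERS = {
    "min_words": check_min_words,
    "keyword": check_keyword,
    "no_commas": check_no_commas,
}

def verify(response: str, instructions: list[dict]) -> list[bool]:
    """Return one pass/fail flag per verifiable instruction."""
    return [
        CHECKERS[inst["type"]](response, *inst.get("args", []))
        for inst in instructions
    ]

# Example: a prompt that bundles two verifiable instructions.
flags = verify(
    "Paris is the capital of France and a major European city.",
    [
        {"type": "min_words", "args": [5]},
        {"type": "keyword", "args": ["Paris"]},
    ],
)
print(flags)  # [True, True]
```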

Papers

Showing 141–150 of 1135 papers

Title | Status | Hype
Diffusion-ES: Gradient-free Planning with Diffusion for Autonomous Driving and Zero-Shot Instruction Following | Code | 2
DistiLLM-2: A Contrastive Approach Boosts the Distillation of LLMs | Code | 2
Dual-Space Knowledge Distillation for Large Language Models | Code | 2
MiniLLM: Knowledge Distillation of Large Language Models | Code | 2
DeSTA2.5-Audio: Toward General-Purpose Large Audio Language Model with Self-Generated Cross-Modal Alignment | Code | 2
Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems | Code | 2
#InsTag: Instruction Tagging for Analyzing Supervised Fine-tuning of Large Language Models | Code | 2
Chain-of-Spot: Interactive Reasoning Improves Large Vision-Language Models | Code | 2
InFoBench: Evaluating Instruction Following Ability in Large Language Models | Code | 2
How Instruction and Reasoning Data shape Post-Training: Data Quality through the Lens of Layer-wise Gradients | Code | 2

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose accuracy | 90.4 | – | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose accuracy | 88 | – | Unverified
3 | GPT-4 | Inst-level loose accuracy | 85.37 | – | Unverified
4 | PaLM 2 S | Inst-level loose accuracy | 59.11 | – | Unverified
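The metric reported here, instruction-level loose accuracy, comes from IFEval: each prompt carries one or more verifiable instructions, and the instruction-level score is the fraction of individual instructions satisfied across the whole evaluation set. "Loose" means a response also counts as compliant if any of several relaxed variants of it (for example, with markdown markers or the first/last line stripped) passes the check. Below is a minimal sketch of the aggregation, assuming per-instruction pass/fail checkers like the ones sketched above; the exact list of relaxation transforms is an assumption in the spirit of IFEval, not a verified re-implementation.

```python
# Sketch of instruction-level strict vs. loose accuracy aggregation.
# The relaxation transforms mirror IFEval's "loose" variants in spirit;
# the precise transform list is an assumption, not a re-implementation.

def loose_variants(response: str) -> list[str]:
    """Relaxed copies of the response; passing any of them counts as compliant."""
    lines = response.splitlines()
    variants = [
        response,
        response.replace("*", ""),   # drop markdown emphasis markers
        "\n".join(lines[1:]),        # drop first line (e.g., a preamble)
        "\n".join(lines[:-1]),       # drop last line (e.g., a sign-off)
    ]
    return [v for v in variants if v]

def inst_level_accuracy(examples, checker, loose=True) -> float:
    """examples: iterable of (response, instructions); checker(response, inst) -> bool."""
    passed = total = 0
    for response, instructions in examples:
        candidates = loose_variants(response) if loose else [response]
        for inst in instructions:
            total += 1
            if any(checker(cand, inst) for cand in candidates):
                passed += 1
    return passed / total if total else 0.0
```

Prompt-level accuracy, by contrast, credits a prompt only when every one of its instructions passes, which is why it is always the stricter of the two numbers.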