SOTAVerified

Instruction Following

Instruction following is a fundamental capability of large language models. This task evaluates how well a model adheres to human instructions, with the goal of generating controllable and safe responses.

Papers

Showing 241–250 of 1135 papers

| Title | Status | Hype |
| --- | --- | --- |
| Training an LLM-as-a-Judge Model: Pipeline, Insights, and Practical Lessons | — | 0 |
| SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model | — | 0 |
| Shuttle Between the Instructions and the Parameters of Large Language Models | — | 0 |
| CoDe: Blockwise Control for Denoising Diffusion Models | Code | 0 |
| BARE: Leveraging Base Language Models for Few-Shot Synthetic Data Generation | — | 0 |
| Learning Human Perception Dynamics for Informative Robot Communication | — | 0 |
| Disentangling Length Bias In Preference Learning Via Response-Conditioned Modeling | — | 0 |
| ReFoRCE: A Text-to-SQL Agent with Self-Refinement, Format Restriction, and Column Exploration | — | 0 |
| mFollowIR: a Multilingual Benchmark for Instruction Following in Retrieval | Code | 2 |
| Rethinking Bottlenecks in Safety Fine-Tuning of Vision Language Models | — | 0 |
Page 25 of 114

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | — | Unverified |
| 2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | — | Unverified |
| 3 | GPT-4 | Inst-level loose-accuracy | 85.37 | — | Unverified |
| 4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | — | Unverified |
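The metric in the table, instruction-level accuracy, is typically computed by checking every verifiable instruction across all prompts and reporting the fraction passed ("loose" usually refers to lenient response parsing before checking, as in IFEval-style evaluation). A minimal sketch of the aggregation, assuming per-instruction pass/fail flags are already available (the function name and data layout here are illustrative, not from this site):

```python
def inst_level_accuracy(results):
    """Instruction-level accuracy in percent.

    results: a list with one inner list per prompt, containing one
    boolean per verifiable instruction in that prompt (True = passed).
    All instructions are pooled before averaging, so prompts with more
    instructions weigh more than in prompt-level accuracy.
    """
    flags = [ok for prompt in results for ok in prompt]
    if not flags:
        return 0.0
    return 100.0 * sum(flags) / len(flags)

# Hypothetical example: 3 prompts with 2, 1, and 3 instructions; 4 of 6 pass.
score = inst_level_accuracy([[True, False], [True], [True, True, False]])
print(round(score, 2))  # → 66.67
```

Prompt-level accuracy would instead count a prompt as correct only if all of its instructions pass, which is why the two numbers can differ noticeably on the same model.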