
Instruction Following

Instruction following is a fundamental capability of large language models. This task evaluates how well a model follows human instructions, with the goal of producing controllable and safe responses.
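To make this concrete, instruction-following evaluations typically pair each prompt with machine-checkable constraints and count how many the response satisfies. Below is a minimal sketch of such rule-based checks; the checker functions and the example response are illustrative assumptions, not taken from any particular benchmark's code.

```python
# A minimal sketch of rule-based instruction checking, in the style of
# verifiable-instruction benchmarks. All names here are illustrative.

def check_word_limit(response: str, max_words: int) -> bool:
    """True if the response stays within the word limit."""
    return len(response.split()) <= max_words

def check_contains_keyword(response: str, keyword: str) -> bool:
    """True if the response mentions the required keyword."""
    return keyword.lower() in response.lower()

def check_ends_with(response: str, suffix: str) -> bool:
    """True if the response ends with the required phrase."""
    return response.strip().endswith(suffix)

# One prompt may carry several verifiable instructions at once.
response = "Paris is the capital of France. Hope that helps!"
checks = [
    check_word_limit(response, 20),
    check_contains_keyword(response, "Paris"),
    check_ends_with(response, "Hope that helps!"),
]
print(f"{sum(checks)}/{len(checks)} instructions followed")
```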

Papers

Showing 1121–1130 of 1135 papers

| Title | Status | Hype |
| --- | --- | --- |
| DialMAT: Dialogue-Enabled Transformer with Moment-Based Adversarial Training | Code | 0 |
| Robust Multi-Objective Controlled Decoding of Large Language Models | Code | 0 |
| Monolingual or Multilingual Instruction Tuning: Which Makes a Better Alpaca | Code | 0 |
| Compositionality as Lexical Symmetry | Code | 0 |
| Playpen: An Environment for Exploring Learning Through Conversational Interaction | Code | 0 |
| RoleMRC: A Fine-Grained Composite Benchmark for Role-Playing and Instruction-Following | Code | 0 |
| LIFBench: Evaluating the Instruction Following Performance and Stability of Large Language Models in Long-Context Scenarios | Code | 0 |
| LIFEBench: Evaluating Length Instruction Following in Large Language Models | Code | 0 |
| Disperse-Then-Merge: Pushing the Limits of Instruction Tuning via Alignment Tax Reduction | Code | 0 |
| Being Strong Progressively! Enhancing Knowledge Distillation of Large Language Models through a Curriculum Learning Framework | Code | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | | Unverified |
| 2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | | Unverified |
| 3 | GPT-4 | Inst-level loose-accuracy | 85.37 | | Unverified |
| 4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | | Unverified |
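The metric in this table, instruction-level loose accuracy, comes from the IFEval benchmark: each prompt carries one or more verifiable instructions, and the loose variant re-runs each check on lightly normalized copies of the response (e.g., markdown stripped, first or last line dropped) before counting a failure. The sketch below illustrates the idea; the function names, normalization rules, and toy data are assumptions for illustration, not IFEval's actual implementation.

```python
def loose_variants(response: str) -> list[str]:
    """Relaxed views of a response, in the spirit of IFEval's loose scoring:
    the raw text, the text with asterisk markdown removed, and the text with
    the first or last line dropped (where models often add boilerplate)."""
    lines = response.splitlines()
    variants = [response, response.replace("*", "")]
    if len(lines) > 1:
        variants.append("\n".join(lines[1:]))   # drop first line
        variants.append("\n".join(lines[:-1]))  # drop last line
    return variants

def inst_level_loose_accuracy(results) -> float:
    """results: list of (response, [checker, ...]) pairs, where each checker
    maps a response string to True/False. Returns the fraction of individual
    instructions satisfied by at least one loose variant of the response."""
    passed = total = 0
    for response, checkers in results:
        for check in checkers:
            total += 1
            if any(check(variant) for variant in loose_variants(response)):
                passed += 1
    return passed / total if total else 0.0

# Illustrative usage with a toy checker: the markdown-wrapped keyword still
# counts as a pass under loose scoring.
results = [
    ("**Paris** is the capital.", [lambda r: "Paris" in r]),
]
print(f"Inst-level loose accuracy: {inst_level_loose_accuracy(results):.2%}")
```

Instruction-level accuracy pools every instruction across all prompts, so it is more forgiving than prompt-level accuracy, which requires all instructions in a prompt to pass at once.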