SOTAVerified

Instruction Following

Instruction following is a fundamental capability of large language models. This task evaluates how well a model follows human instructions, with the goal of producing controllable and safe responses.

Papers

Showing 331–340 of 1135 papers

| Title | Status | Hype |
| --- | --- | --- |
| On the Multi-turn Instruction Following for Conversational Web Agents | Code | 1 |
| INSTRUCTIR: A Benchmark for Instruction Following of Information Retrieval Models | Code | 1 |
| PromptKD: Distilling Student-Friendly Knowledge for Generative Language Models via Prompt Tuning | Code | 1 |
| Your Vision-Language Model Itself Is a Strong Filter: Towards High-Quality Instruction Tuning with Data Selection | Code | 1 |
| Multi-modal Preference Alignment Remedies Degradation of Visual Instruction Tuning on Language Models | Code | 1 |
| Answer is All You Need: Instruction-following Text Embedding via Answering the Question | Code | 1 |
| Aya Dataset: An Open-Access Collection for Multilingual Instruction Tuning | Code | 1 |
| Personalized Language Modeling from Personalized Human Feedback | Code | 1 |
| A Survey on Data Selection for LLM Instruction Tuning | Code | 1 |
| SelectLLM: Can LLMs Select Important Instructions to Annotate? | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | — | Unverified |
| 2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | — | Unverified |
| 3 | GPT-4 | Inst-level loose-accuracy | 85.37 | — | Unverified |
| 4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | — | Unverified |
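The benchmark metric above, instruction-level loose accuracy, is commonly computed by pairing each instruction with a programmatic verifier and counting an instruction as followed if any relaxed variant of the response passes. The sketch below is a minimal, hypothetical illustration of that scoring scheme; the `relaxed_variants` transformations and the example checkers are assumptions for demonstration, not the exact procedure used by these papers.

```python
# Hypothetical sketch of "instruction-level loose accuracy":
# each response is paired with verifier functions, one per instruction.
# "Loose" scoring counts an instruction as followed if ANY relaxed
# variant of the response passes its verifier.

from typing import Callable, List, Tuple

def relaxed_variants(response: str) -> List[str]:
    """Generate relaxed forms of a response for loose scoring (assumed set)."""
    lines = response.splitlines()
    variants = [response]
    if len(lines) > 1:
        variants.append("\n".join(lines[1:]))   # drop a leading preamble line
        variants.append("\n".join(lines[:-1]))  # drop a trailing line
    variants.append(response.replace("*", ""))  # strip markdown emphasis
    return variants

def inst_level_loose_accuracy(
    examples: List[Tuple[str, List[Callable[[str], bool]]]]
) -> float:
    """Fraction of instructions satisfied by at least one relaxed variant."""
    followed = total = 0
    for response, checkers in examples:
        for check in checkers:
            total += 1
            if any(check(v) for v in relaxed_variants(response)):
                followed += 1
    return followed / total if total else 0.0

# Toy example: one response, two instructions ("answer in all caps",
# "use at least two words"). The first checker fails on the full text
# because of the preamble, but passes once the preamble line is dropped.
examples = [
    ("Sure, here you go:\nHELLO WORLD",
     [lambda r: r.isupper(),
      lambda r: len(r.split()) >= 2])
]
print(inst_level_loose_accuracy(examples))  # → 1.0
```

Strict accuracy would use only the original response, under which the all-caps instruction above would fail; the gap between strict and loose scores indicates how often models wrap a compliant answer in extra boilerplate.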