SOTAVerified

Instruction Following

Instruction following is a fundamental capability of large language models. This task evaluates how well a model follows human instructions, with the goal of generating controllable and safe responses.

Papers

Showing 391–400 of 1135 papers

| Title | Status | Hype |
| --- | --- | --- |
| Improving Translation Faithfulness of Large Language Models via Augmenting Instructions | Code | 1 |
| Instruction Position Matters in Sequence Generation with Large Language Models | Code | 1 |
| InstructionGPT-4: A 200-Instruction Paradigm for Fine-Tuning MiniGPT-4 | Code | 1 |
| Context-Aware Planning and Environment-Aware Memory for Instruction Following Embodied Agents | Code | 1 |
| VisIT-Bench: A Benchmark for Vision-Language Instruction Following Inspired by Real-World Use | Code | 1 |
| Self-Alignment with Instruction Backtranslation | Code | 1 |
| Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Code | 1 |
| AlpaGasus: Training A Better Alpaca with Fewer Data | Code | 1 |
| Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Code | 1 |
| Opening up ChatGPT: Tracking openness, transparency, and accountability in instruction-tuned text generators | Code | 1 |
Page 40 of 114

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | | Unverified |
| 2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | | Unverified |
| 3 | GPT-4 | Inst-level loose-accuracy | 85.37 | | Unverified |
| 4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | | Unverified |
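The "Inst-level loose-accuracy" metric above comes from IFEval-style evaluation: each prompt carries one or more verifiable instructions, and instruction-level accuracy is the fraction of individual instructions satisfied. The "loose" variant also accepts a response if any of several relaxed transformations of it (e.g. dropping a leading or trailing line, stripping markdown emphasis) passes the check. The sketch below illustrates the idea; the function names (`check_instruction`, `inst_level_loose_accuracy`) and the exact set of relaxations are illustrative assumptions, not the official IFEval implementation.

```python
# Illustrative sketch of instruction-level loose accuracy (IFEval-style).
# check_instruction and the relaxation set are hypothetical, not a real API.

def loose_variants(response: str) -> list[str]:
    """Relaxed variants of a response: the original, plus versions with the
    first/last line removed and markdown emphasis stripped (empty variants
    are discarded)."""
    lines = response.split("\n")
    variants = [
        response,
        "\n".join(lines[1:]),        # drop first line
        "\n".join(lines[:-1]),       # drop last line
        response.replace("*", ""),   # strip markdown emphasis markers
    ]
    return [v for v in variants if v]

def inst_level_loose_accuracy(examples, check_instruction) -> float:
    """Fraction of individual instructions satisfied by at least one loose
    variant of the model response.

    examples: iterable of (response, instructions) pairs, where instructions
    is a list of instruction identifiers.
    check_instruction(inst, text) -> bool verifies one instruction.
    """
    followed = total = 0
    for response, instructions in examples:
        for inst in instructions:
            total += 1
            if any(check_instruction(inst, v) for v in loose_variants(response)):
                followed += 1
    return followed / total if total else 0.0
```

Note that this is instruction-level (each instruction counts separately), which is why it is typically higher than prompt-level accuracy, where a response must satisfy every instruction in the prompt at once.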