SOTAVerified

Instruction Following

Instruction following is a fundamental capability of large language models. This task evaluates how well a model follows human instructions, with the aim of producing controllable and safe responses.

Papers

Showing 201-225 of 1135 papers

Title | Status | Hype
NavGPT: Explicit Reasoning in Vision-and-Language Navigation with Large Language Models | Code | 2
PandaGPT: One Model To Instruction-Follow Them All | Code | 2
ExpertPrompting: Instructing Large Language Models to be Distinguished Experts | Code | 2
Lion: Adversarial Distillation of Proprietary Large Language Models | Code | 2
Large Language Model Instruction Following: A Survey of Progresses and Challenges | Code | 2
Precise Zero-Shot Dense Retrieval without Relevance Labels | Code | 2
LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action | Code | 2
The Replica Dataset: A Digital Replica of Indoor Spaces | Code | 2
Habitat: A Platform for Embodied AI Research | Code | 2
InstructTTSEval: Benchmarking Complex Natural-Language Instruction Following in Text-to-Speech Systems | Code | 1
Adversarial Paraphrasing: A Universal Attack for Humanizing AI-Generated Text | Code | 1
RewardAnything: Generalizable Principle-Following Reward Models | Code | 1
Incentivizing Reasoning for Advanced Instruction-Following of Large Language Models | Code | 1
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation | Code | 1
Speech-IFEval: Evaluating Instruction-Following and Quantifying Catastrophic Forgetting in Speech-Aware Language Models | Code | 1
STRICT: Stress Test of Rendering Images Containing Text | Code | 1
OmniGenBench: A Benchmark for Omnipotent Multimodal Generation across 50+ Tasks | Code | 1
IDA-Bench: Evaluating LLMs on Interactive Guided Data Analysis | Code | 1
AGENTIF: Benchmarking Instruction Following of Large Language Models in Agentic Scenarios | Code | 1
Scaling Reasoning, Losing Control: Evaluating Instruction Following in Large Reasoning Models | Code | 1
GIE-Bench: Towards Grounded Evaluation for Text-Guided Image Editing | Code | 1
BLEUBERI: BLEU is a surprisingly effective reward for instruction following | Code | 1
MergeBench: A Benchmark for Merging Domain-Specialized LLMs | Code | 1
A Multi-Dimensional Constraint Framework for Evaluating and Improving Instruction Following in Large Language Models | Code | 1
MM-Skin: Enhancing Dermatology Vision-Language Model with an Image-Text Dataset Derived from Textbooks | Code | 1
Page 9 of 46

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose accuracy | 90.4 | – | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose accuracy | 88 | – | Unverified
3 | GPT-4 | Inst-level loose accuracy | 85.37 | – | Unverified
4 | PaLM 2 S | Inst-level loose accuracy | 59.11 | – | Unverified
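
The metric in the table, instruction-level loose accuracy, comes from IFEval-style evaluation: each prompt carries verifiable instructions, each instruction is checked independently, and "loose" scoring accepts a response if any of several relaxed variants (e.g. markdown stripped, a preamble or postamble line removed) passes the check. Below is a minimal Python sketch of that scheme; the function names and the exact set of relaxations are illustrative assumptions, not the benchmark's actual implementation.

```python
# Hypothetical sketch of instruction-level "loose" accuracy in the style of
# IFEval. All names and the specific relaxations are illustrative.

def loose_variants(response: str) -> list[str]:
    """Relaxed variants of a response, as loose evaluation uses them."""
    lines = response.split("\n")
    candidates = [
        response,
        response.replace("*", ""),   # strip markdown emphasis markers
        "\n".join(lines[1:]),        # drop a possible preamble ("Sure, here is...")
        "\n".join(lines[:-1]),       # drop a possible postamble ("Hope this helps!")
    ]
    return [c.strip() for c in candidates]

def inst_level_loose_accuracy(examples) -> float:
    """examples: iterable of (response, [checker, ...]) pairs, where each
    checker is a callable str -> bool testing one verifiable instruction.
    An instruction counts as followed if ANY loose variant passes it."""
    followed, total = 0, 0
    for response, checkers in examples:
        variants = loose_variants(response)
        for check in checkers:
            total += 1
            if any(check(v) for v in variants):
                followed += 1
    return followed / total if total else 0.0

# Toy usage: one response, two instructions (at least 5 words; ends with a period).
example = [
    ("Sure!\nThis answer has more than five words.",
     [lambda r: len(r.split()) >= 5, lambda r: r.strip().endswith(".")]),
]
print(f"{inst_level_loose_accuracy(example):.1%}")  # -> 100.0%
```

Note the denominator: instruction-level accuracy counts each instruction separately, so a response that satisfies two of three instructions in one prompt scores 2/3 here, whereas prompt-level accuracy would score it 0.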