SOTAVerified

Instruction Following

Instruction following is a fundamental capability of large language models. This task evaluates how well a model follows human instructions, with the goal of producing controllable and safe responses.

Papers

Showing 126–150 of 1135 papers

Title | Status | Hype
MMSci: A Dataset for Graduate-Level Multi-Discipline Multimodal Scientific Understanding | Code | 2
Benchmarking Complex Instruction-Following with Multiple Constraints Composition | Code | 2
Dual-Space Knowledge Distillation for Large Language Models | Code | 2
GAMA: A Large Audio-Language Model with Advanced Audio Understanding and Complex Reasoning Abilities | Code | 2
RS-Agent: Automating Remote Sensing Tasks through Intelligent Agent | Code | 2
F-LMM: Grounding Frozen Large Multimodal Models | Code | 2
CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for Task-Aware Parameter-Efficient Fine-tuning | Code | 2
GenAI Arena: An Open Evaluation Platform for Generative Models | Code | 2
BLSP-Emo: Towards Empathetic Large Speech-Language Models | Code | 2
Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models | Code | 2
Self-Exploring Language Models: Active Preference Elicitation for Online Alignment | Code | 2
EditWorld: Simulating World Dynamics for Instruction-Following Image Editing | Code | 2
Vikhr: Constructing a State-of-the-art Bilingual Open-Source Instruction-Following Large Language Model for Russian | Code | 2
Grounded 3D-LLM with Referent Tokens | Code | 2
CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts | Code | 2
From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large Language Models | Code | 2
GSCo: Towards Generalizable AI in Medicine via Generalist-Specialist Collaboration | Code | 2
Conifer: Improving Complex Constrained Instruction-Following Ability of Large Language Models | Code | 2
Direct Preference Optimization of Video Large Multimodal Models from Language Model Reward | Code | 2
Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want | Code | 2
Top Leaderboard Ranking = Top Coding Proficiency, Always? EvoEval: Evolving Coding Benchmarks via LLM | Code | 2
LITA: Language Instructed Temporal-Localization Assistant | Code | 2
Chain-of-Spot: Interactive Reasoning Improves Large Vision-Language Models | Code | 2
MineDreamer: Learning to Follow Instructions via Chain-of-Imagination for Simulated-World Control | Code | 2
CoIN: A Benchmark of Continual Instruction tuNing for Multimodel Large Language Model | Code | 2
Page 6 of 46

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | - | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | - | Unverified
3 | GPT-4 | Inst-level loose-accuracy | 85.37 | - | Unverified
4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | - | Unverified
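The "Inst-level loose-accuracy" metric in the table follows the IFEval convention: each prompt bundles one or more automatically verifiable instructions, instruction-level accuracy is the fraction of individual instructions satisfied, and the "loose" variant re-checks normalized copies of the response (e.g. with markdown markers or a boilerplate first line removed). The sketch below illustrates the idea only; the `loose_variants` normalizations and per-instruction checkers are simplified assumptions, not the exact IFEval implementation.

```python
# Illustrative sketch of IFEval-style instruction-level loose accuracy.
# The checkers here are hypothetical stand-ins; real IFEval ships one
# verifier per instruction type (word count, keyword use, format, ...).

def loose_variants(response: str):
    """Return normalized variants of a response (simplified 'loose' scoring)."""
    lines = response.split("\n")
    return [
        response,
        response.replace("*", ""),        # strip markdown emphasis markers
        "\n".join(lines[1:]).strip(),     # drop a possible intro first line
        "\n".join(lines[:-1]).strip(),    # drop a possible outro last line
    ]

def inst_level_loose_accuracy(examples):
    """examples: list of (response, [checker, ...]) pairs,
    where each checker is a callable str -> bool."""
    followed = total = 0
    for response, checkers in examples:
        for check in checkers:
            total += 1
            # Loose: the instruction counts as followed if ANY
            # normalized variant of the response passes the check.
            if any(check(v) for v in loose_variants(response)):
                followed += 1
    return followed / total if total else 0.0

# Toy usage: 4 instructions across 2 responses, 3 followed -> 0.75
examples = [
    ("**Hello world**", [
        lambda r: "Hello" in r,        # passes on the raw response
        lambda r: "*" not in r,        # passes only after stripping markdown
        lambda r: r.endswith("!"),     # fails on every variant
    ]),
    ("one\ntwo\nthree", [
        lambda r: len(r.split("\n")) >= 2,
    ]),
]
print(inst_level_loose_accuracy(examples))  # → 0.75
```

Strict accuracy would be the same computation with only the raw response checked, which is why loose scores (like the 90.4 claimed above) are typically a few points higher than their strict counterparts.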