SOTAVerified

Instruction Following

Instruction following is a core capability of large language models. This task evaluates how well a model follows human instructions, with the goal of generating controllable and safe responses.

Papers

Showing 101–150 of 1,135 papers

Title | Status | Hype
Seedream 2.0: A Native Chinese-English Bilingual Image Generation Foundation Model | Code | 2
DistiLLM-2: A Contrastive Approach Boosts the Distillation of LLMs | Code | 2
RouterEval: A Comprehensive Benchmark for Routing LLMs to Explore Model-level Scaling Up in LLMs | Code | 2
Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems | Code | 2
Rank1: Test-Time Compute for Reranking in Information Retrieval | Code | 2
TESS 2: A Large-Scale Generalist Diffusion Language Model | Code | 2
mFollowIR: a Multilingual Benchmark for Instruction Following in Retrieval | Code | 2
Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate | Code | 2
MultiChallenge: A Realistic Multi-Turn Conversation Evaluation Benchmark Challenging to Frontier LLMs | Code | 2
Test-Time Preference Optimization: On-the-Fly Alignment via Iterative Textual Feedback | Code | 2
LLM-RG4: Flexible and Factual Radiology Report Generation across Diverse Input Contexts | Code | 2
GeoGround: A Unified Large Vision-Language Model for Remote Sensing Visual Grounding | Code | 2
LHRS-Bot-Nova: Improved Multimodal Large Language Model for Remote Sensing Vision-Language Interpretation | Code | 2
Open6DOR: Benchmarking Open-instruction 6-DoF Object Rearrangement and A VLM-based Approach | Code | 2
Asynchronous RLHF: Faster and More Efficient Off-Policy RL for Language Models | Code | 2
Multi-IF: Benchmarking LLMs on Multi-Turn and Multilingual Instructions Following | Code | 2
Toward General Instruction-Following Alignment for Retrieval-Augmented Generation | Code | 2
TEOChat: A Large Vision-Language Assistant for Temporal Earth Observation Data | Code | 2
Robin3D: Improving 3D Large Language Model via Robust Instruction Tuning | Code | 2
DeSTA2: Developing Instruction-Following Speech Language Model Without Speech Instruction-Tuning Data | Code | 2
OmniBench: Towards The Future of Universal Omni-Language Models | Code | 2
Archon: An Architecture Search Framework for Inference-Time Techniques | Code | 2
SciLitLLM: How to Adapt LLMs for Scientific Literature Understanding | Code | 2
Autonomous Improvement of Instruction Following Skills via Foundation Models | Code | 2
SeaLLMs 3: Open Foundation and Chat Multilingual Large Language Models for Southeast Asian Languages | Code | 2
MMSci: A Dataset for Graduate-Level Multi-Discipline Multimodal Scientific Understanding | Code | 2
Benchmarking Complex Instruction-Following with Multiple Constraints Composition | Code | 2
Dual-Space Knowledge Distillation for Large Language Models | Code | 2
GAMA: A Large Audio-Language Model with Advanced Audio Understanding and Complex Reasoning Abilities | Code | 2
RS-Agent: Automating Remote Sensing Tasks through Intelligent Agent | Code | 2
F-LMM: Grounding Frozen Large Multimodal Models | Code | 2
CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for Task-Aware Parameter-Efficient Fine-tuning | Code | 2
GenAI Arena: An Open Evaluation Platform for Generative Models | Code | 2
BLSP-Emo: Towards Empathetic Large Speech-Language Models | Code | 2
Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models | Code | 2
Self-Exploring Language Models: Active Preference Elicitation for Online Alignment | Code | 2
EditWorld: Simulating World Dynamics for Instruction-Following Image Editing | Code | 2
Vikhr: Constructing a State-of-the-art Bilingual Open-Source Instruction-Following Large Language Model for Russian | Code | 2
Grounded 3D-LLM with Referent Tokens | Code | 2
CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts | Code | 2
From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large Language Models | Code | 2
GSCo: Towards Generalizable AI in Medicine via Generalist-Specialist Collaboration | Code | 2
Conifer: Improving Complex Constrained Instruction-Following Ability of Large Language Models | Code | 2
Direct Preference Optimization of Video Large Multimodal Models from Language Model Reward | Code | 2
Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want | Code | 2
Top Leaderboard Ranking = Top Coding Proficiency, Always? EvoEval: Evolving Coding Benchmarks via LLM | Code | 2
LITA: Language Instructed Temporal-Localization Assistant | Code | 2
Chain-of-Spot: Interactive Reasoning Improves Large Vision-Language Models | Code | 2
MineDreamer: Learning to Follow Instructions via Chain-of-Imagination for Simulated-World Control | Code | 2
CoIN: A Benchmark of Continual Instruction tuNing for Multimodel Large Language Model | Code | 2
Page 3 of 23

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose accuracy | 90.4 | – | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose accuracy | 88.0 | – | Unverified
3 | GPT-4 | Inst-level loose accuracy | 85.37 | – | Unverified
4 | PaLM 2 S | Inst-level loose accuracy | 59.11 | – | Unverified
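For context, "Inst-level loose accuracy" is an IFEval-style metric: each prompt carries several verifiable instructions, and the score is the fraction of individual instructions satisfied, where "loose" means lenient response variants (e.g. stripping a leading intro line) are tried before checking. The sketch below is an illustrative aggregation, not the benchmark's actual code; the checker functions and sample data are hypothetical.

```python
def loose_variants(response: str):
    """Lenient transformations tried before checking ("loose" mode)."""
    variants = [response, response.strip(), response.strip("*` \n")]
    lines = response.splitlines()
    if len(lines) > 1:
        variants.append("\n".join(lines[1:]))   # drop a leading intro line
        variants.append("\n".join(lines[:-1]))  # drop a trailing outro line
    return variants

def inst_level_loose_accuracy(samples):
    """samples: list of (response, [checker, ...]) pairs, where each
    checker is a predicate verifying one instruction."""
    passed = total = 0
    for response, checkers in samples:
        for check in checkers:
            total += 1
            # An instruction counts as followed if ANY loose variant passes.
            if any(check(v) for v in loose_variants(response)):
                passed += 1
    return passed / total if total else 0.0

# Toy usage: one response checked against two instructions
# ("mention Paris", "use at least 5 words").
samples = [
    ("Sure! Here it is:\nParis is the capital of France.",
     [lambda r: "Paris" in r, lambda r: len(r.split()) >= 5]),
]
print(inst_level_loose_accuracy(samples))  # 1.0 (both instructions satisfied)
```

Instruction-level scoring is more forgiving than prompt-level scoring, which would require every instruction in a prompt to pass at once; that gap explains why reported instruction-level numbers run higher.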