SOTAVerified

Instruction Following

Instruction following is a core capability of large language models. This task evaluates how well a model follows human instructions, with the goal of producing controllable and safe responses.

Papers

Showing 101–150 of 1135 papers

Title | Status | Hype
When Thinking Fails: The Pitfalls of Reasoning for Instruction-Following in LLMs | — | 0
GuideBench: Benchmarking Domain-Oriented Guideline Following for LLM Agents | — | 0
BLEUBERI: BLEU is a surprisingly effective reward for instruction following | Code | 1
MergeBench: A Benchmark for Merging Domain-Specialized LLMs | Code | 1
Navigating the Alpha Jungle: An LLM-Powered MCTS Framework for Formulaic Factor Mining | — | 0
UniEval: Unified Holistic Evaluation for Unified Multimodal Understanding and Generation | — | 0
Tests as Prompt: A Test-Driven-Development Benchmark for LLM Code Generation | — | 0
HealthBench: Evaluating Large Language Models Towards Improved Human Health | Code | 7
Judging the Judges: Can Large Vision-Language Models Fairly Evaluate Chart Comprehension and Reasoning? | Code | 0
A Multi-Dimensional Constraint Framework for Evaluating and Improving Instruction Following in Large Language Models | Code | 1
Efficient Telecom Specific LLM: TSLAM-Mini with QLoRA and Digital Twin Data | — | 0
Assessing Robustness to Spurious Correlations in Post-Training Language Models | — | 0
MM-Skin: Enhancing Dermatology Vision-Language Model with an Image-Text Dataset Derived from Textbooks | Code | 1
Adaptive Markup Language Generation for Contextually-Grounded Visual Document Understanding | Code | 1
T2VTextBench: A Human Evaluation Benchmark for Textual Control in Video Generation Models | — | 0
LLaMA-Omni2: LLM-based Real-time Spoken Chatbot with Autoregressive Streaming Speech Synthesis | Code | 3
Incentivizing Inclusive Contributions in Model Sharing Markets | — | 0
PIPA: A Unified Evaluation Protocol for Diagnosing Interactive Planning Agents | — | 0
T2VPhysBench: A First-Principles Benchmark for Physical Consistency in Text-to-Video Generation | — | 0
Ask, Fail, Repeat: Meeseeks, an Iterative Feedback Benchmark for LLMs' Multi-turn Instruction-Following Ability | — | 0
UAV-VLN: End-to-End Vision Language guided Navigation for UAVs | — | 0
TF1-EN-3M: Three Million Synthetic Moral Fables for Training Small, Open Language Models | Code | 0
CachePrune: Neural-Based Attribution Defense Against Indirect Prompt Injection Attacks | — | 0
Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs | — | 0
ParamΔ for Direct Weight Mixing: Post-Train Large Language Model at Zero Cost | — | 0
ManipDreamer: Boosting Robotic Manipulation World Model with Action Tree and Visual Guidance | — | 0
Case Study: Fine-tuning Small Language Models for Accurate and Private CWE Detection in Python Code | — | 0
Instruction-Tuning Data Synthesis from Scratch via Web Reconstruction | Code | 1
Evaluating Judges as Evaluators: The JETTS Benchmark of LLM-as-Judges as Test-Time Scaling Evaluators | Code | 0
DistilQwen2.5: Industrial Practices of Training Distilled Open Lightweight Language Models | — | 0
Chinese-Vicuna: A Chinese Instruction-following Llama-based Model | Code | 7
Improving Instruct Models for Free: A Study on Partial Adaptation | — | 0
A Dual-Space Framework for General Knowledge Distillation of Large Language Models | Code | 1
RealWebAssist: A Benchmark for Long-Horizon Web Assistance with Real-World Users | Code | 1
How Instruction and Reasoning Data shape Post-Training: Data Quality through the Lens of Layer-wise Gradients | Code | 2
SIFT-50M: A Large-Scale Multilingual Dataset for Speech Instruction Fine-Tuning | — | 0
Playpen: An Environment for Exploring Learning Through Conversational Interaction | Code | 0
Capybara-OMNI: An Efficient Paradigm for Building Omni-Modal Language Models | — | 0
MM-IFEngine: Towards Multimodal Instruction Following | Code | 2
VideoExpert: Augmented LLM for Temporal-Sensitive Video Understanding | — | 0
Holistic Capability Preservation: Towards Compact Yet Comprehensive Reasoning Models | — | 0
Sculpting Subspaces: Constrained Full Fine-Tuning in LLMs for Continual Learning | Code | 1
From 128K to 4M: Efficient Training of Ultra-Long Context Large Language Models | — | 0
Finding Fantastic Experts in MoEs: A Unified Study for Expert Dropping Strategies and Observations | — | 0
Separator Injection Attack: Uncovering Dialogue Biases in Large Language Models Caused by Role Separators | — | 0
Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large Language Models | Code | 2
VARGPT-v1.1: Improve Visual Autoregressive Large Unified Model via Iterative Instruction Tuning and Reinforcement Learning | Code | 3
CrystalFormer-RL: Reinforcement Fine-Tuning for Materials Design | Code | 2
STING-BEE: Towards Vision-Language Model for Real-World X-ray Baggage Security Inspection | Code | 1
The Hidden Space of Safety: Understanding Preference-Tuned LLMs in Multilingual context | — | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | — | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | — | Unverified
3 | GPT-4 | Inst-level loose-accuracy | 85.37 | — | Unverified
4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | — | Unverified
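
The metric reported above, instruction-level loose accuracy, follows the IFEval convention: each prompt bundles one or more automatically verifiable instructions, and "loose" scoring retries each check against lenient rewrites of the response (for example, stripping markdown emphasis or dropping a leading or trailing line) before counting it as failed. The sketch below is an illustrative reimplementation under those assumptions, not the official IFEval code; the checker predicates and the exact set of transformations are hypothetical.

```python
from typing import Callable, Iterable, List, Tuple

Checker = Callable[[str], bool]

def loose_variants(response: str) -> List[str]:
    """Lenient rewrites tried under 'loose' scoring: the raw response,
    the response with a leading or trailing line removed, and each of
    those with asterisk markdown emphasis stripped."""
    lines = response.split("\n")
    candidates = [
        response,
        "\n".join(lines[1:]).strip(),    # drop a leading intro line
        "\n".join(lines[:-1]).strip(),   # drop a trailing outro line
        "\n".join(lines[1:-1]).strip(),  # drop both
    ]
    return candidates + [c.replace("*", "") for c in candidates]

def inst_level_loose_accuracy(
    records: Iterable[Tuple[str, List[Checker]]]
) -> float:
    """records pairs each model response with one checker predicate per
    verifiable instruction in its prompt. An instruction counts as
    followed if ANY loose variant of the response satisfies its checker."""
    followed = total = 0
    for response, checkers in records:
        variants = loose_variants(response)
        for check in checkers:
            total += 1
            if any(check(v) for v in variants):
                followed += 1
    return followed / total if total else 0.0

# Hypothetical usage: two instructions ("answer in all caps",
# "use at most five words"). The trailing pleasantry would fail a
# strict all-caps check, but a loose variant passes.
records = [("HELLO WORLD\nthanks for asking",
            [str.isupper, lambda r: len(r.split()) <= 5])]
print(inst_level_loose_accuracy(records))  # 1.0
```

Note the distinction from prompt-level accuracy, which credits a prompt only when all of its instructions pass; the instruction-level numbers shown here credit each instruction independently, so they run somewhat higher.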