SOTAVerified

Instruction Following

Instruction following is a core capability of large language models. This task evaluates how well a model follows human instructions, with the goal of producing controllable and safe responses.
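Benchmarks in this area (IFEval-style suites, which the results table below also draws on) typically express each instruction as a machine-verifiable constraint on the response. Below is a minimal, illustrative Python sketch of such checks; the constraint functions and sample response are hypothetical, not taken from any specific benchmark.

```python
# Minimal, illustrative sketch of verifiable instruction checking.
# Each "instruction" is a programmatic constraint applied to the model's response.

def check_min_bullets(response: str, n: int) -> bool:
    """Constraint: the response must contain at least n bullet points."""
    bullets = [ln for ln in response.splitlines() if ln.lstrip().startswith(("-", "*"))]
    return len(bullets) >= n

def check_word_absent(response: str, word: str) -> bool:
    """Constraint: the response must not contain a forbidden word."""
    return word.lower() not in response.lower()

response = "- first point\n- second point\n- third point"
checks = [
    lambda r: check_min_bullets(r, 3),
    lambda r: check_word_absent(r, "sorry"),
]
passed = sum(check(response) for check in checks)
print(f"{passed}/{len(checks)} instructions followed")  # prints "2/2 instructions followed"
```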

Papers

Showing 201-225 of 1135 papers

Title | Status | Hype
CodeIF: Benchmarking the Instruction-Following Capabilities of Large Language Models for Code Generation | Code | 1
TextGames: Learning to Self-Play Text-Based Puzzle Games via Language Model Reasoning | Code | 0
Rank1: Test-Time Compute for Reranking in Information Retrieval | Code | 2
URO-Bench: A Comprehensive Benchmark for End-to-End Spoken Dialogue Models | | 0
ATEB: Evaluating and Improving Advanced NLP Tasks for Text Embedding Models | | 0
Order Matters: Investigate the Position Bias in Multi-constraint Instruction Following | Code | 0
UrduLLaMA 1.0: Dataset Curation, Preprocessing, and Evaluation in Low-Resource Settings | | 0
Capability Instruction Tuning: A New Paradigm for Dynamic LLM Routing | Code | 0
NatSGLD: A Dataset with Speech, Gesture, Logic, and Demonstration for Robot Learning in Natural Human-Robot Interaction | Code | 0
Sequence-level Large Language Model Training with Contrastive Preference Optimization | | 0
SOTOPIA-Ω: Dynamic Strategy Injection Learning and Social Instruction Following Evaluation for Social Agents | Code | 0
StructFlowBench: A Structured Flow Benchmark for Multi-turn Instruction Following | Code | 1
OpenSearch-SQL: Enhancing Text-to-SQL with Dynamic Few-shot and Consistency Alignment | | 0
Investigating Non-Transitivity in LLM-as-a-Judge | | 0
Instruction Tuning on Public Government and Cultural Data for Low-Resource Language: a Case Study in Kazakh | | 0
TESS 2: A Large-Scale Generalist Diffusion Language Model | Code | 2
MMTEB: Massive Multilingual Text Embedding Benchmark | | 0
TALKPLAY: Multimodal Music Recommendation with Large Language Models | | 0
Integrating Arithmetic Learning Improves Mathematical Reasoning in Smaller Models | | 0
SAIF: A Sparse Autoencoder Framework for Interpreting and Steering Instruction Following of Language Models | | 0
MuSC: Improving Complex Instruction Following with Multi-granularity Self-Contrastive Training | Code | 0
RoleMRC: A Fine-Grained Composite Benchmark for Role-Playing and Instruction-Following | Code | 0
Do we Really Need Visual Instructions? Towards Visual Instruction-Free Fine-tuning for Large Vision-Language Models | | 0
Step-Audio: Unified Understanding and Generation in Intelligent Speech Interaction | Code | 7
Learning to Keep a Promise: Scaling Language Model Decoding Parallelism with Learned Asynchronous Decoding | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | | Unverified
3 | GPT-4 | Inst-level loose-accuracy | 85.37 | | Unverified
4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | | Unverified
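The "Inst-level loose-accuracy" metric reported above follows IFEval-style scoring: accuracy is counted per individual instruction rather than per prompt, and the "loose" variant also accepts a response if it passes after simple relaxations such as stripping markdown emphasis or dropping the first or last line. Below is a hedged sketch of that aggregation, assuming boolean checker functions like the ones sketched earlier; the exact set of relaxations varies by implementation.

```python
from itertools import product

def loose_variants(response: str) -> list[str]:
    """Relaxed variants of a response: optionally drop the first and/or last
    line and optionally strip '*' markdown emphasis (assumed relaxations)."""
    lines = response.splitlines()
    variants = []
    for drop_first, drop_last in product([False, True], repeat=2):
        body = lines[1:] if drop_first else list(lines)
        if drop_last and body:
            body = body[:-1]
        text = "\n".join(body)
        variants.append(text)
        variants.append(text.replace("*", ""))
    return variants

def inst_level_loose_accuracy(records) -> float:
    """records: iterable of (response, [checker, ...]) pairs, where each checker
    maps a response string to True/False. An instruction counts as followed if
    any loose variant of the response passes its checker."""
    followed = total = 0
    for response, checkers in records:
        for check in checkers:
            total += 1
            if any(check(variant) for variant in loose_variants(response)):
                followed += 1
    return followed / total if total else 0.0
```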