SOTAVerified

Instruction Following

Instruction following is a core capability of large language models. This task evaluates how well a model follows human instructions, with the goal of producing controllable and safe responses.

Papers

Showing 101-110 of 1135 papers

| Title | Status | Hype |
|---|---|---|
| Seedream 2.0: A Native Chinese-English Bilingual Image Generation Foundation Model | Code | 2 |
| DistiLLM-2: A Contrastive Approach Boosts the Distillation of LLMs | Code | 2 |
| RouterEval: A Comprehensive Benchmark for Routing LLMs to Explore Model-level Scaling Up in LLMs | Code | 2 |
| Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems | Code | 2 |
| Rank1: Test-Time Compute for Reranking in Information Retrieval | Code | 2 |
| TESS 2: A Large-Scale Generalist Diffusion Language Model | Code | 2 |
| mFollowIR: a Multilingual Benchmark for Instruction Following in Retrieval | Code | 2 |
| MultiChallenge: A Realistic Multi-Turn Conversation Evaluation Benchmark Challenging to Frontier LLMs | Code | 2 |
| Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate | Code | 2 |
| Test-Time Preference Optimization: On-the-Fly Alignment via Iterative Textual Feedback | Code | 2 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | | Unverified |
| 2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | | Unverified |
| 3 | GPT-4 | Inst-level loose-accuracy | 85.37 | | Unverified |
| 4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | | Unverified |
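The Inst-level loose-accuracy metric above comes from IFEval-style evaluation: each prompt carries one or more verifiable instructions, and under loose scoring an instruction counts as followed if any relaxed variant of the response (e.g. with markdown stripped or a boilerplate first/last line removed) passes that instruction's check. A minimal sketch of how such a score could be computed; the verifier functions and the specific relaxations here are illustrative assumptions, not the benchmark's exact implementation:

```python
def loose_variants(response: str):
    """Relaxed variants of a response, as loose scoring permits.

    The set of relaxations below is an assumption for illustration:
    the original text, the text with '*' markdown removed, and the
    text with its first or last line dropped (e.g. a "Sure!" preamble).
    """
    lines = response.splitlines()
    candidates = {
        response,
        response.replace("*", ""),
        "\n".join(lines[1:]),
        "\n".join(lines[:-1]),
    }
    return [c for c in candidates if c]


def inst_level_loose_accuracy(responses, instruction_checks):
    """Percentage of individual instructions followed under loose scoring.

    responses[i] is scored against instruction_checks[i], a list of
    per-instruction verifier functions; an instruction counts as
    followed if any loose variant of the response passes its check.
    """
    followed = total = 0
    for resp, checks in zip(responses, instruction_checks):
        for check in checks:
            total += 1
            if any(check(v) for v in loose_variants(resp)):
                followed += 1
    return 100.0 * followed / total if total else 0.0
```

For example, a response "Sure!\nhello world" passes both a keyword check and a minimum-length check once the leading line is allowed to be dropped, while a response that never satisfies its check contributes a miss, so two followed instructions out of three yields roughly 66.7.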