
Instruction Following

Instruction following is a fundamental capability of large language models. This task evaluates how well a model follows human instructions, with the goal of producing controllable and safe responses.

Papers

Showing 1041–1050 of 1135 papers

| Title | Status | Hype |
|---|---|---|
| Evaluating Judges as Evaluators: The JETTS Benchmark of LLM-as-Judges as Test-Time Scaling Evaluators | Code | 0 |
| Sloth: scaling laws for LLM skills to predict multi-benchmark performance across families | Code | 0 |
| How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities | Code | 0 |
| TextGames: Learning to Self-Play Text-Based Puzzle Games via Language Model Reasoning | Code | 0 |
| Token-Efficient Leverage Learning in Large Language Models | Code | 0 |
| Find the Intention of Instruction: Comprehensive Evaluation of Instruction Understanding for Large Language Models | Code | 0 |
| Generalization Analogies: A Testbed for Generalizing AI Oversight to Hard-To-Measure Domains | Code | 0 |
| Opt-Out: Investigating Entity-Level Unlearning for Large Language Models via Optimal Transport | Code | 0 |
| TF1-EN-3M: Three Million Synthetic Moral Fables for Training Small, Open Language Models | Code | 0 |
| Evaluating the Instruction-following Abilities of Language Models using Knowledge Tasks | Code | 0 |
Page 105 of 114

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | AutoIF (Llama3 70B) | Inst-level loose accuracy | 90.4 | | Unverified |
| 2 | AutoIF (Qwen2 72B) | Inst-level loose accuracy | 88 | | Unverified |
| 3 | GPT-4 | Inst-level loose accuracy | 85.37 | | Unverified |
| 4 | PaLM 2 S | Inst-level loose accuracy | 59.11 | | Unverified |
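
The metric in this table, instruction-level loose accuracy, comes from the IFEval benchmark: each prompt carries one or more automatically verifiable instructions (e.g. "respond in all caps", "use at least five words"), and the score is the fraction of individual instructions the model satisfies. "Loose" scoring additionally accepts a response if any of several relaxed transformations of it (dropping a boilerplate first or last line, stripping markdown emphasis) passes the check. Below is a minimal sketch of this computation; the data layout and checker functions are illustrative assumptions, not the official IFEval implementation.

```python
# Minimal sketch of IFEval-style instruction-level loose accuracy.
# The data layout and the toy checkers are illustrative assumptions,
# not the official IFEval implementation.

def loose_variants(response: str) -> list[str]:
    """Relaxed views of a response used by 'loose' scoring: the raw
    text, the text with a boilerplate first/last line removed, and
    each of those with markdown emphasis markers stripped."""
    lines = response.split("\n")
    variants = [
        response,
        "\n".join(lines[1:]),    # drop a leading "Sure, here is..." line
        "\n".join(lines[:-1]),   # drop a trailing sign-off line
        "\n".join(lines[1:-1]),  # drop both
    ]
    variants += [v.replace("*", "") for v in variants]  # strip markdown emphasis
    return variants

def inst_level_loose_accuracy(examples) -> float:
    """examples: iterable of (response, checkers) pairs, where
    `checkers` holds one predicate per verifiable instruction in the
    prompt. An instruction counts as followed if ANY loose variant
    of the response passes its check."""
    followed = total = 0
    for response, checkers in examples:
        variants = loose_variants(response)
        for check in checkers:
            total += 1
            if any(check(v) for v in variants):
                followed += 1
    return followed / total if total else 0.0

# Hypothetical usage with toy checkers:
examples = [
    ("Sure!\nHELLO WORLD", [str.isupper, lambda r: "world" in r.lower()]),
    ("short answer",       [lambda r: len(r.split()) >= 5]),
]
print(f"Inst-level loose accuracy: {inst_level_loose_accuracy(examples):.1%}")
```

On these toy examples, two of the three instructions pass (the all-caps check only succeeds after the "Sure!" preamble is dropped, which is exactly what loose scoring allows), giving roughly 66.7%.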