SOTAVerified

Instruction Following

Instruction following is a core capability of large language models. This task evaluates how well a model follows human instructions, with the goal of producing controllable and safe responses.
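In practice, benchmarks in this area often pair each prompt with constraints that can be checked programmatically. The sketch below is a minimal, hypothetical illustration of such a check; the constraint vocabulary and the helper name are assumptions for illustration, not any specific benchmark's API.

```python
# Minimal sketch of a programmatically verifiable instruction check.
# The constraint kinds ("max_words", "must_include", "ends_with") and the
# helper name are illustrative assumptions, not a benchmark's real API.
def follows_instruction(response: str, instruction: dict) -> bool:
    kind = instruction["kind"]
    if kind == "max_words":
        return len(response.split()) <= instruction["limit"]
    if kind == "must_include":
        return instruction["keyword"].lower() in response.lower()
    if kind == "ends_with":
        return response.rstrip().endswith(instruction["suffix"])
    raise ValueError(f"unknown instruction kind: {kind}")

# Example: one prompt paired with two verifiable constraints.
checks = [
    {"kind": "max_words", "limit": 100},
    {"kind": "must_include", "keyword": "budget"},
]
response = "The proposed budget covers three quarters of the project."
print(all(follows_instruction(response, c) for c in checks))  # True
```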

Papers

Showing 981–990 of 1135 papers

Title | Status | Hype
Multi-Level Compositional Reasoning for Interactive Instruction Following | Code | 0
Implicit Cross-Lingual Rewarding for Efficient Multilingual Preference Alignment | Code | 0
Preference-Guided Reflective Sampling for Aligning Language Models | Code | 0
Improving Instruction Following in Language Models through Proxy-Based Uncertainty Estimation | Code | 0
Unintended Impacts of LLM Alignment on Global Representation | Code | 0
Third-Party Language Model Performance Prediction from Instruction | Code | 0
CoEvol: Constructing Better Responses for Instruction Finetuning through Multi-Agent Cooperation | Code | 0
Pre-Learning Environment Representations for Data-Efficient Neural Instruction Following | Code | 0
Aligning Large Language Models by On-Policy Self-Judgment | Code | 0
PrimeGuard: Safe and Helpful LLMs through Tuning-Free Routing | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose accuracy | 90.4 | - | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose accuracy | 88 | - | Unverified
3 | GPT-4 | Inst-level loose accuracy | 85.37 | - | Unverified
4 | PaLM 2 S | Inst-level loose accuracy | 59.11 | - | Unverified
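For reference, "Inst-level loose accuracy" in tables like this usually denotes the instruction-level loose metric from IFEval-style evaluations: an instruction counts as followed if the raw response, or a lightly normalized variant of it, passes that instruction's check. The sketch below is an approximation under that reading; the normalizations and the checker interface are assumptions, not the benchmark's reference implementation.

```python
# Hedged sketch of instruction-level loose accuracy. The "loose" variants
# (markdown markers stripped, a possible preamble or closing line dropped)
# only approximate the published loose criterion.
from typing import Callable, Iterable, List, Tuple

def loose_variants(response: str) -> Iterable[str]:
    yield response
    no_markdown = response.replace("**", "").replace("*", "")
    yield no_markdown
    lines = no_markdown.splitlines()
    if len(lines) > 1:
        yield "\n".join(lines[1:])   # drop a possible preamble line
        yield "\n".join(lines[:-1])  # drop a possible closing line

def inst_level_loose_accuracy(
    results: List[Tuple[str, List[Callable[[str], bool]]]]
) -> float:
    """results: (model response, list of per-instruction check functions)."""
    passed = total = 0
    for response, checks in results:
        for check in checks:
            total += 1
            # Loose criterion: any normalized variant may satisfy the check.
            if any(check(variant) for variant in loose_variants(response)):
                passed += 1
    return passed / total if total else 0.0

# Example: two responses, three instructions in total, two satisfied.
results = [
    ("**Sure!**\nParis is the capital of France.",
     [lambda r: "Paris" in r, lambda r: len(r.split()) <= 10]),
    ("It rains a lot.", [lambda r: r.strip().endswith("?")]),
]
print(round(inst_level_loose_accuracy(results), 4))  # 0.6667
```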