SOTAVerified

Instruction Following

Instruction following is a fundamental task for large language models. This category evaluates a model's ability to follow human instructions, with the goal of generating controllable and safe responses.

Papers

Showing 451–460 of 1135 papers

| Title | Status | Hype |
| --- | --- | --- |
| Adaptive Detoxification: Safeguarding General Capabilities of LLMs through Toxicity-Aware Knowledge Editing | | 0 |
| Disentangling Length Bias In Preference Learning Via Response-Conditioned Modeling | | 0 |
| DiscreteSLU: A Large Language Model with Self-Supervised Discrete Speech Units for Spoken Language Understanding | | 0 |
| AC/DC: LLM-based Audio Comprehension via Dialogue Continuation | | 0 |
| Better Instruction-Following Through Minimum Bayes Risk | | 0 |
| Efficient Prompt Optimization Through the Lens of Best Arm Identification | | 0 |
| Diffusion vs. Autoregressive Language Models: A Text Embedding Perspective | | 0 |
| Iterative Value Function Optimization for Guided Decoding | | 0 |
| Differential Information: An Information-Theoretic Perspective on Preference Optimization | | 0 |
| DiffChat: Learning to Chat with Text-to-Image Synthesis Models for Interactive Image Creation | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | | Unverified |
| 2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | | Unverified |
| 3 | GPT-4 | Inst-level loose-accuracy | 85.37 | | Unverified |
| 4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | | Unverified |
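The metric in the table above, instruction-level loose accuracy, comes from IFEval-style evaluation: each prompt carries one or more verifiable instructions, and the "loose" criterion also accepts a response after lenient cleanups (e.g. stripping markdown emphasis or dropping a preamble line). The sketch below is a minimal illustration of that idea, not the official scoring code; the variant transformations, checker functions, and example data are all hypothetical.

```python
# Minimal sketch of instruction-level "loose" accuracy, assuming each
# example pairs a model response with per-instruction checker functions.
# The loose criterion counts an instruction as passed if ANY lenient
# variant of the response satisfies its checker.

def loose_variants(response: str):
    """Yield the raw response plus leniently cleaned variants
    (assumed transformations for illustration)."""
    yield response
    yield response.replace("*", "")        # strip markdown emphasis
    lines = response.splitlines()
    if len(lines) > 1:
        yield "\n".join(lines[1:])         # drop a leading preamble line
        yield "\n".join(lines[:-1])        # drop a trailing sign-off line

def inst_level_loose_accuracy(examples):
    """examples: list of (response, [checker, ...]) pairs.
    Each checker maps a response string to True if its instruction
    is satisfied. Returns accuracy as a percentage over all
    instructions, not over prompts."""
    passed = total = 0
    for response, checkers in examples:
        for check in checkers:
            total += 1
            if any(check(v) for v in loose_variants(response)):
                passed += 1
    return 100.0 * passed / total if total else 0.0

# Hypothetical toy checkers and data
ends_with_period = lambda r: r.rstrip().endswith(".")
mentions_cat = lambda r: "cat" in r.lower()

examples = [
    ("**The cat sat.**", [ends_with_period, mentions_cat]),
    ("Hello there", [ends_with_period]),
]
print(inst_level_loose_accuracy(examples))  # 2 of 3 instructions pass
```

Note the denominator: instruction-level accuracy counts every individual instruction, so a prompt with two constraints contributes two units, which is why it differs from prompt-level accuracy reported elsewhere.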