SOTAVerified

Instruction Following

Instruction following is a fundamental capability of large language models. This task evaluates how well a model follows human instructions, with the goal of producing controllable and safe answers.

Papers

Showing 731–740 of 1135 papers

Title | Status | Hype
The Comparative Trap: Pairwise Comparisons Amplifies Biased Preferences of LLM Evaluators | | 0
Privately Aligning Language Models with Reinforcement Learning | | 0
Prompt Baking | | 0
Prompter: Utilizing Large Language Model Prompting for a Data Efficient Embodied Instruction Following | | 0
Unleashing Hour-Scale Video Training for Long Video-Language Understanding | | 0
PUB: A Pragmatics Understanding Benchmark for Assessing LLMs' Pragmatics Capabilities | | 0
PUMGPT: A Large Vision-Language Model for Product Understanding | | 0
BARE: Leveraging Base Language Models for Few-Shot Synthetic Data Generation | | 0
Quantifying and Attributing the Hallucination of Large Language Models via Association Analysis | | 0
Question: How do Large Language Models perform on the Question Answering tasks? Answer: | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | | Unverified
3 | GPT-4 | Inst-level loose-accuracy | 85.37 | | Unverified
4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | | Unverified
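The "Inst-level loose-accuracy" metric reported above is an IFEval-style score: each prompt contains one or more verifiable instructions, every instruction is checked individually, and the instruction-level accuracy is the fraction of all instructions (pooled across prompts) that the response satisfies. "Loose" means an instruction counts as followed if any of several relaxed variants of the response (for example, with markdown markers stripped) passes the check. The aggregation step can be sketched as follows; this is a minimal illustration, not the official IFEval implementation, and the `inst_level_accuracy` helper and its input format are hypothetical:

```python
def inst_level_accuracy(results):
    """Instruction-level accuracy, pooled over all prompts.

    `results` is a list with one inner list per prompt; each inner
    list holds one boolean per instruction in that prompt, True if
    the (loosely matched) response satisfied that instruction.
    Returns a percentage, matching the 0-100 scale in the table.
    (Hypothetical input format, for illustration only.)
    """
    flat = [ok for prompt in results for ok in prompt]
    if not flat:
        return 0.0
    return 100.0 * sum(flat) / len(flat)

# Three prompts with 2, 3, and 1 instructions; 5 of 6 followed.
score = inst_level_accuracy([[True, True], [True, False, True], [True]])
```

Note that this pools instructions rather than averaging per-prompt scores, so prompts with many instructions weigh more; a prompt-level metric would instead count a prompt as correct only if every one of its instructions is followed.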