SOTAVerified

Instruction Following

Instruction following is a foundational capability of large language models. This task evaluates how well a model follows human instructions, with the goal of producing controllable and safe responses.

Papers

Showing 41–50 of 1135 papers

Title | Status | Hype
On the Mechanism of Reasoning Pattern Selection in Reinforcement Learning for Language Models | — | 0
Robust Anti-Backdoor Instruction Tuning in LVLMs | — | 0
RewardAnything: Generalizable Principle-Following Reward Models | Code | 1
MASTER: Enhancing Large Language Model via Multi-Agent Simulated Teaching | — | 0
TIIF-Bench: How Does Your T2I Model Follow Your Instructions? | — | 0
Incentivizing Reasoning for Advanced Instruction-Following of Large Language Models | Code | 1
RewardBench 2: Advancing Reward Model Evaluation | Code | 4
MoDA: Modulation Adapter for Fine-Grained Visual Grounding in Instructional MLLMs | — | 0
FusionAudio-1.2M: Towards Fine-grained Audio Captioning with Multimodal Contextual Fusion | Code | 2
PersianMedQA: Language-Centric Evaluation of LLMs in the Persian Medical Domain | — | 0
Page 5 of 114

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | — | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | — | Unverified
3 | GPT-4 | Inst-level loose-accuracy | 85.37 | — | Unverified
4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | — | Unverified
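The metric in the table above, instruction-level loose accuracy, is computed in IFEval-style evaluations as the fraction of individual verifiable instructions that a model's responses satisfy, pooled across all prompts (the "loose" variant applies lenient response normalization before checking). A minimal sketch of the aggregation step, assuming each instruction has already been checked and scored as a boolean; the function name and `results` structure here are illustrative, not part of any specific benchmark's code:

```python
def inst_level_accuracy(results):
    """Fraction of individual instructions followed, pooled over prompts.

    results: list with one entry per prompt; each entry is a list of
    booleans, one per verifiable instruction in that prompt.
    """
    # Flatten all per-instruction pass/fail flags across prompts.
    flags = [ok for prompt in results for ok in prompt]
    return sum(flags) / len(flags)


# Example: 3 prompts carrying 2, 3, and 1 instructions respectively.
per_prompt = [[True, False], [True, True, True], [False]]
print(inst_level_accuracy(per_prompt))  # 4 of 6 instructions followed
```

Note that this differs from prompt-level accuracy, where a prompt only counts as correct if every one of its instructions is satisfied; instruction-level scores are therefore typically higher.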