SOTAVerified

Instruction Following

Instruction following is a fundamental capability of large language models. This task evaluates how well a model adheres to human instructions, with the goal of generating controllable and safe responses.

Papers

Showing 821–830 of 1135 papers

Title | Status | Hype
Self-supervised Quantized Representation for Seamlessly Integrating Knowledge Graphs with Large Language Models | | 0
Separable Mixture of Low-Rank Adaptation for Continual Visual Instruction Tuning | | 0
Separator Injection Attack: Uncovering Dialogue Biases in Large Language Models Caused by Role Separators | | 0
Sequence-level Large Language Model Training with Contrastive Preference Optimization | | 0
SeRA: Self-Reviewing and Alignment of Large Language Models using Implicit Reward Margins | | 0
Visual Instruction Tuning towards General-Purpose Multimodal Model: A Survey | | 0
SFR-RAG: Towards Contextually Faithful LLMs | | 0
SFTMix: Elevating Language Model Instruction Tuning with Mixup Recipe | | 0
A New Path: Scaling Vision-and-Language Navigation with Synthetic Instructions and Imitation Learning | | 0
Analyzing Multilingual Competency of LLMs in Multi-Turn Instruction Following: A Case Study of Arabic | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | | Unverified
3 | GPT-4 | Inst-level loose-accuracy | 85.37 | | Unverified
4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | | Unverified
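The "Inst-level loose-accuracy" metric reported above is an instruction-level score: each prompt carries one or more verifiable instructions, and the score is the fraction of individual instructions the model satisfies (the "loose" variant checks after lenient normalization of the response). As a minimal sketch of how the aggregation works — the function name and the pass/fail data here are hypothetical, not taken from any listed benchmark:

```python
# Hypothetical sketch: instruction-level accuracy over verifiable instructions.
# Each response is represented as a list of per-instruction pass/fail booleans.
def inst_level_accuracy(results):
    # Flatten all per-instruction outcomes across responses, then
    # return the fraction of instructions that passed.
    outcomes = [passed for response in results for passed in response]
    return sum(outcomes) / len(outcomes)

# Example: 3 responses containing 2, 1, and 3 verifiable instructions each;
# 4 of the 6 instructions pass.
results = [[True, False], [True], [True, True, False]]
print(round(inst_level_accuracy(results), 4))  # → 0.6667
```

Note that this differs from a prompt-level score, which would count a response as correct only if every instruction in it is satisfied.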