SOTAVerified

Instruction Following

Instruction following is a fundamental capability of large language models. This task evaluates how well a model follows human instructions, with the goal of producing controllable and safe responses.

Papers

Showing 1–50 of 1135 papers

Title | Status | Hype
ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools | Code | 14
Qwen2.5 Technical Report | Code | 13
Attentive Reasoning Queries: A Systematic Method for Optimizing Instruction-Following in Large Language Models | Code | 11
Janus-Pro: Unified Multimodal Understanding and Generation with Data and Model Scaling | Code | 11
FunAudioLLM: Voice Understanding and Generation Foundation Models for Natural Interaction Between Humans and LLMs | Code | 11
Let Them Talk: Audio-Driven Multi-Person Conversational Video Generation | Code | 7
HealthBench: Evaluating Large Language Models Towards Improved Human Health | Code | 7
Chinese-Vicuna: A Chinese Instruction-following Llama-based Model | Code | 7
Qwen2.5-Omni Technical Report | Code | 7
SimpleRL-Zoo: Investigating and Taming Zero Reinforcement Learning for Open Base Models in the Wild | Code | 7
Step-Audio: Unified Understanding and Generation in Intelligent Speech Interaction | Code | 7
Large Language Diffusion Models | Code | 7
Align Anything: Training All-Modality Models to Follow Instructions with Language Feedback | Code | 7
Qwen2-Audio Technical Report | Code | 7
Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback | Code | 7
EAGLE: Speculative Sampling Requires Rethinking Feature Uncertainty | Code | 7
LMSYS-Chat-1M: A Large-Scale Real-World LLM Conversation Dataset | Code | 7
LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models | Code | 6
Code Llama: Open Foundation Models for Code | Code | 6
L-Eval: Instituting Standardized Evaluation for Long Context Language Models | Code | 6
QLoRA: Efficient Finetuning of Quantized LLMs | Code | 6
Visual Instruction Tuning | Code | 6
CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society | Code | 6
LeVo: High-Quality Song Generation with Multi-Preference Alignment | Code | 5
ShowUI: One Vision-Language-Action Model for GUI Visual Agent | Code | 5
Aria: An Open Multimodal Native Mixture-of-Experts Model | Code | 5
Jamba-1.5: Hybrid Transformer-Mamba Models at Scale | Code | 5
LiveBench: A Challenging, Contamination-Limited LLM Benchmark | Code | 5
The BiGGen Bench: A Principled Benchmark for Fine-grained Evaluation of Language Models with Language Models | Code | 5
LAB: Large-Scale Alignment for ChatBots | Code | 5
Ranni: Taming Text-to-Image Diffusion for Accurate Instruction Following | Code | 5
Instruction-Following Evaluation for Large Language Models | Code | 5
ImageBind-LLM: Multi-modality Instruction Tuning | Code | 5
MMBench: Is Your Multi-modal Model an All-around Player? | Code | 5
LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model | Code | 5
WizardLM: Empowering Large Language Models to Follow Complex Instructions | Code | 5
Towards Better Instruction Following Language Models for Chinese: Investigating the Impact of Training Data and Evaluation | Code | 5
LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention | Code | 5
Self-Instruct: Aligning Language Models with Self-Generated Instructions | Code | 5
Kwai Keye-VL Technical Report | Code | 4
RewardBench 2: Advancing Reward Model Evaluation | Code | 4
Parameter Efficient Instruction Tuning: An Empirical Study | Code | 4
FuseChat: Knowledge Fusion of Chat Models | Code | 4
PromptFix: You Prompt and We Fix the Photo | Code | 4
A Survey on Vision-Language-Action Models for Embodied AI | Code | 4
SimPO: Simple Preference Optimization with a Reference-Free Reward | Code | 4
RewardBench: Evaluating Reward Models for Language Modeling | Code | 4
LLaMA Pro: Progressive LLaMA with Block Expansion | Code | 4
AgentBench: Evaluating LLMs as Agents | Code | 4
How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources | Code | 4

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | | Unverified
3 | GPT-4 | Inst-level loose-accuracy | 85.37 | | Unverified
4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | | Unverified
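The table above reports instruction-level accuracy: in IFEval-style evaluation, each prompt carries several verifiable instructions, and "inst-level" accuracy counts satisfied instructions individually rather than scoring whole prompts. A minimal sketch, assuming compliance has already been checked and is represented as per-prompt boolean flags (the function name `inst_level_accuracy` is hypothetical, not from any specific library):

```python
def inst_level_accuracy(results):
    """Compute instruction-level accuracy as a percentage.

    results: list of per-prompt lists of booleans, one flag per
    verifiable instruction (True = the response satisfied it).
    """
    # Flatten all instruction flags across prompts and count successes.
    flags = [flag for prompt_flags in results for flag in prompt_flags]
    return 100.0 * sum(flags) / len(flags)

# Example: two prompts with 3 and 2 verifiable instructions respectively;
# 4 of the 5 instructions are satisfied.
print(round(inst_level_accuracy([[True, True, False], [True, True]]), 1))  # 80.0
```

The "loose" variant of the metric applies lenient response normalization (e.g., stripping markdown markers or boilerplate lines) before the compliance check that produces these flags; the aggregation step shown here is the same either way.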