SOTAVerified

Instruction Following

Instruction following is a fundamental capability of large language models. This task evaluates how well a model follows human instructions, with the goal of generating controllable and safe responses.

Papers

Showing 1–50 of 1135 papers

Title | Status | Hype
ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools | Code | 13
Qwen2.5 Technical Report | Code | 13
FunAudioLLM: Voice Understanding and Generation Foundation Models for Natural Interaction Between Humans and LLMs | Code | 11
Janus-Pro: Unified Multimodal Understanding and Generation with Data and Model Scaling | Code | 11
Attentive Reasoning Queries: A Systematic Method for Optimizing Instruction-Following in Large Language Models | Code | 11
SimpleRL-Zoo: Investigating and Taming Zero Reinforcement Learning for Open Base Models in the Wild | Code | 7
Step-Audio: Unified Understanding and Generation in Intelligent Speech Interaction | Code | 7
HealthBench: Evaluating Large Language Models Towards Improved Human Health | Code | 7
Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback | Code | 7
Chinese-Vicuna: A Chinese Instruction-following Llama-based Model | Code | 7
LMSYS-Chat-1M: A Large-Scale Real-World LLM Conversation Dataset | Code | 7
EAGLE: Speculative Sampling Requires Rethinking Feature Uncertainty | Code | 7
Align Anything: Training All-Modality Models to Follow Instructions with Language Feedback | Code | 7
Qwen2-Audio Technical Report | Code | 7
Let Them Talk: Audio-Driven Multi-Person Conversational Video Generation | Code | 7
Large Language Diffusion Models | Code | 7
Qwen2.5-Omni Technical Report | Code | 7
LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models | Code | 6
L-Eval: Instituting Standardized Evaluation for Long Context Language Models | Code | 6
QLoRA: Efficient Finetuning of Quantized LLMs | Code | 6
CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society | Code | 6
Code Llama: Open Foundation Models for Code | Code | 6
Visual Instruction Tuning | Code | 6
ShowUI: One Vision-Language-Action Model for GUI Visual Agent | Code | 5
LiveBench: A Challenging, Contamination-Limited LLM Benchmark | Code | 5
LeVo: High-Quality Song Generation with Multi-Preference Alignment | Code | 5
LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention | Code | 5
LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model | Code | 5
LAB: Large-Scale Alignment for ChatBots | Code | 5
Self-Instruct: Aligning Language Models with Self-Generated Instructions | Code | 5
Instruction-Following Evaluation for Large Language Models | Code | 5
ImageBind-LLM: Multi-modality Instruction Tuning | Code | 5
Jamba-1.5: Hybrid Transformer-Mamba Models at Scale | Code | 5
Ranni: Taming Text-to-Image Diffusion for Accurate Instruction Following | Code | 5
WizardLM: Empowering Large Language Models to Follow Complex Instructions | Code | 5
Aria: An Open Multimodal Native Mixture-of-Experts Model | Code | 5
Towards Better Instruction Following Language Models for Chinese: Investigating the Impact of Training Data and Evaluation | Code | 5
MMBench: Is Your Multi-modal Model an All-around Player? | Code | 5
The BiGGen Bench: A Principled Benchmark for Fine-grained Evaluation of Language Models with Language Models | Code | 5
LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day | Code | 4
LLaMA Pro: Progressive LLaMA with Block Expansion | Code | 4
SimPO: Simple Preference Optimization with a Reference-Free Reward | Code | 4
A Survey on Vision-Language-Action Models for Embodied AI | Code | 4
Instruction Tuning with GPT-4 | Code | 4
How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources | Code | 4
RewardBench 2: Advancing Reward Model Evaluation | Code | 4
FuseChat: Knowledge Fusion of Chat Models | Code | 4
AgentBench: Evaluating LLMs as Agents | Code | 4
Otter: A Multi-Modal Model with In-Context Instruction Tuning | Code | 4
Parameter Efficient Instruction Tuning: An Empirical Study | Code | 4

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | – | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | – | Unverified
3 | GPT-4 | Inst-level loose-accuracy | 85.37 | – | Unverified
4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | – | Unverified
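
For reference, "Inst-level loose-accuracy" is the IFEval-style metric introduced by "Instruction-Following Evaluation for Large Language Models" (listed in the papers table above): each prompt carries one or more automatically verifiable instructions, the instruction-level score is the fraction of individual instructions satisfied across all prompts, and "loose" scoring counts an instruction as followed if any of several relaxed variants of the response passes its check. Below is a minimal Python sketch of that computation; the checker functions and the exact set of loose transformations are illustrative assumptions, not the benchmark's own implementation:

```python
import re
from typing import Callable, List

Checker = Callable[[str], bool]

def loose_variants(response: str) -> List[str]:
    # Relaxed views of the response, in the spirit of IFEval's loose scoring:
    # the raw text, the text with markdown emphasis stripped, and the text
    # with its first and last lines removed (to ignore boilerplate framing).
    no_markdown = re.sub(r"[*_]", "", response)
    lines = response.splitlines()
    trimmed = "\n".join(lines[1:-1]) if len(lines) > 2 else response
    return [response, no_markdown, trimmed]

def inst_level_loose_accuracy(responses: List[str],
                              checkers_per_example: List[List[Checker]]) -> float:
    # Fraction of individual instructions satisfied by at least one loose
    # variant of the corresponding response, reported as a percentage.
    followed, total = 0, 0
    for response, checkers in zip(responses, checkers_per_example):
        variants = loose_variants(response)
        for check in checkers:
            total += 1
            if any(check(v) for v in variants):
                followed += 1
    return 100.0 * followed / max(total, 1)

# Hypothetical verifiable instructions for one prompt (illustrative only).
min_50_words: Checker = lambda text: len(text.split()) >= 50
mentions_python: Checker = lambda text: "python" in text.lower()

score = inst_level_loose_accuracy(
    ["Python is a good fit here. " * 20],
    [[min_50_words, mentions_python]],
)
print(f"Inst-level loose accuracy: {score:.1f}")  # 100.0
```

Strict accuracy would be the same computation run on the raw response alone; the gap between strict and loose scores roughly indicates how much surface formatting (greetings, markdown) affects a model's apparent compliance.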