SOTAVerified

Instruction Following

Instruction following is a foundational capability of large language models. This task evaluates how well a model follows human instructions, with the goal of producing controllable and safe responses.

Papers

Showing 151–200 of 1135 papers

Title | Status | Hype
GSCo: Towards Generalizable AI in Medicine via Generalist-Specialist Collaboration | Code | 2
Fine-tuning Multimodal LLMs to Follow Zero-shot Demonstrative Instructions | Code | 2
MentaLLaMA: Interpretable Mental Health Analysis on Social Media with Large Language Models | Code | 2
Meta SecAlign: A Secure Foundation LLM Against Prompt Injection Attacks | Code | 2
PhoGPT: Generative Pre-training for Vietnamese | Code | 2
DeSTA2: Developing Instruction-Following Speech Language Model Without Speech Instruction-Tuning Data | Code | 2
From Quantity to Quality: Boosting LLM Performance with Self-Guided Data Selection for Instruction Tuning | Code | 2
LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action | Code | 2
Long-Context Language Modeling with Parallel Context Encoding | Code | 2
DeSTA2.5-Audio: Toward General-Purpose Large Audio Language Model with Self-Generated Cross-Modal Alignment | Code | 2
GraphWiz: An Instruction-Following Language Model for Graph Problems | Code | 2
ChartAssisstant: A Universal Chart Multimodal Language Model via Chart-to-Table Pre-training and Multitask Instruction Tuning | Code | 2
Chain-of-Spot: Interactive Reasoning Improves Large Vision-Language Models | Code | 2
DistiLLM-2: A Contrastive Approach Boosts the Distillation of LLMs | Code | 2
Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems | Code | 2
Diffusion-ES: Gradient-free Planning with Diffusion for Autonomous Driving and Zero-Shot Instruction Following | Code | 2
GenAI Arena: An Open Evaluation Platform for Generative Models | Code | 2
GraphTranslator: Aligning Graph Model to Large Language Model for Open-ended Tasks | Code | 2
LMDrive: Closed-Loop End-to-End Driving with Large Language Models | Code | 2
LLaVA-MORE: A Comparative Study of LLMs and Visual Backbones for Enhanced Visual Instruction Tuning | Code | 2
GeoChat: Grounded Large Vision-Language Model for Remote Sensing | Code | 2
GPT4Tools: Teaching Large Language Model to Use Tools via Self-instruction | Code | 2
LLaVA-Plus: Learning to Use Tools for Creating Multimodal Agents | Code | 2
GeoGround: A Unified Large Vision-Language Model for Remote Sensing Visual Grounding | Code | 2
From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large Language Models | Code | 2
LLaSM: Large Language and Speech Model | Code | 2
LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding | Code | 2
Direct Preference Optimization of Video Large Multimodal Models from Language Model Reward | Code | 2
Archon: An Architecture Search Framework for Inference-Time Techniques | Code | 2
Dual-Space Knowledge Distillation for Large Language Models | Code | 2
Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want | Code | 2
CoIN: A Benchmark of Continual Instruction tuNing for Multimodel Large Language Model | Code | 2
LLark: A Multimodal Instruction-Following Language Model for Music | Code | 2
LLM-RG4: Flexible and Factual Radiology Report Generation across Diverse Input Contexts | Code | 2
LVLM-eHub: A Comprehensive Evaluation Benchmark for Large Vision-Language Models | Code | 2
CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts | Code | 2
LHRS-Bot-Nova: Improved Multimodal Large Language Model for Remote Sensing Vision-Language Interpretation | Code | 2
EditWorld: Simulating World Dynamics for Instruction-Following Image Editing | Code | 2
Lion: Adversarial Distillation of Proprietary Large Language Models | Code | 2
F-LMM: Grounding Frozen Large Multimodal Models | Code | 2
LITA: Language Instructed Temporal-Localization Assistant | Code | 2
Learning to Decode Collaboratively with Multiple Language Models | Code | 2
CrystalFormer-RL: Reinforcement Fine-Tuning for Materials Design | Code | 2
BuboGPT: Enabling Visual Grounding in Multi-Modal LLMs | Code | 2
Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic | Code | 2
Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate | Code | 2
MiniLLM: Knowledge Distillation of Large Language Models | Code | 2
Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch | Code | 2
BLSP-Emo: Towards Empathetic Large Speech-Language Models | Code | 2
Large Language Model Instruction Following: A Survey of Progresses and Challenges | Code | 2
Page 4 of 23

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | | Unverified
3 | GPT-4 | Inst-level loose-accuracy | 85.37 | | Unverified
4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | | Unverified