SOTAVerified

Instruction Following

Instruction following is a fundamental capability of large language models. This task evaluates a model's ability to follow human instructions, with the goal of producing controllable and safe responses.

Papers

Showing 151–200 of 1135 papers

| Title | Status | Hype |
| --- | --- | --- |
| Learning to Decode Collaboratively with Multiple Language Models | Code | 2 |
| AutoDefense: Multi-Agent LLM Defense against Jailbreak Attacks | Code | 2 |
| Long-Context Language Modeling with Parallel Context Encoding | Code | 2 |
| GraphWiz: An Instruction-Following Language Model for Graph Problems | Code | 2 |
| Self-Distillation Bridges Distribution Gap in Language Model Fine-Tuning | Code | 2 |
| A Critical Evaluation of AI Feedback for Aligning Large Language Models | Code | 2 |
| The Revolution of Multimodal Large Language Models: A Survey | Code | 2 |
| Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic | Code | 2 |
| Aligning Modalities in Vision Large Language Models via Preference Fine-tuning | Code | 2 |
| AIR-Bench: Benchmarking Large Audio-Language Models via Generative Comprehension | Code | 2 |
| GraphTranslator: Aligning Graph Model to Large Language Model for Open-ended Tasks | Code | 2 |
| Diffusion-ES: Gradient-free Planning with Diffusion for Autonomous Driving and Zero-Shot Instruction Following | Code | 2 |
| Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models | Code | 2 |
| EarthGPT: A Universal Multi-modal Large Language Model for Multi-sensor Image Comprehension in Remote Sensing Domain | Code | 2 |
| Towards 3D Molecule-Text Interpretation in Language Models | Code | 2 |
| SkyEyeGPT: Unifying Remote Sensing Vision-Language Tasks via Instruction Tuning with Large Language Model | Code | 2 |
| EmoLLMs: A Series of Emotional Large Language Models and Annotation Tools for Comprehensive Affective Analysis | Code | 2 |
| InFoBench: Evaluating Instruction Following Ability in Large Language Models | Code | 2 |
| ChartAssisstant: A Universal Chart Multimodal Language Model via Chart-to-Table Pre-training and Multitask Instruction Tuning | Code | 2 |
| MAPLM: A Real-World Large-Scale Vision-Language Benchmark for Map and Traffic Scene Understanding | Code | 2 |
| Aurora: Activating Chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning | Code | 2 |
| T-Eval: Evaluating the Tool Utilization Capability of Large Language Models Step by Step | Code | 2 |
| LMDrive: Closed-Loop End-to-End Driving with Large Language Models | Code | 2 |
| TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding | Code | 2 |
| GeoChat: Grounded Large Vision-Language Model for Remote Sensing | Code | 2 |
| To See is to Believe: Prompting GPT-4V for Better Visual Instruction Tuning | Code | 2 |
| LLaVA-Plus: Learning to Use Tools for Creating Multimodal Agents | Code | 2 |
| PhoGPT: Generative Pre-training for Vietnamese | Code | 2 |
| Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch | Code | 2 |
| LLark: A Multimodal Instruction-Following Language Model for Music | Code | 2 |
| Reformulating Vision-Language Foundation Models and Datasets Towards Universal Multimodal Assistants | Code | 2 |
| ModuLoRA: Finetuning 2-Bit LLMs on Consumer GPUs by Integrating with Modular Quantizers | Code | 2 |
| MentaLLaMA: Interpretable Mental Health Analysis on Social Media with Large Language Models | Code | 2 |
| Point-Bind & Point-LLM: Aligning Point Cloud with Multi-modality for 3D Understanding, Generation, and Instruction Following | Code | 2 |
| LLaSM: Large Language and Speech Model | Code | 2 |
| From Quantity to Quality: Boosting LLM Performance with Self-Guided Data Selection for Instruction Tuning | Code | 2 |
| EcomGPT: Instruction-tuning Large Language Models with Chain-of-Task Tasks for E-commerce | Code | 2 |
| #InsTag: Instruction Tagging for Analyzing Supervised Fine-tuning of Large Language Models | Code | 2 |
| Fine-tuning Multimodal LLMs to Follow Zero-shot Demonstrative Instructions | Code | 2 |
| Zhongjing: Enhancing the Chinese Medical Capabilities of Large Language Model through Expert Feedback and Real-world Multi-turn Dialogue | Code | 2 |
| FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets | Code | 2 |
| BuboGPT: Enabling Visual Grounding in Multi-Modal LLMs | Code | 2 |
| What Matters in Training a GPT4-Style Language Model with Multimodal Inputs? | Code | 2 |
| LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding | Code | 2 |
| BayLing: Bridging Cross-lingual Alignment and Instruction Following through Interactive Translation for Large Language Models | Code | 2 |
| LVLM-eHub: A Comprehensive Evaluation Benchmark for Large Vision-Language Models | Code | 2 |
| MiniLLM: Knowledge Distillation of Large Language Models | Code | 2 |
| Valley: Video Assistant with Large Language model Enhanced abilitY | Code | 2 |
| STEVE-1: A Generative Model for Text-to-Behavior in Minecraft | Code | 2 |
| GPT4Tools: Teaching Large Language Model to Use Tools via Self-instruction | Code | 2 |
Page 4 of 23

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | AutoIF (Llama3 70B) | Inst-level loose accuracy | 90.4 | | Unverified |
| 2 | AutoIF (Qwen2 72B) | Inst-level loose accuracy | 88 | | Unverified |
| 3 | GPT-4 | Inst-level loose accuracy | 85.37 | | Unverified |
| 4 | PaLM 2 S | Inst-level loose accuracy | 59.11 | | Unverified |
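The metric in this table, instruction-level loose accuracy, is defined by the IFEval benchmark (Zhou et al., 2023): each prompt carries one or more programmatically verifiable instructions, and "loose" scoring also credits an instruction if a relaxed variant of the response (e.g., with markdown symbols or a boilerplate first/last line stripped) satisfies it. Below is a minimal sketch of how such a score can be computed; the exact transformation set and the checker interface here are illustrative assumptions, not the benchmark's actual code.

```python
# Minimal sketch of IFEval-style instruction-level loose accuracy.
# Assumption: checkers are callables of type str -> bool, one per
# verifiable instruction; this is not IFEval's real interface.

def loose_variants(response: str) -> list[str]:
    """Relaxed views of a response. Loose scoring credits an instruction
    if ANY variant satisfies it, tolerating markdown symbols and
    boilerplate first/last lines ("Sure, here it is: ...")."""
    lines = response.split("\n")
    variants = [
        response,
        "\n".join(lines[1:]),    # drop first line
        "\n".join(lines[:-1]),   # drop last line
        "\n".join(lines[1:-1]),  # drop both
    ]
    # Each variant is also tried with common markdown characters removed.
    stripped = [v.replace("*", "").replace("#", "") for v in variants]
    return variants + stripped

def inst_level_loose_accuracy(examples) -> float:
    """examples: iterable of (response, checkers) pairs, one per prompt.
    The score counts individual instructions, not whole prompts."""
    followed = total = 0
    for response, checkers in examples:
        variants = loose_variants(response)
        for check in checkers:
            total += 1
            if any(check(v) for v in variants):
                followed += 1
    return followed / total if total else 0.0

# Toy usage: one prompt carrying two verifiable instructions.
examples = [(
    "Sure, here it is:\nhello world",
    [lambda r: "hello" in r.lower(),   # keyword instruction
     lambda r: len(r.split()) <= 10],  # length instruction
)]
print(f"{inst_level_loose_accuracy(examples):.1%}")  # 100.0%
```

IFEval also reports a stricter prompt-level variant, which counts a prompt as correct only if every one of its checkers passes; instruction-level scoring, as above, is the more forgiving of the two aggregations.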