SOTAVerified

Instruction Following

Instruction following is a core capability of large language models. This task evaluates how well a model follows human instructions, with the goal of generating controllable and safe responses.

Papers

Showing 476–500 of 1135 papers

| Title | Status | Hype |
| --- | --- | --- |
| M4CXR: Exploring Multi-task Potentials of Multi-modal Large Language Models for Chest X-ray Interpretation |  | 0 |
| Entropic Distribution Matching in Supervised Fine-tuning of LLMs: Less Overfitting and Better Diversity |  | 0 |
| SciLitLLM: How to Adapt LLMs for Scientific Literature Understanding | Code | 2 |
| Multi-Modal Instruction-Tuning Small-Scale Language-and-Vision Assistant for Semiconductor Electron Micrograph Analysis |  | 0 |
| Parameter-Efficient Quantized Mixture-of-Experts Meets Vision-Language Instruction Tuning for Semiconductor Electron Micrograph Analysis |  | 0 |
| Instruct-SkillMix: A Powerful Pipeline for LLM Instruction Tuning | Code | 0 |
| Foundational Model for Electron Micrograph Analysis: Instruction-Tuning Small-Scale Language-and-Vision Assistant for Enterprise Adoption |  | 0 |
| Preference Consistency Matters: Enhancing Preference Learning in Language Models with Automated Self-Curation of Training Corpora |  | 0 |
| Jamba-1.5: Hybrid Transformer-Mamba Models at Scale | Code | 5 |
| Preference-Guided Reflective Sampling for Aligning Language Models | Code | 0 |
| Kubrick: Multimodal Agent Collaborations for Synthetic Video Generation |  | 0 |
| LLMs Are Biased Towards Output Formats! Systematically Evaluating and Mitigating Output Format Bias of LLMs | Code | 1 |
| Ex3: Automatic Novel Writing by Extracting, Excelsior and Expanding | Code | 1 |
| FuseChat: Knowledge Fusion of Chat Models | Code | 4 |
| Can Large Language Models Understand Symbolic Graphics Programs? |  | 0 |
| Bridging and Modeling Correlations in Pairwise Data for Direct Preference Optimization | Code | 1 |
| IFShip: Interpretable Fine-grained Ship Classification with Domain Knowledge-Enhanced Vision-Language Models | Code | 0 |
| CROME: Cross-Modal Adapters for Efficient Multimodal LLM |  | 0 |
| Creating Arabic LLM Prompts at Scale |  | 0 |
| Space-LLaVA: a Vision-Language Model Adapted to Extraterrestrial Applications |  | 0 |
| Investigating Instruction Tuning Large Language Models on Graphs | Code | 1 |
| LLaVA-VSD: Large Language-and-Vision Assistant for Visual Spatial Description | Code | 0 |
| EXAONE 3.0 7.8B Instruction Tuned Language Model |  | 0 |
| Empirical Analysis of Large Vision-Language Models against Goal Hijacking via Visual Prompt Injection |  | 0 |
| 1.5-Pints Technical Report: Pretraining in Days, Not Months -- Your Language Model Thrives on Quality Data | Code | 3 |
Page 20 of 46

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 |  | Unverified |
| 2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 |  | Unverified |
| 3 | GPT-4 | Inst-level loose-accuracy | 85.37 |  | Unverified |
| 4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 |  | Unverified |
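
The metric in this table, instruction-level loose accuracy, matches the scoring used by the IFEval benchmark (Zhou et al., 2023): each prompt carries one or more verifiable instructions, and the loose variant counts an instruction as followed if any of several relaxed transformations of the response passes its check. Below is a minimal Python sketch of that scoring logic; the relaxation rules, function names, and checker predicates are illustrative assumptions, not the benchmark's reference implementation.

```python
# Minimal sketch of IFEval-style "instruction-level loose accuracy".
# The relaxation rules and all names here are illustrative assumptions,
# not the benchmark's reference implementation.

def loose_variants(response: str):
    """Yield relaxed variants of a response: the original text, versions
    with the first and/or last line dropped (to forgive framing such as
    "Sure, here you go:"), and each of those with markdown '*' stripped."""
    lines = response.split("\n")
    candidates = [
        response,
        "\n".join(lines[1:]),    # drop a leading framing line
        "\n".join(lines[:-1]),   # drop a trailing framing line
        "\n".join(lines[1:-1]),  # drop both
    ]
    for text in candidates:
        yield text
        yield text.replace("*", "")  # retry with markdown emphasis removed

def inst_level_loose_accuracy(responses, instruction_checkers):
    """responses[i] is the model output for prompt i; instruction_checkers[i]
    is a list of predicates, one per verifiable instruction attached to that
    prompt (e.g. "use exactly 3 bullet points"). Returns the fraction of all
    individual instructions satisfied by at least one loose variant."""
    followed = total = 0
    for response, checkers in zip(responses, instruction_checkers):
        for check in checkers:
            total += 1
            if any(check(variant) for variant in loose_variants(response)):
                followed += 1
    return followed / total if total else 0.0

# Toy usage: one prompt with two hypothetical verifiable instructions.
checks = [[lambda r: len(r.split()) <= 50,           # word-count limit
           lambda r: r.lower().startswith("sure")]]  # required opening word
print(inst_level_loose_accuracy(["Sure. Here is a short answer."], checks))  # 1.0
```

Strict scoring is the same computation without the relaxed variants, which is why loose numbers typically run slightly higher than strict ones for the same model.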