SOTAVerified

Instruction Following

Instruction following is a fundamental capability of large language models. This task evaluates how well a model follows human instructions, with the goal of producing controllable and safe answers.

Papers

Showing 626-650 of 1135 papers

Title | Status | Hype
Disentangling Length Bias In Preference Learning Via Response-Conditioned Modeling | | 0
ReFoRCE: A Text-to-SQL Agent with Self-Refinement, Format Restriction, and Column Exploration | | 0
Self-supervised Quantized Representation for Seamlessly Integrating Knowledge Graphs with Large Language Models | | 0
Rethinking Bottlenecks in Safety Fine-Tuning of Vision Language Models | | 0
3D-MoE: A Mixture-of-Experts Multi-modal LLM for 3D Vision and Pose Diffusion via Rectified Flow | | 0
How well can LLMs Grade Essays in Arabic? | | 0
Advancing Mathematical Reasoning in Language Models: The Impact of Problem-Solving Data, Data Synthesis Methods, and Training Stages | | 0
Compositional Instruction Following with Language Models and Reinforcement Learning | | 0
InternLM-XComposer2.5-Reward: A Simple Yet Effective Multi-Modal Reward Model | | 0
Zero-shot and Few-shot Learning with Instruction-following LLMs for Claim Matching in Automated Fact-checking | | 0
BAP v2: An Enhanced Task Framework for Instruction Following in Minecraft Dialogues | | 0
DNA 1.0 Technical Report | | 0
Iterative Label Refinement Matters More than Preference Optimization under Weak Supervision | Code | 0
Audio-CoT: Exploring Chain-of-Thought Reasoning in Large Audio Language Model | | 0
A Comprehensive Evaluation of Large Language Models on Mental Illnesses in Arabic Context | | 0
MinMo: A Multimodal Large Language Model for Seamless Voice Interaction | | 0
Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models | | 0
Scalable Vision Language Model Training via High Quality Data Curation | | 0
LongViTU: Instruction Tuning for Long-Form Video Understanding | | 0
Language and Planning in Robotic Navigation: A Multilingual Evaluation of State-of-the-Art Models | | 0
DPO Kernels: A Semantically-Aware, Kernel-Enhanced, and Divergence-Rich Paradigm for Direct Preference Optimization | | 0
Instruction-Following Pruning for Large Language Models | | 0
Towards Interactive Deepfake Analysis | Code | 0
ProgCo: Program Helps Self-Correction of Large Language Models | Code | 0
MIMO: A Medical Vision Language Model with Visual Referring Multimodal Input and Pixel Grounding Multimodal Output | Code | 0
Page 26 of 46

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | | Unverified
3 | GPT-4 | Inst-level loose-accuracy | 85.37 | | Unverified
4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | | Unverified
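The metric above, Inst-level loose-accuracy, comes from IFEval-style evaluation: each atomic instruction in a prompt is checked separately, and the "loose" variant counts an instruction as followed if any of several relaxed transformations of the response (e.g. with markdown markers stripped, or with the first or last line removed) passes the check. A minimal sketch of that aggregation, where the exact variant set and the per-instruction check functions are illustrative assumptions rather than the benchmark's actual implementation:

```python
def loose_variants(response: str) -> list[str]:
    """Relaxed transformations tried by 'loose' scoring: the original
    response, the response without markdown asterisks, and (for
    multi-line responses) copies with the first or last line dropped."""
    lines = response.split("\n")
    variants = [response, response.replace("*", "")]
    if len(lines) > 1:
        variants.append("\n".join(lines[1:]))   # drop leading "Sure, here is..."
        variants.append("\n".join(lines[:-1]))  # drop trailing sign-off
    return variants

def inst_level_loose_accuracy(examples) -> float:
    """examples: list of (response, [check_fn, ...]) pairs, where each
    check_fn verifies one atomic instruction and returns True/False.
    Returns the fraction of individual instructions followed."""
    followed = 0
    total = 0
    for response, checks in examples:
        for check in checks:
            total += 1
            # An instruction counts as followed if ANY relaxed variant passes.
            if any(check(v) for v in loose_variants(response)):
                followed += 1
    return followed / total if total else 0.0
```

Because each instruction is scored independently, a response that satisfies two of three instructions contributes 2/3 at the instruction level, whereas prompt-level scoring would count it as a miss.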