SOTAVerified

Instruction Following

Instruction following is a core capability of large language models. This task evaluates how well a model adheres to human instructions, with the goal of producing controllable and safe responses.

Papers

Showing 601-650 of 1135 papers

Title | Status | Hype
Investigating Non-Transitivity in LLM-as-a-Judge | - | 0
Instruction Tuning on Public Government and Cultural Data for Low-Resource Language: a Case Study in Kazakh | - | 0
TALKPLAY: Multimodal Music Recommendation with Large Language Models | - | 0
MMTEB: Massive Multilingual Text Embedding Benchmark | - | 0
Integrating Arithmetic Learning Improves Mathematical Reasoning in Smaller Models | - | 0
RoleMRC: A Fine-Grained Composite Benchmark for Role-Playing and Instruction-Following | Code | 0
Learning to Keep a Promise: Scaling Language Model Decoding Parallelism with Learned Asynchronous Decoding | - | 0
MuSC: Improving Complex Instruction Following with Multi-granularity Self-Contrastive Training | Code | 0
SAIF: A Sparse Autoencoder Framework for Interpreting and Steering Instruction Following of Language Models | - | 0
Do we Really Need Visual Instructions? Towards Visual Instruction-Free Fine-tuning for Large Vision-Language Models | - | 0
CORDIAL: Can Multimodal Large Language Models Effectively Understand Coherence Relationships? | Code | 0
Rewrite to Jailbreak: Discover Learnable and Transferable Implicit Harmfulness Instruction | Code | 0
Cuckoo: An IE Free Rider Hatched by Massive Nutrition in LLM's Nest | Code | 0
E2LVLM: Evidence-Enhanced Large Vision-Language Model for Multimodal Out-of-Context Misinformation Detection | - | 0
Who Taught You That? Tracing Teachers in Model Distillation | - | 0
Temporal Representation Alignment: Successor Features Enable Emergent Compositionality in Robot Instruction Following | - | 0
Hypencoder: Hypernetworks for Information Retrieval | - | 0
Verifiable Format Control for Large Language Model Generations | - | 0
LLMs can be easily Confused by Instructional Distractions | - | 0
Training an LLM-as-a-Judge Model: Pipeline, Insights, and Practical Lessons | - | 0
SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model | - | 0
Shuttle Between the Instructions and the Parameters of Large Language Models | - | 0
CoDe: Blockwise Control for Denoising Diffusion Models | Code | 0
Learning Human Perception Dynamics for Informative Robot Communication | - | 0
BARE: Leveraging Base Language Models for Few-Shot Synthetic Data Generation | - | 0
Disentangling Length Bias In Preference Learning Via Response-Conditioned Modeling | - | 0
ReFoRCE: A Text-to-SQL Agent with Self-Refinement, Format Restriction, and Column Exploration | - | 0
Self-supervised Quantized Representation for Seamlessly Integrating Knowledge Graphs with Large Language Models | - | 0
Rethinking Bottlenecks in Safety Fine-Tuning of Vision Language Models | - | 0
3D-MoE: A Mixture-of-Experts Multi-modal LLM for 3D Vision and Pose Diffusion via Rectified Flow | - | 0
How well can LLMs Grade Essays in Arabic? | - | 0
Advancing Mathematical Reasoning in Language Models: The Impact of Problem-Solving Data, Data Synthesis Methods, and Training Stages | - | 0
Compositional Instruction Following with Language Models and Reinforcement Learning | - | 0
InternLM-XComposer2.5-Reward: A Simple Yet Effective Multi-Modal Reward Model | - | 0
Zero-shot and Few-shot Learning with Instruction-following LLMs for Claim Matching in Automated Fact-checking | - | 0
BAP v2: An Enhanced Task Framework for Instruction Following in Minecraft Dialogues | - | 0
DNA 1.0 Technical Report | - | 0
Iterative Label Refinement Matters More than Preference Optimization under Weak Supervision | Code | 0
Audio-CoT: Exploring Chain-of-Thought Reasoning in Large Audio Language Model | - | 0
A Comprehensive Evaluation of Large Language Models on Mental Illnesses in Arabic Context | - | 0
MinMo: A Multimodal Large Language Model for Seamless Voice Interaction | - | 0
Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models | - | 0
Scalable Vision Language Model Training via High Quality Data Curation | - | 0
LongViTU: Instruction Tuning for Long-Form Video Understanding | - | 0
Language and Planning in Robotic Navigation: A Multilingual Evaluation of State-of-the-Art Models | - | 0
DPO Kernels: A Semantically-Aware, Kernel-Enhanced, and Divergence-Rich Paradigm for Direct Preference Optimization | - | 0
Instruction-Following Pruning for Large Language Models | - | 0
Towards Interactive Deepfake Analysis | Code | 0
ProgCo: Program Helps Self-Correction of Large Language Models | Code | 0
MIMO: A Medical Vision Language Model with Visual Referring Multimodal Input and Pixel Grounding Multimodal Output | Code | 0
Page 13 of 23

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | - | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | - | Unverified
3 | GPT-4 | Inst-level loose-accuracy | 85.37 | - | Unverified
4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | - | Unverified
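The "Inst-level loose-accuracy" metric above comes from IFEval-style evaluation: each prompt carries one or more verifiable instructions, and the loose variant counts an instruction as satisfied if any relaxed view of the response (e.g. with markdown markers stripped, or the first/last line removed) passes its check. The sketch below is a minimal, assumed reconstruction of that idea; the `loose_variants` relaxations and the checker predicates are illustrative, not the official IFEval implementation.

```python
# Hedged sketch of instruction-level "loose" accuracy, IFEval-style.
# The relaxations and checkers here are assumptions for illustration.

def loose_variants(response: str) -> list[str]:
    """Relaxed views of a response used by the loose metric:
    raw text, text without markdown asterisks, and text with the
    first or last line dropped."""
    lines = response.splitlines()
    return [
        response,
        response.replace("*", ""),
        "\n".join(lines[1:]),
        "\n".join(lines[:-1]),
    ]

def inst_level_loose_accuracy(samples) -> float:
    """samples: list of (response, [checker, ...]) pairs, where each
    checker is a predicate verifying one instruction. Accuracy is
    computed over individual instructions, not whole prompts."""
    passed = total = 0
    for response, checkers in samples:
        for check in checkers:
            total += 1
            # Loose: satisfied if any relaxed variant passes.
            if any(check(v) for v in loose_variants(response)):
                passed += 1
    return passed / total if total else 0.0

# Toy usage: second instruction fails strictly (response contains "*")
# but passes loosely once the markdown markers are stripped.
samples = [
    ("**Hello world**",
     [lambda r: "Hello" in r,       # must mention "Hello"
      lambda r: "*" not in r]),     # must contain no asterisks
]
print(inst_level_loose_accuracy(samples))  # → 1.0
```

The instruction-level view explains why these scores run higher than prompt-level accuracy: a response that satisfies three of four instructions scores 0.75 here but 0 under prompt-level scoring.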