SOTAVerified

Instruction Following

Instruction following is a fundamental capability of large language models. This task evaluates how well a model follows human instructions, with the goal of producing controllable and safe responses.

Papers

Showing 201–250 of 1135 papers

Title | Status | Hype
Ground-level Viewpoint Vision-and-Language Navigation in Continuous Environments | — | 0
TextGames: Learning to Self-Play Text-Based Puzzle Games via Language Model Reasoning | Code | 0
Rank1: Test-Time Compute for Reranking in Information Retrieval | Code | 2
URO-Bench: A Comprehensive Benchmark for End-to-End Spoken Dialogue Models | — | 0
Order Matters: Investigate the Position Bias in Multi-constraint Instruction Following | Code | 0
ATEB: Evaluating and Improving Advanced NLP Tasks for Text Embedding Models | — | 0
UrduLLaMA 1.0: Dataset Curation, Preprocessing, and Evaluation in Low-Resource Settings | — | 0
Capability Instruction Tuning: A New Paradigm for Dynamic LLM Routing | Code | 0
Sequence-level Large Language Model Training with Contrastive Preference Optimization | — | 0
NatSGLD: A Dataset with Speech, Gesture, Logic, and Demonstration for Robot Learning in Natural Human-Robot Interaction | Code | 0
SOTOPIA-Ω: Dynamic Strategy Injection Learning and Social Instruction Following Evaluation for Social Agents | Code | 0
StructFlowBench: A Structured Flow Benchmark for Multi-turn Instruction Following | Code | 1
OpenSearch-SQL: Enhancing Text-to-SQL with Dynamic Few-shot and Consistency Alignment | — | 0
Investigating Non-Transitivity in LLM-as-a-Judge | — | 0
Instruction Tuning on Public Government and Cultural Data for Low-Resource Language: a Case Study in Kazakh | — | 0
TESS 2: A Large-Scale Generalist Diffusion Language Model | Code | 2
MMTEB: Massive Multilingual Text Embedding Benchmark | Code | 0
TALKPLAY: Multimodal Music Recommendation with Large Language Models | — | 0
Integrating Arithmetic Learning Improves Mathematical Reasoning in Smaller Models | — | 0
Do we Really Need Visual Instructions? Towards Visual Instruction-Free Fine-tuning for Large Vision-Language Models | — | 0
RoleMRC: A Fine-Grained Composite Benchmark for Role-Playing and Instruction-Following | Code | 0
SAIF: A Sparse Autoencoder Framework for Interpreting and Steering Instruction Following of Language Models | — | 0
MuSC: Improving Complex Instruction Following with Multi-granularity Self-Contrastive Training | Code | 0
Learning to Keep a Promise: Scaling Language Model Decoding Parallelism with Learned Asynchronous Decoding | — | 0
Step-Audio: Unified Understanding and Generation in Intelligent Speech Interaction | Code | 7
Cuckoo: An IE Free Rider Hatched by Massive Nutrition in LLM's Nest | Code | 0
CORDIAL: Can Multimodal Large Language Models Effectively Understand Coherence Relationships? | Code | 0
Enhancing Cross-Tokenizer Knowledge Distillation with Contextual Dynamical Mapping | Code | 1
Rewrite to Jailbreak: Discover Learnable and Transferable Implicit Harmfulness Instruction | Code | 0
Large Language Diffusion Models | Code | 7
E2LVLM: Evidence-Enhanced Large Vision-Language Model for Multimodal Out-of-Context Misinformation Detection | — | 0
IHEval: Evaluating Language Models on Following the Instruction Hierarchy | Code | 1
BenchMAX: A Comprehensive Multilingual Evaluation Suite for Large Language Models | Code | 1
Who Taught You That? Tracing Teachers in Model Distillation | — | 0
Temporal Representation Alignment: Successor Features Enable Emergent Compositionality in Robot Instruction Following | — | 0
Hypencoder: Hypernetworks for Information Retrieval | — | 0
M-IFEval: Multilingual Instruction-Following Evaluation | Code | 1
Verifiable Format Control for Large Language Model Generations | — | 0
UltraIF: Advancing Instruction Following from the Wild | Code | 1
LLMs can be easily Confused by Instructional Distractions | — | 0
Training an LLM-as-a-Judge Model: Pipeline, Insights, and Practical Lessons | — | 0
SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model | — | 0
Shuttle Between the Instructions and the Parameters of Large Language Models | — | 0
CoDe: Blockwise Control for Denoising Diffusion Models | Code | 0
BARE: Leveraging Base Language Models for Few-Shot Synthetic Data Generation | — | 0
Learning Human Perception Dynamics for Informative Robot Communication | — | 0
Disentangling Length Bias In Preference Learning Via Response-Conditioned Modeling | — | 0
ReFoRCE: A Text-to-SQL Agent with Self-Refinement, Format Restriction, and Column Exploration | — | 0
mFollowIR: a Multilingual Benchmark for Instruction Following in Retrieval | Code | 2
Rethinking Bottlenecks in Safety Fine-Tuning of Vision Language Models | — | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | — | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | — | Unverified
3 | GPT-4 | Inst-level loose-accuracy | 85.37 | — | Unverified
4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | — | Unverified
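The metric in the table, "Inst-level loose-accuracy", comes from IFEval-style evaluation: each prompt carries one or more verifiable instructions, and loose scoring also accepts a response if a lightly edited variant of it (e.g. with a leading "Sure!" line or markdown emphasis removed) satisfies the check. The sketch below is a minimal, assumed implementation for illustration only; the function names, the exact set of variants, and the toy verifiers are not taken from any of the listed papers.

```python
# Hypothetical sketch of instruction-level "loose" accuracy scoring.
# checks[i] is a list of verifier functions (str -> bool), one per
# instruction attached to prompt i; responses[i] is the model output.

def loose_variants(response: str):
    """Yield the response plus lightly edited variants (loose scoring)."""
    lines = response.split("\n")
    yield response
    yield "\n".join(lines[1:])         # drop a possible preamble line
    yield "\n".join(lines[:-1])        # drop a possible trailing line
    yield response.replace("*", "")    # strip markdown emphasis markers

def inst_level_loose_accuracy(responses, checks):
    """Fraction of individual instructions satisfied, counting an
    instruction as passed if ANY loose variant of the response passes."""
    passed = total = 0
    for resp, fns in zip(responses, checks):
        for fn in fns:
            total += 1
            if any(fn(v) for v in loose_variants(resp)):
                passed += 1
    return passed / total if total else 0.0

# Toy usage: two made-up instructions ("answer in all caps",
# "mention hello"); the caps check only passes once the "Sure!"
# preamble line is stripped, which loose scoring allows.
resp = "Sure!\nHELLO WORLD"
checks = [[lambda s: s.strip().isupper(),
           lambda s: "hello" in s.lower()]]
print(inst_level_loose_accuracy([resp], checks))  # → 1.0
```

Strict accuracy would be the same computation with `loose_variants` replaced by the raw response alone, which is why loose numbers (like those claimed above) are typically a few points higher than strict ones.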