SOTAVerified

Instruction Following

Instruction following is a foundational capability of large language models. This task evaluates how well a model follows human instructions, with the goal of generating controllable and safe responses.

Papers

Showing 1–50 of 1135 papers

| Title | Status | Hype |
|---|---|---|
| AnyCap Project: A Unified Framework, Dataset, and Benchmark for Controllable Omni-modal Captioning | | 0 |
| How Many Instructions Can LLMs Follow at Once? | | 0 |
| DrafterBench: Benchmarking Large Language Models for Tasks Automation in Civil Engineering | Code | 2 |
| Multilingual Multimodal Software Developer for Code Generation | | 0 |
| TuneShield: Mitigating Toxicity in Conversational AI while Fine-tuning on Untrusted Data | | 0 |
| Meta SecAlign: A Secure Foundation LLM Against Prompt Injection Attacks | Code | 2 |
| DeSTA2.5-Audio: Toward General-Purpose Large Audio Language Model with Self-Generated Cross-Modal Alignment | Code | 2 |
| Kwai Keye-VL Technical Report | Code | 4 |
| Bridging Offline and Online Reinforcement Learning for LLMs | | 0 |
| LLaVA-Pose: Enhancing Human Pose and Action Understanding via Keypoint-Integrated Instruction Tuning | Code | 0 |
| Multi-lingual Functional Evaluation for Large Language Models | | 0 |
| Learning Instruction-Following Policies through Open-Ended Instruction Relabeling with Large Language Models | | 0 |
| JarvisArt: Liberating Human Artistic Creativity via an Intelligent Photo Retouching Agent | | 0 |
| InstructTTSEval: Benchmarking Complex Natural-Language Instruction Following in Text-to-Speech Systems | Code | 1 |
| Treasure Hunt: Real-time Targeting of the Long Tail using Training-Time Markers | | 0 |
| Instruction Following by Boosting Attention of Large Language Models | | 0 |
| Mixture of Weight-shared Heterogeneous Group Attention Experts for Dynamic Token-wise KV Optimization | | 0 |
| LeVERB: Humanoid Whole-Body Control with Latent Vision-Language Instruction | | 0 |
| MM-R5: MultiModal Reasoning-Enhanced ReRanker via Reinforcement Learning for Document Retrieval | Code | 0 |
| CMI-Bench: A Comprehensive Benchmark for Evaluating Music Instruction Following | | 0 |
| HalLoc: Token-level Localization of Hallucinations for Vision Language Models | Code | 0 |
| AC/DC: LLM-based Audio Comprehension via Dialogue Continuation | | 0 |
| Conversational Search: From Fundamentals to Frontiers in the LLM Era | | 0 |
| Magistral | | 0 |
| Discovering Hierarchical Latent Capabilities of Language Models via Causal Representation Learning | Code | 0 |
| Alzheimer's Dementia Detection Using Perplexity from Paired Large Language Models | | 0 |
| VerIF: Verification Engineering for Reinforcement Learning in Instruction Following | Code | 2 |
| LLaVA-c: Continual Improved Visual Instruction Tuning | | 0 |
| RHealthTwin: Towards Responsible and Multimodal Digital Twins for Personalized Well-being | | 0 |
| EIFBENCH: Extremely Complex Instruction Following Benchmark for Large Language Models | Code | 0 |
| LeVo: High-Quality Song Generation with Multi-Preference Alignment | Code | 5 |
| Video Unlearning via Low-Rank Refusal Vector | | 0 |
| Aligning Text, Images, and 3D Structure Token-by-Token | | 0 |
| Adversarial Paraphrasing: A Universal Attack for Humanizing AI-Generated Text | Code | 1 |
| Audio-Aware Large Language Models as Judges for Speaking Styles | | 0 |
| Being Strong Progressively! Enhancing Knowledge Distillation of Large Language Models through a Curriculum Learning Framework | Code | 0 |
| RELIC: Evaluating Compositional Instruction Following via Language Recognition | | 0 |
| Unleashing Hour-Scale Video Training for Long Video-Language Understanding | | 0 |
| SeedEdit 3.0: Fast and High-Quality Generative Image Editing | | 0 |
| Identifying Reliable Evaluation Metrics for Scientific Text Revision | Code | 0 |
| On the Mechanism of Reasoning Pattern Selection in Reinforcement Learning for Language Models | | 0 |
| Robust Anti-Backdoor Instruction Tuning in LVLMs | | 0 |
| RewardAnything: Generalizable Principle-Following Reward Models | Code | 1 |
| MASTER: Enhancing Large Language Model via Multi-Agent Simulated Teaching | | 0 |
| TIIF-Bench: How Does Your T2I Model Follow Your Instructions? | | 0 |
| Incentivizing Reasoning for Advanced Instruction-Following of Large Language Models | Code | 1 |
| RewardBench 2: Advancing Reward Model Evaluation | Code | 4 |
| MoDA: Modulation Adapter for Fine-Grained Visual Grounding in Instructional MLLMs | | 0 |
| FusionAudio-1.2M: Towards Fine-grained Audio Captioning with Multimodal Contextual Fusion | Code | 2 |
| PersianMedQA: Language-Centric Evaluation of LLMs in the Persian Medical Domain | | 0 |
Page 1 of 23

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | | Unverified |
| 2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | | Unverified |
| 3 | GPT-4 | Inst-level loose-accuracy | 85.37 | | Unverified |
| 4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | | Unverified |
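The "Inst-level loose-accuracy" metric reported above follows the IFEval convention: each prompt bundles one or more programmatically verifiable instructions, the score is the percentage of individual instructions satisfied, and "loose" means the response also counts as compliant if it passes after lenient transformations (e.g. stripping surrounding whitespace or markdown). The sketch below illustrates this scoring scheme only; the checker functions and transforms are hypothetical placeholders, not IFEval's actual verifiers.

```python
def inst_level_loose_accuracy(responses, instruction_checks, transforms):
    """Percentage of individual instructions followed, counted per
    instruction (not per prompt). Under 'loose' scoring, an instruction
    passes if ANY transformed variant of the response satisfies it."""
    followed = total = 0
    for response, checks in zip(responses, instruction_checks):
        variants = [response] + [t(response) for t in transforms]
        for check in checks:
            total += 1
            if any(check(v) for v in variants):
                followed += 1
    return 100.0 * followed / total

# Toy example with placeholder checks (not real IFEval verifiers):
# prompt 1 carries two instructions, prompt 2 carries one.
responses = ["  HELLO WORLD  ", "a short answer"]
checks = [
    [str.isupper, lambda r: len(r.split()) >= 2],
    [lambda r: "answer" in r],
]
transforms = [str.strip]  # 'loose' relaxation: ignore padding whitespace
print(inst_level_loose_accuracy(responses, checks, transforms))  # 100.0
```

Note the metric's denominator is the number of instructions, not prompts, which is why a model can score well here even when some multi-instruction prompts are only partially satisfied.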