SOTAVerified

Instruction Following

Instruction following is a fundamental capability of large language models. This task evaluates how well a model follows human instructions, with the goal of producing controllable and safe responses.

Papers

Showing 601–625 of 1135 papers

Title | Status | Hype
Investigating Non-Transitivity in LLM-as-a-Judge | | 0
Instruction Tuning on Public Government and Cultural Data for Low-Resource Language: a Case Study in Kazakh | | 0
TALKPLAY: Multimodal Music Recommendation with Large Language Models | | 0
MMTEB: Massive Multilingual Text Embedding Benchmark | | 0
Integrating Arithmetic Learning Improves Mathematical Reasoning in Smaller Models | | 0
RoleMRC: A Fine-Grained Composite Benchmark for Role-Playing and Instruction-Following | Code | 0
Learning to Keep a Promise: Scaling Language Model Decoding Parallelism with Learned Asynchronous Decoding | | 0
MuSC: Improving Complex Instruction Following with Multi-granularity Self-Contrastive Training | Code | 0
SAIF: A Sparse Autoencoder Framework for Interpreting and Steering Instruction Following of Language Models | | 0
Do we Really Need Visual Instructions? Towards Visual Instruction-Free Fine-tuning for Large Vision-Language Models | | 0
CORDIAL: Can Multimodal Large Language Models Effectively Understand Coherence Relationships? | Code | 0
Rewrite to Jailbreak: Discover Learnable and Transferable Implicit Harmfulness Instruction | Code | 0
Cuckoo: An IE Free Rider Hatched by Massive Nutrition in LLM's Nest | Code | 0
E2LVLM: Evidence-Enhanced Large Vision-Language Model for Multimodal Out-of-Context Misinformation Detection | | 0
Who Taught You That? Tracing Teachers in Model Distillation | | 0
Temporal Representation Alignment: Successor Features Enable Emergent Compositionality in Robot Instruction Following | | 0
Hypencoder: Hypernetworks for Information Retrieval | | 0
Verifiable Format Control for Large Language Model Generations | | 0
LLMs can be easily Confused by Instructional Distractions | | 0
Training an LLM-as-a-Judge Model: Pipeline, Insights, and Practical Lessons | | 0
SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model | | 0
Shuttle Between the Instructions and the Parameters of Large Language Models | | 0
CoDe: Blockwise Control for Denoising Diffusion Models | Code | 0
Learning Human Perception Dynamics for Informative Robot Communication | | 0
BARE: Leveraging Base Language Models for Few-Shot Synthetic Data Generation | | 0
Page 25 of 46

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | | Unverified
3 | GPT-4 | Inst-level loose-accuracy | 85.37 | | Unverified
4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | | Unverified
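The metric in the table, inst-level loose accuracy, comes from the IFEval benchmark: every verifiable instruction in a prompt is scored separately, and an instruction counts as followed if any of several lenient rewrites of the response satisfies its check. A minimal sketch of this scoring scheme (the particular variant list and the `check` callables here are illustrative assumptions, not the benchmark's exact implementation):

```python
def loose_variants(response):
    """Lenient rewrites of a response, in the spirit of IFEval's loose scoring:
    the original text, the text with markdown asterisks stripped, and the text
    with the first and/or last line removed (to forgive preambles and signoffs).
    The exact set of rewrites is an assumption for illustration."""
    lines = response.split("\n")
    candidates = [
        response,
        response.replace("*", ""),
        "\n".join(lines[1:]).strip(),
        "\n".join(lines[:-1]).strip(),
        "\n".join(lines[1:-1]).strip(),
    ]
    # Deduplicate while preserving order.
    unique = []
    for c in candidates:
        if c not in unique:
            unique.append(c)
    return unique


def inst_level_loose_accuracy(responses, instruction_sets):
    """Instruction-level loose accuracy, as a percentage: each individual
    instruction counts once, and it passes if ANY loose variant of the
    paired response satisfies its check callable."""
    total = passed = 0
    for response, checks in zip(responses, instruction_sets):
        variants = loose_variants(response)
        for check in checks:
            total += 1
            if any(check(v) for v in variants):
                passed += 1
    return 100.0 * passed / total
```

Because each instruction is scored independently, one prompt carrying three instructions contributes three entries to the denominator, which is why instruction-level scores typically run higher than prompt-level ones.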