SOTAVerified

Instruction Following

Instruction following is a core capability of large language models. This task evaluates how well a model adheres to human instructions, with the goal of producing controllable and safe responses.
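To make the task concrete, the sketch below checks whether a response satisfies two simple verifiable instructions (a word-count limit and a required keyword). The constraint types and helper names are illustrative assumptions, not the checks used by any particular benchmark on this page.

```python
# Minimal sketch of verifiable instruction-following checks.
# The specific constraints below are assumptions for illustration only.

def follows_word_limit(response: str, max_words: int) -> bool:
    """Check a 'respond in at most N words' style instruction."""
    return len(response.split()) <= max_words

def contains_keyword(response: str, keyword: str) -> bool:
    """Check a 'your answer must mention X' style instruction."""
    return keyword.lower() in response.lower()

response = "Paris is the capital of France."
checks = [
    follows_word_limit(response, max_words=20),
    contains_keyword(response, "Paris"),
]
print(f"Instructions followed: {sum(checks)}/{len(checks)}")
```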

Papers

Showing 751–775 of 1135 papers

Title | Status | Hype
On Instruction-Finetuning Neural Machine Translation Models | | 0
RevisEval: Improving LLM-as-a-Judge via Response-Adapted References | | 0
Superficial Safety Alignment Hypothesis | | 0
SFTMix: Elevating Language Model Instruction Tuning with Mixup Recipe | | 0
CS4: Measuring the Creativity of Large Language Models Automatically by Controlling the Number of Story-Writing Constraints | Code | 0
TICKing All the Boxes: Generated Checklists Improve LLM Evaluation and Generation | | 0
CommonIT: Commonality-Aware Instruction Tuning for Large Language Models via Data Partitions | Code | 0
SAG: Style-Aligned Article Generation via Model Collaboration | | 0
Self-Powered LLM Modality Expansion for Large Speech-Text Models | Code | 0
Video Instruction Tuning With Synthetic Data | | 0
LoGra-Med: Long Context Multi-Graph Alignment for Medical Vision-Language Model | | 0
LLaVA-Critic: Learning to Evaluate Multimodal Models | | 0
Better Instruction-Following Through Minimum Bayes Risk | | 0
The Perfect Blend: Redefining RLHF with Mixture of Judges | | 0
Revisiting the Superficial Alignment Hypothesis | | 0
Align^2LLaVA: Cascaded Human and Large Language Model Preference Alignment for Multi-modal Instruction Curation | Code | 0
MMMT-IF: A Challenging Multimodal Multi-Turn Instruction Following Benchmark | | 0
Inference-Time Language Model Alignment via Integrated Value Guidance | | 0
EAGLE: Towards Efficient Arbitrary Referring Visual Prompts Comprehension for Multimodal Large Language Models | | 0
Mitigating the Bias of Large Language Model Evaluation | Code | 0
FMDLlama: Financial Misinformation Detection based on Large Language Models | Code | 0
Style Outweighs Substance: Failure Modes of LLM Judges in Alignment Benchmarking | Code | 0
Repairs in a Block World: A New Benchmark for Handling User Corrections with Multi-Modal Language Models | Code | 0
Eliciting Instruction-tuned Code Language Models' Capabilities to Utilize Auxiliary Function for Code Generation | | 0
CamelEval: Advancing Culturally Aligned Arabic Language Models and Benchmarks | | 0
Page 31 of 46

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | | Unverified
3 | GPT-4 | Inst-level loose-accuracy | 85.37 | | Unverified
4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | | Unverified
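The "Inst-level loose-accuracy" figures above report the percentage of individual instructions that a model's responses satisfy, where "loose" typically means lenient normalizations are applied to the response before re-running each check. The sketch below shows one way such a score can be aggregated; the particular loose variants and data layout are assumptions for illustration, not the benchmark's reference implementation.

```python
# Hedged sketch: aggregate an instruction-level "loose" accuracy.
# The loose variants tried here (stripping asterisks, dropping the first
# and last lines) are assumptions for illustration only.

def loose_variants(response: str):
    """Yield lenient rewrites of a response before re-running checks."""
    yield response
    yield response.replace("*", "")          # strip markdown emphasis markers
    lines = response.splitlines()
    if len(lines) > 2:
        yield "\n".join(lines[1:-1])         # drop possible boilerplate first/last lines

def inst_level_loose_accuracy(examples) -> float:
    """examples: list of (response, [check_fn, ...]) pairs.

    An instruction counts as followed if ANY loose variant of the
    response passes its check function.
    """
    followed = total = 0
    for response, checks in examples:
        for check in checks:
            total += 1
            if any(check(variant) for variant in loose_variants(response)):
                followed += 1
    return 100.0 * followed / total if total else 0.0

# Toy usage with a single response and a single keyword check.
examples = [("**Answer:** Paris", [lambda r: "paris" in r.lower()])]
print(inst_level_loose_accuracy(examples))   # 100.0
```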