
Instruction Following

Instruction following is a fundamental task for large language models. This task evaluates a model's ability to follow human instructions, with the goal of producing controllable and safe responses.
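
Concretely, many instruction-following benchmarks pair each prompt with constraints that can be checked programmatically. Below is a minimal sketch of that kind of verifiable check, using hypothetical constraints (a word limit and a required keyword, not taken from this page):

```python
# A minimal, illustrative sketch of verifiable instruction-following checks:
# the prompt carries an explicit constraint, and a deterministic verifier
# scores the model's response. These two constraints are hypothetical.

def follows_word_limit(response: str, max_words: int) -> bool:
    """True if the response respects a 'use at most N words' instruction."""
    return len(response.split()) <= max_words

def follows_keyword(response: str, keyword: str) -> bool:
    """True if the response includes a required keyword."""
    return keyword.lower() in response.lower()

# Example: "Answer in at most 20 words and mention 'safety'."
response = "Instruction-tuned models are evaluated on controllability and safety."
print(follows_word_limit(response, 20), follows_keyword(response, "safety"))
```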

Papers

Showing 451–475 of 1135 papers

Title | Status | Hype
RefuteBench: Evaluating Refuting Instruction-Following for Large Language Models | Code | 0
Analysis of Language Change in Collaborative Instruction Following | Code | 0
Discovering Hierarchical Latent Capabilities of Language Models via Causal Representation Learning | Code | 0
DialMAT: Dialogue-Enabled Transformer with Moment-Based Adversarial Training | Code | 0
Quantifying Self-diagnostic Atomic Knowledge in Chinese Medical Foundation Model: A Computational Analysis | Code | 0
PrimeGuard: Safe and Helpful LLMs through Tuning-Free Routing | Code | 0
Pre-Learning Environment Representations for Data-Efficient Neural Instruction Following | Code | 0
ProgCo: Program Helps Self-Correction of Large Language Models | Code | 0
Improving Instruction Following in Language Models through Proxy-Based Uncertainty Estimation | Code | 0
Implicit Cross-Lingual Rewarding for Efficient Multilingual Preference Alignment | Code | 0
Being Strong Progressively! Enhancing Knowledge Distillation of Large Language Models through a Curriculum Learning Framework | Code | 0
IFShip: Interpretable Fine-grained Ship Classification with Domain Knowledge-Enhanced Vision-Language Models | Code | 0
Bayesian Calibration of Win Rate Estimation with LLM Evaluators | Code | 0
Policy Improvement using Language Feedback Models | Code | 0
Identifying Reliable Evaluation Metrics for Scientific Text Revision | Code | 0
POROver: Improving Safety and Reducing Overrefusal in Large Language Models with Overgeneration and Preference Optimization | Code | 0
Playpen: An Environment for Exploring Learning Through Conversational Interaction | Code | 0
Phased Instruction Fine-Tuning for Large Language Models | Code | 0
PediaBench: A Comprehensive Chinese Pediatric Dataset for Benchmarking Large Language Models | Code | 0
Preference-Guided Reflective Sampling for Aligning Language Models | Code | 0
Rate, Explain and Cite (REC): Enhanced Explanation and Attribution in Automatic Evaluation by Large Language Models | Code | 0
Optimal Transport-Based Token Weighting scheme for Enhanced Preference Optimization | Code | 0
Order Matters: Investigate the Position Bias in Multi-constraint Instruction Following | Code | 0
HREF: Human Response-Guided Evaluation of Instruction Following in Language Models | Code | 0
How You Prompt Matters! Even Task-Oriented Constraints in Instructions Affect LLM-Generated Text Detection | Code | 0
Page 19 of 46

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose accuracy | 90.4 | – | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose accuracy | 88 | – | Unverified
3 | GPT-4 | Inst-level loose accuracy | 85.37 | – | Unverified
4 | PaLM 2 S | Inst-level loose accuracy | 59.11 | – | Unverified
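
The metric name here matches the instruction-level loose accuracy used by the IFEval benchmark: every atomic instruction in a prompt is verified independently, and the loose protocol forgives superficial formatting by re-checking lightly transformed copies of the response. A minimal sketch, assuming IFEval-style boolean verifiers (the helper names and exact transformations are illustrative):

```python
# A hedged sketch of "instruction-level loose accuracy" in the style of the
# IFEval benchmark (Zhou et al., 2023), which this table's metric name
# suggests. The verifier callables are stand-ins; IFEval ships its own.
from typing import Callable, List

def loose_variants(response: str) -> List[str]:
    """The loose protocol re-checks lightly transformed copies of the
    response (dropping a leading/trailing line, stripping '*' markdown)."""
    lines = response.splitlines()
    variants = [response, response.replace("*", "")]
    if len(lines) > 1:
        variants.append("\n".join(lines[1:]))   # drop first line
        variants.append("\n".join(lines[:-1]))  # drop last line
    return variants

def inst_level_loose_accuracy(
    responses: List[str],
    instruction_sets: List[List[Callable[[str], bool]]],
) -> float:
    """Percentage of individual instructions satisfied, where an instruction
    counts as followed if ANY loose variant of the response passes it."""
    followed = total = 0
    for response, checks in zip(responses, instruction_sets):
        for check in checks:
            total += 1
            if any(check(variant) for variant in loose_variants(response)):
                followed += 1
    return 100.0 * followed / total if total else 0.0
```

Instruction-level scoring also explains why such numbers tend to sit above prompt-level ones: a response can satisfy most constraints in a prompt yet still fail the prompt as a whole.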