SOTAVerified

Instruction Following

Instruction following is a core capability of large language models. This task evaluates how well a model follows human instructions, with the goal of generating controllable and safe responses.

Papers

Showing 1101–1135 of 1135 papers

Title | Status | Hype
Rewrite to Jailbreak: Discover Learnable and Transferable Implicit Harmfulness Instruction | Code | 0
MLAN: Language-Based Instruction Tuning Improves Zero-Shot Generalization of Multimodal Large Language Models | Code | 0
Guiding Policies with Language via Meta-Learning | Code | 0
Learning To Follow Directions in Street View | Code | 0
Learning to Follow Instructions in Text-Based Games | Code | 0
Self-Powered LLM Modality Expansion for Large Speech-Text Models | Code | 0
Towards Interactive Deepfake Analysis | Code | 0
Capability Instruction Tuning: A New Paradigm for Dynamic LLM Routing | Code | 0
PediaBench: A Comprehensive Chinese Pediatric Dataset for Benchmarking Large Language Models | Code | 0
Learning to Recombine and Resample Data for Compositional Generalization | Code | 0
EIFBENCH: Extremely Complex Instruction Following Benchmark for Large Language Models | Code | 0
MM-R5: MultiModal Reasoning-Enhanced ReRanker via Reinforcement Learning for Document Retrieval | Code | 0
AbsInstruct: Eliciting Abstraction Ability from LLMs through Explanation Tuning with Plausibility Estimation | Code | 0
Does Alignment Tuning Really Break LLMs' Internal Confidence? | Code | 0
Continual Learning for Instruction Following from Realtime Feedback | Code | 0
Phased Instruction Fine-Tuning for Large Language Models | Code | 0
Generative Visual Instruction Tuning | Code | 0
Semantic Graphs for Syntactic Simplification: A Revisit from the Age of LLM | Code | 0
Stay Focused: Problem Drift in Multi-Agent Debate | Code | 0
Style Outweighs Substance: Failure Modes of LLM Judges in Alignment Benchmarking | Code | 0
DialMAT: Dialogue-Enabled Transformer with Moment-Based Adversarial Training | Code | 0
Robust Multi-Objective Controlled Decoding of Large Language Models | Code | 0
Monolingual or Multilingual Instruction Tuning: Which Makes a Better Alpaca | Code | 0
Compositionality as Lexical Symmetry | Code | 0
Playpen: An Environment for Exploring Learning Through Conversational Interaction | Code | 0
RoleMRC: A Fine-Grained Composite Benchmark for Role-Playing and Instruction-Following | Code | 0
LIFBench: Evaluating the Instruction Following Performance and Stability of Large Language Models in Long-Context Scenarios | Code | 0
LIFEBench: Evaluating Length Instruction Following in Large Language Models | Code | 0
Disperse-Then-Merge: Pushing the Limits of Instruction Tuning via Alignment Tax Reduction | Code | 0
Being Strong Progressively! Enhancing Knowledge Distillation of Large Language Models through a Curriculum Learning Framework | Code | 0
MpoxVLM: A Vision-Language Model for Diagnosing Skin Lesions from Mpox Virus Infection | Code | 0
Compositional Image Retrieval via Instruction-Aware Contrastive Learning | Code | 0
Policy Improvement using Language Feedback Models | Code | 0
POROver: Improving Safety and Reducing Overrefusal in Large Language Models with Overgeneration and Preference Optimization | Code | 0
Empowering Cross-lingual Abilities of Instruction-tuned Large Language Models by Translation-following demonstrations | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | – | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | – | Unverified
3 | GPT-4 | Inst-level loose-accuracy | 85.37 | – | Unverified
4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | – | Unverified
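
The instruction-level loose-accuracy figures above follow the convention used by instruction-following benchmarks such as IFEval: each prompt bundles several verifiable instructions, every instruction is scored individually, and a "loose" pass counts if any lightly transformed variant of the response satisfies the check. The sketch below is a minimal illustration under assumed checker functions (word_count_at_least, contains_keyword) and a reduced set of loose variants; it is not the official evaluation code.

```python
# Minimal sketch of instruction-level loose accuracy.
# Checker functions and the variant rules are illustrative assumptions,
# not the official benchmark implementation.

def word_count_at_least(response: str, n: int) -> bool:
    """Verifiable check: response contains at least n words."""
    return len(response.split()) >= n

def contains_keyword(response: str, keyword: str) -> bool:
    """Verifiable check: response mentions a required keyword."""
    return keyword.lower() in response.lower()

def loose_variants(response: str) -> list[str]:
    """Relaxed variants of the response (markdown stripped, first/last line dropped)."""
    lines = response.splitlines()
    return [
        response,
        response.replace("*", "").replace("#", ""),
        "\n".join(lines[1:]) if len(lines) > 1 else response,
        "\n".join(lines[:-1]) if len(lines) > 1 else response,
    ]

def inst_level_loose_accuracy(examples) -> float:
    """examples: iterable of (response, list of check callables).
    Returns the fraction of individual instructions satisfied by
    at least one loose variant of the corresponding response."""
    followed, total = 0, 0
    for response, checks in examples:
        for check in checks:
            total += 1
            if any(check(variant) for variant in loose_variants(response)):
                followed += 1
    return followed / total if total else 0.0

# Example: one prompt carrying two verifiable instructions.
examples = [
    ("# Report\nClimate change affects crop yields in many regions...",
     [lambda r: contains_keyword(r, "climate"),
      lambda r: word_count_at_least(r, 5)]),
]
print(f"Inst-level loose accuracy: {inst_level_loose_accuracy(examples):.2%}")
```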