SOTAVerified

Instruction Following

Instruction following is a fundamental capability of large language models. This task evaluates how well a model follows human instructions, with the goal of producing controllable and safe responses.
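As a rough illustration of how such evaluations are typically automated, the sketch below checks a model response against two hypothetical, programmatically verifiable constraints (a word limit and a bullet count). The function names and constraints are illustrative only and are not taken from any specific benchmark listed on this page.

```python
# Minimal sketch: verifying a response against verifiable instructions.
# The constraints and helper names here are hypothetical examples.

def follows_word_limit(response: str, max_words: int = 50) -> bool:
    """True if the response respects a 'use at most N words' instruction."""
    return len(response.split()) <= max_words

def follows_bullet_count(response: str, n_bullets: int = 3) -> bool:
    """True if the response contains exactly n_bullets lines starting with '- '."""
    bullets = [line for line in response.splitlines() if line.strip().startswith("- ")]
    return len(bullets) == n_bullets

response = "- First point\n- Second point\n- Third point"
print(follows_word_limit(response), follows_bullet_count(response))  # True True
```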

Papers

Showing 426–450 of 1135 papers

Title | Status | Hype
Bridging and Modeling Correlations in Pairwise Data for Direct Preference Optimization | Code | 1
Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation | Code | 1
Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models | Code | 1
Cross-model Control: Improving Multiple Large Language Models in One-time Training | Code | 1
Speech-IFEval: Evaluating Instruction-Following and Quantifying Catastrophic Forgetting in Speech-Aware Language Models | Code | 1
Lana: A Language-Capable Navigator for Instruction Following and Generation | Code | 1
Language-Conditioned Reinforcement Learning to Solve Misunderstandings with Action Corrections | Code | 1
Kun: Answer Polishment for Chinese Self-Alignment with Instruction Back-Translation | Code | 1
BotChat: Evaluating LLMs' Capabilities of Having Multi-Turn Dialogues | Code | 1
Do LLMs "know" internally when they follow instructions? | Code | 1
A Dual-Space Framework for General Knowledge Distillation of Large Language Models | Code | 1
An Emulator for Fine-Tuning Large Language Models using Small Language Models | Code | 1
Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Code | 1
Jatmo: Prompt Injection Defense by Task-Specific Finetuning | Code | 1
ADEM-VL: Adaptive and Embedded Fusion for Efficient Vision-Language Tuning | Code | 1
Diversify and Conquer: Diversity-Centric Data Selection with Iterative Refinement | Code | 1
IRCoder: Intermediate Representations Make Language Models Robust Multilingual Code Generators | Code | 1
Instructive Decoding: Instruction-Tuned Large Language Models are Self-Refiner from Noisy Instructions | Code | 1
IDA-Bench: Evaluating LLMs on Interactive Guided Data Analysis | Code | 1
Is In-Context Learning Sufficient for Instruction Following in LLMs? | Code | 1
KaSA: Knowledge-Aware Singular-Value Adaptation of Large Language Models | Code | 1
DistilQwen2.5: Industrial Practices of Training Distilled Open Lightweight Language Models | - | 0
Distilling Internet-Scale Vision-Language Models into Embodied Agents | - | 0
Distilling Instruction-following Abilities of Large Language Models with Task-aware Curriculum Planning | - | 0
Beyond Instruction Following: Evaluating Inferential Rule Following of Large Language Models | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | - | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | - | Unverified
3 | GPT-4 | Inst-level loose-accuracy | 85.37 | - | Unverified
4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | - | Unverified
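The "Inst-level loose-accuracy" figures above appear to follow the IFEval convention: each prompt carries one or more verifiable instructions, and the metric is the fraction of individual instructions that pass, where "loose" means each check is retried on lightly transformed copies of the response (for example with a leading or trailing line or markdown markers removed). Below is a hedged sketch of that aggregation, assuming per-instruction verifier functions are supplied; the helper names and the exact set of transformations are illustrative.

```python
from typing import Callable, List

# Sketch of instruction-level loose accuracy (IFEval-style, assumed here).
# `verifiers[i]` holds one boolean check per instruction attached to prompt i.

def _variants(response: str) -> List[str]:
    """Lightly edited copies of the response used for the 'loose' retries."""
    lines = response.splitlines()
    candidates = [
        response,
        response.replace("*", ""),   # strip markdown emphasis markers
        "\n".join(lines[1:]),        # drop a leading line (e.g. "Sure, here is ...")
        "\n".join(lines[:-1]),       # drop a trailing line
    ]
    return [c for c in candidates if c]

def inst_level_loose_accuracy(
    responses: List[str],
    verifiers: List[List[Callable[[str], bool]]],
) -> float:
    followed, total = 0, 0
    for response, checks in zip(responses, verifiers):
        for check in checks:
            total += 1
            # Loose: the instruction counts as followed if any variant passes.
            if any(check(v) for v in _variants(response)):
                followed += 1
    return followed / total if total else 0.0
```

Under this reading, a claimed value of 90.4 would mean roughly 90.4% of all individual instructions across the benchmark were satisfied under the loose check.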