SOTAVerified

Instruction Following

Instruction following is a core capability of large language models. This task evaluates how well a model follows human instructions, with the goal of producing controllable and safe responses.

Papers

Showing 631–640 of 1135 papers

| Title | Status | Hype |
|---|---|---|
| Distilling Instruction-following Abilities of Large Language Models with Task-aware Curriculum Planning | — | 0 |
| Mosaic-IT: Free Compositional Data Augmentation Improves Instruction Tuning | Code | 1 |
| Vikhr: Constructing a State-of-the-art Bilingual Open-Source Instruction-Following Large Language Model for Russian | Code | 2 |
| Disperse-Then-Merge: Pushing the Limits of Instruction Tuning via Alignment Tax Reduction | Code | 0 |
| RecGPT: Generative Pre-training for Text-based Recommendation | Code | 1 |
| Grounded 3D-LLM with Referent Tokens | Code | 2 |
| Fine-Tuning Large Vision-Language Models as Decision-Making Agents via Reinforcement Learning | — | 0 |
| A safety realignment framework via subspace-oriented model fusion for large language models | Code | 0 |
| SpeechVerse: A Large-scale Generalizable Audio Language Model | — | 0 |
| SpeechGuard: Exploring the Adversarial Robustness of Multimodal Large Language Models | — | 0 |
Page 64 of 114

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | — | Unverified |
| 2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | — | Unverified |
| 3 | GPT-4 | Inst-level loose-accuracy | 85.37 | — | Unverified |
| 4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | — | Unverified |
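The "Inst-level loose-accuracy" metric above is typically computed as the fraction of individual instructions followed, pooled across all prompts (a prompt may contain several verifiable instructions, each checked independently; the "loose" variant usually applies relaxations such as stripping markdown before checking). A minimal sketch, assuming per-instruction boolean pass/fail results are already available; the function name and example data here are illustrative, not part of any benchmark's API:

```python
def inst_level_accuracy(results):
    """Fraction of instructions followed across all prompts.

    results: list of lists of booleans, one inner list per prompt,
    one boolean per instruction (True = instruction followed).
    """
    flat = [ok for per_prompt in results for ok in per_prompt]
    return sum(flat) / len(flat) if flat else 0.0

# Hypothetical example: 3 prompts carrying 2, 3, and 1 instructions.
results = [[True, False], [True, True, True], [True]]
print(round(inst_level_accuracy(results) * 100, 2))  # 5 of 6 instructions followed
```

Note that this pools instructions globally, so prompts with more instructions weigh more; a prompt-level metric would instead require every instruction in a prompt to pass.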