SOTAVerified

Instruction Following

Instruction following is a foundational capability of large language models. This task evaluates how well a model follows human instructions, with the goal of generating controllable and safe responses.

Papers

Showing 76–100 of 1135 papers

| Title | Status | Hype |
| --- | --- | --- |
| DistiLLM: Towards Streamlined Distillation for Large Language Models | Code | 3 |
| LongAlign: A Recipe for Long Context Alignment of Large Language Models | Code | 3 |
| Large Language Models are Superpositions of All Characters: Attaining Arbitrary Role-play via Self-Alignment | Code | 3 |
| SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling | Code | 3 |
| Qwen-Audio: Advancing Universal Audio Understanding via Unified Large-Scale Audio-Language Models | Code | 3 |
| How Abilities in Large Language Models are Affected by Supervised Fine-tuning Data Composition | Code | 3 |
| How Can Recommender Systems Benefit from Large Language Models: A Survey | Code | 3 |
| AlpacaFarm: A Simulation Framework for Methods that Learn from Human Feedback | Code | 3 |
| MultiModal-GPT: A Vision and Language Model for Dialogue with Humans | Code | 3 |
| X-LLM: Bootstrapping Advanced Large Language Models by Treating Multi-Modalities as Foreign Languages | Code | 3 |
| Panda LLM: Training Data and Evaluation for Open-Sourced Chinese Instruction-Following Large Language Models | Code | 3 |
| Caption Anything: Interactive Image Description with Diverse Multimodal Controls | Code | 3 |
| Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks | Code | 3 |
| DrafterBench: Benchmarking Large Language Models for Tasks Automation in Civil Engineering | Code | 2 |
| DeSTA2.5-Audio: Toward General-Purpose Large Audio Language Model with Self-Generated Cross-Modal Alignment | Code | 2 |
| Meta SecAlign: A Secure Foundation LLM Against Prompt Injection Attacks | Code | 2 |
| VerIF: Verification Engineering for Reinforcement Learning in Instruction Following | Code | 2 |
| FusionAudio-1.2M: Towards Fine-grained Audio Captioning with Multimodal Contextual Fusion | Code | 2 |
| When Large Multimodal Models Confront Evolving Knowledge: Challenges and Pathways | Code | 2 |
| How Instruction and Reasoning Data shape Post-Training: Data Quality through the Lens of Layer-wise Gradients | Code | 2 |
| MM-IFEngine: Towards Multimodal Instruction Following | Code | 2 |
| Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large Language Models | Code | 2 |
| CrystalFormer-RL: Reinforcement Fine-Tuning for Materials Design | Code | 2 |
| Rec-R1: Bridging Generative Large Language Models and User-Centric Recommendation Systems via Reinforcement Learning | Code | 2 |
| LLaVA-MORE: A Comparative Study of LLMs and Visual Backbones for Enhanced Visual Instruction Tuning | Code | 2 |
Page 4 of 46

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | | Unverified |
| 2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | | Unverified |
| 3 | GPT-4 | Inst-level loose-accuracy | 85.37 | | Unverified |
| 4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | | Unverified |