SOTAVerified

Instruction Following

Instruction following is a core capability of large language models. This task evaluates how well a model follows human instructions, with the goal of producing controllable and safe responses.

Papers

Showing 76-100 of 1135 papers

Title | Status | Hype
Caption Anything: Interactive Image Description with Diverse Multimodal Controls | Code | 3
Panda LLM: Training Data and Evaluation for Open-Sourced Chinese Instruction-Following Large Language Models | Code | 3
MultiModal-GPT: A Vision and Language Model for Dialogue with Humans | Code | 3
NavGPT-2: Unleashing Navigational Reasoning Capability for Large Vision-Language Models | Code | 3
FlashFace: Human Image Personalization with High-fidelity Identity Preservation | Code | 3
OpenFedLLM: Training Large Language Models on Decentralized Private Data via Federated Learning | Code | 3
DistiLLM: Towards Streamlined Distillation for Large Language Models | Code | 3
Meta-Chunking: Learning Text Segmentation and Semantic Completion via Logical Perception | Code | 3
LongAlign: A Recipe for Long Context Alignment of Large Language Models | Code | 3
MiLoRA: Harnessing Minor Singular Components for Parameter-Efficient LLM Finetuning | Code | 3
LLaMA-Omni2: LLM-based Real-time Spoken Chatbot with Autoregressive Streaming Speech Synthesis | Code | 3
How Can Recommender Systems Benefit from Large Language Models: A Survey | Code | 3
1.5-Pints Technical Report: Pretraining in Days, Not Months -- Your Language Model Thrives on Quality Data | Code | 3
Learning to Decode Collaboratively with Multiple Language Models | Code | 2
Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch | Code | 2
CrystalFormer-RL: Reinforcement Fine-Tuning for Materials Design | Code | 2
Benchmarking Complex Instruction-Following with Multiple Constraints Composition | Code | 2
MiniLLM: Knowledge Distillation of Large Language Models | Code | 2
CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts | Code | 2
Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic | Code | 2
Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate | Code | 2
Large Language Model Instruction Following: A Survey of Progresses and Challenges | Code | 2
BayLing: Bridging Cross-lingual Alignment and Instruction Following through Interactive Translation for Large Language Models | Code | 2
Aligning Modalities in Vision Large Language Models via Preference Fine-tuning | Code | 2
A Critical Evaluation of AI Feedback for Aligning Large Language Models | Code | 2
Page 4 of 46

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose accuracy | 90.4 | | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose accuracy | 88 | | Unverified
3 | GPT-4 | Inst-level loose accuracy | 85.37 | | Unverified
4 | PaLM 2 S | Inst-level loose accuracy | 59.11 | | Unverified
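The metric above appears to be the instruction-level accuracy used in IFEval-style evaluation: each prompt carries one or more verifiable instructions, every instruction is checked independently, and the score is the fraction of all instructions (pooled across prompts) that the response satisfies; the "loose" variant also accepts lightly transformed responses. A minimal sketch of the instruction-level aggregation, assuming per-instruction pass/fail results have already been computed by checkers not shown here:

```python
def inst_level_accuracy(results):
    """Compute instruction-level accuracy.

    results: a list with one entry per prompt; each entry is a list of
    booleans, one per verifiable instruction attached to that prompt,
    True if the response satisfied the instruction.

    Unlike prompt-level accuracy (which requires every instruction in a
    prompt to pass), instruction-level accuracy pools all instructions
    across all prompts and returns the fraction that passed.
    """
    flags = [ok for prompt in results for ok in prompt]
    return sum(flags) / len(flags)


# Hypothetical example: 3 prompts with 2, 1, and 3 instructions.
results = [[True, False], [True], [True, True, False]]
print(round(inst_level_accuracy(results), 4))  # 4 of 6 instructions pass -> 0.6667
```

Scores such as 90.4 above would be this fraction expressed as a percentage over the benchmark's full instruction set.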