SOTAVerified

Instruction Following

Instruction following is a fundamental capability of large language models. This task evaluates how well a model follows human instructions, with the goal of producing controllable and safe responses.

Papers

Showing 76–100 of 1,135 papers

| Title | Status | Hype |
|---|---|---|
| MultiModal-GPT: A Vision and Language Model for Dialogue with Humans | Code | 3 |
| MiLoRA: Harnessing Minor Singular Components for Parameter-Efficient LLM Finetuning | Code | 3 |
| FlashFace: Human Image Personalization with High-fidelity Identity Preservation | Code | 3 |
| EventRL: Enhancing Event Extraction with Outcome Supervision for Large Language Models | Code | 3 |
| NavGPT-2: Unleashing Navigational Reasoning Capability for Large Vision-Language Models | Code | 3 |
| LongAlign: A Recipe for Long Context Alignment of Large Language Models | Code | 3 |
| DistiLLM: Towards Streamlined Distillation for Large Language Models | Code | 3 |
| 1.5-Pints Technical Report: Pretraining in Days, Not Months -- Your Language Model Thrives on Quality Data | Code | 3 |
| How Can Recommender Systems Benefit from Large Language Models: A Survey | Code | 3 |
| ComfyBench: Benchmarking LLM-based Agents in ComfyUI for Autonomously Designing Collaborative AI Systems | Code | 3 |
| LLaMA-Omni2: LLM-based Real-time Spoken Chatbot with Autoregressive Streaming Speech Synthesis | Code | 3 |
| Meta-Chunking: Learning Text Segmentation and Semantic Completion via Logical Perception | Code | 3 |
| The Breeze 2 Herd of Models: Traditional Chinese LLMs Based on Llama with Vision-Aware and Function-Calling Capabilities | Code | 3 |
| Learning to Decode Collaboratively with Multiple Language Models | Code | 2 |
| ChartAssisstant: A Universal Chart Multimodal Language Model via Chart-to-Table Pre-training and Multitask Instruction Tuning | Code | 2 |
| Chain-of-Spot: Interactive Reasoning Improves Large Vision-Language Models | Code | 2 |
| Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic | Code | 2 |
| MiniLLM: Knowledge Distillation of Large Language Models | Code | 2 |
| Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch | Code | 2 |
| CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts | Code | 2 |
| Large Language Model Instruction Following: A Survey of Progresses and Challenges | Code | 2 |
| CrystalFormer-RL: Reinforcement Fine-Tuning for Materials Design | Code | 2 |
| Aligning Modalities in Vision Large Language Models via Preference Fine-tuning | Code | 2 |
| A Critical Evaluation of AI Feedback for Aligning Large Language Models | Code | 2 |
| Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate | Code | 2 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | — | Unverified |
| 2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | — | Unverified |
| 3 | GPT-4 | Inst-level loose-accuracy | 85.37 | — | Unverified |
| 4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | — | Unverified |
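The metric in the table above, instruction-level loose accuracy, comes from IFEval-style evaluation: each prompt carries one or more automatically verifiable instructions, and accuracy is counted per instruction rather than per prompt. In the "loose" variant, a check also passes if it passes on a lightly transformed response (e.g. with markdown emphasis stripped or a preamble/sign-off line removed). The sketch below is a simplified illustration of that idea, not the official scoring code; the `loose_variants` transformations and the toy check functions are assumptions for demonstration.

```python
def loose_variants(response: str):
    """Generate lenient variants of a response (a simplified assumption:
    strip markdown emphasis, drop a leading or trailing line)."""
    lines = response.splitlines()
    candidates = [
        response,
        response.replace("*", ""),   # strip markdown emphasis markers
        "\n".join(lines[1:]),        # drop a leading preamble line
        "\n".join(lines[:-1]),       # drop a trailing sign-off line
    ]
    return [c for c in candidates if c]


def inst_level_loose_accuracy(results):
    """results: list of (response, [check_fn, ...]) pairs.

    Each check_fn takes a string and returns True if that single
    instruction is followed. Accuracy is per instruction, and a check
    counts as passed if ANY loose variant of the response passes it.
    """
    passed = total = 0
    for response, checks in results:
        for check in checks:
            total += 1
            if any(check(v) for v in loose_variants(response)):
                passed += 1
    return passed / total if total else 0.0


# Toy usage: the all-caps check fails on the raw response (because of the
# preamble line) but passes on the variant with that line dropped.
results = [
    ("Sure, here it is:\nHELLO WORLD",
     [lambda s: s == s.upper(),       # "respond in all caps" (toy check)
      lambda s: "HELLO" in s]),       # "mention the word HELLO" (toy check)
]
print(inst_level_loose_accuracy(results))  # → 1.0
```

A strict-accuracy variant would apply each check only to the raw response, which is why loose accuracy is typically the higher of the two numbers.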