SOTAVerified

Instruction Following

Instruction following is a fundamental capability of large language models. This task evaluates how well a model follows human instructions, with the goal of producing controllable and safe responses.

Papers

Showing 51–100 of 1135 papers

Title | Status | Hype
PromptFix: You Prompt and We Fix the Photo | Code | 4
LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day | Code | 4
LLaMA Pro: Progressive LLaMA with Block Expansion | Code | 4
FuseChat: Knowledge Fusion of Chat Models | Code | 4
Large Language Models are Superpositions of All Characters: Attaining Arbitrary Role-play via Self-Alignment | Code | 3
LaViDa: A Large Diffusion Language Model for Multimodal Understanding | Code | 3
AudioBench: A Universal Benchmark for Audio Large Language Models | Code | 3
AlpacaFarm: A Simulation Framework for Methods that Learn from Human Feedback | Code | 3
Caption Anything: Interactive Image Description with Diverse Multimodal Controls | Code | 3
VARGPT: Unified Understanding and Generation in a Visual Autoregressive Multimodal Large Language Model | Code | 3
VARGPT-v1.1: Improve Visual Autoregressive Large Unified Model via Iterative Instruction Tuning and Reinforcement Learning | Code | 3
IFEval-Audio: Benchmarking Instruction-Following Capability in Audio-based Large Language Models | Code | 3
X-LLM: Bootstrapping Advanced Large Language Models by Treating Multi-Modalities as Foreign Languages | Code | 3
SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling | Code | 3
SongComposer: A Large Language Model for Lyric and Melody Generation in Song Composition | Code | 3
Self-play with Execution Feedback: Improving Instruction-following Capabilities of Large Language Models | Code | 3
Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning | Code | 3
ShapeLLM: Universal 3D Object Understanding for Embodied Interaction | Code | 3
Refusal in Language Models Is Mediated by a Single Direction | Code | 3
Qwen-Audio: Advancing Universal Audio Understanding via Unified Large-Scale Audio-Language Models | Code | 3
ASFT: Aligned Supervised Fine-Tuning through Absolute Likelihood | Code | 3
How Abilities in Large Language Models are Affected by Supervised Fine-tuning Data Composition | Code | 3
Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks | Code | 3
Panda LLM: Training Data and Evaluation for Open-Sourced Chinese Instruction-Following Large Language Models | Code | 3
OpenFedLLM: Training Large Language Models on Decentralized Private Data via Federated Learning | Code | 3
MultiModal-GPT: A Vision and Language Model for Dialogue with Humans | Code | 3
MiLoRA: Harnessing Minor Singular Components for Parameter-Efficient LLM Finetuning | Code | 3
FlashFace: Human Image Personalization with High-fidelity Identity Preservation | Code | 3
EventRL: Enhancing Event Extraction with Outcome Supervision for Large Language Models | Code | 3
NavGPT-2: Unleashing Navigational Reasoning Capability for Large Vision-Language Models | Code | 3
LongAlign: A Recipe for Long Context Alignment of Large Language Models | Code | 3
DistiLLM: Towards Streamlined Distillation for Large Language Models | Code | 3
1.5-Pints Technical Report: Pretraining in Days, Not Months -- Your Language Model Thrives on Quality Data | Code | 3
How Can Recommender Systems Benefit from Large Language Models: A Survey | Code | 3
ComfyBench: Benchmarking LLM-based Agents in ComfyUI for Autonomously Designing Collaborative AI Systems | Code | 3
LLaMA-Omni2: LLM-based Real-time Spoken Chatbot with Autoregressive Streaming Speech Synthesis | Code | 3
Meta-Chunking: Learning Text Segmentation and Semantic Completion via Logical Perception | Code | 3
The Breeze 2 Herd of Models: Traditional Chinese LLMs Based on Llama with Vision-Aware and Function-Calling Capabilities | Code | 3
Learning to Decode Collaboratively with Multiple Language Models | Code | 2
ChartAssisstant: A Universal Chart Multimodal Language Model via Chart-to-Table Pre-training and Multitask Instruction Tuning | Code | 2
Chain-of-Spot: Interactive Reasoning Improves Large Vision-Language Models | Code | 2
Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic | Code | 2
MiniLLM: Knowledge Distillation of Large Language Models | Code | 2
Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch | Code | 2
CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts | Code | 2
Large Language Model Instruction Following: A Survey of Progresses and Challenges | Code | 2
CrystalFormer-RL: Reinforcement Fine-Tuning for Materials Design | Code | 2
Aligning Modalities in Vision Large Language Models via Preference Fine-tuning | Code | 2
A Critical Evaluation of AI Feedback for Aligning Large Language Models | Code | 2
Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate | Code | 2
Page 2 of 23

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | — | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | — | Unverified
3 | GPT-4 | Inst-level loose-accuracy | 85.37 | — | Unverified
4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | — | Unverified
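The metric above, instruction-level accuracy, scores each individual instruction rather than each prompt: a prompt with several verifiable instructions contributes one pass/fail result per instruction, and accuracy is the fraction of instructions satisfied (the "loose" variant accepts a response if any of several lenient transformations of it passes the check). As a minimal sketch of how that aggregation works, assuming per-instruction boolean pass results are already available (the function name and input shape here are illustrative, not from any particular evaluation codebase):

```python
def inst_level_accuracy(results):
    """Instruction-level accuracy sketch.

    results: one inner list per prompt, one boolean per verifiable
    instruction in that prompt (True = instruction was followed).
    Returns the fraction of individual instructions satisfied.
    """
    total = sum(len(per_prompt) for per_prompt in results)
    passed = sum(sum(per_prompt) for per_prompt in results)
    return passed / total if total else 0.0

# Example: two prompts with 2 and 3 instructions; 4 of 5 followed.
print(inst_level_accuracy([[True, True], [True, False, True]]))  # 0.8
```

Note that prompt-level accuracy (every instruction in a prompt must pass) is always at most the instruction-level figure, which is why instruction-level numbers such as those in the table read higher.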