SOTAVerified

Instruction Following

Instruction following is a fundamental capability of large language models. This task evaluates how well a model follows human instructions, with the goal of producing controllable and safe responses.

Papers

Showing 51–75 of 1135 papers

| Title | Status | Hype |
|---|---|---|
| Instruction Tuning with GPT-4 | Code | 4 |
| How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources | Code | 4 |
| SimPO: Simple Preference Optimization with a Reference-Free Reward | Code | 4 |
| Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection | Code | 4 |
| Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning | Code | 3 |
| Refusal in Language Models Is Mediated by a Single Direction | Code | 3 |
| Self-play with Execution Feedback: Improving Instruction-following Capabilities of Large Language Models | Code | 3 |
| ComfyBench: Benchmarking LLM-based Agents in ComfyUI for Autonomously Designing Collaborative AI Systems | Code | 3 |
| Qwen-Audio: Advancing Universal Audio Understanding via Unified Large-Scale Audio-Language Models | Code | 3 |
| ShapeLLM: Universal 3D Object Understanding for Embodied Interaction | Code | 3 |
| AlpacaFarm: A Simulation Framework for Methods that Learn from Human Feedback | Code | 3 |
| Panda LLM: Training Data and Evaluation for Open-Sourced Chinese Instruction-Following Large Language Models | Code | 3 |
| NavGPT-2: Unleashing Navigational Reasoning Capability for Large Vision-Language Models | Code | 3 |
| MultiModal-GPT: A Vision and Language Model for Dialogue with Humans | Code | 3 |
| OpenFedLLM: Training Large Language Models on Decentralized Private Data via Federated Learning | Code | 3 |
| Caption Anything: Interactive Image Description with Diverse Multimodal Controls | Code | 3 |
| How Abilities in Large Language Models are Affected by Supervised Fine-tuning Data Composition | Code | 3 |
| Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks | Code | 3 |
| Meta-Chunking: Learning Text Segmentation and Semantic Completion via Logical Perception | Code | 3 |
| EventRL: Enhancing Event Extraction with Outcome Supervision for Large Language Models | Code | 3 |
| MiLoRA: Harnessing Minor Singular Components for Parameter-Efficient LLM Finetuning | Code | 3 |
| LLaMA-Omni2: LLM-based Real-time Spoken Chatbot with Autoregressive Streaming Speech Synthesis | Code | 3 |
| LongAlign: A Recipe for Long Context Alignment of Large Language Models | Code | 3 |
| DistiLLM: Towards Streamlined Distillation for Large Language Models | Code | 3 |
| Large Language Models are Superpositions of All Characters: Attaining Arbitrary Role-play via Self-Alignment | Code | 3 |
Page 3 of 46

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | | Unverified |
| 2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | | Unverified |
| 3 | GPT-4 | Inst-level loose-accuracy | 85.37 | | Unverified |
| 4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | | Unverified |
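The metric above can be illustrated with a minimal sketch. This is not the official IFEval scoring code: it only assumes that each prompt carries one or more verifiable instructions, that some checker has already produced a pass/fail flag per instruction (with "loose" scoring typically accepting a relaxed variant of the response, e.g. with markdown or boilerplate lines stripped), and that instruction-level accuracy pools those flags across all prompts.

```python
# Hedged sketch of instruction-level accuracy (IFEval-style), assuming
# per-instruction boolean results have already been computed elsewhere.
# "Loose" vs. "strict" only changes how each flag was produced, not this
# aggregation step.

def inst_level_accuracy(results):
    """results: one list of booleans per prompt, one flag per instruction.

    Returns the percentage of all instructions (pooled across prompts)
    that were followed.
    """
    flags = [followed for prompt in results for followed in prompt]
    return 100.0 * sum(flags) / len(flags)

# Hypothetical example: 3 prompts carrying 2, 1, and 3 instructions.
per_prompt = [[True, False], [True], [True, True, False]]
print(round(inst_level_accuracy(per_prompt), 2))  # 4 of 6 instructions followed
```

Note that this instruction-level pooling differs from prompt-level accuracy, where a prompt counts as correct only if every one of its instructions is satisfied.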