
Instruction Following

Instruction following is a core capability of large language models. This task evaluates how well a model follows human instructions, with the goal of producing responses that are controllable and safe.

Papers

Showing 901–925 of 1135 papers

Title | Status | Hype
Fool Your (Vision and) Language Model With Embarrassingly Simple Permutations | Code | 1
PACIT: Unlocking the Power of Examples for Better In-Context Instruction Tuning | Code | 0
Back to the Future: Towards Explainable Temporal Reasoning with Large Language Models | Code | 1
Beyond Task Performance: Evaluating and Reducing the Flaws of Large Multimodal Models with In-Context Learning | Code | 1
Reformulating Vision-Language Foundation Models and Datasets Towards Universal Multimodal Assistants | Code | 2
SLM: Bridge the thin gap between speech and text foundation models | | 0
From Language Modeling to Instruction Following: Understanding the Behavior Shift in LLMs after Instruction Tuning | Code | 1
Self-Specialization: Uncovering Latent Expertise within Large Language Models | | 0
ModuLoRA: Finetuning 2-Bit LLMs on Consumer GPUs by Integrating with Modular Quantizers | Code | 2
MentaLLaMA: Interpretable Mental Health Analysis on Social Media with Large Language Models | Code | 2
Towards LLM-guided Causal Explainability for Black-box Text Classifiers | | 0
Frustrated with Code Quality Issues? LLMs can Help! | | 0
AceGPT, Localizing Large Language Models in Arabic | Code | 1
LMSYS-Chat-1M: A Large-Scale Real-World LLM Conversation Dataset | Code | 7
LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models | Code | 6
Natural Language Embedded Programs for Hybrid Language Symbolic Reasoning | Code | 1
Instruction-Following Speech Recognition | | 0
Monolingual or Multilingual Instruction Tuning: Which Makes a Better Alpaca | Code | 0
Sorted LLaMA: Unlocking the Potential of Intermediate Layers of Large Language Models for Dynamic Inference | | 0
TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild | Code | 1
DoG-Instruct: Towards Premium Instruction-Tuning Data via Text-Grounded Instruction Wrapping | Code | 0
Quantifying and Attributing the Hallucination of Large Language Models via Association Analysis | | 0
Efficient Finetuning Large Language Models For Vietnamese Chatbot | | 0
ImageBind-LLM: Multi-modality Instruction Tuning | Code | 5
Improving Open Information Extraction with Large Language Models: A Study on Demonstration Uncertainty | | 0
Page 37 of 46

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose accuracy | 90.4 | | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose accuracy | 88 | | Unverified
3 | GPT-4 | Inst-level loose accuracy | 85.37 | | Unverified
4 | PaLM 2 S | Inst-level loose accuracy | 59.11 | | Unverified
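The "Inst-level loose accuracy" metric follows the IFEval convention: every prompt carries one or more verifiable instructions, the score is the fraction of individual instructions satisfied, and "loose" scoring accepts a response if any of several relaxed views of it (e.g., with markdown stripped or a preamble line removed) passes the instruction's checker. The sketch below is a minimal, illustrative implementation under those assumptions; the variant list and checker functions are hypothetical, not the API of any particular evaluation library.

```python
from typing import Callable, List

def loose_variants(response: str) -> List[str]:
    """Relaxed views of a response, in the spirit of 'loose' scoring.
    The exact transformations here are assumptions for illustration."""
    lines = response.splitlines()
    variants = [
        response,
        response.replace("*", "").replace("_", ""),  # strip simple markdown emphasis
        "\n".join(lines[1:]),                        # drop a leading preamble line
        "\n".join(lines[:-1]),                       # drop a trailing sign-off line
    ]
    return [v for v in variants if v]

def inst_level_loose_accuracy(
    responses: List[str],
    checkers: List[List[Callable[[str], bool]]],
) -> float:
    """Fraction of individual instructions satisfied, where an instruction
    counts as followed if ANY loose variant of the response passes its checker."""
    followed = total = 0
    for response, prompt_checkers in zip(responses, checkers):
        for check in prompt_checkers:
            total += 1
            if any(check(variant) for variant in loose_variants(response)):
                followed += 1
    return followed / total if total else 0.0

# Hypothetical usage: one prompt with two verifiable instructions
# ("answer in all caps" and "use at most 50 words").
resp = "Sure!\nHERE IS THE ANSWER IN ALL CAPS."
checks = [[lambda r: r == r.upper(),
           lambda r: len(r.split()) <= 50]]
print(f"{100 * inst_level_loose_accuracy([resp], checks):.2f}")  # 100.00
```

In this example the raw response fails the all-caps check because of the "Sure!" preamble, but the loose variant with the first line dropped passes, which is exactly the gap between strict and loose instruction-level accuracy.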