SOTAVerified

Instruction Following

Instruction following is a core capability of large language models. This task evaluates how well a model follows human instructions, with the goal of generating controllable and safe responses.

Papers

Showing 301-350 of 1135 papers

Title | Status | Hype
Lumen: Unleashing Versatile Vision-Centric Capabilities of Large Multimodal Models | Code | 1
BLEUBERI: BLEU is a surprisingly effective reward for instruction following | Code | 1
LoGU: Long-form Generation with Uncertainty Expressions | Code | 1
AlchemistCoder: Harmonizing and Eliciting Code Capability by Hindsight Tuning on Multi-source Data | Code | 1
Facial Dynamics in Video: Instruction Tuning for Improved Facial Expression Perception and Contextual Awareness | Code | 1
M3DBench: Let's Instruct Large Models with Multi-modal 3D Prompts | Code | 1
LLMs Are Biased Towards Output Formats! Systematically Evaluating and Mitigating Output Format Bias of LLMs | Code | 1
Extend Model Merging from Fine-Tuned to Pre-Trained Large Language Models via Weight Disentanglement | Code | 1
Chat Vector: A Simple Approach to Equip LLMs with Instruction Following and Model Alignment in New Languages | Code | 1
LLM-CXR: Instruction-Finetuned LLM for CXR Image Understanding and Generation | Code | 1
LLaSA: A Multimodal LLM for Human Activity Analysis Through Wearable and Smartphone Sensors | Code | 1
Agri-LLaVA: Knowledge-Infused Large Multimodal Assistant on Agricultural Pests and Diseases | Code | 1
Ex3: Automatic Novel Writing by Extracting, Excelsior and Expanding | Code | 1
AceGPT, Localizing Large Language Models in Arabic | Code | 1
ChatGPT may Pass the Bar Exam soon, but has a Long Way to Go for the LexGLUE benchmark | Code | 1
EventHallusion: Diagnosing Event Hallucinations in Video LLMs | Code | 1
Lexicon Learning for Few Shot Sequence Modeling | Code | 1
Fool Your (Vision and) Language Model With Embarrassingly Simple Permutations | Code | 1
ChatBug: A Common Vulnerability of Aligned LLMs Induced by Chat Templates | Code | 1
Lexicon Learning for Few-Shot Neural Sequence Modeling | Code | 1
Evaluating LLMs at Detecting Errors in LLM Responses | Code | 1
Evaluating Large Language Models at Evaluating Instruction Following | Code | 1
AGENTIF: Benchmarking Instruction Following of Large Language Models in Agentic Scenarios | Code | 1
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Code | 1
LLaMo: Large Language Model-based Molecular Graph Assistant | Code | 1
LIONs: An Empirically Optimized Approach to Align Language Models | Code | 1
LASeR: Learning to Adaptively Select Reward Models with Multi-Armed Bandits | Code | 1
ChartInstruct: Instruction Tuning for Chart Comprehension and Reasoning | Code | 1
DocLens: Multi-aspect Fine-grained Evaluation for Medical Text Generation | Code | 1
Large Language Models as Evaluators for Recommendation Explanations | Code | 1
Enhancing Cross-Tokenizer Knowledge Distillation with Contextual Dynamical Mapping | Code | 1
CB2: Collaborative Natural Language Interaction Research Platform | Code | 1
Learning to Map Natural Language Instructions to Physical Quadcopter Control using Simulated Flight | Code | 1
Facial Affective Behavior Analysis with Instruction Tuning | Code | 1
Engineering flexible machine learning systems by traversing functionally-invariant paths | Code | 1
Lana: A Language-Capable Navigator for Instruction Following and Generation | Code | 1
FaithScore: Fine-grained Evaluations of Hallucinations in Large Vision-Language Models | Code | 1
Are Emergent Abilities in Large Language Models just In-Context Learning? | Code | 1
Language-Conditioned Reinforcement Learning to Solve Misunderstandings with Action Corrections | Code | 1
KaSA: Knowledge-Aware Singular-Value Adaptation of Large Language Models | Code | 1
F-Eval: Assessing Fundamental Abilities with Refined Evaluation Methods | Code | 1
Lottery Ticket Adaptation: Mitigating Destructive Interference in LLMs | Code | 1
Few-shot Object Grounding and Mapping for Natural Language Robot Instruction Following | Code | 1
Alexa Arena: A User-Centric Interactive Platform for Embodied AI | Code | 1
FILM: Following Instructions in Language with Modular Methods | Code | 1
Finding Blind Spots in Evaluator LLMs with Interpretable Checklists | Code | 1
MergeBench: A Benchmark for Merging Domain-Specialized LLMs | Code | 1
Making Large Language Models Better Data Creators | Code | 1
Following High-level Navigation Instructions on a Simulated Quadcopter with Imitation Learning | Code | 1
Kun: Answer Polishment for Chinese Self-Alignment with Instruction Back-Translation | Code | 1
Page 7 of 23

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | | Unverified
3 | GPT-4 | Inst-level loose-accuracy | 85.37 | | Unverified
4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | | Unverified
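The metric reported above, instruction-level loose accuracy, scores each verifiable instruction individually and counts it as followed if any relaxed variant of the response passes its check. The sketch below illustrates the idea; the function names and the particular set of loose variants are assumptions for illustration, not the benchmark's exact implementation.

```python
# Hypothetical sketch of instruction-level loose accuracy.
# Assumption: "loose" means an instruction passes if ANY relaxed
# variant of the response (e.g. markdown stripped, first or last
# line dropped) satisfies the check.

def loose_variants(response: str) -> list[str]:
    """Relaxed variants of a response: the original, asterisks
    stripped, and copies with the first or last line removed."""
    lines = response.splitlines()
    variants = [
        response,
        response.replace("*", ""),
        "\n".join(lines[1:]),
        "\n".join(lines[:-1]),
    ]
    return [v for v in variants if v]

def inst_level_loose_accuracy(responses, instruction_checks):
    """responses: one model output per prompt.
    instruction_checks: for each prompt, a list of predicates,
    each returning True if the response follows one instruction.
    Returns followed instructions / total instructions."""
    followed = total = 0
    for response, checks in zip(responses, instruction_checks):
        for check in checks:
            total += 1
            if any(check(v) for v in loose_variants(response)):
                followed += 1
    return followed / total if total else 0.0
```

For example, a response that satisfies one of two instructions for its prompt contributes 1/2 to the instruction-level tally, whereas a prompt-level metric would score the whole response as a failure.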