SOTAVerified

Instruction Following

Instruction following is a fundamental capability of large language models. This task evaluates how well a model follows human instructions, with the goal of producing responses that are controllable and safe.

Papers

Showing 801–850 of 1135 papers

Title | Status | Hype
CROME: Cross-Modal Adapters for Efficient Multimodal LLM | - | 0
IFShip: Interpretable Fine-grained Ship Classification with Domain Knowledge-Enhanced Vision-Language Models | Code | 0
Space-LLaVA: a Vision-Language Model Adapted to Extraterrestrial Applications | - | 0
Creating Arabic LLM Prompts at Scale | - | 0
LLaVA-VSD: Large Language-and-Vision Assistant for Visual Spatial Description | Code | 0
Empirical Analysis of Large Vision-Language Models against Goal Hijacking via Visual Prompt Injection | - | 0
EXAONE 3.0 7.8B Instruction Tuned Language Model | - | 0
A Framework for Fine-Tuning LLMs using Heterogeneous Feedback | - | 0
Semantic Skill Grounding for Embodied Instruction-Following in Cross-Domain Environments | - | 0
Dancing in Chains: Reconciling Instruction Following and Faithfulness in Language Models | Code | 0
SaulLM-54B & SaulLM-141B: Scaling Up Domain Adaptation for the Legal Domain | - | 0
PrimeGuard: Safe and Helpful LLMs through Tuning-Free Routing | Code | 0
HAPFI: History-Aware Planning based on Fused Information | - | 0
Failures to Find Transferable Image Jailbreaks Between Vision-Language Models | - | 0
ChatQA 2: Bridging the Gap to Proprietary LLMs in Long Context and RAG Capabilities | - | 0
Empowering Persian LLMs for Instruction Following: A Novel Dataset and Training Approach | Code | 0
Situated Instruction Following | - | 0
Instruction Following with Goal-Conditioned Reinforcement Learning in Virtual Environments | Code | 0
Beyond Instruction Following: Evaluating Inferential Rule Following of Large Language Models | - | 0
LVLM-empowered Multi-modal Representation Learning for Visual Place Recognition | - | 0
From Loops to Oops: Fallback Behaviors of Language Models Under Uncertainty | Code | 0
Large Language Model as an Assignment Evaluator: Insights, Feedback, and Challenges in a 1000+ Student Course | - | 0
Diverse and Fine-Grained Instruction-Following Ability Exploration with Synthetic Data | - | 0
Semantic Graphs for Syntactic Simplification: A Revisit from the Age of LLM | Code | 0
Pelican: Correcting Hallucination in Vision-LLMs via Claim Decomposition and Program of Thought Verification | - | 0
D-Rax: Domain-specific Radiologic assistant leveraging multi-modal data and eXpert model predictions | - | 0
Improving Multilingual Instruction Finetuning via Linguistically Natural and Diverse Datasets | - | 0
Iterative Data Generation with Large Language Models for Aspect-based Sentiment Analysis | - | 0
ScaleBiO: Scalable Bilevel Optimization for LLM Data Reweighting | - | 0
DeSTA: Enhancing Speech Language Models through Descriptive Speech-Text Alignment | - | 0
OmniJARVIS: Unified Vision-Language-Action Tokenization Enables Open-World Instruction Following Agents | - | 0
Role-Play Zero-Shot Prompting with Large Language Models for Open-Domain Human-Machine Conversation | - | 0
A Text is Worth Several Tokens: Text Embedding from LLMs Secretly Aligns Well with The Key Tokens | - | 0
Following Length Constraints in Instructions | - | 0
Evaluation of Instruction-Following Ability for Large Language Models on Story-Ending Generation | - | 0
Teach Better or Show Smarter? On Instructions and Exemplars in Automatic Prompt Optimization | - | 0
AdaGrad under Anisotropic Smoothness | - | 0
DEM: Distribution Edited Model for Training with Mixed Data Distributions | - | 0
IWISDM: Assessing instruction following in multimodal models at scale | Code | 0
VLM Agents Generate Their Own Memories: Distilling Experience into Embodied Programs of Thought | - | 0
Biomedical Visual Instruction Tuning with Clinician Preference Alignment | Code | 0
The Comparative Trap: Pairwise Comparisons Amplifies Biased Preferences of LLM Evaluators | - | 0
Opt-Out: Investigating Entity-Level Unlearning for Large Language Models via Optimal Transport | Code | 0
Unveiling the Flaws: Exploring Imperfections in Synthetic Data and Mitigation Strategies for Large Language Models | - | 0
Refine Large Language Model Fine-tuning via Instruction Vector | - | 0
Embodied Instruction Following in Unknown Environments | - | 0
Enhancing and Assessing Instruction-Following with Fine-Grained Instruction Variants | - | 0
How Far Can In-Context Alignment Go? Exploring the State of In-Context Alignment | - | 0
Generative Visual Instruction Tuning | Code | 0
Grade Score: Quantifying LLM Performance in Option Selection | Code | 0
Page 17 of 23

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | - | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | - | Unverified
3 | GPT-4 | Inst-level loose-accuracy | 85.37 | - | Unverified
4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | - | Unverified
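The "Inst-level loose-accuracy" metric above comes from IFEval-style evaluation: each prompt bundles several verifiable instructions (e.g. word limits, forbidden characters), and a response scores per instruction, where "loose" means the check also passes if any relaxed variant of the response (markdown stripped, a boilerplate first or last line dropped) satisfies it. The sketch below is an illustrative simplification under those assumptions, not the benchmark's actual implementation; the checker functions and sample responses are hypothetical.

```python
# Minimal sketch of instruction-level "loose" accuracy, IFEval-style.
# Assumption: real loose evaluation only drops first/last lines that look
# like boilerplate and combines transformations; here we apply each
# relaxation unconditionally for brevity.

def loose_variants(response: str):
    """Relaxed forms of a response that loose evaluation also accepts."""
    lines = response.splitlines()
    return [
        response,
        response.replace("*", ""),   # strip markdown emphasis markers
        "\n".join(lines[1:]),        # drop a leading boilerplate line
        "\n".join(lines[:-1]),       # drop a trailing boilerplate line
    ]

def instruction_level_loose_accuracy(samples):
    """samples: list of (response, [checker, ...]); each checker maps a
    response string to True/False. Returns accuracy as a percentage over
    individual instructions, not over whole prompts."""
    followed = total = 0
    for response, checkers in samples:
        for check in checkers:
            total += 1
            # an instruction counts as followed if ANY loose variant passes
            if any(check(v) for v in loose_variants(response)):
                followed += 1
    return 100.0 * followed / total

# Toy usage with two hypothetical verifiable instructions per prompt.
max_100_words = lambda r: len(r.split()) <= 100
no_commas = lambda r: "," not in r

samples = [
    ("Short answer without commas.", [max_100_words, no_commas]),
    ("Sure, here you go:\n*Great,* answer", [max_100_words, no_commas]),
]
print(instruction_level_loose_accuracy(samples))  # 75.0
```

Scoring at the instruction level rather than the prompt level is why these numbers run high: a response that satisfies three of four bundled instructions still earns 75% credit instead of zero.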