SOTAVerified

Instruction Following

Instruction following is a fundamental capability of large language models. This task evaluates how well a model follows human instructions, with the goal of producing responses that are both controllable and safe.

Papers

Showing 951–1000 of 1135 papers

| Title | Status | Hype |
| --- | --- | --- |
| SAIF: A Sparse Autoencoder Framework for Interpreting and Steering Instruction Following of Language Models | | 0 |
| SAIL: Search-Augmented Instruction Learning | | 0 |
| SAM-E: Leveraging Visual Foundation Model with Sequence Imitation for Embodied Manipulation | | 0 |
| SaulLM-54B & SaulLM-141B: Scaling Up Domain Adaptation for the Legal Domain | | 0 |
| Scalable Ensembling For Mitigating Reward Overoptimisation | | 0 |
| Scalable Vision Language Model Training via High Quality Data Curation | | 0 |
| ScaleBiO: Scalable Bilevel Optimization for LLM Data Reweighting | | 0 |
| Video Instruction Tuning With Synthetic Data | | 0 |
| Video Unlearning via Low-Rank Refusal Vector | | 0 |
| Argument Quality Assessment in the Age of Instruction-Following Large Language Models | | 0 |
| VidHalluc: Evaluating Temporal Hallucinations in Multimodal Large Language Models for Video Understanding | | 0 |
| HelpSteer3-Preference: Open Human-Annotated Preference Data across Diverse Tasks and Languages | | 0 |
| "Are you telling me to put glasses on the dog?" Content-Grounded Annotation of Instruction Clarification Requests in the CoDraw Dataset | | 0 |
| X-VILA: Cross-Modality Alignment for Large Language Model | | 0 |
| SeedEdit 3.0: Fast and High-Quality Generative Image Editing | | 0 |
| Are You Human? An Adversarial Benchmark to Expose LLMs | | 0 |
| Vision-Language Models Provide Promptable Representations for Reinforcement Learning | | 0 |
| Are We There Yet? Learning to Localize in Embodied Instruction Following | | 0 |
| VISTA: Enhancing Long-Duration and High-Resolution Video Understanding by Video Spatiotemporal Augmentation | | 0 |
| Self-Boosting Large Language Models with Synthetic Preference Data | | 0 |
| Self-Corrected Multimodal Large Language Model for End-to-End Robot Manipulation | | 0 |
| Self-driven Grounding: Large Language Model Agents with Automatical Language-aligned Skill Learning | | 0 |
| Self-Educated Language Agent with Hindsight Experience Replay for Instruction Following | | 0 |
| HIGhER: Improving instruction following with Hindsight Generation for Experience Replay | | 0 |
| Identifying Reliable Evaluation Metrics for Scientific Text Revision | Code | 0 |
| Building Accurate Translation-Tailored LLMs with Language Aware Instruction Tuning | Code | 0 |
| ASMA-Tune: Unlocking LLMs' Assembly Code Comprehension via Structural-Semantic Instruction Tuning | Code | 0 |
| IFShip: Interpretable Fine-grained Ship Classification with Domain Knowledge-Enhanced Vision-Language Models | Code | 0 |
| CommonIT: Commonality-Aware Instruction Tuning for Large Language Models via Data Partitions | Code | 0 |
| PACIT: Unlocking the Power of Examples for Better In-Context Instruction Tuning | Code | 0 |
| Multi-Level Compositional Reasoning for Interactive Instruction Following | Code | 0 |
| Implicit Cross-Lingual Rewarding for Efficient Multilingual Preference Alignment | Code | 0 |
| Preference-Guided Reflective Sampling for Aligning Language Models | Code | 0 |
| Improving Instruction Following in Language Models through Proxy-Based Uncertainty Estimation | Code | 0 |
| Unintended Impacts of LLM Alignment on Global Representation | Code | 0 |
| Third-Party Language Model Performance Prediction from Instruction | Code | 0 |
| CoEvol: Constructing Better Responses for Instruction Finetuning through Multi-Agent Cooperation | Code | 0 |
| Pre-Learning Environment Representations for Data-Efficient Neural Instruction Following | Code | 0 |
| Aligning Large Language Models by On-Policy Self-Judgment | Code | 0 |
| PrimeGuard: Safe and Helpful LLMs through Tuning-Free Routing | Code | 0 |
| Empowering Source-Free Domain Adaptation with MLLM-driven Curriculum Learning | Code | 0 |
| Taking Action Towards Graceful Interaction: The Effects of Performing Actions on Modelling Policies for Instruction Clarification Requests | Code | 0 |
| CoDe: Blockwise Control for Denoising Diffusion Models | Code | 0 |
| LLaVA-Pose: Enhancing Human Pose and Action Understanding via Keypoint-Integrated Instruction Tuning | Code | 0 |
| What Prompts Don't Say: Understanding and Managing Underspecification in LLM Prompts | Code | 0 |
| CoDa: Constrained Generation based Data Augmentation for Low-Resource NLP | Code | 0 |
| ProgCo: Program Helps Self-Correction of Large Language Models | Code | 0 |
| LLaVA-VSD: Large Language-and-Vision Assistant for Visual Spatial Description | Code | 0 |
| Toward Zero-Shot Instruction Following | Code | 0 |
| IndiVec: An Exploration of Leveraging Large Language Models for Media Bias Detection with Fine-Grained Bias Indicators | Code | 0 |
Page 20 of 23

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | | Unverified |
| 2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | | Unverified |
| 3 | GPT-4 | Inst-level loose-accuracy | 85.37 | | Unverified |
| 4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | | Unverified |
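The metric reported above, instruction-level loose accuracy, comes from IFEval-style evaluation: each prompt carries one or more automatically verifiable instructions, and "loose" scoring also accepts relaxed variants of the response (e.g. with markdown emphasis stripped or a leading/trailing line removed). The following is a minimal sketch of that idea, not the official scoring code; the verifier callables and the particular set of relaxed variants are illustrative assumptions.

```python
# Sketch of instruction-level "loose" accuracy, IFEval-style.
# Assumptions (not from this page): each example is a (response,
# [verifier, ...]) pair, where each verifier is a callable that
# checks one instruction against a response string.

def loose_variants(response: str):
    """Yield the response plus relaxed variants used by loose scoring
    (hypothetical set: markdown asterisks stripped, first/last line dropped)."""
    lines = response.splitlines()
    yield response
    yield response.replace("*", "")     # strip markdown emphasis
    if len(lines) > 1:
        yield "\n".join(lines[1:])      # drop first line
        yield "\n".join(lines[:-1])     # drop last line

def inst_level_loose_accuracy(examples):
    """Fraction (in %) of individual instructions satisfied by at least
    one loose variant of the corresponding response."""
    followed = total = 0
    for response, verifiers in examples:
        for check in verifiers:
            total += 1
            if any(check(v) for v in loose_variants(response)):
                followed += 1
    return 100.0 * followed / total if total else 0.0

# Toy usage with hypothetical verifiers:
examples = [
    ("*Hello world*\nBye", [lambda r: "Hello" in r,
                            lambda r: "*" not in r]),  # passes only loosely
    ("OK", [lambda r: len(r.split()) <= 5]),
]
print(inst_level_loose_accuracy(examples))  # → 100.0
```

Note that loose scoring is an upper bound on strict scoring: strict accuracy would check only the unmodified response, so the second verifier above (no asterisks) would fail strictly but pass loosely.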