SOTAVerified

Instruction Following

Instruction following is a foundational capability of large language models. This task evaluates how well a model follows human instructions, with the goal of producing controllable and safe responses.

Papers

Showing 976–1000 of 1135 papers

Title | Status | Hype
Building Accurate Translation-Tailored LLMs with Language Aware Instruction Tuning | Code | 0
ASMA-Tune: Unlocking LLMs' Assembly Code Comprehension via Structural-Semantic Instruction Tuning | Code | 0
IFShip: Interpretable Fine-grained Ship Classification with Domain Knowledge-Enhanced Vision-Language Models | Code | 0
CommonIT: Commonality-Aware Instruction Tuning for Large Language Models via Data Partitions | Code | 0
PACIT: Unlocking the Power of Examples for Better In-Context Instruction Tuning | Code | 0
Multi-Level Compositional Reasoning for Interactive Instruction Following | Code | 0
Implicit Cross-Lingual Rewarding for Efficient Multilingual Preference Alignment | Code | 0
Preference-Guided Reflective Sampling for Aligning Language Models | Code | 0
Improving Instruction Following in Language Models through Proxy-Based Uncertainty Estimation | Code | 0
Unintended Impacts of LLM Alignment on Global Representation | Code | 0
Third-Party Language Model Performance Prediction from Instruction | Code | 0
CoEvol: Constructing Better Responses for Instruction Finetuning through Multi-Agent Cooperation | Code | 0
Pre-Learning Environment Representations for Data-Efficient Neural Instruction Following | Code | 0
Aligning Large Language Models by On-Policy Self-Judgment | Code | 0
PrimeGuard: Safe and Helpful LLMs through Tuning-Free Routing | Code | 0
Empowering Source-Free Domain Adaptation with MLLM-driven Curriculum Learning | Code | 0
Taking Action Towards Graceful Interaction: The Effects of Performing Actions on Modelling Policies for Instruction Clarification Requests | Code | 0
CoDe: Blockwise Control for Denoising Diffusion Models | Code | 0
LLaVA-Pose: Enhancing Human Pose and Action Understanding via Keypoint-Integrated Instruction Tuning | Code | 0
What Prompts Don't Say: Understanding and Managing Underspecification in LLM Prompts | Code | 0
CoDa: Constrained Generation based Data Augmentation for Low-Resource NLP | Code | 0
ProgCo: Program Helps Self-Correction of Large Language Models | Code | 0
LLaVA-VSD: Large Language-and-Vision Assistant for Visual Spatial Description | Code | 0
Toward Zero-Shot Instruction Following | Code | 0
IndiVec: An Exploration of Leveraging Large Language Models for Media Bias Detection with Fine-Grained Bias Indicators | Code | 0
Page 40 of 46

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | — | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | — | Unverified
3 | GPT-4 | Inst-level loose-accuracy | 85.37 | — | Unverified
4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | — | Unverified
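The "Inst-level loose-accuracy" metric reported above can be illustrated with a minimal sketch. This is an assumption about how such a metric is typically computed in IFEval-style evaluations, not this benchmark's actual implementation: each prompt carries one or more verifiable instructions, and an instruction counts as satisfied if the response, or a relaxed variant of it (e.g. markdown stripped, leading/trailing lines dropped), passes the check. All function names and transformations here are illustrative.

```python
# Hedged sketch: instruction-level "loose" accuracy.
# The relaxation rules and helper names are assumptions for illustration.

def loose_variants(response: str):
    """Return relaxed variants of a response (assumed transformations)."""
    lines = response.splitlines()
    return [
        response,
        response.replace("*", ""),  # strip markdown emphasis markers
        "\n".join(lines[1:]),       # drop a possible boilerplate first line
        "\n".join(lines[:-1]),      # drop a possible sign-off last line
    ]

def inst_level_loose_accuracy(instruction_checks, responses):
    """Percentage of individual instructions satisfied by any loose variant.

    `instruction_checks` is a list of lists: each prompt may carry several
    verifiable instructions, each expressed as a predicate over the text.
    """
    passed = total = 0
    for checks, response in zip(instruction_checks, responses):
        for check in checks:
            total += 1
            if any(check(v) for v in loose_variants(response)):
                passed += 1
    return 100.0 * passed / total if total else 0.0

# Toy usage: two prompts carrying three verifiable instructions in total.
checks = [
    [lambda r: len(r.split()) >= 3,   # "answer in at least 3 words"
     lambda r: "," not in r],         # "do not use commas"
    [lambda r: r.strip().endswith(".")],  # "end with a period"
]
responses = ["**a short answer here**", "Done."]
print(inst_level_loose_accuracy(checks, responses))  # → 100.0
```

Because the denominator is the number of individual instructions rather than prompts, a response that satisfies two of three instructions on one prompt still earns partial credit, which is why instruction-level scores typically run higher than prompt-level ones.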