SOTAVerified

Instruction Following

Instruction following is a foundational capability of large language models. This task evaluates how faithfully a model carries out human instructions (for example, "answer in exactly three bullet points"), with the goal of producing controllable and safe responses.

Papers

Showing 1001–1050 of 1135 papers

Title | Status | Hype
LLaVA Steering: Visual Instruction Tuning with 500x Fewer Parameters through Modality Linear Representation-Steering | Code | 0
Bayesian Calibration of Win Rate Estimation with LLM Evaluators | Code | 0
Teaching Llama a New Language Through Cross-Lingual Knowledge Transfer | Code | 0
DoG-Instruct: Towards Premium Instruction-Tuning Data via Text-Grounded Instruction Wrapping | Code | 0
LLM as Dataset Analyst: Subpopulation Structure Discovery with Large Language Model | Code | 0
HalLoc: Token-level Localization of Hallucinations for Vision Language Models | Code | 0
FALCON: Feedback-driven Adaptive Long/short-term memory reinforced Coding Optimization system | Code | 0
Empowering Persian LLMs for Instruction Following: A Novel Dataset and Training Approach | Code | 0
Towards Robust Instruction Tuning on Multimodal Large Language Models | Code | 0
MuSC: Improving Complex Instruction Following with Multi-granularity Self-Contrastive Training | Code | 0
InstructAny2Pix: Flexible Visual Editing via Multimodal Instruction Following | Code | 0
A safety realignment framework via subspace-oriented model fusion for large language models | Code | 0
Quantifying Self-diagnostic Atomic Knowledge in Chinese Medical Foundation Model: A Computational Analysis | Code | 0
NatSGLD: A Dataset with Speech, Gesture, Logic, and Demonstration for Robot Learning in Natural Human-Robot Interaction | Code | 0
Instruction Clarification Requests in Multimodal Collaborative Dialogue Games: Tasks, and an Analysis of the CoDraw Dataset | Code | 0
Analysis of Language Change in Collaborative Instruction Following | Code | 0
T-REG: Preference Optimization with Token-Level Reward Regularization | Code | 0
Spatial Language Understanding for Object Search in Partially Observed City-scale Environments | Code | 0
GoalNet: Inferring Conjunctive Goal Predicates from Human Plan Demonstrations for Robot Instruction Following | Code | 0
How You Prompt Matters! Even Task-Oriented Constraints in Instructions Affect LLM-Generated Text Detection | Code | 0
Localized Symbolic Knowledge Distillation for Visual Commonsense Models | Code | 0
Hierarchical Modular Framework for Long Horizon Instruction Following | Code | 0
Instruction Following with Goal-Conditioned Reinforcement Learning in Virtual Environments | Code | 0
Aligners: Decoupling LLMs and Alignment | Code | 0
AdaPPA: Adaptive Position Pre-Fill Jailbreak Attack Approach Targeting LLMs | Code | 0
Instruction Makes a Difference | Code | 0
How to Leverage Demonstration Data in Alignment for Large Language Model? A Self-Imitation Learning Perspective | Code | 0
LoLDU: Low-Rank Adaptation via Lower-Diag-Upper Decomposition for Parameter-Efficient Fine-Tuning | Code | 0
Align^2LLaVA: Cascaded Human and Large Language Model Preference Alignment for Multi-modal Instruction Curation | Code | 0
Automated curriculum generation for Policy Gradients from Demonstrations | Code | 0
FFT: Towards Harmlessness Evaluation and Analysis for LLMs with Factuality, Fairness, Toxicity | Code | 0
Adversarial Moment-Matching Distillation of Large Language Models | Code | 0
Instruct-SkillMix: A Powerful Pipeline for LLM Instruction Tuning | Code | 0
Rate, Explain and Cite (REC): Enhanced Explanation and Attribution in Automatic Evaluation by Large Language Models | Code | 0
InstUPR: Instruction-based Unsupervised Passage Reranking with Large Language Models | Code | 0
Zero-shot LLM-guided Counterfactual Generation: A Case Study on NLP Model Evaluation | Code | 0
Do LLMs estimate uncertainty well in instruction-following? | Code | 0
Look Wide and Interpret Twice: Improving Performance on Interactive Instruction-following Tasks | Code | 0
Internal Causal Mechanisms Robustly Predict Language Model Out-of-Distribution Behaviors | Code | 0
WildIFEval: Instruction Following in the Wild | Code | 0
Evaluating Judges as Evaluators: The JETTS Benchmark of LLM-as-Judges as Test-Time Scaling Evaluators | Code | 0
Sloth: scaling laws for LLM skills to predict multi-benchmark performance across families | Code | 0
How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities | Code | 0
TextGames: Learning to Self-Play Text-Based Puzzle Games via Language Model Reasoning | Code | 0
Token-Efficient Leverage Learning in Large Language Models | Code | 0
Find the Intention of Instruction: Comprehensive Evaluation of Instruction Understanding for Large Language Models | Code | 0
Generalization Analogies: A Testbed for Generalizing AI Oversight to Hard-To-Measure Domains | Code | 0
Opt-Out: Investigating Entity-Level Unlearning for Large Language Models via Optimal Transport | Code | 0
TF1-EN-3M: Three Million Synthetic Moral Fables for Training Small, Open Language Models | Code | 0
Evaluating the Instruction-following Abilities of Language Models using Knowledge Tasks | Code | 0
Page 21 of 23

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | - | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | - | Unverified
3 | GPT-4 | Inst-level loose-accuracy | 85.37 | - | Unverified
4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | - | Unverified
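
The metric reported above, instruction-level loose accuracy, scores each verifiable instruction in a prompt separately and credits a response if any relaxed variant of it passes the check, in the style of IFEval-type evaluation. The sketch below illustrates that scoring scheme; the relaxation transformations and checker functions are illustrative assumptions, not the official implementation.

```python
# Minimal sketch of instruction-level loose-accuracy scoring, assuming an
# IFEval-style protocol: each prompt carries one or more automatically
# verifiable instructions, and a "loose" pass accepts a response if any
# relaxed variant of it satisfies the check.
from typing import Callable, Iterable, List


def loose_variants(response: str) -> List[str]:
    """Relaxed views of a response: raw text, markdown asterisks stripped,
    and the first or last line dropped (to forgive preambles/sign-offs)."""
    lines = response.splitlines()
    variants = [
        response,
        response.replace("*", ""),
        "\n".join(lines[1:]),
        "\n".join(lines[:-1]),
    ]
    return [v for v in variants if v.strip()]


def inst_level_loose_accuracy(
    responses: Iterable[str],
    checkers: Iterable[List[Callable[[str], bool]]],
) -> float:
    """Fraction of individual instructions satisfied by at least one
    loose variant of the corresponding response."""
    followed, total = 0, 0
    for response, checks in zip(responses, checkers):
        for check in checks:
            total += 1
            if any(check(v) for v in loose_variants(response)):
                followed += 1
    return followed / total if total else 0.0


# One prompt with two verifiable instructions: a length floor and a keyword.
checks = [[
    lambda r: len(r.split()) >= 5,
    lambda r: "summary" in r.lower(),
]]
responses = ["**Summary:** the plan has five clear steps."]
print(inst_level_loose_accuracy(responses, checks))  # 1.0
```

A prompt-level variant of the same metric requires every instruction in a prompt to pass before the prompt counts, so instruction-level numbers like those in the table typically run higher.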