SOTAVerified

Instruction Following

Instruction following is a core capability of large language models. This task evaluates how well a model follows human instructions, with the goal of generating controllable and safe responses.

Papers

Showing 651-675 of 1135 papers

| Title | Status | Hype |
|---|---|---|
| SLADE: Shielding against Dual Exploits in Large Vision-Language Models |  | 0 |
| HSI-GPT: A General-Purpose Large Scene-Motion-Language Model for Human Scene Interaction |  | 0 |
| Hindsight Planner: A Closed-Loop Few-Shot Planner for Embodied Instruction Following |  | 0 |
| Find the Intention of Instruction: Comprehensive Evaluation of Instruction Understanding for Large Language Models | Code | 0 |
| Internalized Self-Correction for Large Language Models |  | 0 |
| LearnLM: Improving Gemini for Learning |  | 0 |
| HREF: Human Response-Guided Evaluation of Instruction Following in Language Models | Code | 0 |
| Systematic Evaluation of Long-Context LLMs on Financial Concepts |  | 0 |
| Length Controlled Generation for Black-box LLMs |  | 0 |
| Pipeline Analysis for Developing Instruct LLMs in Low-Resource Languages: A Case Study on Basque |  | 0 |
| MetaMorph: Multimodal Understanding and Generation via Instruction Tuning |  | 0 |
| A Systematic Examination of Preference Learning through the Lens of Instruction-Following |  | 0 |
| Question: How do Large Language Models perform on the Question Answering tasks? Answer: |  | 0 |
| LLaVA Steering: Visual Instruction Tuning with 500x Fewer Parameters through Modality Linear Representation-Steering | Code | 0 |
| Empowering LLMs to Understand and Generate Complex Vector Graphics |  | 0 |
| ChipAlign: Instruction Alignment in Large Language Models for Chip Design via Geodesic Interpolation |  | 0 |
| Leveraging Large Vision-Language Model as User Intent-aware Encoder for Composed Image Retrieval |  | 0 |
| VLR-Bench: Multilingual Benchmark Dataset for Vision-Language Retrieval Augmented Generation |  | 0 |
| EasyRef: Omni-Generalized Group Image Reference for Diffusion Models via Multimodal LLM |  | 0 |
| LLaVA-Zip: Adaptive Visual Token Compression with Intrinsic Image Information |  | 0 |
| SmolTulu: Higher Learning Rate to Batch Size Ratios Can Lead to Better Reasoning in SLMs |  | 0 |
| Sloth: scaling laws for LLM skills to predict multi-benchmark performance across families | Code | 0 |
| PediaBench: A Comprehensive Chinese Pediatric Dataset for Benchmarking Large Language Models | Code | 0 |
| LLMs for Generalizable Language-Conditioned Policy Learning under Minimal Data Requirements |  | 0 |
| GROOT-2: Weakly Supervised Multi-Modal Instruction Following Agents |  | 0 |
Page 27 of 46

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 |  | Unverified |
| 2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 |  | Unverified |
| 3 | GPT-4 | Inst-level loose-accuracy | 85.37 |  | Unverified |
| 4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 |  | Unverified |
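
The metric above, "Inst-level loose-accuracy", is the instruction-level score popularized by IFEval: every verifiable instruction in every prompt is checked individually, and under the "loose" criterion a response also passes if it succeeds after mild normalizations (e.g. stripping markdown emphasis or dropping a preamble or sign-off line). The following is a minimal sketch of how such a score can be computed, assuming a list of per-instruction checker functions; the `loose_variants` helper and the toy checkers are hypothetical illustrations, not the official IFEval implementation.

```python
def loose_variants(response: str) -> list[str]:
    """Relaxed views of a response, in the spirit of IFEval's 'loose'
    criterion: strip markdown emphasis, and drop a possible preamble
    (first line) or sign-off (last line)."""
    lines = response.splitlines()
    variants = [response, response.replace("*", "")]
    if len(lines) > 1:
        variants.append("\n".join(lines[1:]))   # drop first line
        variants.append("\n".join(lines[:-1]))  # drop last line
    return variants


def inst_level_loose_accuracy(records) -> float:
    """records: iterable of (response, checkers) pairs, where checkers
    is a list of callables str -> bool, one per verifiable instruction.
    An instruction counts as followed if ANY loose variant passes."""
    followed = total = 0
    for response, checkers in records:
        variants = loose_variants(response)
        for check in checkers:
            total += 1
            if any(check(v) for v in variants):
                followed += 1
    return followed / total if total else 0.0


# Toy usage with two hypothetical instruction checkers.
records = [
    ("Sure! Here you go:\nHELLO WORLD",
     [lambda r: r.isupper(),              # instruction: respond in all caps
      lambda r: len(r.split()) >= 2]),    # instruction: use at least two words
]
print(f"Inst-level loose accuracy: {inst_level_loose_accuracy(records):.2%}")
```

In this toy example the all-caps check fails on the raw response because of the conversational preamble, but passes once the first line is dropped, which is exactly the leniency that distinguishes loose from strict instruction-level accuracy.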