SOTAVerified

Instruction Following

Instruction following is a core capability of large language models. This task evaluates how well a model follows human instructions, with the goal of producing controllable and safe responses.

Papers

Showing 1051–1100 of 1135 papers

Title | Status | Hype
Instruction Mining: Instruction Data Selection for Tuning Large Language Models | — | 0
Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning | — | 0
Goal Representations for Instruction Following: A Semi-Supervised Language Interface to Control | — | 0
KITE: Keypoint-Conditioned Policies for Semantic Manipulation | — | 0
CorNav: Autonomous Agent with Self-Corrected Planning for Zero-Shot Vision-and-Language Navigation | — | 0
"Are you telling me to put glasses on the dog?" Content-Grounded Annotation of Instruction Clarification Requests in the CoDraw Dataset | — | 0
Controllable Text-to-Image Generation with GPT-4 | — | 0
A Reminder of its Brittleness: Language Reward Shaping May Hinder Learning for Instruction Following Agents | Code | 0
SAIL: Search-Augmented Instruction Learning | — | 0
A Monte Carlo Language Model Pipeline for Zero-Shot Sociopolitical Event Extraction | — | 0
Multi-Task Instruction Tuning of LLaMa for Specific Scenarios: A Preliminary Study on Writing Assistance | — | 0
Multimodal Web Navigation with Instruction-Finetuned Foundation Models | — | 0
Recommendation as Instruction Following: A Large Language Model Empowered Recommendation Approach | — | 0
Knowledge-enhanced Agents for Interactive Text Games | — | 0
Accessible Instruction-Following Agent | — | 0
Retrieval Augmented Chest X-Ray Report Generation using OpenAI GPT models | — | 0
Plan, Eliminate, and Track -- Language Models are Good Teachers for Embodied Agents | — | 0
A Comparative Study between Full-Parameter and LoRA-based Fine-Tuning on Chinese Instruction Data for Instruction Following Large Language Model | — | 0
Embodied Concept Learner: Self-supervised Learning of Concepts and Mapping through Instruction Following | — | 0
ChatGPT is a Knowledgeable but Inexperienced Solver: An Investigation of Commonsense Problem in Large Language Models | — | 0
Instruction Clarification Requests in Multimodal Collaborative Dialogue Games: Tasks, and an Analysis of the CoDraw Dataset | Code | 0
Natural Language-conditioned Reinforcement Learning with Inside-out Task Language Development and Translation | — | 0
Exploiting Programmatic Behavior of LLMs: Dual-Use Through Standard Security Attacks | — | 0
Distilling Internet-Scale Vision-Language Models into Embodied Agents | — | 0
Multimodal Sequential Generative Models for Semi-Supervised Language Instruction Following | — | 0
Continual Learning for Instruction Following from Realtime Feedback | Code | 0
CLIP-Nav: Using CLIP for Zero-Shot Vision-and-Language Navigation | — | 0
UGIF: UI Grounded Instruction Following | — | 0
Learning to Follow Instructions in Text-Based Games | Code | 0
Prompter: Utilizing Large Language Model Prompting for a Data Efficient Embodied Instruction Following | — | 0
Don't Copy the Teacher: Data and Model Challenges in Embodied Dialogue | Code | 0
A New Path: Scaling Vision-and-Language Navigation with Synthetic Instructions and Imitation Learning | — | 0
Iterative Vision-and-Language Navigation | — | 0
Language Models are General-Purpose Interfaces | — | 0
GoalNet: Inferring Conjunctive Goal Predicates from Human Plan Demonstrations for Robot Instruction Following | Code | 0
Summarizing a virtual robot's past actions in natural language | — | 0
Compositionality as Lexical Symmetry | Code | 0
Less is More: Generating Grounded Navigation Instructions from Landmarks | — | 0
Explicit Object Relation Alignment for Vision and Language Navigation | — | 0
Skill Induction and Planning with Latent Language | — | 0
Compositional Data and Task Augmentation for Instruction Following | — | 0
Improving the Robustness to Variations of Objects and Instructions with a Neuro-Symbolic Approach for Interactive Instruction Following | — | 0
Hierarchical Modular Framework for Long Horizon Instruction Following | Code | 0
Procedures as Programs: Hierarchical Control of Situated Agents through Natural Language | — | 0
Analysis of Language Change in Collaborative Instruction Following | Code | 0
Modular Framework for Visuomotor Language Grounding | — | 0
Improving Coherence and Consistency in Neural Sequence Models with Dual-System, Neuro-Symbolic Reasoning | — | 0
Draw Me a Flower: Processing and Grounding Abstraction in Natural Language | — | 0
Zero-shot Task Adaptation using Natural Language | — | 0
Look Wide and Interpret Twice: Improving Performance on Interactive Instruction-following Tasks | Code | 0
Page 22 of 23

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | — | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | — | Unverified
3 | GPT-4 | Inst-level loose-accuracy | 85.37 | — | Unverified
4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | — | Unverified
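The "Inst-level loose-accuracy" metric above comes from IFEval-style evaluation: each prompt carries one or more automatically verifiable instructions, and instruction-level accuracy is the fraction of individual instructions a model's responses satisfy. The "loose" variant re-checks lightly normalized copies of a response (e.g., with markdown emphasis stripped or a leading/trailing line removed) and counts an instruction as followed if any variant passes. The sketch below illustrates the idea; the checker functions and normalization choices are illustrative assumptions, not IFEval's actual implementation.

```python
# Sketch of instruction-level "loose" accuracy scoring (IFEval-style).
# Checkers and normalizations here are illustrative stand-ins.

def check_lowercase(text):
    """Instruction: respond entirely in lowercase."""
    return text == text.lower()

def check_word_count_ge(n):
    """Instruction: respond with at least n words."""
    return lambda text: len(text.split()) >= n

def loose_variants(text):
    """'Loose' scoring retries with lightly normalized response variants."""
    lines = text.splitlines()
    variants = [text, text.replace("*", "")]  # strip markdown emphasis
    if len(lines) > 1:
        variants.append("\n".join(lines[1:]))   # drop a leading intro line
        variants.append("\n".join(lines[:-1]))  # drop a trailing outro line
    return variants

def inst_level_loose_accuracy(responses_with_checks):
    """Fraction of individual instructions satisfied by any loose variant."""
    followed = total = 0
    for response, checks in responses_with_checks:
        for check in checks:
            total += 1
            if any(check(v) for v in loose_variants(response)):
                followed += 1
    return followed / total

data = [
    # Passes both instructions as-is.
    ("**here is my answer in ten short words exactly ok**",
     [check_lowercase, check_word_count_ge(5)]),
    # Fails strictly (capitalized first line) but passes loosely
    # once the leading line is dropped.
    ("Title line\nall lowercase body text follows here",
     [check_lowercase]),
    # Fails under every variant.
    ("THIS RESPONSE IGNORES THE INSTRUCTION",
     [check_lowercase]),
]
print(inst_level_loose_accuracy(data))  # → 0.75 (3 of 4 instructions)
```

Strict accuracy would score only the raw response, so the second example above illustrates exactly where loose and strict scores diverge.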