SOTAVerified

Instruction Following

Instruction following is a fundamental capability of large language models. This task evaluates how well a model adheres to human instructions, with the goal of producing controllable and safe responses.
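In practice, benchmarks in this area often score a response with rule-based checkers, one per verifiable instruction attached to the prompt, and report the fraction of instructions followed. A minimal sketch of that scoring scheme, with hypothetical checker functions (the checks and names below are illustrative assumptions, not any benchmark's actual implementation):

```python
# Hypothetical rule-based checkers: each verifies one verifiable
# instruction against the model's response.
def check_word_count(response, min_words):
    return len(response.split()) >= min_words

def check_no_commas(response):
    return "," not in response

def instruction_level_accuracy(responses, instruction_sets):
    """Fraction of individual instructions followed across all prompts.

    `instruction_sets[i]` is the list of checker callables for prompt i.
    """
    followed = total = 0
    for response, checks in zip(responses, instruction_sets):
        for check in checks:
            followed += bool(check(response))
            total += 1
    return followed / total if total else 0.0

# Toy usage: one response whose prompt carried two instructions.
resp = "Short answer without commas"
checks = [check_no_commas, lambda r: check_word_count(r, 3)]
print(instruction_level_accuracy([resp], [checks]))  # 1.0
```

Instruction-level scoring (counting each instruction separately) is more forgiving than prompt-level scoring, where a response only counts if every attached instruction passes.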

Papers

Showing 1051–1100 of 1135 papers

Title | Status | Hype
UGIF: UI Grounded Instruction Following |  | 0
Learning to Follow Instructions in Text-Based Games | Code | 0
Prompter: Utilizing Large Language Model Prompting for a Data Efficient Embodied Instruction Following |  | 0
Instruction-Following Agents with Multimodal Transformer | Code | 1
DANLI: Deliberative Agent for Following Natural Language Instructions | Code | 1
Don't Copy the Teacher: Data and Model Challenges in Embodied Dialogue | Code | 0
Efficiently Enhancing Zero-Shot Performance of Instruction Following Model via Retrieval of Soft Prompt | Code | 1
A New Path: Scaling Vision-and-Language Navigation with Synthetic Instructions and Imitation Learning |  | 0
Iterative Vision-and-Language Navigation |  | 0
LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action | Code | 2
Language Models are General-Purpose Interfaces |  | 0
GoalNet: Inferring Conjunctive Goal Predicates from Human Plan Demonstrations for Robot Instruction Following | Code | 0
Engineering flexible machine learning systems by traversing functionally-invariant paths | Code | 1
Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks | Code | 3
Inferring Rewards from Language in Context | Code | 1
Counterfactual Cycle-Consistent Learning for Instruction Following and Generation in Vision-Language Navigation | Code | 1
Summarizing a virtual robot's past actions in natural language |  | 0
Combining Modular Skills in Multitask Learning | Code | 1
DialFRED: Dialogue-Enabled Agents for Embodied Instruction Following | Code | 1
Compositionality as Lexical Symmetry | Code | 0
Less is More: Generating Grounded Navigation Instructions from Landmarks |  | 0
Explicit Object Relation Alignment for Vision and Language Navigation |  | 0
Skill Induction and Planning with Latent Language |  | 0
Guiding Multi-Step Rearrangement Tasks with Natural Language Instructions | Code | 1
Compositional Data and Task Augmentation for Instruction Following |  | 0
Improving the Robustness to Variations of Objects and Instructions with a Neuro-Symbolic Approach for Interactive Instruction Following |  | 0
FILM: Following Instructions in Language with Modular Methods | Code | 1
Waypoint Models for Instruction-guided Navigation in Continuous Environments | Code | 1
Hierarchical Modular Framework for Long Horizon Instruction Following | Code | 0
Procedures as Programs: Hierarchical Control of Situated Agents through Natural Language |  | 0
Analysis of Language Change in Collaborative Instruction Following | Code | 0
Modular Framework for Visuomotor Language Grounding |  | 0
Lexicon Learning for Few Shot Sequence Modeling | Code | 1
Improving Coherence and Consistency in Neural Sequence Models with Dual-System, Neuro-Symbolic Reasoning |  | 0
Draw Me a Flower: Processing and Grounding Abstraction in Natural Language |  | 0
Room-and-Object Aware Knowledge Reasoning for Remote Embodied Referring Expression | Code | 1
Lexicon Learning for Few-Shot Neural Sequence Modeling | Code | 1
Zero-shot Task Adaptation using Natural Language |  | 0
Generalization in Instruction Following Systems |  | 0
Look Wide and Interpret Twice: Improving Performance on Interactive Instruction-following Tasks | Code | 0
PanGEA: The Panoramic Graph Environment Annotation Toolkit |  | 0
A modular vision language navigation and manipulation framework for long horizon compositional tasks in indoor environment | Code | 1
Are We There Yet? Learning to Localize in Embodied Instruction Following |  | 0
Factorizing Perception and Policy for Interactive Instruction Following | Code | 1
Spatial Language Understanding for Object Search in Partially Observed City-scale Environments | Code | 0
From “Before” to “After”: Generating Natural Language Instructions from Image Pairs in a Simple Visual Domain |  | 0
Few-shot Object Grounding and Mapping for Natural Language Robot Instruction Following | Code | 1
RMM: A Recursive Mental Model for Dialogue Navigation | Code | 1
Modular Networks for Compositional Instruction Following |  | 0
Learning to Recombine and Resample Data for Compositional Generalization | Code | 0
Page 22 of 23

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 |  | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 |  | Unverified
3 | GPT-4 | Inst-level loose-accuracy | 85.37 |  | Unverified
4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 |  | Unverified
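The metric above is instruction-level *loose* accuracy. In IFEval-style evaluation, loose scoring re-runs each instruction check on lightly normalized variants of the response (for example with markdown emphasis stripped or a boilerplate first/last line removed) and counts the instruction as followed if any variant passes. A sketch of that idea; the exact variant set below is an illustrative assumption:

```python
def response_variants(response):
    """Lightly normalized copies of a response. Under loose scoring an
    instruction counts as followed if ANY variant passes its check.
    (This particular variant list is an assumption for illustration.)"""
    lines = response.split("\n")
    return [
        response,
        response.replace("*", ""),   # markdown emphasis stripped
        "\n".join(lines[1:]),        # first line dropped
        "\n".join(lines[:-1]),       # last line dropped
    ]

def loose_follows(response, check):
    return any(check(v) for v in response_variants(response))

# A response that fails the strict check (trailing markdown line)
# but passes once the last line is dropped.
resp = "The answer is 42.\n**end**"
check = lambda r: r.strip().endswith("42.")
print(check(resp))               # False (strict)
print(loose_follows(resp, check))  # True  (loose)
```

Because every failing response under loose scoring also fails under strict scoring, loose accuracy is always greater than or equal to its strict counterpart.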