SOTAVerified

Instruction Following

Instruction following is a foundational capability of large language models. This task evaluates how well a model follows human instructions, with the goal of producing controllable and safe responses.

Papers

Showing 1076-1100 of 1135 papers

Title | Status | Hype
Continual Learning for Instruction Following from Realtime Feedback | Code | 0
CLIP-Nav: Using CLIP for Zero-Shot Vision-and-Language Navigation | - | 0
UGIF: UI Grounded Instruction Following | - | 0
Learning to Follow Instructions in Text-Based Games | Code | 0
Prompter: Utilizing Large Language Model Prompting for a Data Efficient Embodied Instruction Following | - | 0
Don't Copy the Teacher: Data and Model Challenges in Embodied Dialogue | Code | 0
A New Path: Scaling Vision-and-Language Navigation with Synthetic Instructions and Imitation Learning | - | 0
Iterative Vision-and-Language Navigation | - | 0
Language Models are General-Purpose Interfaces | - | 0
GoalNet: Inferring Conjunctive Goal Predicates from Human Plan Demonstrations for Robot Instruction Following | Code | 0
Summarizing a virtual robot's past actions in natural language | - | 0
Compositionality as Lexical Symmetry | Code | 0
Less is More: Generating Grounded Navigation Instructions from Landmarks | - | 0
Explicit Object Relation Alignment for Vision and Language Navigation | - | 0
Skill Induction and Planning with Latent Language | - | 0
Compositional Data and Task Augmentation for Instruction Following | - | 0
Improving the Robustness to Variations of Objects and Instructions with a Neuro-Symbolic Approach for Interactive Instruction Following | - | 0
Hierarchical Modular Framework for Long Horizon Instruction Following | Code | 0
Procedures as Programs: Hierarchical Control of Situated Agents through Natural Language | - | 0
Analysis of Language Change in Collaborative Instruction Following | Code | 0
Modular Framework for Visuomotor Language Grounding | - | 0
Improving Coherence and Consistency in Neural Sequence Models with Dual-System, Neuro-Symbolic Reasoning | - | 0
Draw Me a Flower: Processing and Grounding Abstraction in Natural Language | - | 0
Zero-shot Task Adaptation using Natural Language | - | 0
Look Wide and Interpret Twice: Improving Performance on Interactive Instruction-following Tasks | Code | 0
Page 44 of 46

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | - | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | - | Unverified
3 | GPT-4 | Inst-level loose-accuracy | 85.37 | - | Unverified
4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | - | Unverified
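The "Inst-level loose-accuracy" metric in the table above is an IFEval-style score: each prompt carries one or more verifiable instructions, and the metric is the fraction of individual instructions satisfied, where "loose" means a response also counts if a relaxed variant of it (e.g. with markdown emphasis stripped, or with the first or last line removed) passes the check. Below is a minimal sketch of that idea; the `checkers` mapping and the exact set of loose variants are illustrative assumptions, not the official IFEval implementation.

```python
def loose_variants(response):
    """Relaxed forms of a response tried under loose evaluation.
    The variant set here is an illustrative assumption."""
    variants = [response, response.replace("*", "")]  # raw + emphasis stripped
    lines = response.strip().splitlines()
    if len(lines) > 1:
        variants.append("\n".join(lines[1:]))   # drop a leading preamble line
        variants.append("\n".join(lines[:-1]))  # drop a trailing sign-off line
    return variants

def inst_level_loose_accuracy(samples, checkers):
    """samples: list of (response, [instruction_id, ...]) pairs.
    checkers: hypothetical dict mapping instruction_id -> callable(str) -> bool.
    Returns the percentage of instructions that any loose variant satisfies."""
    followed = total = 0
    for response, inst_ids in samples:
        for inst_id in inst_ids:
            total += 1
            if any(checkers[inst_id](v) for v in loose_variants(response)):
                followed += 1
    return 100.0 * followed / total
```

For example, a response that violates a "lowercase only" instruction solely because of a capitalized first line still scores under loose evaluation, since the variant with that line removed passes the check.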