SOTAVerified

Instruction Following

Instruction following is a fundamental capability of large language models. This task evaluates how well a model follows human instructions, with the goal of producing controllable and safe responses.

Papers

Showing 1101–1125 of 1135 papers

Title | Status | Hype
Generalization in Instruction Following Systems | - | 0
PanGEA: The Panoramic Graph Environment Annotation Toolkit | - | 0
Are We There Yet? Learning to Localize in Embodied Instruction Following | - | 0
Spatial Language Understanding for Object Search in Partially Observed City-scale Environments | Code | 0
From “Before” to “After”: Generating Natural Language Instructions from Image Pairs in a Simple Visual Domain | - | 0
Modular Networks for Compositional Instruction Following | - | 0
Learning to Recombine and Resample Data for Compositional Generalization | Code | 0
Inverse Reinforcement Learning with Natural Language Goals | - | 0
Evolving Graphical Planner: Contextual Global Planning for Vision-and-Language Navigation | - | 0
Language-Conditioned Goal Generation: a New Approach to Language Grounding in RL | - | 0
Language-Conditioned Goal Generation: a New Approach to Language Grounding for RL | - | 0
Human Instruction-Following with Deep Reinforcement Learning via Transfer-Learning from Text | - | 0
Language Conditioned Imitation Learning over Unstructured Data | - | 0
Following Instructions by Imagining and Reaching Visual Goals | - | 0
Automated curriculum generation for Policy Gradients from Demonstrations | Code | 0
Language-guided Semantic Mapping and Mobile Manipulation in Partially Observable Environments | - | 0
HIGhER: Improving instruction following with Hindsight Generation for Experience Replay | - | 0
Robust Instruction-Following in a Situated Agent via Transfer-Learning from Text | - | 0
Self-Educated Language Agent with Hindsight Experience Replay for Instruction Following | - | 0
Guided Adaptive Credit Assignment for Sample Efficient Policy Optimization | - | 0
Pre-Learning Environment Representations for Data-Efficient Neural Instruction Following | Code | 0
Chasing Ghosts: Instruction Following as Bayesian State Tracking | Code | 0
Language as an Abstraction for Hierarchical Deep Reinforcement Learning | Code | 0
A Survey of Reinforcement Learning Informed by Natural Language | - | 0
Stay on the Path: Instruction Fidelity in Vision-and-Language Navigation | - | 0
Page 45 of 46

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | - | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | - | Unverified
3 | GPT-4 | Inst-level loose-accuracy | 85.37 | - | Unverified
4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | - | Unverified
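The "Inst-level loose-accuracy" metric reported above is an IFEval-style score: accuracy is averaged over individual verifiable instructions rather than whole prompts, and a response counts as compliant under "loose" scoring if any relaxed variant of it (e.g. with markdown asterisks stripped, or a leading/trailing filler line dropped) satisfies the check. A minimal sketch of that idea, with hypothetical checker functions rather than the benchmark's own code:

```python
# Illustrative sketch of instruction-level "loose" accuracy scoring.
# The checker lambdas and sample data are made up for demonstration;
# real benchmarks ship their own instruction checkers.

def loose_variants(response: str) -> list[str]:
    """Relaxed views of a response: the original text, versions with a
    leading/trailing line removed, and each of those with '*' stripped."""
    lines = response.split("\n")
    variants = [
        response,
        "\n".join(lines[1:]),   # drop a possible intro line
        "\n".join(lines[:-1]),  # drop a possible outro line
        "\n".join(lines[1:-1]),
    ]
    variants += [v.replace("*", "") for v in variants]
    return [v for v in variants if v.strip()] or [response]

def check_instruction(check, response: str) -> bool:
    """An instruction passes loosely if ANY relaxed variant satisfies it."""
    return any(check(v) for v in loose_variants(response))

def inst_level_loose_accuracy(samples) -> float:
    """samples: list of (response, [check_fn, ...]) pairs.
    The average is over individual instructions, not over prompts."""
    results = [
        check_instruction(chk, resp)
        for resp, checks in samples
        for chk in checks
    ]
    return sum(results) / len(results)

# Toy usage: two instructions on the first prompt, one on the second.
samples = [
    ("Sure, here it is:\nHELLO WORLD",
     [lambda r: r == r.upper(),          # "respond in all caps"
      lambda r: "world" in r.lower()]),  # "mention the word 'world'"
    ("no mention of the number",
     [lambda r: "42" in r]),             # "include the number 42"
]
print(f"{inst_level_loose_accuracy(samples):.3f}")  # 2 of 3 pass -> 0.667
```

Note how the first checker fails on the raw response (the intro line is lowercase) but passes once the intro line is dropped — that gap between strict and loose scoring is exactly why benchmarks report both variants.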