SOTAVerified

Instruction Following

Instruction following is a fundamental capability of large language models. This task evaluates how well a model follows human instructions, with the goal of producing controllable and safe responses.

Papers

Showing 1101–1135 of 1135 papers

Title | Status | Hype
AllenAct: A Framework for Embodied AI Research | Code | 1
Inverse Reinforcement Learning with Natural Language Goals | | 0
Evolving Graphical Planner: Contextual Global Planning for Vision-and-Language Navigation | | 0
Language-Conditioned Goal Generation: a New Approach to Language Grounding in RL | | 0
Language-Conditioned Goal Generation: a New Approach to Language Grounding for RL | | 0
Human Instruction-Following with Deep Reinforcement Learning via Transfer-Learning from Text | | 0
Language Conditioned Imitation Learning over Unstructured Data | | 0
RMM: A Recursive Mental Model for Dialog Navigation | Code | 1
Zero-Shot Compositional Policy Learning via Language Grounding | Code | 1
Following Instructions by Imagining and Reaching Visual Goals | | 0
Automated curriculum generation for Policy Gradients from Demonstrations | Code | 0
Language-guided Semantic Mapping and Mobile Manipulation in Partially Observable Environments | | 0
Learning to Map Natural Language Instructions to Physical Quadcopter Control using Simulated Flight | Code | 1
HIGhER: Improving instruction following with Hindsight Generation for Experience Replay | | 0
Guided Adaptive Credit Assignment for Sample Efficient Policy Optimization | | 0
Robust Instruction-Following in a Situated Agent via Transfer-Learning from Text | | 0
Self-Educated Language Agent with Hindsight Experience Replay for Instruction Following | | 0
Pre-Learning Environment Representations for Data-Efficient Neural Instruction Following | Code | 0
Chasing Ghosts: Instruction Following as Bayesian State Tracking | Code | 0
Language as an Abstraction for Hierarchical Deep Reinforcement Learning | Code | 0
The Replica Dataset: A Digital Replica of Indoor Spaces | Code | 2
A Survey of Reinforcement Learning Informed by Natural Language | | 0
Stay on the Path: Instruction Fidelity in Vision-and-Language Navigation | | 0
Compositional pre-training for neural semantic parsing | | 0
Habitat: A Platform for Embodied AI Research | Code | 2
Learning To Follow Directions in Street View | Code | 0
From Language to Goals: Inverse Reinforcement Learning for Vision-Based Instruction Following | | 0
Learning to Navigate the Web | | 0
Guiding Policies with Language via Meta-Learning | Code | 0
Mapping Navigation Instructions to Continuous Control Actions with Position-Visitation Prediction | Code | 0
Mapping Instructions to Actions in 3D Environments with Visual Goal Prediction | Code | 0
Neural Semantic Parsing | | 0
Following High-level Navigation Instructions on a Simulated Quadcopter with Imitation Learning | Code | 1
Grounding Language by Continuous Observation of Instruction Following | | 0
Alignment-based compositional semantics for instruction following | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AutoIF (Llama3 70B) | Inst-level loose-accuracy | 90.4 | | Unverified
2 | AutoIF (Qwen2 72B) | Inst-level loose-accuracy | 88 | | Unverified
3 | GPT-4 | Inst-level loose-accuracy | 85.37 | | Unverified
4 | PaLM 2 S | Inst-level loose-accuracy | 59.11 | | Unverified
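The "Inst-level loose-accuracy" metric above is instruction-level accuracy: each prompt may contain several verifiable instructions, and the score is the fraction of all instructions (pooled across prompts) that the model's response satisfies. The sketch below illustrates only the final aggregation step and is not the benchmark's actual implementation; it assumes the benchmark's verifier functions have already produced per-instruction pass/fail booleans, and the "loose" variant's response normalization (e.g. stripping markdown before re-checking) is outside its scope. The function name and data layout are illustrative.

```python
# Illustrative sketch of instruction-level accuracy aggregation.
# results[i][j] is True if instruction j of prompt i was followed,
# as judged by some upstream verifier (not implemented here).

def instruction_level_accuracy(results: list[list[bool]]) -> float:
    """Return the percentage of all instructions followed, pooled across prompts."""
    flat = [passed for prompt in results for passed in prompt]
    return 100.0 * sum(flat) / len(flat) if flat else 0.0

# Example: 3 prompts containing 2, 1, and 3 instructions respectively;
# 4 of the 6 instructions were followed.
scores = [[True, False], [True], [True, True, False]]
print(round(instruction_level_accuracy(scores), 2))  # → 66.67
```

Note that pooling at the instruction level weights prompts with many instructions more heavily than a per-prompt average would.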