Virtual Guidance as a Mid-level Representation for Navigation with Augmented Reality

2023-03-05

Hsuan-Kung Yang, Tsung-Chih Chiang, Jou-Min Liu, Ting-Ru Liu, Chun-Wei Huang, Tsu-Ching Hsiao, Chun-Yi Lee

Abstract

In the context of autonomous navigation, effectively conveying abstract navigational cues to agents in dynamic environments presents significant challenges, particularly when navigation information is derived from diverse modalities, such as vision and high-level language descriptions. To address this issue, we introduce a novel technique termed "Virtual Guidance," which is designed to visually represent non-visual instructional signals. These visual cues are overlaid onto the agent's camera view and serve as comprehensible navigational guidance signals. To validate the concept of virtual guidance, we propose a sim-to-real framework that enables the transfer of the trained policy from simulated environments to the real world, ensuring the adaptability of virtual guidance in practical scenarios. We evaluate and compare the proposed method against a non-visual guidance baseline through detailed experiments in simulation. The experimental results demonstrate that the proposed virtual guidance approach outperforms the baseline methods across multiple scenarios, offering clear evidence of its effectiveness in autonomous navigation tasks.
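The core idea of the abstract, rendering a non-visual navigation signal as a visual cue blended into the agent's camera view, can be sketched minimally. The paper does not specify its rendering pipeline, so the function name, marker shape, and blending scheme below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def overlay_virtual_guidance(frame, target_xy, radius=8,
                             color=(0, 255, 0), alpha=0.6):
    """Blend a semi-transparent circular guidance marker into a camera frame.

    Hypothetical sketch: the real system likely renders richer cues
    (e.g., a path or arrow) via a graphics engine.

    frame     : (H, W, 3) uint8 RGB image from the agent's camera.
    target_xy : (x, y) pixel position where the cue should appear,
                e.g., the projection of the next waypoint.
    alpha     : opacity of the overlaid cue.
    """
    h, w = frame.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    # Boolean mask of pixels inside the marker disc.
    mask = (xx - target_xy[0]) ** 2 + (yy - target_xy[1]) ** 2 <= radius ** 2
    out = frame.astype(np.float32)
    # Alpha-blend the cue color over the masked pixels only.
    out[mask] = (1.0 - alpha) * out[mask] + alpha * np.array(color, np.float32)
    return out.astype(np.uint8)

# Usage: mark a waypoint at pixel (32, 20) on a blank 64x64 frame.
frame = np.zeros((64, 64, 3), dtype=np.uint8)
guided = overlay_virtual_guidance(frame, (32, 20))
```

A benefit of this mid-level representation, as the abstract argues, is that the policy consumes an ordinary image: guidance from any modality (vision, language) can be funneled into the same overlaid visual form.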
