LiLMaps: Learnable Implicit Language Maps

2025-01-06

Evgenii Kruzhkov, Sven Behnke


Abstract

One of the current trends in robotics is to employ large language models (LLMs) to enable non-predefined command execution and natural human-robot interaction. It is useful to have an environment map together with its language representation, which can be further utilized by LLMs. Such a comprehensive scene representation enables numerous ways of interacting with the map for autonomously operating robots. In this work, we present an approach that enhances incremental implicit mapping through the integration of vision-language features. Specifically, we (i) propose a decoder optimization technique for implicit language maps that can be used when new objects appear in the scene, and (ii) address the problem of inconsistent vision-language predictions between different viewing positions. Our experiments demonstrate the effectiveness of LiLMaps and solid improvements in performance.
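To illustrate the second problem the abstract mentions, inconsistent vision-language predictions between viewpoints, here is a minimal, hypothetical sketch (not the authors' method): a toy map that fuses per-view feature predictions for the same map cell with a confidence-weighted running average, so later observations refine rather than overwrite earlier ones. All names (`LanguageFeatureMap`, `integrate`, `query`) are illustrative assumptions.

```python
# Hypothetical sketch, not LiLMaps itself: fusing per-view
# vision-language feature predictions into a shared map.
# Predictions for the same location can disagree between viewpoints,
# so each cell keeps a confidence-weighted running average.

from typing import Dict, List, Tuple

Feature = List[float]
Cell = Tuple[int, int, int]  # e.g. a discretized 3D location


class LanguageFeatureMap:
    """Toy map: cell key -> (weighted feature sum, total weight)."""

    def __init__(self, dim: int):
        self.dim = dim
        self.cells: Dict[Cell, Tuple[Feature, float]] = {}

    def integrate(self, cell: Cell, feature: Feature, confidence: float) -> None:
        # Incremental weighted update: accumulate confidence-scaled
        # features instead of replacing the stored value.
        acc, w = self.cells.get(cell, ([0.0] * self.dim, 0.0))
        acc = [a + confidence * f for a, f in zip(acc, feature)]
        self.cells[cell] = (acc, w + confidence)

    def query(self, cell: Cell) -> Feature:
        # Return the confidence-weighted mean feature for a cell.
        acc, w = self.cells[cell]
        return [a / w for a in acc]


# Two conflicting observations of the same cell from different views:
m = LanguageFeatureMap(dim=2)
m.integrate((0, 0, 0), [1.0, 0.0], confidence=1.0)
m.integrate((0, 0, 0), [0.0, 1.0], confidence=3.0)
print(m.query((0, 0, 0)))  # → [0.25, 0.75]
```

In a real system the features would be high-dimensional vision-language embeddings (e.g. CLIP-like), and confidence could come from view distance or prediction entropy; the point of the sketch is only that multi-view disagreement calls for some fusion rule rather than last-write-wins.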
