Leveraging Large (Visual) Language Models for Robot 3D Scene Understanding
William Chen, Siyi Hu, Rajat Talak, Luca Carlone
Code: github.com/mit-spark/llm_scene_understanding (official PyTorch implementation)
Abstract
Semantic 3D scene understanding is a problem of critical importance in robotics. As robots still lack the common-sense knowledge about household objects and locations that an average human has, we investigate the use of pre-trained language models to impart common sense for scene understanding. We introduce and compare a wide range of scene classification paradigms that leverage language only (zero-shot, embedding-based, and structured-language) or vision and language (zero-shot and fine-tuned). We find that the best approaches in both categories yield 70% room classification accuracy, exceeding the performance of pure-vision and graph classifiers. We also find that such methods demonstrate notable generalization and transfer capabilities stemming from their use of language.
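The repository above contains the full pipelines; as a flavor of the embedding-based, language-only paradigm, the sketch below classifies a room from the labels of the objects it contains by ranking cosine similarity between sentence embeddings. The encoder (all-MiniLM-L6-v2) and the prompt templates are illustrative assumptions, not necessarily those used in the paper.

```python
# Minimal sketch of embedding-based room classification: embed a sentence
# describing the objects in a room and pick the room label whose embedded
# description is most similar. Encoder and templates are assumptions.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

room_labels = ["bedroom", "kitchen", "bathroom", "living room", "office"]
room_texts = [f"This room is a {r}." for r in room_labels]

# Object labels, e.g. gathered from a room node's children in a 3D scene graph.
objects = ["bed", "nightstand", "lamp"]
listed = ", ".join(f"a {o}" for o in objects[:-1]) + f", and a {objects[-1]}"
query = f"A room containing {listed}."

query_emb = encoder.encode(query, convert_to_tensor=True)
room_embs = encoder.encode(room_texts, convert_to_tensor=True)

# Rank candidate rooms by cosine similarity to the object description.
scores = util.cos_sim(query_emb, room_embs)[0]
print(room_labels[int(scores.argmax())])  # expected: "bedroom"
```

The zero-shot and structured-language paradigms mentioned in the abstract instead score candidate room names with a language model directly; the similarity ranking here is only the simplest embedding-based instance.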