SOTAVerified

Spatial Reasoning

Papers

Showing 150 of 453 papers

| Title | Status | Hype |
| --- | --- | --- |
| MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models | Code | 7 |
| When LLMs step into the 3D World: A Survey and Meta-Analysis of 3D Tasks via Multi-modal Large Language Models | Code | 7 |
| Improved Baselines with Visual Instruction Tuning | Code | 6 |
| Visual Instruction Tuning | Code | 6 |
| GPT-4 Technical Report | Code | 6 |
| Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | Code | 5 |
| Video-R1: Reinforcing Video Reasoning in MLLMs | Code | 4 |
| Thinking in Space: How Multimodal Large Language Models See, Remember, and Recall Spaces | Code | 4 |
| SAT: Dynamic Spatial Aptitude Training for Multimodal Language Models | Code | 4 |
| Sonata: Self-Supervised Learning of Reliable Point Representations | Code | 4 |
| Factorio Learning Environment | Code | 4 |
| PointVLA: Injecting the 3D World into Vision-Language-Action Models | Code | 4 |
| SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation | Code | 3 |
| CityWalker: Learning Embodied Urban Navigation from Web-Scale Videos | Code | 3 |
| MetaSpatial: Reinforcing 3D Spatial Reasoning in VLMs for the Metaverse | Code | 3 |
| VLM-3R: Vision-Language Models Augmented with Instruction-Aligned 3D Reconstruction | Code | 3 |
| SpatialBot: Precise Spatial Understanding with Vision Language Models | Code | 3 |
| Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models | Code | 3 |
| Unleashing the Temporal-Spatial Reasoning Capacity of GPT for Training-Free Audio and Language Referenced Video Object Segmentation | Code | 2 |
| Embodied-Reasoner: Synergizing Visual Search, Reasoning, and Action for Embodied Interactive Tasks | Code | 2 |
| End-to-End Navigation with Vision Language Models: Transforming Spatial Reasoning into Question-Answering | Code | 2 |
| Text-to-CadQuery: A New Paradigm for CAD Generation with Scalable Large Model Capabilities | Code | 2 |
| Embodied-R: Collaborative Framework for Activating Embodied Spatial Reasoning in Foundation Models via Reinforcement Learning | Code | 2 |
| AlphaMaze: Enhancing Large Language Models' Spatial Intelligence via GRPO | Code | 2 |
| SpatialScore: Towards Unified Evaluation for Multimodal Spatial Understanding | Code | 2 |
| TACO: Learning Multi-modal Action Models with Synthetic Chains-of-Thought-and-Action | Code | 2 |
| Imagine while Reasoning in Space: Multimodal Visualization-of-Thought | Code | 2 |
| Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning | Code | 2 |
| SpaceR: Reinforcing MLLMs in Video Spatial Reasoning | Code | 2 |
| ThinkGeo: Evaluating Tool-Augmented Agents for Remote Sensing Tasks | Code | 2 |
| Seeing the roads through the trees: A benchmark for modeling spatial dependencies with aerial imagery | Code | 2 |
| Reinforcing Spatial Reasoning in Vision-Language Models with Interwoven Thinking and Visual Drawing | Code | 2 |
| Probing the limitations of multimodal language models for chemistry and materials research | Code | 2 |
| Act3D: 3D Feature Field Transformers for Multi-Task Robotic Manipulation | Code | 2 |
| On the Road with GPT-4V(ision): Early Explorations of Visual-Language Model on Autonomous Driving | Code | 2 |
| Free-form language-based robotic reasoning and grasping | Code | 2 |
| Chat-3D: Data-efficiently Tuning Large Language Model for Universal Dialogue of 3D Scenes | Code | 2 |
| Locality Alignment Improves Vision-Language Models | Code | 2 |
| From Flatland to Space: Teaching Vision-Language Models to Perceive and Reason in 3D | Code | 2 |
| Flow of Reasoning: Training LLMs for Divergent Problem Solving with Minimal Examples | Code | 2 |
| LLM-grounded Diffusion: Enhancing Prompt Understanding of Text-to-Image Diffusion Models with Large Language Models | Code | 2 |
| ConceptFusion: Open-set Multimodal 3D Mapping | Code | 2 |
| Introducing Visual Perception Token into Multimodal Large Language Model | Code | 2 |
| InfiGUI-R1: Advancing Multimodal GUI Agents from Reactive Actors to Deliberative Reasoners | Code | 2 |
| GoT-R1: Unleashing Reasoning Capability of MLLM for Visual Generation with Reinforcement Learning | Code | 2 |
| Getting it Right: Improving Spatial Consistency in Text-to-Image Models | Code | 2 |
| IRef-VLA: A Benchmark for Interactive Referential Grounding with Imperfect Language in 3D Scenes | Code | 2 |
| DriveMLLM: A Benchmark for Spatial Understanding with Multimodal Large Language Models in Autonomous Driving | Code | 2 |
| BLIVA: A Simple Multimodal LLM for Better Handling of Text-Rich Visual Questions | Code | 2 |
| Inference-Time Scaling for Complex Tasks: Where We Stand and What Lies Ahead | Code | 2 |
Page 1 of 10

No leaderboard results yet.