SOTAVerified

3D visual grounding

Papers

Showing 1–25 of 82 papers

| Title | Status | Hype |
| --- | --- | --- |
| ViewSRD: 3D Visual Grounding via Structured Multi-View Decomposition | | 0 |
| A Neural Representation Framework with LLM-Driven Spatial Reasoning for Open-Vocabulary 3D Visual Grounding | | 0 |
| SPAZER: Spatial-Semantic Progressive Reasoning Agent for Zero-shot 3D Visual Grounding | | 0 |
| GroundFlow: A Plug-in Module for Temporal Reasoning on 3D Point Cloud Sequential Grounding | | 0 |
| I Speak and You Find: Robust 3D Visual Grounding with Noisy and Ambiguous Speech Inputs | | 0 |
| Unified Representation Space for 3D Visual Grounding | | 0 |
| From Objects to Anywhere: A Holistic Benchmark for Multi-level Visual Grounding in 3D Scenes | | 0 |
| Zero-Shot 3D Visual Grounding from Vision-Language Models | | 0 |
| Extending Large Vision-Language Model for Diverse Interactive Tasks in Autonomous Driving | Code | 1 |
| DenseGrounding: Improving Dense Language-Vision Semantics for Ego-Centric 3D Visual Grounding | | 0 |
| AS3D: 2D-Assisted Cross-Modal Understanding with Semantic-Spatial Scene Graphs for 3D Visual Grounding | Code | 0 |
| Ges3ViG: Incorporating Pointing Gestures into Language-Based 3D Visual Grounding for Embodied Reference Understanding | Code | 0 |
| DSM: Building A Diverse Semantic Map for 3D Visual Grounding | | 0 |
| ReasonGrounder: LVLM-Guided Hierarchical Feature Splatting for Open-Vocabulary 3D Visual Grounding and Reasoning | | 0 |
| Unveiling the Mist over 3D Vision-Language Understanding: Object-centric Evaluation with Chain-of-Analysis | Code | 1 |
| NuGrounding: A Multi-View 3D Visual Grounding Framework in Autonomous Driving | | 0 |
| ProxyTransformation: Preshaping Point Cloud Manifold With Proxy Attention For 3D Visual Grounding | | 0 |
| Text-guided Sparse Voxel Pruning for Efficient 3D Visual Grounding | Code | 3 |
| Evolving Symbolic 3D Visual Grounder with Weakly Supervised Reflection | Code | 1 |
| AugRefer: Advancing 3D Visual Grounding via Cross-Modal Augmentation and Spatial Relation-based Referring | | 0 |
| ViGiL3D: A Linguistically Diverse Dataset for 3D Visual Grounding | | 0 |
| Beyond Human Perception: Understanding Multi-Object World from Monocular View | Code | 0 |
| 3D Spatial Understanding in MLLMs: Disambiguation and Evaluation | | 0 |
| SeeGround: See and Ground for Zero-Shot Open-Vocabulary 3D Visual Grounding | | 0 |
Page 1 of 4

No leaderboard results yet.