SOTAVerified

Scene Understanding

Scene understanding involves interpreting the visual information of a scene, including objects, their spatial relationships, and the overall layout. It goes beyond simple object recognition by considering the context and how objects relate to each other and the environment.
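To make the idea concrete, a common way to represent "objects plus their spatial relationships" is a scene graph. The sketch below is purely illustrative (the class names, relation triples, and toy coordinates are hypothetical, not drawn from any paper listed on this page):

```python
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    """An object detected in the scene, with a 2D bounding box (x1, y1, x2, y2)."""
    name: str
    box: tuple

@dataclass
class SceneGraph:
    """Objects plus directed spatial relations, e.g. ('cup', 'on', 'table')."""
    objects: list = field(default_factory=list)
    relations: list = field(default_factory=list)

    def add_relation(self, subj: str, predicate: str, obj: str) -> None:
        self.relations.append((subj, predicate, obj))

    def describe(self) -> list:
        """Render each relation as a simple sentence — the 'context' that
        plain object recognition misses."""
        return [f"{s} is {p} {o}" for s, p, o in self.relations]

# Toy example: two objects and one spatial relation.
g = SceneGraph()
g.objects = [SceneObject("cup", (40, 30, 60, 50)),
             SceneObject("table", (0, 45, 200, 120))]
g.add_relation("cup", "on", "table")
print(g.describe())  # ['cup is on table']
```

Many of the papers below (e.g. scene graph generation and SLAM-with-situational-graphs work) build far richer versions of this structure, but the core idea — objects as nodes, relations as edges — is the same.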

Papers

Showing 201–250 of 1723 papers

| Title | Status | Hype |
|---|---|---|
| Label-Efficient LiDAR Panoptic Segmentation | — | 0 |
| vS-Graphs: Integrating Visual SLAM and Situational Graphs through Multi-level Scene Understanding | — | 0 |
| Every SAM Drop Counts: Embracing Semantic Priors for Multi-Modality Image Fusion and Beyond | — | 0 |
| OpenGS-SLAM: Open-Set Dense Semantic SLAM with 3D Gaussian Splatting for Object-Level Scene Understanding | — | 0 |
| Inst3D-LMM: Instance-Aware 3D Scene Understanding with Multi-modal Instruction Tuning | Code | 2 |
| Floorplan-SLAM: A Real-Time, High-Accuracy, and Long-Term Multi-Session Point-Plane SLAM for Efficient Floorplan Reconstruction | — | 0 |
| Distill Any Depth: Distillation Creates a Stronger Monocular Depth Estimator | Code | 4 |
| VLM-E2E: Enhancing End-to-End Autonomous Driving with Multimodal Driver Attention Fusion | — | 0 |
| AAD-LLM: Neural Attention-Driven Auditory Scene Understanding | — | 0 |
| Dr. Splat: Directly Referring 3D Gaussian Splatting via Direct Language Embedding Registration | — | 0 |
| Hierarchical Context Transformer for Multi-level Semantic Scene Understanding | Code | 0 |
| CrossOver: 3D Scene Cross-Modal Alignment | Code | 3 |
| AVD2: Accident Video Diffusion for Accident Video Description | — | 0 |
| Sce2DriveX: A Generalized MLLM Framework for Scene-to-Drive Learning | — | 0 |
| Understanding and Evaluating Hallucinations in 3D Visual Language Models | — | 0 |
| Surgical Scene Understanding in the Era of Foundation AI Models: A Comprehensive Review | — | 0 |
| NavRAG: Generating User Demand Instructions for Embodied Navigation through Retrieval-Augmented LLM | Code | 2 |
| Occlusion-aware Non-Rigid Point Cloud Registration via Unsupervised Neural Deformation Correntropy | Code | 1 |
| FLARES: Fast and Accurate LiDAR Multi-Range Semantic Segmentation | — | 0 |
| 3D-Grounded Vision-Language Framework for Robotic Task Planning: Automated Prompt Synthesis and Supervised Reasoning | — | 0 |
| sshELF: Single-Shot Hierarchical Extrapolation of Latent Features for 3D Reconstruction from Sparse-Views | — | 0 |
| Mosaic3D: Foundation Dataset and Model for Open-Vocabulary 3D Segmentation | — | 0 |
| Event-aided Semantic Scene Completion | Code | 1 |
| AquaticCLIP: A Vision-Language Foundation Model for Underwater Scene Analysis | — | 0 |
| Integrating LMM Planners and 3D Skill Policies for Generalizable Manipulation | — | 0 |
| Efficient Interactive 3D Multi-Object Removal | — | 0 |
| Contextual Self-paced Learning for Weakly Supervised Spatio-Temporal Video Grounding | — | 0 |
| PhysBench: Benchmarking and Enhancing Vision-Language Models for Physical World Understanding | — | 0 |
| Unveiling the Potential of iMarkers: Invisible Fiducial Markers for Advanced Robotics | — | 0 |
| HERMES: A Unified Self-Driving World Model for Simultaneous 3D Scene Understanding and Generation | Code | 3 |
| Scene Understanding Enabled Semantic Communication with Open Channel Coding | — | 0 |
| GeomGS: LiDAR-Guided Geometry-Aware Gaussian Splatting for Robot Localization | — | 0 |
| Separated Inter/Intra-Modal Fusion Prompts for Compositional Zero-Shot Learning | — | 0 |
| Neural Radiance Fields for the Real World: A Survey | — | 0 |
| EndoChat: Grounded Multimodal Large Language Model for Endoscopic Surgery | Code | 1 |
| Dynamic Scene Understanding from Vision-Language Representations | — | 0 |
| A Survey of World Models for Autonomous Driving | Code | 1 |
| A Vision-Language Framework for Multispectral Scene Representation Using Language-Grounded Features | — | 0 |
| CrossModalityDiffusion: Multi-Modal Novel View Synthesis with Unified Intermediate Representation | Code | 0 |
| YETI (YET to Intervene) Proactive Interventions by Multimodal AI Agents in Augmented Reality Tasks | — | 0 |
| Embodied Scene Understanding for Vision Language Models via MetaVQA | — | 0 |
| 3UR-LLM: An End-to-End Multimodal Large Language Model for 3D Scene Understanding | Code | 1 |
| Hierarchical Superpixel Segmentation via Structural Information Theory | Code | 0 |
| Zero-Shot Scene Understanding for Automatic Target Recognition Using Large Vision-Language Models | — | 0 |
| Application of Vision-Language Model to Pedestrians Behavior and Scene Understanding in Autonomous Driving | — | 0 |
| Self-Supervised Partial Cycle-Consistency for Multi-View Matching | Code | 0 |
| UniQ: Unified Decoder with Task-specific Queries for Efficient Scene Graph Generation | — | 0 |
| Vision-Language Models for Autonomous Driving: CLIP-Based Dynamic Scene Understanding | — | 0 |
| A Systematic Literature Review on Deep Learning-based Depth Estimation in Computer Vision | — | 0 |
| NextStop: An Improved Tracker For Panoptic LIDAR Segmentation Data | Code | 0 |
Page 5 of 35

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ACRV Baseline | OMQ | 0.44 | — | Unverified |
| 2 | Team VGAI (TCS Research) | OMQ | 0.37 | — | Unverified |
| 3 | Demo_semantic_SLAM | OMQ | 0.11 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CPN (ResNet-101) | Mean IoU | 46.3 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ACRV Baseline | OMQ | 0.35 | — | Unverified |
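The Mean IoU metric reported above is the per-class intersection-over-union between predicted and ground-truth segmentation label maps, averaged over classes. A minimal NumPy sketch (the toy label maps are hypothetical, not from any benchmark listed here):

```python
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> float:
    """Average per-class IoU, skipping classes absent from both maps."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # class c appears in prediction or ground truth
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x2 label maps with two classes:
# class 0 -> IoU 1/2, class 1 -> IoU 2/3, mean ≈ 0.583.
pred = np.array([[0, 0], [1, 1]])
gt   = np.array([[0, 1], [1, 1]])
print(round(mean_iou(pred, gt, num_classes=2), 3))  # 0.583
```

Leaderboards typically report this as a percentage (e.g. the 46.3 above), and implementations differ in whether they skip or zero-score classes missing from the ground truth, so claimed numbers are only comparable under the same evaluation script.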