SOTAVerified

Scene Understanding

Scene understanding involves interpreting the visual information of a scene, including objects, their spatial relationships, and the overall layout. It goes beyond simple object recognition by considering the context and how objects relate to each other and the environment.
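As an illustration of the kind of structured output this task targets, here is a minimal sketch of a scene represented as objects plus pairwise spatial relations (a toy "scene graph" style structure; all names and the example scene are hypothetical, not taken from any paper below):

```python
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    label: str    # semantic category, e.g. "chair"
    bbox: tuple   # (x_min, y_min, x_max, y_max) in image coordinates

@dataclass
class Scene:
    objects: list = field(default_factory=list)
    # Each relation is (subject_index, predicate, object_index).
    relations: list = field(default_factory=list)

    def add_object(self, obj: SceneObject) -> int:
        """Register an object and return its index."""
        self.objects.append(obj)
        return len(self.objects) - 1

    def relate(self, subj: int, predicate: str, obj: int) -> None:
        """Record a spatial relation between two registered objects."""
        self.relations.append((subj, predicate, obj))

# Build a toy scene: a cup resting on a table.
scene = Scene()
table = scene.add_object(SceneObject("table", (50, 200, 400, 380)))
cup = scene.add_object(SceneObject("cup", (180, 150, 230, 210)))
scene.relate(cup, "on", table)
```

The point of the sketch is that scene understanding produces relations such as `("cup", "on", "table")` in addition to per-object labels and boxes, which is what distinguishes it from plain object recognition.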

Papers

Showing 101–150 of 1723 papers

Title | Status | Hype
Leveraging Automatic CAD Annotations for Supervised Learning in 3D Scene Understanding | Code | 0
HAECcity: Open-Vocabulary Scene Understanding of City-Scale Point Clouds with Superpoint Graph Clustering | — | 0
Training-Free Hierarchical Scene Understanding for Gaussian Splatting with Superpoint Graphs | Code | 1
Explainable Scene Understanding with Qualitative Representations and Graph Neural Networks | — | 0
DC-SAM: In-Context Segment Anything in Images and Videos via Dual Consistency | Code | 1
CAGS: Open-Vocabulary 3D Scene Understanding with Context-Aware Gaussian Splatting | — | 0
Single-Input Multi-Output Model Merging: Leveraging Foundation Models for Dense Multi-Task Learning | — | 0
Foundation Models for Remote Sensing: An Analysis of MLLMs for Object Localization | — | 0
SoccerNet-v3D: Leveraging Sports Broadcast Replays for 3D Scene Understanding | Code | 1
FindAnything: Open-Vocabulary and Object-Centric Mapping for Robot Exploration in Any Environment | — | 0
FMLGS: Fast Multilevel Language Embedded Gaussians for Part-level Interactive Agents | — | 0
DSM: Building A Diverse Semantic Map for 3D Visual Grounding | — | 0
DGOcc: Depth-aware Global Query-based Network for Monocular 3D Occupancy Prediction | — | 0
Masked Scene Modeling: Narrowing the Gap Between Supervised and Self-Supervised Learning in 3D Scene Understanding | Code | 1
MovSAM: A Single-image Moving Object Segmentation Framework Based on Deep Thinking | Code | 0
RayFronts: Open-Set Semantic Ray Frontiers for Online Scene Understanding and Exploration | — | 0
Attributes-aware Visual Emotion Representation Learning | — | 0
Audio-visual Event Localization on Portrait Mode Short Videos | — | 0
PRIMEDrive-CoT: A Precognitive Chain-of-Thought Framework for Uncertainty-Aware Object Interaction in Driving Scene Scenario | — | 0
CamContextI2V: Context-aware Controllable Video Generation | Code | 1
RS-RAG: Bridging Remote Sensing Imagery and Comprehensive Knowledge with a Multi-Modal Dataset and Retrieval-Augmented Generation Model | — | 0
DFormerv2: Geometry Self-Attention for RGBD Semantic Segmentation | Code | 3
Planning Safety Trajectories with Dual-Phase, Physics-Informed, and Transportation Knowledge-Driven Large Language Models | Code | 0
Multimodal Fusion and Vision-Language Models: A Survey for Robot Vision | Code | 1
F-ViTA: Foundation Model Guided Visible to Thermal Translation | Code | 1
Overlap-Aware Feature Learning for Robust Unsupervised Domain Adaptation for 3D Semantic Segmentation | — | 0
CoMatcher: Multi-View Collaborative Feature Matching | — | 0
TransforMerger: Transformer-based Voice-Gesture Fusion for Robust Human-Robot Communication | — | 0
Ross3D: Reconstructive Visual Instruction Tuning with 3D-Awareness | — | 0
Scene-Centric Unsupervised Panoptic Segmentation | Code | 2
WikiVideo: Article Generation from Multiple Videos | Code | 1
Zero-Shot 4D Lidar Panoptic Segmentation | — | 0
Context-Aware Human Behavior Prediction Using Multimodal Large Language Models: Challenges and Insights | — | 0
PhysPose: Refining 6D Object Poses with Physical Constraints | — | 0
Boosting Omnidirectional Stereo Matching with a Pre-trained Depth Foundation Model | Code | 1
OpenDriveVLA: Towards End-to-end Autonomous Driving with Large Vision Language Action Model | Code | 4
Empowering Large Language Models with 3D Situation Awareness | — | 0
Evaluating Compositional Scene Understanding in Multimodal Generative Models | Code | 0
Can DeepSeek Reason Like a Surgeon? An Empirical Evaluation for Vision-Language Understanding in Robotic-Assisted Surgery | — | 0
Open-Vocabulary Semantic Segmentation with Uncertainty Alignment for Robotic Scene Understanding in Indoor Building Environments | — | 0
Mitigating Trade-off: Stream and Query-guided Aggregation for Efficient and Effective 3D Occupancy Prediction | Code | 1
Evaluating Multimodal Language Models as Visual Assistants for Visually Impaired Users | — | 0
A Dataset for Semantic Segmentation in the Presence of Unknowns | — | 0
Endo-TTAP: Robust Endoscopic Tissue Tracking via Multi-Facet Guided Attention and Hybrid Flow-point Supervision | — | 0
Next-Best-Trajectory Planning of Robot Manipulators for Effective Observation and Exploration | — | 0
NuGrounding: A Multi-View 3D Visual Grounding Framework in Autonomous Driving | — | 0
Visual Jenga: Discovering Object Dependencies via Counterfactual Inpainting | — | 0
Towards Generating Realistic 3D Semantic Training Data for Autonomous Driving | Code | 2
DINeMo: Learning Neural Mesh Models with no 3D Annotations | — | 0
COB-GS: Clear Object Boundaries in 3DGS Segmentation Based on Boundary-Adaptive Gaussian Splitting | Code | 2
Page 3 of 35

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ACRV Baseline | OMQ | 0.44 | — | Unverified
2 | Team VGAI (TCS Research) | OMQ | 0.37 | — | Unverified
3 | Demo_semantic_SLAM | OMQ | 0.11 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CPN (ResNet-101) | Mean IoU | 46.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ACRV Baseline | OMQ | 0.35 | — | Unverified