SOTAVerified

Scene Understanding

Scene understanding involves interpreting the visual content of a scene: the objects it contains, their spatial relationships, and the overall layout. It goes beyond simple object recognition by considering context, i.e. how objects relate to each other and to the environment.
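One common way to make those object-to-object relationships explicit is a scene graph: nodes for objects, edges for spatial or semantic relations. A minimal toy sketch is below; all names and relations here are illustrative, not taken from any paper listed on this page.

```python
from dataclasses import dataclass, field

@dataclass
class SceneGraph:
    """Toy scene graph: objects plus (subject, predicate, object) relations."""
    objects: set = field(default_factory=set)
    relations: set = field(default_factory=set)

    def add(self, subj: str, pred: str, obj: str) -> None:
        # Register both endpoints as scene objects and record the relation.
        self.objects.update({subj, obj})
        self.relations.add((subj, pred, obj))

    def related_to(self, name: str) -> set:
        # All (predicate, object) pairs whose subject is `name`.
        return {(p, o) for s, p, o in self.relations if s == name}

g = SceneGraph()
g.add("cup", "on", "table")
g.add("table", "next_to", "window")
# g.related_to("cup") -> {("on", "table")}
```

Real systems predict such graphs from pixels; this sketch only shows the data structure that the "relationships" part of the definition refers to.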

Papers

Showing 51-75 of 1723 papers

Title | Status | Hype
A Survey on Open-Vocabulary Detection and Segmentation: Past, Present, and Future | Code | 2
Chameleon: Fast-slow Neuro-symbolic Lane Topology Extraction | Code | 2
MTVQA: Benchmarking Multilingual Text-Centric Visual Question Answering | Code | 2
InvPT: Inverted Pyramid Multi-task Transformer for Dense Scene Understanding | Code | 2
CMX: Cross-Modal Fusion for RGB-X Semantic Segmentation with Transformers | Code | 2
CLIP goes 3D: Leveraging Prompt Tuning for Language Grounded 3D Recognition | Code | 2
HAKE: A Knowledge Engine Foundation for Human Activity Understanding | Code | 2
On the Road with GPT-4V(ision): Early Explorations of Visual-Language Model on Autonomous Driving | Code | 2
OpenIns3D: Snap and Lookup for 3D Open-vocabulary Instance Segmentation | Code | 2
OpenMask3D: Open-Vocabulary 3D Instance Segmentation | Code | 2
Is Your LiDAR Placement Optimized for 3D Scene Understanding? | Code | 2
OSMLoc: Single Image-Based Visual Localization in OpenStreetMap with Fused Geometric and Semantic Guidance | Code | 2
GroupViT: Semantic Segmentation Emerges from Text Supervision | Code | 2
A Unified Framework for 3D Scene Understanding | Code | 2
Grounded 3D-LLM with Referent Tokens | Code | 2
Hier-SLAM: Scaling-up Semantics in SLAM with a Hierarchically Categorical Gaussian Splatting | Code | 2
InvPT++: Inverted Pyramid Multi-Task Transformer for Visual Scene Understanding | Code | 2
Gaussian Grouping: Segment and Edit Anything in 3D Scenes | Code | 2
Calib3D: Calibrating Model Preferences for Reliable 3D Scene Understanding | Code | 2
GaussianPretrain: A Simple Unified 3D Gaussian Representation for Visual Pre-training in Autonomous Driving | Code | 2
FusionVision: A comprehensive approach of 3D object reconstruction and segmentation from RGB-D cameras using YOLO and fast segment anything | Code | 2
Feed-Forward SceneDINO for Unsupervised Semantic Scene Completion | Code | 2
GALIP: Generative Adversarial CLIPs for Text-to-Image Synthesis | Code | 2
Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning | Code | 2
An Egocentric Vision-Language Model based Portable Real-time Smart Assistant | Code | 2
Page 3 of 69

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ACRV Baseline | OMQ | 0.44 | | Unverified
2 | Team VGAI (TCS Research) | OMQ | 0.37 | | Unverified
3 | Demo_semantic_SLAM | OMQ | 0.11 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CPN (ResNet-101) | Mean IoU | 46.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ACRV Baseline | OMQ | 0.35 | | Unverified
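One of the metrics in the tables above, Mean IoU (mean intersection-over-union), has a simple standard definition: per-class overlap between predicted and ground-truth label maps, averaged over classes. A minimal sketch follows; the toy label maps are invented for illustration and unrelated to the 46.3 score reported above.

```python
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> float:
    """Mean IoU over classes that appear in either map.

    pred, gt: integer label maps of identical shape.
    """
    ious = []
    for c in range(num_classes):
        p = pred == c
        g = gt == c
        union = np.logical_or(p, g).sum()
        if union == 0:
            # Class absent from both prediction and ground truth: skip.
            continue
        inter = np.logical_and(p, g).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x4 label maps with two classes.
pred = np.array([[0, 0, 1, 1],
                 [0, 1, 1, 1]])
gt   = np.array([[0, 0, 1, 1],
                 [0, 0, 1, 1]])
# Class 0: IoU = 3/4; class 1: IoU = 4/5; mean = 0.775
```

Benchmark implementations differ in details (e.g. ignore labels, confusion-matrix accumulation across images), so treat this as the definition rather than a drop-in scorer.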