SOTAVerified

Scene Understanding

Scene understanding involves interpreting the visual information of a scene, including objects, their spatial relationships, and the overall layout. It goes beyond simple object recognition by considering the context and how objects relate to each other and the environment.
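A common way to make "objects plus their spatial relationships" concrete is a scene graph: nodes for detected objects, edges for relations between them. As a minimal illustrative sketch (the class names, relation labels, and coordinates below are hypothetical, not drawn from any paper in this list):

```python
# Minimal scene-graph sketch: objects as nodes, spatial relations as edges.
# Labels, predicates, and box coordinates are illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    label: str    # semantic class, e.g. "chair"
    bbox: tuple   # (x_min, y_min, x_max, y_max) in image coordinates

@dataclass
class SceneGraph:
    objects: list = field(default_factory=list)
    relations: list = field(default_factory=list)  # (subj_idx, predicate, obj_idx)

    def add_object(self, obj: SceneObject) -> int:
        """Add an object node and return its index."""
        self.objects.append(obj)
        return len(self.objects) - 1

    def relate(self, subj: int, predicate: str, obj: int) -> None:
        """Record a directed relation edge between two object nodes."""
        self.relations.append((subj, predicate, obj))

# Usage: a two-object scene with one spatial relation.
graph = SceneGraph()
chair = graph.add_object(SceneObject("chair", (40, 120, 90, 200)))
table = graph.add_object(SceneObject("table", (10, 100, 150, 210)))
graph.relate(chair, "next_to", table)
```

Several papers below (e.g. 4D Panoptic Scene Graph Generation, 3DGraphLLM) operate on richer 3D or temporal variants of exactly this kind of structure.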

Papers

Showing 1–50 of 1723 papers

| Title | Status | Hype |
| --- | --- | --- |
| When LLMs step into the 3D World: A Survey and Meta-Analysis of 3D Tasks via Multi-modal Large Language Models | Code | 7 |
| Trajectory Prediction Meets Large Language Models: A Survey | Code | 5 |
| OpenDriveVLA: Towards End-to-end Autonomous Driving with Large Vision Language Action Model | Code | 4 |
| Distill Any Depth: Distillation Creates a Stronger Monocular Depth Estimator | Code | 4 |
| GPT4Scene: Understand 3D Scenes from Videos with Vision-Language Models | Code | 4 |
| Senna: Bridging Large Vision-Language Models and End-to-End Autonomous Driving | Code | 4 |
| Prismatic VLMs: Investigating the Design Space of Visually-Conditioned Language Models | Code | 4 |
| SplaTAM: Splat, Track & Map 3D Gaussians for Dense RGB-D SLAM | Code | 4 |
| Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation | Code | 4 |
| DFormerv2: Geometry Self-Attention for RGBD Semantic Segmentation | Code | 3 |
| SceneSplat: Gaussian Splatting-based Scene Understanding with Vision-Language Pretraining | Code | 3 |
| CrossOver: 3D Scene Cross-Modal Alignment | Code | 3 |
| HERMES: A Unified Self-Driving World Model for Simultaneous 3D Scene Understanding and Generation | Code | 3 |
| STORM: Spatio-Temporal Reconstruction Model for Large-Scale Outdoor Scenes | Code | 3 |
| EPRecon: An Efficient Framework for Real-Time Panoptic 3D Reconstruction from Monocular Video | Code | 3 |
| DeepInteraction++: Multi-Modality Interaction for Autonomous Driving | Code | 3 |
| AudioBench: A Universal Benchmark for Audio Large Language Models | Code | 3 |
| 4D Panoptic Scene Graph Generation | Code | 3 |
| Multi-Modal Data-Efficient 3D Scene Understanding for Autonomous Driving | Code | 3 |
| Sigma: Siamese Mamba Network for Multi-Modal Semantic Segmentation | Code | 3 |
| MoAI: Mixture of All Intelligence for Large Language and Vision Models | Code | 3 |
| Embodied Understanding of Driving Scenarios | Code | 3 |
| Swin3D++: Effective Multi-Source Pretraining for 3D Indoor Scene Understanding | Code | 3 |
| SGS-SLAM: Semantic Gaussian Splatting For Neural Dense SLAM | Code | 3 |
| GARField: Group Anything with Radiance Fields | Code | 3 |
| Generalized Robot 3D Vision-Language Model with Fast Rendering and Pre-Training Vision-Language Alignment | Code | 3 |
| Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models | Code | 3 |
| iDisc: Internal Discretization for Monocular Depth Estimation | Code | 3 |
| SurroundOcc: Multi-Camera 3D Occupancy Prediction for Autonomous Driving | Code | 3 |
| Feed-Forward SceneDINO for Unsupervised Semantic Scene Completion | Code | 2 |
| SIU3R: Simultaneous Scene Understanding and 3D Reconstruction Beyond Feature Alignment | Code | 2 |
| Tackling View-Dependent Semantics in 3D Language Gaussian Splatting | Code | 2 |
| Scene-Centric Unsupervised Panoptic Segmentation | Code | 2 |
| Towards Generating Realistic 3D Semantic Training Data for Autonomous Driving | Code | 2 |
| COB-GS: Clear Object Boundaries in 3DGS Segmentation Based on Boundary-Adaptive Gaussian Splitting | Code | 2 |
| SuperFlow++: Enhanced Spatiotemporal Consistency for Cross-Modal Data Pretraining | Code | 2 |
| PolarFree: Polarization-based Reflection-free Imaging | Code | 2 |
| IRef-VLA: A Benchmark for Interactive Referential Grounding with Imperfect Language in 3D Scenes | Code | 2 |
| Crab: A Unified Audio-Visual Scene Understanding Model with Explicit Cooperation | Code | 2 |
| TrackOcc: Camera-based 4D Panoptic Occupancy Tracking | Code | 2 |
| Chameleon: Fast-slow Neuro-symbolic Lane Topology Extraction | Code | 2 |
| An Egocentric Vision-Language Model based Portable Real-time Smart Assistant | Code | 2 |
| Inst3D-LMM: Instance-Aware 3D Scene Understanding with Multi-modal Instruction Tuning | Code | 2 |
| NavRAG: Generating User Demand Instructions for Embodied Navigation through Retrieval-Augmented LLM | Code | 2 |
| VideoLifter: Lifting Videos to 3D with Fast Hierarchical Stereo Alignment | Code | 2 |
| 3DGraphLLM: Combining Semantic Graphs and Large Language Models for 3D Scene Understanding | Code | 2 |
| AutoTrust: Benchmarking Trustworthiness in Large Vision Language Models for Autonomous Driving | Code | 2 |
| RelationField: Relate Anything in Radiance Fields | Code | 2 |
| DINO-Foresight: Looking into the Future with DINO | Code | 2 |
| Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning | Code | 2 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | ACRV Baseline | OMQ | 0.44 | | Unverified |
| 2 | Team VGAI (TCS Research) | OMQ | 0.37 | | Unverified |
| 3 | Demo_semantic_SLAM | OMQ | 0.11 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | CPN(ResNet-101) | Mean IoU | 46.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | ACRV Baseline | OMQ | 0.35 | | Unverified |
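The Mean IoU metric reported above is the per-class intersection-over-union between predicted and ground-truth segmentation labels, averaged over classes. A minimal sketch, assuming flat lists of integer class labels (real benchmark evaluations operate on full-resolution masks and typically handle ignore labels):

```python
# Sketch of mean IoU over flat label lists. Classes absent from both
# prediction and ground truth are skipped rather than counted as 0.
def mean_iou(pred, gt, num_classes):
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, g in zip(pred, gt) if p == c and g == c)
        union = sum(1 for p, g in zip(pred, gt) if p == c or g == c)
        if union > 0:
            ious.append(inter / union)
    return sum(ious) / len(ious) if ious else 0.0

# Usage: class 0 has IoU 1/2, class 1 has IoU 2/3.
score = mean_iou([0, 0, 1, 1], [0, 1, 1, 1], num_classes=2)
```

OMQ (object map quality, used by the ACRV entries) is a probabilistic object-map metric and is not reducible to this per-pixel formulation.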