SOTAVerified

Image Comprehension

Papers

Showing 1–25 of 49 papers

Title | Status | Hype
Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models | Code | 7
Divot: Diffusion Powers Video Tokenizer for Comprehension and Generation | Code | 2
MMGenBench: Evaluating the Limits of LMMs from the Text-to-Image Generation Perspective | Code | 2
StreamingBench: Assessing the Gap for MLLMs to Achieve Streaming Video Understanding | Code | 2
MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models | Code | 2
Enhancing Large Vision Language Models with Self-Training on Image Comprehension | Code | 2
Enhancing Visual-Language Modality Alignment in Large Vision Language Models via Self-Improvement | Code | 2
EarthGPT: A Universal Multi-modal Large Language Model for Multi-sensor Image Comprehension in Remote Sensing Domain | Code | 2
Hierarchical Open-vocabulary Universal Image Segmentation | Code | 2
JourneyDB: A Benchmark for Generative Image Understanding | Code | 2
New Dataset and Methods for Fine-Grained Compositional Referring Expression Comprehension via Specialist-MLLM Collaboration | Code | 1
RSUniVLM: A Unified Vision Language Model for Remote Sensing via Granularity-oriented Mixture of Experts | Code | 1
FineCops-Ref: A new Dataset and Task for Fine-Grained Compositional Referring Expression Comprehension | Code | 1
Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs | Code | 1
RegionBLIP: A Unified Multi-modal Pre-training Framework for Holistic and Regional Comprehension | Code | 1
ArtGPT-4: Towards Artistic-understanding Large Vision-Language Models with Enhanced Adapter | Code | 1
CSVQA: A Chinese Multimodal Benchmark for Evaluating STEM Reasoning Capabilities of VLMs | — | 0
RGB-Th-Bench: A Dense benchmark for Visual-Thermal Understanding of Vision Language Models | — | 0
RAD: Retrieval-Augmented Decision-Making of Meta-Actions with Vision-Language Models in Autonomous Driving | — | 0
CMMCoT: Enhancing Complex Multi-Image Comprehension via Multi-Modal Chain-of-Thought and Memory Augmentation | — | 0
SimpleVQA: Multimodal Factuality Evaluation for Multimodal Large Language Models | — | 0
Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models | — | 0
RRHF-V: Ranking Responses to Mitigate Hallucinations in Multimodal Large Language Models with Human Feedback | Code | 0
EasyRef: Omni-Generalized Group Image Reference for Diffusion Models via Multimodal LLM | — | 0
Survey of different Large Language Model Architectures: Trends, Benchmarks, and Challenges | — | 0
Page 1 of 2

No leaderboard results yet.