SOTAVerified

Image Comprehension

Papers

Showing 1–49 of 49 papers

| Title | Status | Hype |
| --- | --- | --- |
| Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models | Code | 7 |
| Divot: Diffusion Powers Video Tokenizer for Comprehension and Generation | Code | 2 |
| MMGenBench: Evaluating the Limits of LMMs from the Text-to-Image Generation Perspective | Code | 2 |
| StreamingBench: Assessing the Gap for MLLMs to Achieve Streaming Video Understanding | Code | 2 |
| MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models | Code | 2 |
| Enhancing Large Vision Language Models with Self-Training on Image Comprehension | Code | 2 |
| Enhancing Visual-Language Modality Alignment in Large Vision Language Models via Self-Improvement | Code | 2 |
| EarthGPT: A Universal Multi-modal Large Language Model for Multi-sensor Image Comprehension in Remote Sensing Domain | Code | 2 |
| Hierarchical Open-vocabulary Universal Image Segmentation | Code | 2 |
| JourneyDB: A Benchmark for Generative Image Understanding | Code | 2 |
| New Dataset and Methods for Fine-Grained Compositional Referring Expression Comprehension via Specialist-MLLM Collaboration | Code | 1 |
| RSUniVLM: A Unified Vision Language Model for Remote Sensing via Granularity-oriented Mixture of Experts | Code | 1 |
| FineCops-Ref: A new Dataset and Task for Fine-Grained Compositional Referring Expression Comprehension | Code | 1 |
| Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs | Code | 1 |
| RegionBLIP: A Unified Multi-modal Pre-training Framework for Holistic and Regional Comprehension | Code | 1 |
| ArtGPT-4: Towards Artistic-understanding Large Vision-Language Models with Enhanced Adapter | Code | 1 |
| CSVQA: A Chinese Multimodal Benchmark for Evaluating STEM Reasoning Capabilities of VLMs | | 0 |
| RGB-Th-Bench: A Dense benchmark for Visual-Thermal Understanding of Vision Language Models | | 0 |
| RAD: Retrieval-Augmented Decision-Making of Meta-Actions with Vision-Language Models in Autonomous Driving | | 0 |
| CMMCoT: Enhancing Complex Multi-Image Comprehension via Multi-Modal Chain-of-Thought and Memory Augmentation | | 0 |
| SimpleVQA: Multimodal Factuality Evaluation for Multimodal Large Language Models | | 0 |
| Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models | | 0 |
| RRHF-V: Ranking Responses to Mitigate Hallucinations in Multimodal Large Language Models with Human Feedback | Code | 0 |
| EasyRef: Omni-Generalized Group Image Reference for Diffusion Models via Multimodal LLM | | 0 |
| Survey of different Large Language Model Architectures: Trends, Benchmarks, and Challenges | | 0 |
| CLIC: Contrastive Learning Framework for Unsupervised Image Complexity Representation | Code | 0 |
| MIRe: Enhancing Multimodal Queries Representation via Fusion-Free Modality Interaction for Multimodal Retrieval | Code | 0 |
| Aquila: A Hierarchically Aligned Visual-Language Model for Enhanced Remote Sensing Image Comprehension | | 0 |
| Teach Multimodal LLMs to Comprehend Electrocardiographic Images | | 0 |
| FTII-Bench: A Comprehensive Multimodal Benchmark for Flow Text with Image Insertion | Code | 0 |
| FullAnno: A Data Engine for Enhancing Image Comprehension of MLLMs | | 0 |
| IW-Bench: Evaluating Large Multimodal Models for Converting Image-to-Web | | 0 |
| Alleviating Hallucination in Large Vision-Language Models with Active Retrieval Augmentation | | 0 |
| InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output | Code | 0 |
| Unveiling Glitches: A Deep Dive into Image Encoding Bugs within CLIP | | 0 |
| VGA: Vision GUI Assistant -- Minimizing Hallucinations through Image-Centric Fine-Tuning | Code | 0 |
| Multiplane Prior Guided Few-Shot Aerial Scene Rendering | | 0 |
| MM-MATH: Advancing Multimodal Math Evaluation with Process Evaluation and Fine-grained Classification | Code | 0 |
| Rec-GPT4V: Multimodal Recommendation with Large Vision-Language Models | | 0 |
| Muffin or Chihuahua? Challenging Multimodal Large Language Models with Multipanel VQA | | 0 |
| SlideAVSR: A Dataset of Paper Explanation Videos for Audio-Visual Speech Recognition | | 0 |
| Hidden flaws behind expert-level accuracy of multimodal GPT-4 vision in medicine | | 0 |
| CoCoT: Contrastive Chain-of-Thought Prompting for Large Multimodal Models with Multiple Image Inputs | Code | 0 |
| GeoLocator: a location-integrated large multimodal model for inferring geo-privacy | | 0 |
| What Large Language Models Bring to Text-rich VQA? | | 0 |
| On the Performance of Multimodal Language Models | | 0 |
| InternLM-XComposer: A Vision-Language Large Model for Advanced Text-image Comprehension and Composition | Code | 0 |
| Towards Practical and Efficient Image-to-Speech Captioning with Vision-Language Pre-training and Multi-modal Tokens | | 0 |
| An End-to-End OCR Text Re-organization Sequence Learning for Rich-text Detail Image Comprehension | | 0 |

No leaderboard results yet.