SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 826–850 of 2177 papers

| Title | Status | Hype |
| --- | --- | --- |
| CAT: Enhancing Multimodal Large Language Model to Answer Questions in Dynamic Audio-Visual Scenarios | Code | 2 |
| Are Language Models Puzzle Prodigies? Algorithmic Puzzles Unveil Serious Challenges in Multimodal Reasoning | Code | 2 |
| Modeling Collaborator: Enabling Subjective Vision Classification With Minimal Human Effort via LLM Tool-Use | — | 0 |
| CLEVR-POC: Reasoning-Intensive Visual Question Answering in Partially Observable Environments | — | 0 |
| Feast Your Eyes: Mixture-of-Resolution Adaptation for Multimodal Large Language Models | Code | 3 |
| MOKA: Open-World Robotic Manipulation through Mark-Based Visual Prompting | — | 0 |
| Enhancing Generalization in Medical Visual Question Answering Tasks via Gradient-Guided Model Perturbation | — | 0 |
| Vision-Language Models for Medical Report Generation and Visual Question Answering: A Review | Code | 3 |
| InfiMM-HD: A Leap Forward in High-Resolution Multimodal Understanding | — | 0 |
| The All-Seeing Project V2: Towards General Relation Comprehension of the Open World | Code | 4 |
| A Cognitive Evaluation Benchmark of Image Reasoning and Description for Large Vision-Language Models | — | 0 |
| ArcSin: Adaptive ranged cosine Similarity injected noise for Language-Driven Visual Tasks | — | 0 |
| VCD: Knowledge Base Guided Visual Commonsense Discovery in Images | — | 0 |
| Read and Think: An Efficient Step-wise Multimodal Language Model for Document Understanding and Reasoning | — | 0 |
| LLM-Assisted Multi-Teacher Continual Learning for Visual Question Answering in Robotic Surgery | Code | 0 |
| RoboCodeX: Multimodal Code Generation for Robotic Behavior Synthesis | — | 0 |
| Bridging the Gap between 2D and 3D Visual Question Answering: A Fusion Approach for 3D VQA | Code | 1 |
| VISREAS: Complex Visual Reasoning with Unanswerable Questions | — | 0 |
| Multimodal Transformer With a Low-Computational-Cost Guarantee | — | 0 |
| CommVQA: Situating Visual Question Answering in Communicative Contexts | Code | 0 |
| Uncertainty-Aware Evaluation for Vision-Language Models | Code | 1 |
| Visual Hallucinations of Multi-modal Large Language Models | Code | 1 |
| TinyLLaVA: A Framework of Small-scale Large Multimodal Models | Code | 4 |
| Cognitive Visual-Language Mapper: Advancing Multimodal Comprehension with Enhanced Visual Knowledge Alignment | Code | 1 |
| Exploring the Frontier of Vision-Language Models: A Survey of Current Methodologies and Future Directions | — | 0 |
Page 34 of 88

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | — | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | — | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | — | Unverified |
| 4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | — | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | — | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | — | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | — | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | — | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | — | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | — | Unverified |