SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1151–1200 of 2177 papers

Title | Status | Hype
CREPE: Coordinate-Aware End-to-End Document Parser | — | 0
Beyond Human Vision: The Role of Large Vision Language Models in Microscope Image Analysis | — | 0
Multi-Page Document Visual Question Answering using Self-Attention Scoring Mechanism | Code | 0
Efficiency in Focus: LayerNorm as a Catalyst for Fine-tuning Medical Visual Language Pre-trained Models | — | 0
How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites | — | 0
Fusion of Domain-Adapted Vision and Language Models for Medical Visual Question Answering | — | 0
Grounded Knowledge-Enhanced Medical VLP for Chest X-Ray | — | 0
Wiki-LLaVA: Hierarchical Retrieval-Augmented Generation for Multimodal LLMs | — | 0
Self-Bootstrapped Visual-Language Model for Knowledge Selection and Question Answering | Code | 0
WangLab at MEDIQA-M3G 2024: Multimodal Medical Answer Generation using Large Language Models | — | 0
Lost in Space: Probing Fine-grained Spatial Understanding in Vision and Language Resamplers | Code | 0
Exploring Diverse Methods in Visual Question Answering | — | 0
PDF-MVQA: A Dataset for Multimodal Information Retrieval in PDF-based Visual Question Answering | — | 0
Look Before You Decide: Prompting Active Deduction of MLLMs for Assumptive Reasoning | — | 0
TextSquare: Scaling up Text-Centric Visual Instruction Tuning | — | 0
MedThink: Explaining Medical Visual Question Answering via Multimodal Decision-Making Rationale | — | 0
Consistency and Uncertainty: Identifying Unreliable Responses From Black-Box Vision-Language Models for Selective Visual Question Answering | — | 0
ViTextVQA: A Large-Scale Visual Question Answering Dataset for Evaluating Vietnamese Text Comprehension in Images | Code | 0
Find The Gap: Knowledge Base Reasoning For Visual Question Answering | — | 0
HOI-Ref: Hand-Object Interaction Referral in Egocentric Vision | — | 0
Bridging Vision and Language Spaces with Assignment Prediction | Code | 0
Language Models Meet Anomaly Detection for Better Interpretability and Generalizability | Code | 0
Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs | Code | 0
InternLM-XComposer2-4KHD: A Pioneering Large Vision-Language Model Handling Resolutions from 336 Pixels to 4K HD | Code | 0
OmniFusion Technical Report | Code | 0
HAMMR: HierArchical MultiModal React agents for generic VQA | — | 0
Self-Training Large Language Models for Improved Visual Program Synthesis With Visual Reinforcement | — | 0
Soft-Prompting with Graph-of-Thought for Multi-modal Representation Learning | Code | 0
Joint Visual and Text Prompting for Improved Object-Centric Perception with Multimodal Large Language Models | Code | 0
BuDDIE: A Business Document Dataset for Multi-task Information Extraction | — | 0
TinyVQA: Compact Multimodal Deep Neural Network for Visual Question Answering on Resource-Constrained Devices | — | 0
Enhancing Human-Computer Interaction in Chest X-ray Analysis using Vision and Language Model with Eye Gaze Patterns | — | 0
Detect2Interact: Localizing Object Key Field in Visual Question Answering (VQA) with LLMs | — | 0
Learning by Correction: Efficient Tuning Task for Zero-Shot Generative Vision-Language Reasoning | Code | 0
Design as Desired: Utilizing Visual Question Answering for Multimodal Pre-training | Code | 0
Uncovering Bias in Large Vision-Language Models with Counterfactuals | — | 0
A Gaze-grounded Visual Question Answering Dataset for Clarifying Ambiguous Japanese Questions | — | 0
Visual Hallucination: Definition, Quantification, and Prescriptive Remediations | — | 0
Intrinsic Subgraph Generation for Interpretable Graph based Visual Question Answering | Code | 0
Synthesize Step-by-Step: Tools, Templates and LLMs as Data Generators for Reasoning-Based Chart VQA | — | 0
PropTest: Automatic Property Testing for Improved Visual Programming | — | 0
Surgical-LVLM: Learning to Adapt Large Vision-Language Model for Grounded Visual Question Answering in Robotic Surgery | — | 0
MyVLM: Personalizing VLMs for User-Specific Queries | — | 0
VL-Mamba: Exploring State Space Models for Multimodal Learning | — | 0
Improved Baselines for Data-efficient Perceptual Augmentation of LLMs | — | 0
As Firm As Their Foundations: Can open-sourced foundation models be used to create adversarial examples for downstream tasks? | — | 0
WoLF: Wide-scope Large Language Model Framework for CXR Understanding | — | 0
FlexCap: Describe Anything in Images in Controllable Detail | — | 0
Can LLMs Generate Human-Like Wayfinding Instructions? Towards Platform-Agnostic Embodied Instruction Synthesis | — | 0
SpatialPIN: Enhancing Spatial Reasoning Capabilities of Vision-Language Models through Prompting and Interacting 3D Priors | — | 0
Page 24 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | — | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | — | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | — | Unverified
4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | — | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | — | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | — | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | — | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | — | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | — | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | — | Unverified