SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 701–750 of 2177 papers

| Title | Status | Hype |
| --- | --- | --- |
| SearchLVLMs: A Plug-and-Play Framework for Augmenting Large Vision-Language Models by Searching Up-to-Date Internet Knowledge | — | 0 |
| Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models | Code | 2 |
| Calibrated Self-Rewarding Vision Language Models | Code | 2 |
| PitVQA: Image-grounded Text Embedding LLM for Visual Question Answering in Pituitary Surgery | Code | 1 |
| Image-of-Thought Prompting for Visual Reasoning Refinement in Multimodal Large Language Models | — | 0 |
| Dataset and Benchmark for Urdu Natural Scenes Text Detection, Recognition and Visual Question Answering | Code | 0 |
| MTVQA: Benchmarking Multilingual Text-Centric Visual Question Answering | Code | 2 |
| Imp: Highly Capable Large Multimodal Models for Mobile Devices | Code | 2 |
| Inquire, Interact, and Integrate: A Proactive Agent Collaborative Framework for Zero-Shot Multimodal Medical Reasoning | — | 0 |
| Uni-MoE: Scaling Unified Multimodal LLMs with Mixture of Experts | Code | 5 |
| EyeFound: A Multimodal Generalist Foundation Model for Ophthalmic Imaging | — | 0 |
| StackOverflowVQA: Stack Overflow Visual Question Answering Dataset | — | 0 |
| Efficient Multimodal Large Language Models: A Survey | Code | 3 |
| UniRAG: Universal Retrieval Augmentation for Large Vision Language Models | Code | 1 |
| Chameleon: Mixed-Modal Early-Fusion Foundation Models | Code | 7 |
| Xmodel-VLM: A Simple Baseline for Multimodal Vision Language Model | Code | 2 |
| CLIP-Powered TASS: Target-Aware Single-Stream Network for Audio-Visual Question Answering | — | 0 |
| Realizing Visual Question Answering for Education: GPT-4V as a Multimodal AI | — | 0 |
| Federated Document Visual Question Answering: A Pilot Study | Code | 0 |
| CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts | Code | 2 |
| Is the House Ready For Sleeptime? Generating and Evaluating Situational Queries for Embodied Question Answering | — | 0 |
| VSA4VQA: Scaling a Vector Symbolic Architecture to Visual Question Answering on Natural Images | — | 0 |
| Advancing Multimodal Medical Capabilities of Gemini | — | 0 |
| Language-Image Models with 3D Understanding | — | 0 |
| OmniDrive: A Holistic Vision-Language Dataset for Autonomous Driving with Counterfactual Reasoning | Code | 4 |
| Understanding Figurative Meaning through Explainable Visual Entailment | Code | 1 |
| Beyond Human Vision: The Role of Large Vision Language Models in Microscope Image Analysis | — | 0 |
| CREPE: Coordinate-Aware End-to-End Document Parser | — | 0 |
| Enhanced Textual Feature Extraction for Visual Question Answering: A Simple Convolutional Approach | — | 0 |
| TableVQA-Bench: A Visual Question Answering Benchmark on Multiple Table Domains | Code | 1 |
| Multi-Page Document Visual Question Answering using Self-Attention Scoring Mechanism | Code | 0 |
| ViOCRVQA: Novel Benchmark Dataset and Vision Reader for Visual Question Answering by Understanding Vietnamese Text in Images | Code | 1 |
| List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs | Code | 2 |
| How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites | — | 0 |
| Efficiency in Focus: LayerNorm as a Catalyst for Fine-tuning Medical Visual Language Pre-trained Models | — | 0 |
| Fusion of Domain-Adapted Vision and Language Models for Medical Visual Question Answering | — | 0 |
| Wiki-LLaVA: Hierarchical Retrieval-Augmented Generation for Multimodal LLMs | — | 0 |
| GSCo: Towards Generalizable AI in Medicine via Generalist-Specialist Collaboration | Code | 2 |
| Grounded Knowledge-Enhanced Medical VLP for Chest X-Ray | — | 0 |
| WangLab at MEDIQA-M3G 2024: Multimodal Medical Answer Generation using Large Language Models | — | 0 |
| Self-Bootstrapped Visual-Language Model for Knowledge Selection and Question Answering | Code | 0 |
| Lost in Space: Probing Fine-grained Spatial Understanding in Vision and Language Resamplers | Code | 0 |
| Exploring Diverse Methods in Visual Question Answering | — | 0 |
| LaPA: Latent Prompt Assist Model For Medical Visual Question Answering | Code | 1 |
| PDF-MVQA: A Dataset for Multimodal Information Retrieval in PDF-based Visual Question Answering | — | 0 |
| Look Before You Decide: Prompting Active Deduction of MLLMs for Assumptive Reasoning | — | 0 |
| TextSquare: Scaling up Text-Centric Visual Instruction Tuning | — | 0 |
| Look, Listen, and Answer: Overcoming Biases for Audio-Visual Question Answering | Code | 1 |
| MedThink: Explaining Medical Visual Question Answering via Multimodal Decision-Making Rationale | — | 0 |
| Self-Supervised Visual Preference Alignment | Code | 2 |
Page 15 of 44

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | — | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | — | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | — | Unverified |
| 4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | — | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | — | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | — | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | — | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | — | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | — | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | — | Unverified |