SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1751–1800 of 2177 papers

Title | Status | Hype
Synthesize Step-by-Step: Tools, Templates and LLMs as Data Generators for Reasoning-Based Chart VQA | — | 0
VLMAE: Vision-Language Masked Autoencoder | — | 0
VL-Mamba: Exploring State Space Models for Multimodal Learning | — | 0
T2I-FactualBench: Benchmarking the Factuality of Text-to-Image Models with Knowledge-Intensive Concepts | — | 0
VLM-Assisted Continual Learning for Visual Question Answering in Self-Driving | — | 0
Benchmarking Vision Language Models for Cultural Understanding | — | 0
Benchmarking Large Multimodal Models for Ophthalmic Visual Question Answering with OphthalWeChat | — | 0
VLR-Bench: Multilingual Benchmark Dataset for Vision-Language Retrieval Augmented Generation | — | 0
EVJVQA Challenge: Multilingual Visual Question Answering | — | 0
Advancing Medical Imaging with Language Models: A Journey from N-grams to ChatGPT | — | 0
Tackling VQA with Pretrained Foundation Models without Further Training | — | 0
@Bench: Benchmarking Vision-Language Models for Human-centered Assistive Technology | — | 0
Take A Step Back: Rethinking the Two Stages in Visual Reasoning | — | 0
Taking a HINT: Leveraging Explanations to Make Vision and Language Models More Grounded | — | 0
Talking to the brain: Using Large Language Models as Proxies to Model Brain Semantic Representation | — | 0
A dataset of clinically generated visual questions and answers about radiology images | — | 0
VolDoGer: LLM-assisted Datasets for Domain Generalization in Vision-Language Tasks | — | 0
Task-driven Visual Saliency and Attention-based Visual Question Answering | — | 0
Task Formulation Matters When Learning Continuously: A Case Study in Visual Question Answering | — | 0
Adaptive Token Boundaries: Integrating Human Chunking Mechanisms into Multimodal LLMs | — | 0
Task-Oriented Feature Compression for Multimodal Understanding via Device-Edge Co-Inference | — | 0
Being Negative but Constructively: Lessons Learnt from Creating Better Visual Question Answering Datasets | — | 0
Task-Oriented Multi-User Semantic Communications | — | 0
Task-Oriented Semantic Communication in Large Multimodal Models-based Vehicle Networks | — | 0
Task Progressive Curriculum Learning for Robust Visual Question Answering | — | 0
TA-Student VQA: Multi-Agents Training by Self-Questioning | — | 0
AdaDARE-gamma: Balancing Stability and Plasticity in Multi-modal LLMs through Efficient Adaptation | — | 0
Bayesian Attention Belief Networks | — | 0
3D Concept Learning and Reasoning from Multi-View Images | — | 0
BARTPhoBEiT: Pre-trained Sequence-to-Sequence and Image Transformers Models for Vietnamese Visual Question Answering | — | 0
Tell-and-Answer: Towards Explainable Visual Question Answering using Attributes and Captions | — | 0
Tell Me the Evidence? Dual Visual-Linguistic Interaction for Answer Grounding | — | 0
VQA-Aid: Visual Question Answering for Post-Disaster Damage Assessment and Analysis | — | 0
Barriers in Integrating Medical Visual Question Answering into Radiology Workflows: A Scoping Review and Clinicians' Insights | — | 0
Test-Time Adaptation for Visual Document Understanding | — | 0
Text-Aware Dual Routing Network for Visual Question Answering | — | 0
Barking Up The Syntactic Tree: Enhancing VLM Training with Syntactic Losses | — | 0
Text-Conditioned Generative Model of 3D Strand-based Human Hairstyles | — | 0
Text-Guided Coarse-to-Fine Fusion Network for Robust Remote Sensing Visual Question Answering | — | 0
Balancing Performance and Efficiency in Zero-shot Robotic Navigation | — | 0
BACON: Improving Clarity of Image Captions via Bag-of-Concept Graphs | — | 0
TextMatch: Enhancing Image-Text Consistency Through Multimodal Optimization | — | 0
DuReader_vis: A Chinese Dataset for Open-domain Document Visual Question Answering | — | 0
TextSquare: Scaling up Text-Centric Visual Instruction Tuning | — | 0
Textually Enriched Neural Module Networks for Visual Question Answering | — | 0
TextVidBench: A Benchmark for Long Video Scene Text Understanding | — | 0
VQABQ: Visual Question Answering by Basic Questions | — | 0
Backdooring Vision-Language Models with Out-Of-Distribution Data | — | 0
A Visual Question Answering Method for SAR Ship: Breaking the Requirement for Multimodal Dataset Construction and Model Fine-Tuning | — | 0
The Color of the Cat is Gray: 1 Million Full-Sentences Visual Question Answering (FSVQA) | — | 0
Page 36 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | — | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | — | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | — | Unverified
4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | — | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | — | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | — | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | — | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | — | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | — | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | — | Unverified