SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 526–550 of 2177 papers

| Title | Status | Hype |
|---|---|---|
| CausalChaos! Dataset for Comprehensive Causal Action Question Answering Over Longer Causal Chains Grounded in Dynamic Visual Scenes | Code | 1 |
| Gated Hierarchical Attention for Image Captioning | Code | 1 |
| A Hitchhikers Guide to Fine-Grained Face Forgery Detection Using Common Sense Reasoning | Code | 1 |
| MultiChartQA: Benchmarking Vision-Language Models on Multi-Chart Problems | Code | 1 |
| Can We Talk Models Into Seeing the World Differently? | Code | 1 |
| mPLUG: Effective and Efficient Vision-Language Learning by Cross-modal Skip-connections | Code | 1 |
| MMXU: A Multi-Modal and Multi-X-ray Understanding Dataset for Disease Progression | Code | 1 |
| Found a Reason for me? Weakly-supervised Grounded Visual Question Answering using Capsules | Code | 1 |
| Are Vision Language Models Ready for Clinical Diagnosis? A 3D Medical Benchmark for Tumor-centric Visual Question Answering | Code | 1 |
| 3D-Aware Visual Question Answering about Parts, Poses and Occlusions | Code | 1 |
| Florence: A New Foundation Model for Computer Vision | Code | 1 |
| Modular Visual Question Answering via Code Generation | Code | 1 |
| Fine-grained Image Classification and Retrieval by Combining Visual and Locally Pooled Textual Features | Code | 1 |
| EgoThink: Evaluating First-Person Perspective Thinking Capability of Vision-Language Models | Code | 1 |
| Fine-Grained Evaluation of Large Vision-Language Models in Autonomous Driving | Code | 1 |
| Faithful Multimodal Explanation for Visual Question Answering | Code | 1 |
| EHRXQA: A Multi-Modal Question Answering Dataset for Electronic Health Records with Chest X-ray Images | Code | 1 |
| MMFT-BERT: Multimodal Fusion Transformer with BERT Encodings for Visual Question Answering | Code | 1 |
| Can Pre-trained Vision and Language Models Answer Visual Information-Seeking Questions? | Code | 1 |
| FaceBench: A Multi-View Multi-Level Facial Attribute VQA Dataset for Benchmarking Face Perception MLLMs | Code | 1 |
| Expressive Scene Graph Generation Using Commonsense Knowledge Infusion for Visual Understanding and Reasoning | Code | 1 |
| Explaining Autonomous Driving Actions with Visual Question Answering | Code | 1 |
| Expert Knowledge-Aware Image Difference Graph Representation Learning for Difference-Aware Medical Visual Question Answering | Code | 1 |
| Foundation Model is Efficient Multimodal Multitask Model Selector | Code | 1 |
| GPT-4V-AD: Exploring Grounding Potential of VQA-oriented GPT-4V for Zero-shot Anomaly Detection | Code | 1 |
Page 22 of 88

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified |
| 4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified |