SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 551–600 of 2177 papers

| Title | Status | Hype |
|---|---|---|
| Can We Talk Models Into Seeing the World Differently? | Code | 1 |
| Language Quantized AutoEncoders: Towards Unsupervised Text-Image Alignment | Code | 1 |
| LaPA: Latent Prompt Assist Model For Medical Visual Question Answering | Code | 1 |
| Large-Scale Adversarial Training for Vision-and-Language Representation Learning | Code | 1 |
| Large-scale Pretraining for Visual Dialog: A Simple State-of-the-Art Baseline | Code | 1 |
| LaTr: Layout-Aware Transformer for Scene-Text VQA | Code | 1 |
| Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | Code | 1 |
| Learning Situation Hyper-Graphs for Video Question Answering | Code | 1 |
| INF-LLaVA: Dual-perspective Perception for High-Resolution Multimodal Large Language Model | Code | 1 |
| EHRXQA: A Multi-Modal Question Answering Dataset for Electronic Health Records with Chest X-ray Images | Code | 1 |
| Less is More: A Simple yet Effective Token Reduction Method for Efficient Multi-modal LLMs | Code | 1 |
| InfMLLM: A Unified Framework for Visual-Language Tasks | Code | 1 |
| InstructionGPT-4: A 200-Instruction Paradigm for Fine-Tuning MiniGPT-4 | Code | 1 |
| Can Pre-trained Vision and Language Models Answer Visual Information-Seeking Questions? | Code | 1 |
| IMPACT: A Large-scale Integrated Multimodal Patent Analysis and Creation Dataset for Design Patents | Code | 1 |
| IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models | Code | 1 |
| Improving Selective Visual Question Answering by Learning from Your Peers | Code | 1 |
| IconQA: A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning | Code | 1 |
| Lever LM: Configuring In-Context Sequence to Lever Large Vision Language Models | Code | 1 |
| IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages | Code | 1 |
| GPT-4V-AD: Exploring Grounding Potential of VQA-oriented GPT-4V for Zero-shot Anomaly Detection | Code | 1 |
| I Can't Believe There's No Images! Learning Visual Tasks Using only Language Supervision | Code | 1 |
| ERNIE-Layout: Layout Knowledge Enhanced Pre-training for Visually-rich Document Understanding | Code | 1 |
| Localized Questions in Medical Visual Question Answering | Code | 1 |
| In Defense of Grid Features for Visual Question Answering | Code | 1 |
| How Much Can CLIP Benefit Vision-and-Language Tasks? | Code | 1 |
| Maintaining Reasoning Consistency in Compositional Visual Question Answering | Code | 1 |
| Making Large Language Models Better Data Creators | Code | 1 |
| FaceBench: A Multi-View Multi-Level Facial Attribute VQA Dataset for Benchmarking Face Perception MLLMs | Code | 1 |
| Evaluating Image Hallucination in Text-to-Image Generation with Question-Answering | Code | 1 |
| How Do Multimodal Large Language Models Handle Complex Multimodal Reasoning? Placing Them in An Extensible Escape Game | Code | 1 |
| How to Configure Good In-Context Sequence for Visual Question Answering | Code | 1 |
| Hierarchical multimodal transformers for Multi-Page DocVQA | Code | 1 |
| Faithful Multimodal Explanation for Visual Question Answering | Code | 1 |
| HaloQuest: A Visual Hallucination Dataset for Advancing Multimodal Reasoning | Code | 1 |
| Hierarchical Question-Image Co-Attention for Visual Question Answering | Code | 1 |
| Hypergraph Transformer: Weakly-supervised Multi-hop Reasoning for Knowledge-based Visual Question Answering | Code | 1 |
| EarthVQA: Towards Queryable Earth via Relational Reasoning-Based Remote Sensing Visual Question Answering | Code | 1 |
| HAAR: Text-Conditioned Generative Model of 3D Strand-based Human Hairstyles | Code | 1 |
| GRIT: General Robust Image Task Benchmark | Code | 1 |
| CaMML: Context-Aware Multimodal Learner for Large Models | Code | 1 |
| Hallucination Augmented Contrastive Learning for Multimodal Large Language Model | Code | 1 |
| HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models | Code | 1 |
| A-OKVQA: A Benchmark for Visual Question Answering using World Knowledge | Code | 1 |
| Dynamic Multimodal Evaluation with Flexible Complexity by Vision-Language Bootstrapping | Code | 1 |
| Graphhopper: Multi-Hop Scene Graph Reasoning for Visual Question Answering | Code | 1 |
| Calibrating Concepts and Operations: Towards Symbolic Reasoning on Real Images | Code | 1 |
| MemeCap: A Dataset for Captioning and Interpreting Memes | Code | 1 |
| GraphVQA: Language-Guided Graph Neural Networks for Graph-based Visual Question Answering | Code | 1 |
| Graph Optimal Transport for Cross-Domain Alignment | Code | 1 |
Page 12 of 44

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified |
| 4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified |