SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 201-250 of 2177 papers

Title | Status | Hype
TeCH: Text-guided Reconstruction of Lifelike Clothed Humans | Code | 2
TEOChat: A Large Vision-Language Assistant for Temporal Earth Observation Data | Code | 2
LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models | Code | 2
Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models | Code | 2
LLaVA-Plus: Learning to Use Tools for Creating Multimodal Agents | Code | 2
LVLM-eHub: A Comprehensive Evaluation Benchmark for Large Vision-Language Models | Code | 2
Are Language Models Puzzle Prodigies? Algorithmic Puzzles Unveil Serious Challenges in Multimodal Reasoning | Code | 2
LinVT: Empower Your Image-level Large Language Model to Understand Videos | Code | 2
Patho-R1: A Multimodal Reinforcement Learning-Based Pathology Expert Reasoner | Code | 2
TroL: Traversal of Layers for Large Language and Vision Models | Code | 2
List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs | Code | 2
LingoQA: Visual Question Answering for Autonomous Driving | Code | 2
Large Continual Instruction Assistant | Code | 2
EyeCLIP: A visual-language foundation model for multi-modal ophthalmic image analysis | Code | 2
Calibrated Self-Rewarding Vision Language Models | Code | 2
JourneyDB: A Benchmark for Generative Image Understanding | Code | 2
Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Context Sparsification | Code | 2
Keeping Yourself is Important in Downstream Tuning Multimodal Large Language Model | Code | 2
DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception | Code | 2
Investigating Prompting Techniques for Zero- and Few-Shot Visual Question Answering | Code | 1
AntifakePrompt: Prompt-Tuned Vision-Language Models are Fake Image Detectors | Code | 1
A Comprehensive Evaluation of GPT-4V on Knowledge-Intensive Visual Question Answering | Code | 1
Interpreting Chest X-rays Like a Radiologist: A Benchmark with Clinical Reasoning | Code | 1
InstructionGPT-4: A 200-Instruction Paradigm for Fine-Tuning MiniGPT-4 | Code | 1
Inst-IT: Boosting Multimodal Instance Understanding via Explicit Visual Prompt Instruction Tuning | Code | 1
Instruction-Guided Visual Masking | Code | 1
Boosting Audio Visual Question Answering via Key Semantic-Aware Cues | Code | 1
A Comparison of Pre-trained Vision-and-Language Models for Multimodal Representation Learning across Medical Images and Reports | Code | 1
Don't Take the Easy Way Out: Ensemble Based Methods for Avoiding Known Dataset Biases | Code | 1
InfMLLM: A Unified Framework for Visual-Language Tasks | Code | 1
InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks | Code | 1
Distilled Dual-Encoder Model for Vision-Language Understanding | Code | 1
Improving Selective Visual Question Answering by Learning from Your Peers | Code | 1
IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages | Code | 1
IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models | Code | 1
Disentangling 3D Prototypical Networks For Few-Shot Concept Learning | Code | 1
IMPACT: A Large-scale Integrated Multimodal Patent Analysis and Creation Dataset for Design Patents | Code | 1
In Defense of Grid Features for Visual Question Answering | Code | 1
Bilateral Cross-Modality Graph Matching Attention for Feature Fusion in Visual Question Answering | Code | 1
DocVQA: A Dataset for VQA on Document Images | Code | 1
I2I: Initializing Adapters with Improvised Knowledge | Code | 1
Answer Mining from a Pool of Images: Towards Retrieval-Based Visual Question Answering | Code | 1
I Can't Believe There's No Images! Learning Visual Tasks Using only Language Supervision | Code | 1
Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering | Code | 1
Does Vision-and-Language Pretraining Improve Lexical Grounding? | Code | 1
DeVLBert: Learning Deconfounded Visio-Linguistic Representations | Code | 1
Hypergraph Transformer: Weakly-supervised Multi-hop Reasoning for Knowledge-based Visual Question Answering | Code | 1
HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models | Code | 1
Lever LM: Configuring In-Context Sequence to Lever Large Vision Language Models | Code | 1
How Much Can CLIP Benefit Vision-and-Language Tasks? | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | - | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | - | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | - | Unverified
4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | - | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | - | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | - | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | - | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | - | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | - | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | - | Unverified