SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1051–1100 of 2177 papers

| Title | Status | Hype |
| --- | --- | --- |
| VDC: Versatile Data Cleanser based on Visual-Linguistic Inconsistency by Multimodal Large Language Models | Code | 1 |
| Tackling VQA with Pretrained Foundation Models without Further Training | — | 0 |
| Sentence Attention Blocks for Answer Grounding | — | 0 |
| DreamLLM: Synergistic Multimodal Comprehension and Creation | Code | 2 |
| Visual Question Answering in the Medical Domain | — | 0 |
| KOSMOS-2.5: A Multimodal Literate Model | — | 0 |
| An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models | Code | 6 |
| Syntax Tree Constrained Graph Network for Visual Question Answering | — | 0 |
| D3: Data Diversity Design for Systematic Generalization in Visual Question Answering | Code | 0 |
| TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild | Code | 1 |
| Rank2Tell: A Multimodal Driving Dataset for Joint Importance Ranking and Reasoning | — | 0 |
| Interpretable Visual Question Answering via Reasoning Supervision | — | 0 |
| Evaluation and Enhancement of Semantic Grounding in Large Vision-Language Models | — | 0 |
| A Survey on Interpretable Cross-modal Reasoning | Code | 1 |
| Physically Grounded Vision-Language Models for Robotic Manipulation | — | 0 |
| Towards Addressing the Misalignment of Object Proposal Evaluation for Vision-Language Tasks via Semantic Grounding | Code | 0 |
| Expanding Frozen Vision-Language Models without Retraining: Towards Improved Robot Perception | — | 0 |
| Separate and Locate: Rethink the Text in Text-based Visual Question Answering | Code | 0 |
| UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory | Code | 1 |
| Towards Vision-Language Mechanistic Interpretability: A Causal Tracing Tool for BLIP | Code | 1 |
| DLIP: Distilling Language-Image Pre-training | — | 0 |
| Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | Code | 5 |
| InstructionGPT-4: A 200-Instruction Paradigm for Fine-Tuning MiniGPT-4 | Code | 1 |
| EVE: Efficient Vision-Language Pre-training with Masked Prediction and Modality-Aware MoE | — | 0 |
| VQA Therapy: Exploring Answer Differences by Visually Grounding Answers | Code | 0 |
| SCULPT: Shape-Conditioned Unpaired Learning of Pose-dependent Clothed and Textured Human Meshes | — | 0 |
| Generic Attention-model Explainability by Weighted Relevance Accumulation | — | 0 |
| StableLLaVA: Enhanced Visual Instruction Tuning with Synthesized Image-Dialogue Data | Code | 1 |
| BLIVA: A Simple Multimodal LLM for Better Handling of Text-Rich Visual Questions | Code | 2 |
| Towards Grounded Visual Spatial Reasoning in Multi-Modal Vision Language Models | — | 0 |
| Uni-NLX: Unifying Textual Explanations for Vision and Vision-Language Tasks | Code | 1 |
| Learning the meanings of function words from grounded language using a visual question answering model | Code | 0 |
| Pro-Cap: Leveraging a Frozen Vision-Language Model for Hateful Meme Detection | Code | 1 |
| TeCH: Text-guided Reconstruction of Lifelike Clothed Humans | Code | 2 |
| Foundation Model is Efficient Multimodal Multitask Model Selector | Code | 1 |
| Detecting and Preventing Hallucinations in Large Vision Language Models | Code | 1 |
| Progressive Spatio-temporal Perception for Audio-Visual Question Answering | Code | 1 |
| TIJO: Trigger Inversion with Joint Optimization for Defending Multimodal Backdoored Models | Code | 0 |
| SciGraphQA: A Large-Scale Synthetic Multi-Turn Question-Answering Dataset for Scientific Graphs | Code | 1 |
| Towards Generalist Foundation Model for Radiology by Leveraging Web-scale 2D&3D Medical Data | Code | 2 |
| RealCQA: Scientific Chart Question Answering as a Test-bed for First-Order Logic | Code | 0 |
| OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models | Code | 4 |
| ELIXR: Towards a general purpose X-ray artificial intelligence system through alignment of large language models and radiology vision encoders | — | 0 |
| Context-VQA: Towards Context-Aware and Purposeful Visual Question Answering | Code | 0 |
| RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | Code | 2 |
| BARTPhoBEiT: Pre-trained Sequence-to-Sequence and Image Transformers Models for Vietnamese Visual Question Answering | — | 0 |
| Med-Flamingo: a Multimodal Medical Few-shot Learner | Code | 2 |
| LOIS: Looking Out of Instance Semantics for Visual Question Answering | — | 0 |
| Expert Knowledge-Aware Image Difference Graph Representation Learning for Difference-Aware Medical Visual Question Answering | Code | 1 |
| Robust Visual Question Answering: Datasets, Methods, and Future Challenges | — | 0 |
Page 22 of 44

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | — | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | — | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | — | Unverified |
| 4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | — | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | — | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | — | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | — | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | — | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | — | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | — | Unverified |