SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1076–1100 of 2177 papers

Title | Status | Hype
SCULPT: Shape-Conditioned Unpaired Learning of Pose-dependent Clothed and Textured Human Meshes | — | 0
Generic Attention-model Explainability by Weighted Relevance Accumulation | — | 0
StableLLaVA: Enhanced Visual Instruction Tuning with Synthesized Image-Dialogue Data | Code | 1
BLIVA: A Simple Multimodal LLM for Better Handling of Text-Rich Visual Questions | Code | 2
Towards Grounded Visual Spatial Reasoning in Multi-Modal Vision Language Models | — | 0
Uni-NLX: Unifying Textual Explanations for Vision and Vision-Language Tasks | Code | 1
Learning the meanings of function words from grounded language using a visual question answering model | Code | 0
Pro-Cap: Leveraging a Frozen Vision-Language Model for Hateful Meme Detection | Code | 1
TeCH: Text-guided Reconstruction of Lifelike Clothed Humans | Code | 2
Foundation Model is Efficient Multimodal Multitask Model Selector | Code | 1
Detecting and Preventing Hallucinations in Large Vision Language Models | Code | 1
Progressive Spatio-temporal Perception for Audio-Visual Question Answering | Code | 1
TIJO: Trigger Inversion with Joint Optimization for Defending Multimodal Backdoored Models | Code | 0
SciGraphQA: A Large-Scale Synthetic Multi-Turn Question-Answering Dataset for Scientific Graphs | Code | 1
Towards Generalist Foundation Model for Radiology by Leveraging Web-scale 2D&3D Medical Data | Code | 2
RealCQA: Scientific Chart Question Answering as a Test-bed for First-Order Logic | Code | 0
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models | Code | 4
ELIXR: Towards a general purpose X-ray artificial intelligence system through alignment of large language models and radiology vision encoders | — | 0
Context-VQA: Towards Context-Aware and Purposeful Visual Question Answering | Code | 0
RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | Code | 2
BARTPhoBEiT: Pre-trained Sequence-to-Sequence and Image Transformers Models for Vietnamese Visual Question Answering | — | 0
Med-Flamingo: a Multimodal Medical Few-shot Learner | Code | 2
LOIS: Looking Out of Instance Semantics for Visual Question Answering | — | 0
Expert Knowledge-Aware Image Difference Graph Representation Learning for Difference-Aware Medical Visual Question Answering | Code | 1
Robust Visual Question Answering: Datasets, Methods, and Future Challenges | — | 0
Page 44 of 88

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | — | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | — | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | — | Unverified
4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | — | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | — | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | — | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | — | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | — | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | — | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | — | Unverified