SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1101–1150 of 2177 papers

| Title | Status | Hype |
|---|---|---|
| Explaining Autonomous Driving Actions with Visual Question Answering | Code | 1 |
| A reinforcement learning approach for VQA validation: an application to diabetic macular edema grading | | 0 |
| Generative Visual Question Answering | | 0 |
| Towards a performance analysis on pre-trained Visual Question Answering models for autonomous driving | Code | 0 |
| Let's ViCE! Mimicking Human Cognitive Behavior in Image Generation Evaluation | | 0 |
| PAT: Parallel Attention Transformer for Visual Question Answering in Vietnamese | | 0 |
| A scoping review on multimodal deep learning in biomedical images and texts | | 0 |
| MMBench: Is Your Multi-modal Model an All-around Player? | Code | 5 |
| Rad-ReStruct: A Novel VQA Benchmark and Method for Structured Radiology Reporting | Code | 1 |
| CAT-ViL: Co-Attention Gated Vision-Language Embedding for Visual Question Localized-Answering in Robotic Surgery | Code | 1 |
| Emu: Generative Pretraining in Multimodality | Code | 3 |
| Self-Adaptive Sampling for Efficient Video Question-Answering on Image–Text Models | Code | 1 |
| GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest | Code | 2 |
| Structure Guided Multi-modal Pre-trained Transformer for Knowledge Graph Reasoning | | 0 |
| UIT-Saviors at MEDVQA-GI 2023: Improving Multimodal Learning with Image Enhancement for Gastrointestinal Visual Question Answering | | 0 |
| JourneyDB: A Benchmark for Generative Image Understanding | Code | 2 |
| Localized Questions in Medical Visual Question Answering | Code | 1 |
| Multimodal Prompt Retrieval for Generative Visual Question Answering | Code | 1 |
| Answer Mining from a Pool of Images: Towards Retrieval-Based Visual Question Answering | Code | 1 |
| Pre-Training Multi-Modal Dense Retrievers for Outside-Knowledge Visual Question Answering | Code | 0 |
| Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | Code | 2 |
| Kosmos-2: Grounding Multimodal Large Language Models to the World | Code | 1 |
| Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Code | 2 |
| Visual Question Answering in Remote Sensing with Cross-Attention and Multimodal Information Bottleneck | | 0 |
| Switch-BERT: Learning to Model Multimodal Interactions by Switching Attention and Input | | 0 |
| TaCA: Upgrading Your Visual Foundation Model with Task-agnostic Compatible Adapter | Code | 0 |
| Investigating Prompting Techniques for Zero- and Few-Shot Visual Question Answering | Code | 1 |
| Encyclopedic VQA: Visual questions about detailed properties of fine-grained categories | | 0 |
| LVLM-eHub: A Comprehensive Evaluation Benchmark for Large Vision-Language Models | Code | 2 |
| Improving Selective Visual Question Answering by Learning from Your Peers | Code | 1 |
| Scalable Neural-Probabilistic Answer Set Programming | Code | 1 |
| Visual Question Answering (VQA) on Images with Superimposed Text | | 0 |
| Safeguarding Data in Multimodal AI: A Differentially Private Approach to CLIP Training | Code | 0 |
| AVIS: Autonomous Visual Information Seeking with Large Language Model Agent | | 0 |
| Global and Local Semantic Completion Learning for Vision-Language Pre-training | Code | 1 |
| A Survey of Vision-Language Pre-training from the Lens of Multimodal Machine Translation | | 0 |
| Multi-modal Pre-training for Medical Vision-language Understanding and Generation: An Empirical Study with A New Benchmark | Code | 1 |
| Knowledge Detection by Relevant Question and Image Attributes in Visual Question Answering | | 0 |
| Modular Visual Question Answering via Code Generation | Code | 1 |
| MIMIC-IT: Multi-Modal In-Context Instruction Tuning | Code | 4 |
| Q: How to Specialize Large Vision-Language Models to Data-Scarce VQA Tasks? A: Self-Train on Unlabeled Images! | Code | 1 |
| Diversifying Joint Vision-Language Tokenization Learning | | 0 |
| An Approach to Solving the Abstraction and Reasoning Corpus (ARC) Challenge | Code | 1 |
| Multi-CLIP: Contrastive Vision-Language Pre-training for Question Answering tasks in 3D Scenes | | 0 |
| Revisiting the Role of Language Priors in Vision-Language Models | Code | 1 |
| LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day | Code | 4 |
| Evaluating the Capabilities of Multi-modal Reasoning Models with Synthetic Task Data | | 0 |
| Overcoming Language Bias in Remote Sensing Visual Question Answering via Adversarial Training | | 0 |
| LiT-4-RSVQA: Lightweight Transformer-based Visual Question Answering in Remote Sensing | | 0 |
| Using Visual Cropping to Enhance Fine-Detail Question Answering of BLIP-Family Models | | 0 |
Page 23 of 44

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified |
| 4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified |