SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 351–400 of 2177 papers

| Title | Status | Hype |
|---|---|---|
| Privacy-Aware Document Visual Question Answering | Code | 1 |
| VL-GPT: A Generative Pre-trained Transformer for Vision and Language Understanding and Generation | Code | 1 |
| ViLA: Efficient Video-Language Alignment for Video Question Answering | Code | 1 |
| Hallucination Augmented Contrastive Learning for Multimodal Large Language Model | Code | 1 |
| Genixer: Empowering Multimodal Large Language Models as a Powerful Data Generator | Code | 1 |
| NuScenes-MQA: Integrated Evaluation of Captions and QA for Autonomous Driving Datasets using Markup Annotations | Code | 1 |
| Language-Informed Visual Concept Learning | Code | 1 |
| BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models | Code | 1 |
| Good Questions Help Zero-Shot Image Reasoning | Code | 1 |
| How to Configure Good In-Context Sequence for Visual Question Answering | Code | 1 |
| Recursive Visual Programming | Code | 1 |
| Emergent Open-Vocabulary Semantic Segmentation from Off-the-shelf Vision-Language Models | Code | 1 |
| EgoThink: Evaluating First-Person Perspective Thinking Capability of Vision-Language Models | Code | 1 |
| A Comprehensive Evaluation of GPT-4V on Knowledge-Intensive Visual Question Answering | Code | 1 |
| Volcano: Mitigating Multimodal Hallucination through Self-Feedback Guided Revision | Code | 1 |
| InfMLLM: A Unified Framework for Visual-Language Tasks | Code | 1 |
| GENOME: GenerativE Neuro-symbOlic visual reasoning by growing and reusing ModulEs | Code | 1 |
| GPT-4V-AD: Exploring Grounding Potential of VQA-oriented GPT-4V for Zero-shot Anomaly Detection | Code | 1 |
| Language Guided Visual Question Answering: Elevate Your Multimodal Language Model Using Knowledge-Enriched Prompts | Code | 1 |
| Making Large Language Models Better Data Creators | Code | 1 |
| Multimodal ChatGPT for Medical Applications: an Experimental Study of GPT-4V | Code | 1 |
| EHRXQA: A Multi-Modal Question Answering Dataset for Electronic Health Records with Chest X-ray Images | Code | 1 |
| 3D-Aware Visual Question Answering about Parts, Poses and Occlusions | Code | 1 |
| AntifakePrompt: Prompt-Tuned Vision-Language Models are Fake Image Detectors | Code | 1 |
| Towards Perceiving Small Visual Details in Zero-shot Visual Question Answering with Multimodal LLMs | Code | 1 |
| Rephrase, Augment, Reason: Visual Grounding of Questions for Vision-Language Models | Code | 1 |
| VDC: Versatile Data Cleanser based on Visual-Linguistic Inconsistency by Multimodal Large Language Models | Code | 1 |
| Toloka Visual Question Answering Benchmark | Code | 1 |
| TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild | Code | 1 |
| A Survey on Interpretable Cross-modal Reasoning | Code | 1 |
| UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory | Code | 1 |
| Towards Vision-Language Mechanistic Interpretability: A Causal Tracing Tool for BLIP | Code | 1 |
| InstructionGPT-4: A 200-Instruction Paradigm for Fine-Tuning MiniGPT-4 | Code | 1 |
| StableLLaVA: Enhanced Visual Instruction Tuning with Synthesized Image-Dialogue Data | Code | 1 |
| Uni-NLX: Unifying Textual Explanations for Vision and Vision-Language Tasks | Code | 1 |
| Pro-Cap: Leveraging a Frozen Vision-Language Model for Hateful Meme Detection | Code | 1 |
| Detecting and Preventing Hallucinations in Large Vision Language Models | Code | 1 |
| Foundation Model is Efficient Multimodal Multitask Model Selector | Code | 1 |
| Progressive Spatio-temporal Perception for Audio-Visual Question Answering | Code | 1 |
| SciGraphQA: A Large-Scale Synthetic Multi-Turn Question-Answering Dataset for Scientific Graphs | Code | 1 |
| Expert Knowledge-Aware Image Difference Graph Representation Learning for Difference-Aware Medical Visual Question Answering | Code | 1 |
| Explaining Autonomous Driving Actions with Visual Question Answering | Code | 1 |
| CAT-ViL: Co-Attention Gated Vision-Language Embedding for Visual Question Localized-Answering in Robotic Surgery | Code | 1 |
| Rad-ReStruct: A Novel VQA Benchmark and Method for Structured Radiology Reporting | Code | 1 |
| Self-Adaptive Sampling for Efficient Video Question-Answering on Image–Text Models | Code | 1 |
| Localized Questions in Medical Visual Question Answering | Code | 1 |
| Multimodal Prompt Retrieval for Generative Visual Question Answering | Code | 1 |
| Answer Mining from a Pool of Images: Towards Retrieval-Based Visual Question Answering | Code | 1 |
| Kosmos-2: Grounding Multimodal Large Language Models to the World | Code | 1 |
| Investigating Prompting Techniques for Zero- and Few-Shot Visual Question Answering | Code | 1 |
Page 8 of 44

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified |
| 4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified |