SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 751–800 of 2177 papers

Title | Status | Hype
Consistency and Uncertainty: Identifying Unreliable Responses From Black-Box Vision-Language Models for Selective Visual Question Answering | - | 0
Self-Supervised Visual Preference Alignment | Code | 2
Find The Gap: Knowledge Base Reasoning For Visual Question Answering | - | 0
HOI-Ref: Hand-Object Interaction Referral in Egocentric Vision | - | 0
Bridging Vision and Language Spaces with Assignment Prediction | Code | 0
Enhancing Visual Question Answering through Question-Driven Image Captions as Prompts | Code | 1
Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs | Code | 0
View Selection for 3D Captioning via Diffusion Ranking | Code | 3
Ferret-v2: An Improved Baseline for Referring and Grounding with Large Language Models | Code | 9
Language Models Meet Anomaly Detection for Better Interpretability and Generalizability | Code | 0
InternLM-XComposer2-4KHD: A Pioneering Large Vision-Language Model Handling Resolutions from 336 Pixels to 4K HD | Code | 0
OmniFusion Technical Report | Code | 0
HAMMR: HierArchical MultiModal React agents for generic VQA | - | 0
Self-Training Large Language Models for Improved Visual Program Synthesis With Visual Reinforcement | - | 0
Joint Visual and Text Prompting for Improved Object-Centric Perception with Multimodal Large Language Models | Code | 0
Soft-Prompting with Graph-of-Thought for Multi-modal Representation Learning | Code | 0
BuDDIE: A Business Document Dataset for Multi-task Information Extraction | - | 0
TinyVQA: Compact Multimodal Deep Neural Network for Visual Question Answering on Resource-Constrained Devices | - | 0
Enhancing Human-Computer Interaction in Chest X-ray Analysis using Vision and Language Model with Eye Gaze Patterns | - | 0
Learning by Correction: Efficient Tuning Task for Zero-Shot Generative Vision-Language Reasoning | Code | 0
Detect2Interact: Localizing Object Key Field in Visual Question Answering (VQA) with LLMs | - | 0
Evaluating Text-to-Visual Generation with Image-to-Text Generation | Code | 3
CausalChaos! Dataset for Comprehensive Causal Action Question Answering Over Longer Causal Chains Grounded in Dynamic Visual Scenes | Code | 1
M3D: Advancing 3D Medical Image Analysis with Multi-Modal Large Language Models | Code | 3
Design as Desired: Utilizing Visual Question Answering for Multimodal Pre-training | Code | 0
Uncovering Bias in Large Vision-Language Models with Counterfactuals | - | 0
VHM: Versatile and Honest Vision Language Model for Remote Sensing Image Analysis | Code | 2
Unsolvable Problem Detection: Evaluating Trustworthiness of Vision Language Models | Code | 2
JDocQA: Japanese Document Question Answering Dataset for Generative Language Models | Code | 1
Multi-Frame, Lightweight & Efficient Vision-Language Models for Question Answering in Autonomous Driving | Code | 2
Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models | Code | 7
Beyond Embeddings: The Promise of Visual Table in Visual Reasoning | Code | 1
Quantifying and Mitigating Unimodal Biases in Multimodal Large Language Models: A Causal Perspective | Code | 1
Intrinsic Subgraph Generation for Interpretable Graph based Visual Question Answering | Code | 0
Visual Hallucination: Definition, Quantification, and Prescriptive Remediations | - | 0
A Gaze-grounded Visual Question Answering Dataset for Clarifying Ambiguous Japanese Questions | - | 0
PropTest: Automatic Property Testing for Improved Visual Programming | - | 0
Synthesize Step-by-Step: Tools, Templates and LLMs as Data Generators for Reasoning-Based Chart VQA | - | 0
IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models | Code | 1
Surgical-LVLM: Learning to Adapt Large Vision-Language Model for Grounded Visual Question Answering in Robotic Surgery | - | 0
MedPromptX: Grounded Multimodal Prompting for Chest X-ray Diagnosis | Code | 2
LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models | Code | 2
Multi-Agent VQA: Exploring Multi-Agent Foundation Models in Zero-Shot Visual Question Answering | Code | 1
Language Repository for Long Video Understanding | Code | 1
MyVLM: Personalizing VLMs for User-Specific Queries | - | 0
VL-Mamba: Exploring State Space Models for Multimodal Learning | - | 0
Improved Baselines for Data-efficient Perceptual Augmentation of LLMs | - | 0
HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models | Code | 1
WoLF: Wide-scope Large Language Model Framework for CXR Understanding | - | 0
VL-ICL Bench: The Devil in the Details of Multimodal In-Context Learning | Code | 2
Page 16 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | - | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | - | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | - | Unverified
4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | - | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | - | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | - | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | - | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | - | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | - | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | - | Unverified