SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 1701–1750 of 2177 papers

Title | Status | Hype
Why Does the VQA Model Answer No?: Improving Reasoning through Visual and Linguistic Inference | | 0
Wiki-LLaVA: Hierarchical Retrieval-Augmented Generation for Multimodal LLMs | | 0
WoLF: Wide-scope Large Language Model Framework for CXR Understanding | | 0
xGQA: Cross-Lingual Visual Question Answering | | 0
Yin and Yang: Balancing and Answering Binary Visual Questions | | 0
YouMakeup: A Large-Scale Domain-Specific Multimodal Dataset for Fine-Grained Semantic Comprehension | | 0
ZALM3: Zero-Shot Enhancement of Vision-Language Alignment via In-Context Information in Multi-Turn Multimodal Medical Dialogue | | 0
Zero-shot Action Localization via the Confidence of Large Vision-Language Models | | 0
Zero-Shot Anomaly Detection in Battery Thermal Images Using Visual Question Answering with Prior Knowledge | | 0
Zero-Shot Transfer VQA Dataset | | 0
Zero-Shot Visual Question Answering | | 0
Zero-Shot Visual Reasoning by Vision-Language Models: Benchmarking and Analysis | | 0
Ziya-Visual: Bilingual Large Vision-Language Model via Multi-Task Instruction Tuning | | 0
Natural Language Understanding and Inference with MLLM in Visual Question Answering: A Survey | | 0
Natural Reflection Backdoor Attack on Vision Language Model for Autonomous Driving | | 0
Negative Object Presence Evaluation (NOPE) to Measure Object Hallucination in Vision-Language Models | | 0
Neglected Risks: The Disturbing Reality of Children's Images in Datasets and the Urgent Call for Accountability | | 0
NegVQA: Can Vision Language Models Understand Negation? | | 0
Neural Attention Models for Sequence Classification: Analysis and Application to Key Term Extraction and Dialogue Act Detection | | 0
Neural Memory Plasticity for Anomaly Detection | | 0
Neural Self Talk: Image Understanding via Continuous Questioning and Answering | | 0
NeurIPS 2023 Competition: Privacy Preserving Federated Learning Document VQA | | 0
Neuro-Symbolic Spatio-Temporal Reasoning | | 0
Neuro-Symbolic Visual Reasoning: Disentangling "Visual" from "Reasoning" | | 0
Neuro-Symbolic VQA: A review from the perspective of AGI desiderata | | 0
NEVLP: Noise-Robust Framework for Efficient Vision-Language Pre-training | | 0
New Ideas and Trends in Deep Multimodal Content Understanding: A Review | | 0
NEWSKVQA: Knowledge-Aware News Video Question Answering | | 0
NMT-Keras: a Very Flexible Toolkit with a Focus on Interactive NMT and Online Learning | | 0
Non-monotonic Logical Reasoning Guiding Deep Learning for Explainable Visual Question Answering | | 0
Normalized and Geometry-Aware Self-Attention Network for Image Captioning | | 0
NoTeS-Bank: Benchmarking Neural Transcription and Search for Scientific Notes Understanding | | 0
Not-So-CLEVR: Visual Relations Strain Feedforward Neural Networks | | 0
Object-based reasoning in VQA | | 0
Object-Centric Diagnosis of Visual Reasoning | | 0
Object-Centric Temporal Consistency via Conditional Autoregressive Inductive Biases | | 0
OccLLaMA: An Occupancy-Language-Action Generative World Model for Autonomous Driving | | 0
OMCAT: Omni Context Aware Transformer | | 0
OMGM: Orchestrate Multiple Granularities and Modalities for Efficient Multimodal Retrieval | | 0
On Advances in Text Generation from Images Beyond Captioning: A Case Study in Self-Rationalization | | 0
OneEncoder: A Lightweight Framework for Progressive Alignment of Modalities | | 0
One VLM to Keep it Learning: Generation and Balancing for Data-free Continual Visual Question Answering | | 0
On Incorporating Semantic Prior Knowledge in Deep Learning Through Embedding-Space Constraints | | 0
On the Cognition of Visual Question Answering Models and Human Intelligence: A Comparative Study | | 0
On the Effects of Video Grounding on Language Models | | 0
On the Efficacy of Co-Attention Transformer Layers in Visual Question Answering | | 0
On the Flip Side: Identifying Counterexamples in Visual Question Answering | | 0
On the General Value of Evidence, and Bilingual Scene-Text Visual Question Answering | | 0
On the Limitations of Vision-Language Models in Understanding Image Transforms | | 0
Page 35 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified
4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified