SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer vision task in which a model answers natural-language questions about an image: given an image and a free-form question, the model must understand the image's content well enough to produce a correct natural-language answer.

Image Source: visualqa.org
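In practice the interface is simple: an image and a free-form question go in, and a short natural-language answer comes out. As a minimal sketch, the snippet below queries a pretrained VQA model through the Hugging Face transformers pipeline; the model choice and the placeholder image path (kitchen.jpg) are illustrative assumptions, not something this page prescribes.

    # Minimal VQA query via the Hugging Face transformers pipeline.
    # The model and the image path below are illustrative assumptions.
    from transformers import pipeline

    # ViLT fine-tuned on VQAv2 is a common lightweight baseline.
    vqa = pipeline("visual-question-answering",
                   model="dandelin/vilt-b32-finetuned-vqa")

    # The pipeline accepts a local path, URL, or PIL.Image for the image.
    # Each candidate answer comes back with a confidence score.
    for candidate in vqa(image="kitchen.jpg",
                         question="How many chairs are there?"):
        print(candidate["answer"], round(candidate["score"], 3))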

Papers

Showing 101–125 of 2167 papers (page 5 of 87)

Title | Status | Hype
SpatialLLM: A Compound 3D-Informed Design towards Spatially-Intelligent Large Multimodal Models | - | 0
Localizing Before Answering: A Hallucination Evaluation Benchmark for Grounded Medical Multimodal LLMs | - | 0
VideoMultiAgents: A Multi-Agent Framework for Video Question Answering | Code | 1
An Empirical Study on Prompt Compression for Large Language Models | Code | 3
Data-Driven Calibration of Prediction Sets in Large Vision-Language Models Based on Inductive Conformal Prediction | - | 0
A Comprehensive Survey of Knowledge-Based Vision Question Answering Systems: The Lifecycle of Knowledge in Visual Reasoning Task | - | 0
NTIRE 2025 Challenge on Short-form UGC Video Quality Assessment and Enhancement: Methods and Results | Code | 1
Bridging the Semantic Gaps: Improving Medical VQA Consistency with LLM-Augmented Question Sets | - | 0
Instruction-augmented Multimodal Alignment for Image-Text and Element Matching | - | 0
DVLTA-VQA: Decoupled Vision-Language Modeling with Text-Guided Adaptation for Blind Video Quality Assessment | - | 0
PuzzleBench: A Fully Dynamic Evaluation Framework for Large Multimodal Models on Puzzle Solving | - | 0
QAVA: Query-Agnostic Visual Attack to Large Vision-Language Models | Code | 0
Building Trustworthy Multimodal AI: A Review of Fairness, Transparency, and Ethics in Vision-Language Tasks | - | 0
MMKB-RAG: A Multi-Modal Knowledge-Based Retrieval-Augmented Generation Framework | - | 0
FVQ: A Large-Scale Dataset and A LMM-based Method for Face Video Quality Assessment | Code | 0
NoTeS-Bank: Benchmarking Neural Transcription and Search for Scientific Notes Understanding | - | 0
PathVLM-R1: A Reinforcement Learning-Driven Reasoning Model for Pathology Visual-Language Tasks | - | 0
Mimic In-Context Learning for Multimodal Tasks | Code | 1
TokenFocus-VQA: Enhancing Text-to-Image Alignment with Position-Aware Focus and Multi-Perspective Aggregations on LVLMs | - | 0
UniRVQA: A Unified Framework for Retrieval-Augmented Vision Question Answering via Self-Reflective Joint Training | - | 0
Hierarchical Modeling for Medical Visual Question Answering with Cross-Attention Fusion | - | 0
QIRL: Boosting Visual Question Answering via Optimized Question-Image Relation Learning | - | 0
STING-BEE: Towards Vision-Language Model for Real-World X-ray Baggage Security Inspection | Code | 1
SocialGesture: Delving into Multi-person Gesture Understanding | - | 0
Reasoning LLMs for User-Aware Multimodal Conversational Agents | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | - | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | - | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | - | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | - | Unverified
5 | Kakao Brain | Accuracy | 73.33 | - | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | - | Unverified
7 | 270 | Accuracy | 70.23 | - | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | - | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | - | Unverified
10 | VinVL+L | Accuracy | 64.85 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | - | Unverified
2 | BEiT-3 | Accuracy | 84.19 | - | Unverified
3 | VLMo | Accuracy | 82.78 | - | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | - | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | - | Unverified
6 | CuMo-7B | Accuracy | 82.2 | - | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | - | Unverified
8 | MMU | Accuracy | 81.26 | - | Unverified
9 | Lyrics | Accuracy | 81.2 | - | Unverified
10 | InternVL-C | Accuracy | 81.2 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | - | Unverified
2 | mPLUG-Huge | overall | 83.62 | - | Unverified
3 | ONE-PEACE | overall | 82.52 | - | Unverified
4 | X2-VLM (large) | overall | 81.8 | - | Unverified
5 | VLMo | overall | 81.3 | - | Unverified
6 | SimVLM | overall | 80.34 | - | Unverified
7 | X2-VLM (base) | overall | 80.2 | - | Unverified
8 | VAST | overall | 80.19 | - | Unverified
9 | VALOR | overall | 78.62 | - | Unverified
10 | Prompt Tuning | overall | 78.53 | - | Unverified
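The Accuracy and overall figures above follow the consensus accuracy defined for the VQA benchmark (visualqa.org): each question has ten human answers, and a predicted answer earns full credit once at least three annotators agree with it. Below is a minimal sketch of this metric in its commonly used simplified form; the official evaluation also normalizes answers and averages over leave-one-annotator-out subsets.

    # Sketch of VQA consensus accuracy (Antol et al., 2015) in its common
    # simplified form. The official evaluation additionally normalizes
    # answers (lowercasing, punctuation, number words) and averages over
    # all ten leave-one-annotator-out subsets; both are omitted here.
    def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
        matches = sum(answer == predicted for answer in human_answers)
        return min(matches / 3.0, 1.0)

    # Example: 5 of 10 annotators answered "2", so the prediction "2"
    # clears the 3-annotator threshold and scores full credit.
    answers = ["2", "2", "two", "2", "2", "3", "two", "3", "2", "4"]
    print(vqa_accuracy("2", answers))  # 1.0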