
Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer-vision task in which a system answers natural-language questions about an image. The goal is to build models that understand both the visual content of the image and the text of the question well enough to produce a correct answer in natural language.

Image Source: visualqa.org
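
As a concrete illustration (not tied to any paper listed below), here is a minimal sketch of how a pretrained VQA model can be queried, assuming the Hugging Face transformers library and the publicly released dandelin/vilt-b32-finetuned-vqa checkpoint; the image URL and question are placeholders.

```python
import requests
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

# Example COCO image; any RGB image works.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
question = "How many cats are in the picture?"

# ViLT fine-tuned on VQAv2 treats VQA as classification
# over a fixed vocabulary of frequent answers.
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

inputs = processor(image, question, return_tensors="pt")
logits = model(**inputs).logits
answer = model.config.id2label[logits.argmax(-1).item()]
print(answer)  # e.g. "2"
```

Many recent entries on this page instead use generative multimodal LLMs, which produce free-form answers rather than classifying over a fixed answer vocabulary, but the input/output contract (image + question in, answer out) is the same.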

Papers

Showing 551–600 of 2167 papers

Title | Status | Hype
Enhanced Textual Feature Extraction for Visual Question Answering: A Simple Convolutional Approach | — | 0
Visual Fact Checker: Enabling High-Fidelity Detailed Caption Generation | — | 0
Multi-Page Document Visual Question Answering using Self-Attention Scoring Mechanism | Code | 0
ViOCRVQA: Novel Benchmark Dataset and Vision Reader for Visual Question Answering by Understanding Vietnamese Text in Images | Code | 1
NTIRE 2024 Quality Assessment of AI-Generated Content Challenge | — | 0
RadGenome-Chest CT: A Grounded Vision-Language Dataset for Chest CT Analysis | Code | 0
AIS 2024 Challenge on Video Quality Assessment of User-Generated Content: Methods and Results | Code | 0
Fusion of Domain-Adapted Vision and Language Models for Medical Visual Question Answering | — | 0
MDAgents: An Adaptive Collaboration of LLMs for Medical Decision-Making | Code | 3
Self-Bootstrapped Visual-Language Model for Knowledge Selection and Question Answering | Code | 0
Exploring Diverse Methods in Visual Question Answering | — | 0
TextSquare: Scaling up Text-Centric Visual Instruction Tuning | — | 0
PDF-MVQA: A Dataset for Multimodal Information Retrieval in PDF-based Visual Question Answering | — | 0
Unified Scene Representation and Reconstruction for 3D Large Language Models | — | 0
Look Before You Decide: Prompting Active Deduction of MLLMs for Assumptive Reasoning | — | 0
LaPA: Latent Prompt Assist Model For Medical Visual Question Answering | Code | 1
MedThink: Explaining Medical Visual Question Answering via Multimodal Decision-Making Rationale | — | 0
Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models | — | 0
NTIRE 2024 Challenge on Short-form UGC Video Quality Assessment: Methods and Results | Code | 2
Med-MoE: Mixture of Domain-Specific Experts for Lightweight Medical Vision-Language Models | Code | 2
ViTextVQA: A Large-Scale Visual Question Answering Dataset for Evaluating Vietnamese Text Comprehension in Images | Code | 0
Find The Gap: Knowledge Base Reasoning For Visual Question Answering | — | 0
TextCoT: Zoom In for Enhanced Multimodal Text-Rich Image Understanding | Code | 1
Enhancing Visual Question Answering through Question-Driven Image Captions as Prompts | Code | 1
Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs | Code | 0
BRAVE: Broadening the visual encoding of vision-language models | — | 0
OmniFusion Technical Report | Code | 0
MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding | Code | 3
HAMMR: HierArchical MultiModal React agents for generic VQA | — | 0
Study of the effect of Sharpness on Blind Video Quality Assessment | — | 0
Joint Visual and Text Prompting for Improved Object-Centric Perception with Multimodal Large Language Models | Code | 0
BuDDIE: A Business Document Dataset for Multi-task Information Extraction | — | 0
TinyVQA: Compact Multimodal Deep Neural Network for Visual Question Answering on Resource-Constrained Devices | — | 0
Detect2Interact: Localizing Object Key Field in Visual Question Answering (VQA) with LLMs | — | 0
Evaluating Text-to-Visual Generation with Image-to-Text Generation | Code | 3
Design as Desired: Utilizing Visual Question Answering for Multimodal Pre-training | Code | 0
Unsolvable Problem Detection: Evaluating Trustworthiness of Vision Language Models | Code | 2
JDocQA: Japanese Document Question Answering Dataset for Generative Language Models | Code | 1
Quantifying and Mitigating Unimodal Biases in Multimodal Large Language Models: A Causal Perspective | Code | 1
Visual Hallucination: Definition, Quantification, and Prescriptive Remediations | — | 0
A Gaze-grounded Visual Question Answering Dataset for Clarifying Ambiguous Japanese Questions | — | 0
Intrinsic Subgraph Generation for Interpretable Graph based Visual Question Answering | Code | 0
Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought Reasoning | Code | 3
Synthesize Step-by-Step: Tools, Templates and LLMs as Data Generators for Reasoning-Based Chart VQA | — | 0
IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models | Code | 1
Surgical-LVLM: Learning to Adapt Large Vision-Language Model for Grounded Visual Question Answering in Robotic Surgery | — | 0
MedPromptX: Grounded Multimodal Prompting for Chest X-ray Diagnosis | Code | 2
Multi-Agent VQA: Exploring Multi-Agent Foundation Models in Zero-Shot Visual Question Answering | Code | 1
Multi-Modal Hallucination Control by Visual Information Grounding | — | 0
vid-TLDR: Training Free Token merging for Light-weight Video Transformer | Code | 2
Page 12 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | — | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | — | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | — | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | — | Unverified
5 | Kakao Brain | Accuracy | 73.33 | — | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | — | Unverified
7 | 270 | Accuracy | 70.23 | — | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | — | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | — | Unverified
10 | VinVL+L | Accuracy | 64.85 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | — | Unverified
2 | BEiT-3 | Accuracy | 84.19 | — | Unverified
3 | VLMo | Accuracy | 82.78 | — | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | — | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | — | Unverified
6 | CuMo-7B | Accuracy | 82.2 | — | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | — | Unverified
8 | MMU | Accuracy | 81.26 | — | Unverified
9 | InternVL-C | Accuracy | 81.2 | — | Unverified
10 | Lyrics | Accuracy | 81.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | — | Unverified
2 | mPLUG-Huge | overall | 83.62 | — | Unverified
3 | ONE-PEACE | overall | 82.52 | — | Unverified
4 | X2-VLM (large) | overall | 81.8 | — | Unverified
5 | VLMo | overall | 81.3 | — | Unverified
6 | SimVLM | overall | 80.34 | — | Unverified
7 | X2-VLM (base) | overall | 80.2 | — | Unverified
8 | VAST | overall | 80.19 | — | Unverified
9 | VALOR | overall | 78.62 | — | Unverified
10 | Prompt Tuning | overall | 78.53 | — | Unverified
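
The page does not state which benchmark split or metric variant each table uses. For reference, the Accuracy/overall figures on standard VQA leaderboards are usually the VQA consensus metric, which scores a predicted answer against ten human annotations. A simplified sketch of that metric follows; the official version additionally normalizes answers (articles, punctuation, number words) and averages over annotator subsets.

```python
def vqa_accuracy(prediction: str, human_answers: list[str]) -> float:
    """Simplified VQA consensus accuracy: a prediction counts as
    fully correct if at least 3 of the (typically 10) human
    annotators gave the same answer; partial matches earn
    partial credit of matches/3."""
    pred = prediction.strip().lower()
    matches = sum(a.strip().lower() == pred for a in human_answers)
    return min(matches / 3.0, 1.0)

# Example: 7 of 10 annotators answered "2", so the score saturates at 1.0.
print(vqa_accuracy("2", ["2"] * 7 + ["two", "3", "2 cats"]))  # 1.0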