SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer-vision task in which a model is given an image and a natural-language question about that image and must produce a natural-language answer, requiring it to jointly understand the visual content and the question.

Image Source: visualqa.org
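
To make the task's input/output interface concrete, here is a minimal, illustrative inference sketch in Python using the Hugging Face transformers library with a ViLT checkpoint fine-tuned for VQA. The checkpoint name, image URL, and question are example values only and are not tied to any paper or benchmark listed on this page.

```python
# Minimal VQA inference sketch (illustrative only): given an image and a
# question, a pretrained vision-language model predicts an answer.
import requests
import torch
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

# Example inputs -- the URL and question are placeholders, not from this page.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
question = "How many cats are there?"

# ViLT checkpoint fine-tuned for VQA (assumed available on the Hugging Face Hub).
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

# Encode the (image, question) pair and pick the highest-scoring answer class.
encoding = processor(image, question, return_tensors="pt")
with torch.no_grad():
    logits = model(**encoding).logits
answer = model.config.id2label[logits.argmax(-1).item()]
print("Predicted answer:", answer)
```

The benchmark tables at the bottom of this page report this kind of answer accuracy, comparing each model's predicted answers against human-annotated ground truth.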

Papers

Showing 251–300 of 2167 papers (page 6 of 44)

Title | Status | Hype
ErgoChat: a Visual Query System for the Ergonomic Risk Assessment of Construction Workers | — | 0
FineVQ: Fine-Grained User Generated Content Video Quality Assessment | — | 0
Multi-Agents Based on Large Language Models for Knowledge-based Visual Question Answering | — | 0
TextMatch: Enhancing Image-Text Consistency Through Multimodal Optimization | — | 0
EvalMuse-40K: A Reliable and Fine-Grained Benchmark with Comprehensive Human Annotations for Text-to-Image Generation Model Evaluation | Code | 2
LININ: Logic Integrated Neural Inference Network for Explanatory Visual Question Answering | Code | 0
HAUR: Human Annotation Understanding and Recognition Through Text-Heavy Images | — | 0
Cross-Lingual Text-Rich Visual Comprehension: An Information Theory Perspective | Code | 0
Prompting Large Language Models with Rationale Heuristics for Knowledge-based Visual Question Answering | — | 0
Application of Multimodal Large Language Models in Autonomous Driving | — | 0
Toward Robust Hyper-Detailed Image Captioning: A Multiagent Approach and Dual Evaluation Metrics for Factuality and Coverage | — | 0
NeSyCoCo: A Neuro-Symbolic Concept Composer for Compositional Generalization | Code | 0
InstructOCR: Instruction Boosting Scene Text Spotting | Code | 0
Multimodal Hypothetical Summary for Retrieval-based Multi-image Question Answering | Code | 0
OnlineVPO: Align Video Diffusion Model with Online Video-Centric Preference Optimization | — | 0
MedCoT: Medical Chain of Thought via Hierarchical Expert | Code | 1
What makes a good metric? Evaluating automatic metrics for text-to-image consistency | — | 0
Optimizing Vision-Language Interactions Through Decoder-Only Models | — | 0
Selective State Space Memory for Large Vision-Language Models | — | 0
VLR-Bench: Multilingual Benchmark Dataset for Vision-Language Retrieval Augmented Generation | — | 0
Towards a Multimodal Large Language Model with Pixel-Level Insight for Biomedicine | Code | 2
Lyra: An Efficient and Speech-Centric Framework for Omni-Cognition | Code | 3
Fast Prompt Alignment for Text-to-Image Generation | Code | 1
Illusory VQA: Benchmarking and Enhancing Multimodal Models on Visual Illusions | Code | 0
Can We Generate Visual Programs Without Prompting LLMs? | — | 0
IMPACT: A Large-scale Integrated Multimodal Patent Analysis and Creation Dataset for Design Patents | Code | 1
MMedPO: Aligning Medical Vision-Language Models with Clinical-Aware Multimodal Preference Optimization | Code | 2
Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling | — | 0
MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale | Code | 1
Verb Mirage: Unveiling and Assessing Verb Concept Hallucinations in Multimodal Large Language Models | — | 0
Florence-VL: Enhancing Vision-Language Models with Generative Vision Encoder and Depth-Breadth Fusion | Code | 3
T2I-FactualBench: Benchmarking the Factuality of Text-to-Image Models with Knowledge-Intensive Concepts | — | 0
Video Quality Assessment: A Comprehensive Survey | Code | 2
AdvDreamer Unveils: Are Vision-Language Models Truly Ready for Real-World 3D Variations? | — | 0
WSI-LLaVA: A Multimodal Large Language Model for Whole Slide Image | — | 0
Copy-Move Forgery Detection and Question Answering for Remote Sensing Image | Code | 0
CEGI: Measuring the trade-off between efficiency and carbon emissions for SLMs and VLMs | — | 0
DLaVA: Document Language and Vision Assistant for Answer Localization with Enhanced Interpretability and Trustworthiness | Code | 0
SURE-VQA: Systematic Understanding of Robustness Evaluation in Medical VQA Tasks | Code | 0
Perception Test 2024: Challenge Summary and a Novel Hour-Long VideoQA Benchmark | — | 0
Sparse Attention Vectors: Generative Multimodal Model Features Are Discriminative Vision-Language Classifiers | — | 0
ElectroVizQA: How well do Multi-modal LLMs perform in Electronics Visual Question Answering? | — | 0
Path-RAG: Knowledge-Guided Key Region Retrieval for Open-ended Pathology Visual Question Answering | Code | 2
Grounding-IQA: Multimodal Language Grounding Model for Image Quality Assessment | Code | 2
AIGV-Assessor: Benchmarking and Evaluating the Perceptual Quality of Text-to-Video Generation with LMM | Code | 1
Task Progressive Curriculum Learning for Robust Visual Question Answering | — | 0
Natural Language Understanding and Inference with MLLM in Visual Question Answering: A Survey | — | 0
GEMeX: A Large-Scale, Groundable, and Explainable Medical VQA Benchmark for Chest X-ray Diagnosis | — | 0
Video-Text Dataset Construction from Multi-AI Feedback: Promoting Weak-to-Strong Preference Learning for Video Large Language Models | — | 0
ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration | Code | 2

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | — | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | — | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | — | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | — | Unverified
5 | Kakao Brain | Accuracy | 73.33 | — | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | — | Unverified
7 | 270 | Accuracy | 70.23 | — | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | — | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | — | Unverified
10 | VinVL+L | Accuracy | 64.85 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | — | Unverified
2 | BEiT-3 | Accuracy | 84.19 | — | Unverified
3 | VLMo | Accuracy | 82.78 | — | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | — | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | — | Unverified
6 | CuMo-7B | Accuracy | 82.2 | — | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | — | Unverified
8 | MMU | Accuracy | 81.26 | — | Unverified
9 | InternVL-C | Accuracy | 81.2 | — | Unverified
10 | Lyrics | Accuracy | 81.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | — | Unverified
2 | mPLUG-Huge | overall | 83.62 | — | Unverified
3 | ONE-PEACE | overall | 82.52 | — | Unverified
4 | X2-VLM (large) | overall | 81.8 | — | Unverified
5 | VLMo | overall | 81.3 | — | Unverified
6 | SimVLM | overall | 80.34 | — | Unverified
7 | X2-VLM (base) | overall | 80.2 | — | Unverified
8 | VAST | overall | 80.19 | — | Unverified
9 | VALOR | overall | 78.62 | — | Unverified
10 | Prompt Tuning | overall | 78.53 | — | Unverified