SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer vision task in which a system is given an image and a natural-language question about that image and must produce an answer in natural language. The goal is to teach machines to understand the content of an image well enough to answer open-ended questions about it.

Image Source: visualqa.org
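
In practice, a VQA system takes an image plus a question string and returns a short answer, most commonly by classifying over a fixed answer vocabulary. The snippet below is a minimal sketch of that setup, assuming the Hugging Face transformers library and the publicly released ViLT checkpoint dandelin/vilt-b32-finetuned-vqa; the example image URL and question are illustrative only and are not part of this listing.

```python
import requests
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

# Illustrative inputs: any RGB image and question string will do.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
question = "How many cats are on the bed?"

# Load a VQA model fine-tuned to classify over a fixed answer vocabulary.
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

# Encode the image-question pair and pick the highest-scoring answer.
encoding = processor(image, question, return_tensors="pt")
outputs = model(**encoding)
answer = model.config.id2label[outputs.logits.argmax(-1).item()]
print(answer)
```

Generative vision-language models (several appear in the leaderboards below) replace the classification head with free-form text decoding, but the input/output contract is the same: one image, one question, one answer.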

Papers

Showing 251–275 of 2167 papers

Title | Status | Hype
NuScenes-MQA: Integrated Evaluation of Captions and QA for Autonomous Driving Datasets using Markup Annotations | Code | 1
Quilt-LLaVA: Visual Instruction Tuning by Extracting Localized Narratives from Open-Source Histopathology Videos | Code | 1
Language-Informed Visual Concept Learning | Code | 1
How to Configure Good In-Context Sequence for Visual Question Answering | Code | 1
Recursive Visual Programming | Code | 1
Debiasing Multimodal Models via Causal Information Minimization | Code | 1
How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs | Code | 1
Boosting the Power of Small Multimodal Reasoning Models to Match Larger Models with Self-Consistency Training | Code | 1
HIDRO-VQA: High Dynamic Range Oracle for Video Quality Assessment | Code | 1
A Comprehensive Evaluation of GPT-4V on Knowledge-Intensive Visual Question Answering | Code | 1
InfMLLM: A Unified Framework for Visual-Language Tasks | Code | 1
GPT-4V-AD: Exploring Grounding Potential of VQA-oriented GPT-4V for Zero-shot Anomaly Detection | Code | 1
Language Guided Visual Question Answering: Elevate Your Multimodal Language Model Using Knowledge-Enriched Prompts | Code | 1
Multimodal ChatGPT for Medical Applications: an Experimental Study of GPT-4V | Code | 1
EHRXQA: A Multi-Modal Question Answering Dataset for Electronic Health Records with Chest X-ray Images | Code | 1
3D-Aware Visual Question Answering about Parts, Poses and Occlusions | Code | 1
Towards Perceiving Small Visual Details in Zero-shot Visual Question Answering with Multimodal LLMs | Code | 1
Large Language Models are Temporal and Causal Reasoners for Video Question Answering | Code | 1
VLIS: Unimodal Language Models Guide Multimodal Language Generation | Code | 1
PaLI-3 Vision Language Models: Smaller, Faster, Stronger | Code | 1
What If the TV Was Off? Examining Counterfactual Reasoning Abilities of Multi-modal Language Models | Code | 1
Rephrase, Augment, Reason: Visual Grounding of Questions for Vision-Language Models | Code | 1
HallE-Control: Controlling Object Hallucination in Large Multimodal Models | Code | 1
Beyond Task Performance: Evaluating and Reducing the Flaws of Large Multimodal Models with In-Context Learning | Code | 1
Vulnerabilities in Video Quality Assessment Models: The Challenge of Adversarial Attacks | Code | 1
Page 11 of 87

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | - | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | - | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | - | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | - | Unverified
5 | Kakao Brain | Accuracy | 73.33 | - | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | - | Unverified
7 | 270 | Accuracy | 70.23 | - | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | - | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | - | Unverified
10 | VinVL+L | Accuracy | 64.85 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | - | Unverified
2 | BEiT-3 | Accuracy | 84.19 | - | Unverified
3 | VLMo | Accuracy | 82.78 | - | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | - | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | - | Unverified
6 | CuMo-7B | Accuracy | 82.2 | - | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | - | Unverified
8 | MMU | Accuracy | 81.26 | - | Unverified
9 | Lyrics | Accuracy | 81.2 | - | Unverified
10 | InternVL-C | Accuracy | 81.2 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | - | Unverified
2 | mPLUG-Huge | overall | 83.62 | - | Unverified
3 | ONE-PEACE | overall | 82.52 | - | Unverified
4 | X2-VLM (large) | overall | 81.8 | - | Unverified
5 | VLMo | overall | 81.3 | - | Unverified
6 | SimVLM | overall | 80.34 | - | Unverified
7 | X2-VLM (base) | overall | 80.2 | - | Unverified
8 | VAST | overall | 80.19 | - | Unverified
9 | VALOR | overall | 78.62 | - | Unverified
10 | Prompt Tuning | overall | 78.53 | - | Unverified
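
The Accuracy and overall values above are percentages reported by each submission; the Verified column is empty because none of these claims have been independently reproduced here. On VQA-v2-style benchmarks these scores are typically the soft consensus metric: a predicted answer earns min(matches/3, 1) credit, where matches is the number of the (usually ten) human annotators who gave that same answer. The sketch below illustrates that scoring rule; the helper name vqa_soft_accuracy is illustrative, not part of any benchmark toolkit.

```python
from collections import Counter

def vqa_soft_accuracy(predicted: str, human_answers: list) -> float:
    """Soft VQA accuracy: a prediction counts as fully correct when at
    least three annotators gave the same answer; fewer matches earn
    partial credit of matches / 3."""
    def norm(a):
        return a.strip().lower()
    matches = Counter(norm(a) for a in human_answers)[norm(predicted)]
    return min(matches / 3.0, 1.0)

# Two of ten annotators said "blue", so "blue" earns 2/3 credit;
# "navy" has four matches and is capped at full credit.
answers = ["blue", "blue", "navy", "navy", "navy", "dark blue",
           "teal", "navy blue", "navy", "light blue"]
print(vqa_soft_accuracy("blue", answers))  # ~0.667
print(vqa_soft_accuracy("navy", answers))  # 1.0
```

The official VQA evaluation additionally averages this score over all subsets of nine of the ten annotators to stay consistent with the reported human accuracy; the leaderboard numbers are the mean over all test questions, expressed as a percentage.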