
Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer-vision task in which a system answers natural-language questions about an image. The goal is to build models that understand an image's content well enough to produce a correct natural-language answer to an open-ended question about it.

Image Source: visualqa.org
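
To make the task concrete, here is a minimal inference sketch, assuming the Hugging Face transformers library and the publicly released ViLT checkpoint fine-tuned on VQAv2 (dandelin/vilt-b32-finetuned-vqa); the image URL and question are illustrative placeholders.

```python
import requests
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

# Load the VQAv2-finetuned ViLT checkpoint.
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

# Any RGB image works; this COCO validation image is a common demo choice.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
question = "How many cats are there?"

# The processor jointly prepares pixel values and question tokens.
inputs = processor(image, question, return_tensors="pt")
outputs = model(**inputs)

# ViLT casts VQA as classification over a fixed vocabulary of frequent answers.
predicted_id = outputs.logits.argmax(-1).item()
print("Answer:", model.config.id2label[predicted_id])
```

Most models on the leaderboards below follow the same overall pattern at much larger scale: a vision-language encoder paired with an answer head, either a classifier over frequent answers or an open-ended text decoder.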

Papers

Showing 251–275 of 2,167 papers

| Title | Status | Hype |
| --- | --- | --- |
| GRIT: General Robust Image Task Benchmark | Code | 1 |
| Graphhopper: Multi-Hop Scene Graph Reasoning for Visual Question Answering | Code | 1 |
| GraphVQA: Language-Guided Graph Neural Networks for Graph-based Visual Question Answering | Code | 1 |
| Graph Optimal Transport for Cross-Domain Alignment | Code | 1 |
| A Symmetric Dual Encoding Dense Retrieval Framework for Knowledge-Intensive Visual Question Answering | Code | 1 |
| GQA: A New Dataset for Real-World Visual Reasoning and Compositional Question Answering | Code | 1 |
| Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization | Code | 1 |
| HAAR: Text-Conditioned Generative Model of 3D Strand-based Human Hairstyles | Code | 1 |
| AssistQ: Affordance-centric Question-driven Task Completion for Egocentric Assistant | Code | 1 |
| GeoLLaVA-8K: Scaling Remote-Sensing Multimodal Large Language Models to 8K Resolution | Code | 1 |
| Align before Fuse: Vision and Language Representation Learning with Momentum Distillation | Code | 1 |
| Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers | Code | 1 |
| Change Detection Meets Visual Question Answering | Code | 1 |
| Generative Bias for Robust Visual Question Answering | Code | 1 |
| Genixer: Empowering Multimodal Large Language Models as a Powerful Data Generator | Code | 1 |
| Going Full-TILT Boogie on Document Understanding with Text-Image-Layout Transformer | Code | 1 |
| HallE-Control: Controlling Object Hallucination in Large Multimodal Models | Code | 1 |
| Align and Prompt: Video-and-Language Pre-training with Entity Prompts | Code | 1 |
| FunQA: Towards Surprising Video Comprehension | Code | 1 |
| From the Least to the Most: Building a Plug-and-Play Visual Reasoner via Data Synthesis | Code | 1 |
| Gemini Goes to Med School: Exploring the Capabilities of Multimodal Large Language Models on Medical Challenge Problems & Hallucinations | Code | 1 |
| FlowLearn: Evaluating Large Vision-Language Models on Flowchart Understanding | Code | 1 |
| FoodieQA: A Multimodal Dataset for Fine-Grained Understanding of Chinese Food Culture | Code | 1 |
| AIM 2024 Challenge on Compressed Video Quality Assessment: Methods and Results | Code | 1 |
| Florence: A New Foundation Model for Computer Vision | Code | 1 |
Page 11 of 87

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | human | Accuracy | 89.3 | | Unverified |
| 2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | | Unverified |
| 3 | TRRNet (Ensemble) | Accuracy | 74.03 | | Unverified |
| 4 | MIL-nbgao | Accuracy | 73.81 | | Unverified |
| 5 | Kakao Brain | Accuracy | 73.33 | | Unverified |
| 6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | | Unverified |
| 7 | 270 | Accuracy | 70.23 | | Unverified |
| 8 | NSM ensemble (updated) | Accuracy | 67.55 | | Unverified |
| 9 | VinVL-DPT | Accuracy | 64.92 | | Unverified |
| 10 | VinVL+L | Accuracy | 64.85 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PaLI | Accuracy | 84.3 | | Unverified |
| 2 | BEiT-3 | Accuracy | 84.19 | | Unverified |
| 3 | VLMo | Accuracy | 82.78 | | Unverified |
| 4 | ONE-PEACE | Accuracy | 82.6 | | Unverified |
| 5 | mPLUG (Huge) | Accuracy | 82.43 | | Unverified |
| 6 | CuMo-7B | Accuracy | 82.2 | | Unverified |
| 7 | X2-VLM (large) | Accuracy | 81.9 | | Unverified |
| 8 | MMU | Accuracy | 81.26 | | Unverified |
| 9 | Lyrics | Accuracy | 81.2 | | Unverified |
| 10 | InternVL-C | Accuracy | 81.2 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | BEiT-3 | overall | 84.03 | | Unverified |
| 2 | mPLUG-Huge | overall | 83.62 | | Unverified |
| 3 | ONE-PEACE | overall | 82.52 | | Unverified |
| 4 | X2-VLM (large) | overall | 81.8 | | Unverified |
| 5 | VLMo | overall | 81.3 | | Unverified |
| 6 | SimVLM | overall | 80.34 | | Unverified |
| 7 | X2-VLM (base) | overall | 80.2 | | Unverified |
| 8 | VAST | overall | 80.19 | | Unverified |
| 9 | VALOR | overall | 78.62 | | Unverified |
| 10 | Prompt Tuning | overall | 78.53 | | Unverified |
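
For VQA v2-style entries, "Accuracy" and "overall" usually denote the consensus metric defined at visualqa.org: a predicted answer scores min(#matching human annotations / 3, 1), averaged over all ten leave-one-annotator-out subsets of the ten collected human answers. A minimal sketch of that computation (the function name and example data are illustrative):

```python
from itertools import combinations

def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    """Consensus accuracy of `predicted` against 10 human answers,
    averaged over every subset that drops one annotator."""
    scores = []
    for subset in combinations(range(len(human_answers)), len(human_answers) - 1):
        # An answer gets full credit if at least 3 annotators in the subset agree.
        matches = sum(1 for i in subset if human_answers[i] == predicted)
        scores.append(min(matches / 3.0, 1.0))
    return sum(scores) / len(scores)

# Example: 8 of 10 annotators answered "2", two answered "3".
answers = ["2"] * 8 + ["3"] * 2
print(vqa_accuracy("2", answers))  # 1.0 -- at least 3 matches in every subset
print(vqa_accuracy("3", answers))  # 0.6 -- partial credit for a minority answer
```

The subset averaging gives partial credit when annotators disagree, which is also why human responses score below 100 on these leaderboards.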