SOTAVerified

Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer-vision task in which a model answers natural-language questions about an image. Solving it requires jointly understanding the visual content of the image and the language of the question.

Image Source: visualqa.org

Papers

Showing 1–10 of 2167 papers

| Title                                                                                      | Status | Hype |
|--------------------------------------------------------------------------------------------|--------|------|
| VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning          | Code   | 0    |
| MGFFD-VLM: Multi-Granularity Prompt Learning for Face Forgery Detection with VLM           | —      | 0    |
| Describe Anything Model for Visual Question Answering on Text-rich Images                  | Code   | 1    |
| Evaluating Attribute Confusion in Fashion Text-to-Image Generation                         | —      | 0    |
| LinguaMark: Do Multimodal Models Speak Fairly? A Benchmark-Based Evaluation                | —      | 0    |
| Decoupled Seg Tokens Make Stronger Reasoning Video Segmenter and Grounder                  | Code   | 1    |
| SMMILE: An Expert-Driven Benchmark for Multimodal Medical In-Context Learning              | —      | 0    |
| Bridging Video Quality Scoring and Justification via Large Multimodal Models               | —      | 0    |
| DrishtiKon: Multi-Granular Visual Grounding for Text-Rich Document Images                  | Code   | 0    |
| FOCUS: Internal MLLM Representations for Efficient Fine-Grained Visual Question Answering  | —      | 0    |

Benchmark Results

| #  | Model                                          | Metric | Claimed | Verified | Status     |
|----|------------------------------------------------|--------|---------|----------|------------|
| 1  | Gemini Ultra (pixel only)                      | ANLS   | 80.3    | —        | Unverified |
| 2  | SMoLA-PaLI-X Specialist                        | ANLS   | 66.2    | —        | Unverified |
| 3  | ScreenAI 5B (4.62B params, w/ OCR)             | ANLS   | 65.9    | —        | Unverified |
| 4  | SMoLA-PaLI-X Generalist                        | ANLS   | 65.6    | —        | Unverified |
| 5  | UDOP (aux)                                     | ANLS   | 63      | —        | Unverified |
| 6  | PaLI-3 (w/ OCR)                                | ANLS   | 62.4    | —        | Unverified |
| 7  | TILT-Large                                     | ANLS   | 61.2    | —        | Unverified |
| 8  | ChatGPT 3.5 with LAPDoc Prompt (SpatialFormat) | ANLS   | 54.9    | —        | Unverified |
| 9  | PaLI-X (Single-task FT w/ OCR)                 | ANLS   | 54.8    | —        | Unverified |
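The metric in the table above, ANLS (Average Normalized Levenshtein Similarity), scores a predicted answer by its edit distance to the closest ground-truth answer, zeroing out matches below a similarity threshold. The sketch below is an illustrative implementation of that definition, not the official evaluation script used by any benchmark listed here:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def anls(predictions, ground_truths, tau=0.5):
    """ANLS over a set of questions.

    predictions: one predicted answer string per question.
    ground_truths: a list of acceptable answer strings per question.
    Per question, the best similarity 1 - NL(pred, gt) over all
    ground truths is taken; values whose normalized Levenshtein
    distance NL reaches the threshold tau are scored 0.
    """
    total = 0.0
    for pred, answers in zip(predictions, ground_truths):
        best = 0.0
        for gt in answers:
            p, g = pred.strip().lower(), gt.strip().lower()
            nld = levenshtein(p, g) / max(len(p), len(g), 1)
            best = max(best, 1.0 - nld if nld < tau else 0.0)
        total += best
    return total / len(predictions)
```

An exact match scores 1.0, a near-miss such as "12.3" vs. "12.30" scores 0.8, and a completely different answer scores 0, so the 54.8–80.3 ANLS range in the table corresponds to answers that are, on average, close but not always exact string matches.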