
Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer-vision task in which a system is given an image and a natural-language question about it, and must produce a natural-language answer. The goal is to build models that understand the content of an image well enough to answer open-ended questions about it.
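
A minimal sketch of VQA inference, assuming the Hugging Face transformers library and its visual-question-answering pipeline with the public ViLT checkpoint dandelin/vilt-b32-finetuned-vqa (the image file name is a placeholder):

```python
from transformers import pipeline

# Load a pretrained VQA model (ViLT fine-tuned on the VQAv2 dataset).
vqa = pipeline("visual-question-answering", model="dandelin/vilt-b32-finetuned-vqa")

# Ask a natural-language question about a local image (placeholder path).
answers = vqa(image="street_scene.jpg", question="How many dogs are in the picture?")

# The pipeline returns candidate answers ranked by confidence.
print(answers[0]["answer"], answers[0]["score"])
```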

Image Source: visualqa.org

Papers

Showing 1601–1650 of 2167 papers

Title | Status | Hype
Instruction-augmented Multimodal Alignment for Image-Text and Element Matching | – | 0
Integrating Frequency-Domain Representations with Low-Rank Adaptation in Vision-Language Models | – | 0
Integrating Knowledge and Reasoning in Image Understanding | – | 0
Interactive Attention AI to translate low light photos to captions for night scene understanding in women safety | – | 0
Interactive Visual Task Learning for Robots | – | 0
Dynamic Clue Bottlenecks: Towards Interpretable-by-Design Visual Question Answering | – | 0
Interpretable Counting for Visual Question Answering | – | 0
Interpretable Face Anti-Spoofing: Enhancing Generalization with Multimodal Large Language Models | – | 0
Interpretable Medical Image Visual Question Answering via Multi-Modal Relationship Graph Learning | – | 0
Interpretable Neural Computation for Real-World Compositional Visual Question Answering | – | 0
Interpretable Visual Question Answering Referring to Outside Knowledge | – | 0
Interpretable Visual Question Answering by Reasoning on Dependency Trees | – | 0
Interpretable Visual Question Answering by Visual Grounding from Attention Supervision Mining | – | 0
Interpretable Visual Question Answering via Reasoning Supervision | – | 0
Interpretable Visual Reasoning via Probabilistic Formulation under Natural Supervision | – | 0
Inverse Visual Question Answering: A New Benchmark and VQA Diagnosis Tool | – | 0
Inverse Visual Question Answering with Multi-Level Attentions | – | 0
Investigating Biases in Textual Entailment Datasets | – | 0
Investigating layer-selective transfer learning of QAOA parameters for Max-Cut problem | – | 0
ISAAQ -- Mastering Textbook Questions with Pre-trained Transformers and Bottom-Up and Top-Down Attention | – | 0
Is Cognition consistent with Perception? Assessing and Mitigating Multimodal Knowledge Conflicts in Document Understanding | – | 0
Is GPT-3 all you need for Visual Question Answering in Cultural Heritage? | – | 0
Iterated learning for emergent systematicity in VQA | – | 0
It Takes Two to Tango: Towards Theory of AI's Mind | – | 0
iVQA: Inverse Visual Question Answering | – | 0
Jaeger: A Concatenation-Based Multi-Transformer VQA Model | – | 0
Joint Image Captioning and Question Answering | – | 0
Joint learning of object graph and relation graph for visual question answering | – | 0
Jointly Learning Truth-Conditional Denotations and Groundings using Parallel Attention | – | 0
JTD-UAV: MLLM-Enhanced Joint Tracking and Description Framework for Anti-UAV Systems | – | 0
`Just because you are right, doesn't mean I am wrong': Overcoming a bottleneck in development and evaluation of Open-Ended VQA tasks | – | 0
KAT: A Knowledge Augmented Transformer for Vision-and-Language | – | 0
Kernel Pooling for Convolutional Neural Networks | – | 0
Generating and Evaluating Explanations of Attended and Error-Inducing Input Regions for VQA Models | – | 0
Knowing Where to Look? Analysis on Attention of Visual Question Answering System | – | 0
KnowIT VQA: Answering Knowledge-Based Questions about Videos | – | 0
Knowledge Acquisition for Visual Question Answering via Iterative Querying | – | 0
Knowledge-Based Counterfactual Queries for Visual Question Answering | – | 0
Knowledge-Based Visual Question Answering in Videos | – | 0
Knowledge Condensation and Reasoning for Knowledge-based VQA | – | 0
Knowledge Detection by Relevant Question and Image Attributes in Visual Question Answering | – | 0
KNVQA: A Benchmark for evaluation knowledge-based VQA | – | 0
KRISP: Integrating Implicit and Symbolic Knowledge for Open-Domain Knowledge-Based VQA | – | 0
KVL-BERT: Knowledge Enhanced Visual-and-Linguistic BERT for Visual Commonsense Reasoning | – | 0
KVQA: Knowledge-Aware Visual Question Answering | – | 0
Language bias in Visual Question Answering: A Survey and Taxonomy | – | 0
Language Features Matter: Effective Language Representations for Vision-Language Tasks | – | 0
Language Models are General-Purpose Interfaces | – | 0
LAPDoc: Layout-Aware Prompting for Documents | – | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | human | Accuracy | 89.3 | – | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | – | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | – | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | – | Unverified
5 | Kakao Brain | Accuracy | 73.33 | – | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | – | Unverified
7 | 270 | Accuracy | 70.23 | – | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | – | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | – | Unverified
10 | VinVL+L | Accuracy | 64.85 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | PaLI | Accuracy | 84.3 | – | Unverified
2 | BEiT-3 | Accuracy | 84.19 | – | Unverified
3 | VLMo | Accuracy | 82.78 | – | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | – | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | – | Unverified
6 | CuMo-7B | Accuracy | 82.2 | – | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | – | Unverified
8 | MMU | Accuracy | 81.26 | – | Unverified
9 | Lyrics | Accuracy | 81.2 | – | Unverified
10 | InternVL-C | Accuracy | 81.2 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BEiT-3 | overall | 84.03 | – | Unverified
2 | mPLUG-Huge | overall | 83.62 | – | Unverified
3 | ONE-PEACE | overall | 82.52 | – | Unverified
4 | X2-VLM (large) | overall | 81.8 | – | Unverified
5 | VLMo | overall | 81.3 | – | Unverified
6 | SimVLM | overall | 80.34 | – | Unverified
7 | X2-VLM (base) | overall | 80.2 | – | Unverified
8 | VAST | overall | 80.19 | – | Unverified
9 | VALOR | overall | 78.62 | – | Unverified
10 | Prompt Tuning | overall | 78.53 | – | Unverified
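
The Accuracy and overall figures above are scores as claimed for each model, not independently verified here. On VQA v2-style leaderboards these are typically computed with the VQA consensus metric, under which an answer counts as fully correct if at least three of the ten human annotators gave it. A simplified sketch of that metric (the function name is ours; the official evaluator also normalizes answer strings and averages over leave-one-annotator-out subsets):

```python
def vqa_soft_accuracy(predicted: str, human_answers: list[str]) -> float:
    """Simplified VQA consensus accuracy.

    Scores min(#matching annotators / 3, 1), so agreeing with 3 or more
    of the 10 human annotators counts as fully correct; fewer matches
    earn partial credit.
    """
    matches = sum(ans == predicted for ans in human_answers)
    return min(matches / 3.0, 1.0)

# Example: 4 of 10 annotators answered "2", so "2" scores 1.0.
print(vqa_soft_accuracy("2", ["2", "2", "two", "2", "3", "2", "3", "two", "3", "3"]))
```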