
Visual Question Answering (VQA)

Visual Question Answering (VQA) is a computer-vision task in which a system answers natural-language questions about an image. The goal is to teach machines to understand the content of an image well enough to answer questions about it in natural language.

[Figure omitted. Image source: visualqa.org]
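To make the task concrete, here is a minimal inference sketch. It assumes the Hugging Face `transformers` library and the `dandelin/vilt-b32-finetuned-vqa` checkpoint, which are illustrative choices and not tied to any paper or leaderboard entry below.

```python
# Minimal VQA inference sketch (assumes: pip install transformers pillow requests).
# The checkpoint is one example; any ViLT VQA checkpoint works the same way.
import requests
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

checkpoint = "dandelin/vilt-b32-finetuned-vqa"  # assumed example checkpoint
processor = ViltProcessor.from_pretrained(checkpoint)
model = ViltForQuestionAnswering.from_pretrained(checkpoint)

# Any RGB image works; this COCO validation image is a common demo choice.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
question = "How many cats are there?"

inputs = processor(image, question, return_tensors="pt")
logits = model(**inputs).logits  # scores over a fixed answer vocabulary
answer = model.config.id2label[logits.argmax(-1).item()]
print(answer)  # e.g. "2"
```

Classification-style models like ViLT pick an answer from a fixed vocabulary; many of the multimodal LLMs listed below instead generate free-form answers.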

Papers

Showing 351–400 of 2167 papers

Title | Status | Hype
--- | --- | ---
SemiHVision: Enhancing Medical Multimodal Models with a Semi-Human Annotated Dataset and Fine-Tuned Instruction Generation | Code | 0
ChitroJera: A Regionally Relevant Visual Question Answering Dataset for Bangla | — | 0
LLaVA-Ultra: Large Chinese Language and Vision Assistant for Ultrasound | — | 0
ViConsFormer: Constituting Meaningful Phrases of Scene Texts using Transformer-based Method in Vietnamese Text-based Visual Question Answering | Code | 0
NaturalBench: Evaluating Vision-Language Models on Natural Adversarial Samples | — | 0
Latent Image and Video Resolution Prediction using Convolutional Neural Networks | — | 0
RescueADI: Adaptive Disaster Interpretation in Remote Sensing Images with Autonomous Agents | — | 0
ActionCOMET: A Zero-shot Approach to Learn Image-specific Commonsense Concepts about Actions | Code | 0
Help Me Identify: Is an LLM+VQA System All We Need to Identify Visual Concepts? | Code | 0
MMed-RAG: Versatile Multimodal RAG System for Medical Vision Language Models | Code | 3
WorldCuisines: A Massive-Scale Benchmark for Multilingual and Multicultural Visual Question Answering on Global Cuisines | Code | 1
VividMed: Vision Language Model with Versatile Visual Grounding for Medicine | Code | 1
SlideChat: A Large Vision-Language Assistant for Whole-Slide Pathology Image Understanding | — | 0
Difficult Task Yes but Simple Task No: Unveiling the Laziness in Multimodal LLMs | Code | 0
Towards Foundation Models for 3D Vision: How Close Are We? | Code | 1
Eliminating the Language Bias for Visual Question Answering with fine-grained Causal Intervention | — | 0
LiveXiv -- A Multi-Modal Live Benchmark Based on Arxiv Papers Content | Code | 1
Skipping Computations in Multimodal LLMs | Code | 1
Declarative Knowledge Distillation from Large Language Models for Visual Question Answering Datasets | Code | 0
ViT3D Alignment of LLaMA3: 3D Medical Image Report Generation | — | 0
Quality Prediction of AI Generated Images and Videos: Emerging Trends and Opportunities | — | 0
Secure Video Quality Assessment Resisting Adversarial Attacks | — | 0
Beyond Captioning: Task-Specific Prompting for Improved VLM Performance in Mathematical Reasoning | — | 0
ERVQA: A Dataset to Benchmark the Readiness of Large Vision Language Models in Hospital Environments | Code | 0
DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback | Code | 1
ActiView: Evaluating Active Perception Ability for Multimodal Large Language Models | Code | 1
MC-CoT: A Modular Collaborative CoT Framework for Zero-shot Medical-VQA with LLM and MLLM Integration | Code | 1
TUBench: Benchmarking Large Vision-Language Models on Trustworthiness with Unanswerable Questions | Code | 0
BadCM: Invisible Backdoor Attack Against Cross-Modal Learning | Code | 1
LoGra-Med: Long Context Multi-Graph Alignment for Medical Vision-Language Model | — | 0
Video Instruction Tuning With Synthetic Data | — | 0
Why context matters in VQA and Reasoning: Semantic interventions for VLM input modalities | — | 0
Backdooring Vision-Language Models with Out-Of-Distribution Data | — | 0
Unleashing the Potentials of Likelihood Composition for Multi-modal Language Models | Code | 0
BabelBench: An Omni Benchmark for Code-Driven Analysis of Multimodal and Multistructured Data | Code | 0
FMBench: Benchmarking Fairness in Multimodal Large Language Models on Medical Tasks | — | 0
A Hitchhikers Guide to Fine-Grained Face Forgery Detection Using Common Sense Reasoning | Code | 1
T2Vs Meet VLMs: A Scalable Multimodal Dataset for Visual Harmfulness Recognition | Code | 1
Visual Question Decomposition on Multimodal Large Language Models | — | 0
TrojVLM: Backdoor Attack Against Vision Language Models | — | 0
3D-CT-GPT: Generating 3D Radiology Reports through Integration of Large Vision-Language Models | — | 0
Charting the Future: Using Chart Question-Answering for Scalable Evaluation of LLM-Driven Data Visualizations | — | 0
DARE: Diverse Visual Question Answering with Robustness Evaluation | — | 0
ZALM3: Zero-Shot Enhancement of Vision-Language Alignment via In-Context Information in Multi-Turn Multimodal Medical Dialogue | — | 0
A Unified Hallucination Mitigation Framework for Large Vision-Language Models | Code | 0
MediConfusion: Can you trust your AI radiologist? Probing the reliability of multimodal medical foundation models | Code | 1
Advancing Video Quality Assessment for AIGC | — | 0
Revisiting Video Quality Assessment from the Perspective of Generalization | Code | 0
Detect, Describe, Discriminate: Moving Beyond VQA for MLLM Evaluation | — | 0
Towards Efficient and Robust VQA-NLE Data Generation with Large Vision-Language Models | Code | 0
Page 8 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | human | Accuracy | 89.3 | — | Unverified
2 | DREAM+Unicoder-VL (MSRA) | Accuracy | 76.04 | — | Unverified
3 | TRRNet (Ensemble) | Accuracy | 74.03 | — | Unverified
4 | MIL-nbgao | Accuracy | 73.81 | — | Unverified
5 | Kakao Brain | Accuracy | 73.33 | — | Unverified
6 | Coarse-to-Fine Reasoning, Single Model | Accuracy | 72.14 | — | Unverified
7 | 270 | Accuracy | 70.23 | — | Unverified
8 | NSM ensemble (updated) | Accuracy | 67.55 | — | Unverified
9 | VinVL-DPT | Accuracy | 64.92 | — | Unverified
10 | VinVL+L | Accuracy | 64.85 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | PaLI | Accuracy | 84.3 | — | Unverified
2 | BEiT-3 | Accuracy | 84.19 | — | Unverified
3 | VLMo | Accuracy | 82.78 | — | Unverified
4 | ONE-PEACE | Accuracy | 82.6 | — | Unverified
5 | mPLUG (Huge) | Accuracy | 82.43 | — | Unverified
6 | CuMo-7B | Accuracy | 82.2 | — | Unverified
7 | X2-VLM (large) | Accuracy | 81.9 | — | Unverified
8 | MMU | Accuracy | 81.26 | — | Unverified
9 | Lyrics | Accuracy | 81.2 | — | Unverified
10 | InternVL-C | Accuracy | 81.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | BEiT-3 | overall | 84.03 | — | Unverified
2 | mPLUG-Huge | overall | 83.62 | — | Unverified
3 | ONE-PEACE | overall | 82.52 | — | Unverified
4 | X2-VLM (large) | overall | 81.8 | — | Unverified
5 | VLMo | overall | 81.3 | — | Unverified
6 | SimVLM | overall | 80.34 | — | Unverified
7 | X2-VLM (base) | overall | 80.2 | — | Unverified
8 | VAST | overall | 80.19 | — | Unverified
9 | VALOR | overall | 78.62 | — | Unverified
10 | Prompt Tuning | overall | 78.53 | — | Unverified
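The "Accuracy" and "overall" numbers above are most likely the consensus-based VQA accuracy metric rather than plain exact match. A minimal sketch of the commonly used simplified form, assuming the standard setup of roughly ten human answers per question:

```python
# Simplified VQA accuracy (Antol et al., 2015): each question has ~10
# human answers, and a prediction is scored min(#matching humans / 3, 1),
# i.e. it earns full credit if at least 3 annotators gave that answer.
# The official evaluator also normalizes answers (case, punctuation,
# digits) and averages over annotator subsets; that detail is omitted here.
def vqa_accuracy(prediction: str, human_answers: list[str]) -> float:
    matches = sum(a == prediction for a in human_answers)
    return min(matches / 3.0, 1.0)

# Example: 5 of 10 annotators said "2", so "2" earns full credit,
# while "two" (2 matches) earns only partial credit.
answers = ["2", "2", "two", "2", "2", "3", "two", "3", "2", "4"]
print(vqa_accuracy("2", answers))    # 1.0
print(vqa_accuracy("two", answers))  # 0.666...
```

A leaderboard score is then the mean of this per-question accuracy over the test set, which is why a model can score well below 100 even when its answers are reasonable.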