SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 2051–2100 of 2177 papers

Title | Status | Hype
Towards Visual Question Answering on Pathology Images | Code | 0
Active Learning for Visual Question Answering: An Empirical Study | Code | 0
Improved RAMEN: Towards Domain Generalization for Visual Question Answering | Code | 0
Improved Fusion of Visual and Language Representations by Dense Symmetric Co-Attention for Visual Question Answering | Code | 0
RUBi: Reducing Unimodal Biases for Visual Question Answering | Code | 0
RUBi: Reducing Unimodal Biases in Visual Question Answering | Code | 0
Image Content Generation with Causal Reasoning | Code | 0
Safeguarding Data in Multimodal AI: A Differentially Private Approach to CLIP Training | Code | 0
Zero-shot Translation of Attention Patterns in VQA Models to Natural Language | Code | 0
Track the Answer: Extending TextVQA from Image to Video with Spatio-Temporal Clues | Code | 0
Image Captioning for Effective Use of Language Models in Knowledge-Based Visual Question Answering | Code | 0
Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks | Code | 0
ArtQuest: Countering Hidden Language Biases in ArtVQA | Code | 0
IMAD: IMage-Augmented multi-modal Dialogue | Code | 0
Beyond Bilinear: Generalized Multimodal Factorized High-order Pooling for Visual Question Answering | Code | 0
Illusory VQA: Benchmarking and Enhancing Multimodal Models on Visual Illusions | Code | 0
Transfer Learning via Unsupervised Task Discovery for Visual Question Answering | Code | 0
Transformer Module Networks for Systematic Generalization in Visual Question Answering | Code | 0
Beyond Accuracy: A Consolidated Tool for Visual Question Answering Benchmarking | Code | 0
BERTHop: An Effective Vision-and-Language Model for Chest X-ray Disease Diagnosis | Code | 0
Scene Graph Prediction with Limited Labels | Code | 0
LLaVA Steering: Visual Instruction Tuning with 500x Fewer Parameters through Modality Linear Representation-Steering | Code | 0
Benchmarking Vision-Language Contrastive Methods for Medical Representation Learning | Code | 0
Transparency by Design: Closing the Gap Between Performance and Interpretability in Visual Reasoning | Code | 0
ILLUME: Rationalizing Vision-Language Models through Human Interactions | Code | 0
Detecting Knowledge Boundary of Vision Large Language Models by Sampling-Based Inference | Code | 0
IIU: Independent Inference Units for Knowledge-based Visual Question Answering | Code | 0
Traveling Across Languages: Benchmarking Cross-Lingual Consistency in Multimodal LLMs | Code | 0
Visually Dehallucinative Instruction Generation | Code | 0
II-MMR: Identifying and Improving Multi-modal Multi-hop Reasoning in Visual Question Answering | Code | 0
Treble Counterfactual VLMs: A Causal Approach to Hallucination | Code | 0
Visually Grounded VQA by Lattice-based Retrieval | Code | 0
Securing Vision-Language Models with a Robust Encoder Against Jailbreak and Adversarial Attacks | Code | 0
Visually Interpretable Subtask Reasoning for Visual Question Answering | Code | 0
Barlow constrained optimization for Visual Question Answering | Code | 0
BabelBench: An Omni Benchmark for Code-Driven Analysis of Multimodal and Multistructured Data | Code | 0
Design as Desired: Utilizing Visual Question Answering for Multimodal Pre-training | Code | 0
HumaniBench: A Human-Centric Framework for Large Multimodal Models Evaluation | Code | 0
HRIBench: Benchmarking Vision-Language Models for Real-Time Human Perception in Human-Robot Interaction | Code | 0
AVQACL: A Novel Benchmark for Audio-Visual Question Answering Continual Learning | Code | 0
TUBench: Benchmarking Large Vision-Language Models on Trustworthiness with Unanswerable Questions | Code | 0
Delving Deeper into Cross-lingual Visual Question Answering | Code | 0
Why do These Match? Explaining the Behavior of Image Similarity Models | Code | 0
Towards Flexible Evaluation for Generative Visual Question Answering | Code | 0
Analyzing the Behavior of Visual Question Answering Models | Code | 0
Select, Substitute, Search: A New Benchmark for Knowledge-Augmented Visual Question Answering | Code | 0
Self-Critical Reasoning for Robust Visual Question Answering | Code | 0
Visual Question Answering: A Survey of Methods and Datasets | Code | 0
WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models | Code | 0
How to Determine the Preferred Image Distribution of a Black-Box Vision-Language Model? | Code | 0
Page 42 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified
4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified