SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 801–850 of 2177 papers

Title | Status | Hype
As Firm As Their Foundations: Can open-sourced foundation models be used to create adversarial examples for downstream tasks? | - | 0
Chain-of-Spot: Interactive Reasoning Improves Large Vision-Language Models | Code | 2
SpatialPIN: Enhancing Spatial Reasoning Capabilities of Vision-Language Models through Prompting and Interacting 3D Priors | - | 0
FlexCap: Describe Anything in Images in Controllable Detail | - | 0
Can LLMs Generate Human-Like Wayfinding Instructions? Towards Platform-Agnostic Embodied Instruction Synthesis | - | 0
SQ-LLaVA: Self-Questioning for Large Vision-Language Assistant | Code | 1
Few-Shot VQA with Frozen LLMs: A Tale of Two Approaches | - | 0
Knowledge Condensation and Reasoning for Knowledge-based VQA | - | 0
Few-Shot Image Classification and Segmentation as Visual Question Answering Using Vision-Language Models | - | 0
Parameter Efficient Reinforcement Learning from Human Feedback | - | 0
Adversarial Training with OCR Modality Perturbation for Scene-Text Visual Question Answering | Code | 0
MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training | - | 0
VisionGPT: Vision-Language Understanding Agent Using Generalized Multimodal Framework | - | 0
Can We Talk Models Into Seeing the World Differently? | Code | 1
Strengthening Multimodal Large Language Model with Bootstrapped Preference Optimization | - | 0
Fine-tuning Large Language Models with Sequential Instructions | - | 0
Mitigating the Impact of Attribute Editing on Face Recognition | - | 0
MoAI: Mixture of All Intelligence for Large Language and Vision Models | Code | 3
Beyond Text: Frozen Large Language Models in Visual Signal Comprehension | Code | 2
Multi-modal Auto-regressive Modeling via Visual Words | Code | 1
Answering Diverse Questions via Text Attached with Key Audio-Visual Clues | Code | 0
Mipha: A Comprehensive Overhaul of Multimodal Assistant with Small Language Models | Code | 3
DeepSeek-VL: Towards Real-World Vision-Language Understanding | Code | 7
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context | Code | 3
SnapNTell: Enhancing Entity-Centric Visual Question Answering with Retrieval Augmented Multimodal LLM | - | 0
CAT: Enhancing Multimodal Large Language Model to Answer Questions in Dynamic Audio-Visual Scenarios | Code | 2
Are Language Models Puzzle Prodigies? Algorithmic Puzzles Unveil Serious Challenges in Multimodal Reasoning | Code | 2
Modeling Collaborator: Enabling Subjective Vision Classification With Minimal Human Effort via LLM Tool-Use | - | 0
CLEVR-POC: Reasoning-Intensive Visual Question Answering in Partially Observable Environments | - | 0
Feast Your Eyes: Mixture-of-Resolution Adaptation for Multimodal Large Language Models | Code | 3
MOKA: Open-World Robotic Manipulation through Mark-Based Visual Prompting | - | 0
Enhancing Generalization in Medical Visual Question Answering Tasks via Gradient-Guided Model Perturbation | - | 0
Vision-Language Models for Medical Report Generation and Visual Question Answering: A Review | Code | 3
InfiMM-HD: A Leap Forward in High-Resolution Multimodal Understanding | - | 0
The All-Seeing Project V2: Towards General Relation Comprehension of the Open World | Code | 4
A Cognitive Evaluation Benchmark of Image Reasoning and Description for Large Vision-Language Models | - | 0
ArcSin: Adaptive ranged cosine Similarity injected noise for Language-Driven Visual Tasks | - | 0
VCD: Knowledge Base Guided Visual Commonsense Discovery in Images | - | 0
Read and Think: An Efficient Step-wise Multimodal Language Model for Document Understanding and Reasoning | - | 0
LLM-Assisted Multi-Teacher Continual Learning for Visual Question Answering in Robotic Surgery | Code | 0
RoboCodeX: Multimodal Code Generation for Robotic Behavior Synthesis | - | 0
Bridging the Gap between 2D and 3D Visual Question Answering: A Fusion Approach for 3D VQA | Code | 1
VISREAS: Complex Visual Reasoning with Unanswerable Questions | - | 0
Multimodal Transformer With a Low-Computational-Cost Guarantee | - | 0
CommVQA: Situating Visual Question Answering in Communicative Contexts | Code | 0
Uncertainty-Aware Evaluation for Vision-Language Models | Code | 1
Visual Hallucinations of Multi-modal Large Language Models | Code | 1
TinyLLaVA: A Framework of Small-scale Large Multimodal Models | Code | 4
Cognitive Visual-Language Mapper: Advancing Multimodal Comprehension with Enhanced Visual Knowledge Alignment | Code | 1
Exploring the Frontier of Vision-Language Models: A Survey of Current Methodologies and Future Directions | - | 0
Page 17 of 44

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | - | Unverified
2 | Qwen2-VL-72B | GPT-4 score | 74 | - | Unverified
3 | InternVL2.5-78B | GPT-4 score | 72.3 | - | Unverified
4 | GPT-4o +text rationale +IoT | GPT-4 score | 72.2 | - | Unverified
5 | Lyra-Pro | GPT-4 score | 71.4 | - | Unverified
6 | GLM-4V-Plus | GPT-4 score | 71.1 | - | Unverified
7 | Phantom-7B | GPT-4 score | 70.8 | - | Unverified
8 | InternVL2.5-38B | GPT-4 score | 68.8 | - | Unverified
9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | - | Unverified
10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | - | Unverified