SOTAVerified

Multimodal Reasoning

Reasoning over multimodal inputs.

Papers

Showing 151–200 of 302 papers

Title | Status | Hype
Oasis: One Image is All You Need for Multimodal Instruction Data Synthesis | Code | 1
LMM-R1: Empowering 3B LMMs with Strong Reasoning Abilities Through Two-Stage Rule-Based RL | Code | 4
MM-Eureka: Exploring Visual Aha Moment with Rule-based Large-scale Reinforcement Learning | Code | 4
Vision-R1: Incentivizing Reasoning Capability in Multimodal Large Language Models | Code | 5
Can Atomic Step Decomposition Enhance the Self-structured Reasoning of Multimodal Large Models? | Code | 2
Integrating Chain-of-Thought for Multimodal Alignment: A Study on 3D Vision-Language Learning | n/a | 0
R1-Zero's "Aha Moment" in Visual Reasoning on a 2B Non-SFT Model | Code | 4
Question-Aware Gaussian Experts for Audio-Visual Question Answering | Code | 1
COSINT-Agent: A Knowledge-Driven Multimodal Agent for Chinese Open Source Intelligence | n/a | 0
Audio-Reasoner: Improving Reasoning Capability in Large Audio Language Models | Code | 3
Shakti-VLMs: Scalable Vision-Language Models for Enterprise AI | n/a | 0
All-in-one: Understanding and Generation in Multimodal Reasoning with the MAIA Benchmark | n/a | 0
R1-Onevision: An Open-Source Multimodal Large Language Model Capable of Deep Reasoning | Code | 4
Multimodal Inconsistency Reasoning (MMIR): A New Benchmark for Multimodal Reasoning Models | n/a | 0
Exploring Advanced Techniques for Visual Question Answering: A Comprehensive Comparison | n/a | 0
SegSub: Evaluating Robustness to Knowledge Conflicts and Hallucinations in Vision-Language Models | Code | 0
MM-Verify: Enhancing Multimodal Reasoning with Chain-of-Thought Verification | Code | 1
CutPaste&Find: Efficient Multimodal Hallucination Detector with Visual-aid Knowledge Base | n/a | 0
Language Models Can See Better: Visual Contrastive Decoding For LLM Multimodal Reasoning | Code | 0
Code-Vision: Evaluating Multimodal LLMs Logic Understanding and Code Generation Capabilities | Code | 1
USER-VLM 360: Personalized Vision Language Models with User-aware Tuning for Social Human-Robot Interactions | Code | 0
EnigmaEval: A Benchmark of Long Multimodal Reasoning Challenges | n/a | 0
MME-CoT: Benchmarking Chain-of-Thought in Large Multimodal Models for Reasoning Quality, Robustness, and Efficiency | n/a | 0
Ask in Any Modality: A Comprehensive Survey on Multimodal Retrieval-Augmented Generation | Code | 3
A Generative Framework for Bidirectional Image-Report Understanding in Chest Radiography | n/a | 0
Mitigating Object Hallucinations in Large Vision-Language Models via Attention Calibration | n/a | 0
Boosting Multimodal Reasoning with MCTS-Automated Structured Thinking | n/a | 0
Position: Empowering Time Series Reasoning with Multimodal LLMs | n/a | 0
The Jumping Reasoning Curve? Tracking the Evolution of Reasoning Performance in GPT-[n] and o-[n] Models on Multimodal Puzzles | Code | 2
Efficient Reasoning with Hidden Thinking | Code | 2
Can MLLMs Reason in Multimodality? EMMA: An Enhanced MultiModal ReAsoning Benchmark | n/a | 0
DRIVINGVQA: Analyzing Visual Chain-of-Thought Reasoning of Vision Language Models in Real-World Scenarios with Driving Theory Tests | n/a | 0
Socratic Questioning: Learn to Self-guide Multimodal Reasoning in the Wild | Code | 0
EfficientLLaVA: Generalizable Auto-Pruning for Large Vision-language Models | n/a | 0
LININ: Logic Integrated Neural Inference Network for Explanatory Visual Question Answering | Code | 0
Diving into Self-Evolving Training for Multimodal Reasoning | n/a | 0
SilVar: Speech Driven Multimodal Model for Reasoning Visual Question Answering and Object Localization | Code | 0
Progressive Multimodal Reasoning via Active Retrieval | n/a | 0
FiVL: A Framework for Improved Vision-Language Alignment | Code | 0
Cracking the Code of Hallucination in LVLMs with Vision-aware Head Divergence | n/a | 0
Do Language Models Understand Time? | Code | 1
CoMT: A Novel Benchmark for Chain of Multi-modal Thought on Large Vision-Language Models | Code | 1
A Survey of Mathematical Reasoning in the Era of Multimodal Large Language Model: Benchmark, Method & Challenges | n/a | 0
Leveraging Retrieval-Augmented Tags for Large Vision-Language Understanding in Complex Scenes | n/a | 0
Optimizing Vision-Language Interactions Through Decoder-Only Models | n/a | 0
EVLM: Self-Reflective Multimodal Reasoning for Cross-Dimensional Visual Editing | n/a | 0
Neptune: The Long Orbit to Benchmarking Long Video Understanding | Code | 2
MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale | Code | 1
Aguvis: Unified Pure Vision Agents for Autonomous GUI Interaction | Code | 3
Accelerating Multimodal Large Language Models via Dynamic Visual-Token Exit and the Empirical Findings | Code | 1
Page 4 of 7

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | GPT-4V | Accuracy | 24 | n/a | Unverified
2 | Gemini Pro | Accuracy | 13.2 | n/a | Unverified
3 | LLaVa-1.5-13B | Accuracy | 1.8 | n/a | Unverified
4 | LLaVa-1.5-7B | Accuracy | 1.5 | n/a | Unverified
5 | BLIP2-FLAN-T5-XXL | Accuracy | 0.9 | n/a | Unverified
6 | QWEN | Accuracy | 0.9 | n/a | Unverified
7 | CogVLM | Accuracy | 0.9 | n/a | Unverified
8 | InstructBLIP | Accuracy | 0.6 | n/a | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | GPT4V | Accuracy | 22.76 | n/a | Unverified
2 | Gemini Pro | Accuracy | 17.66 | n/a | Unverified
3 | Qwen-VL-Max | Accuracy | 15.59 | n/a | Unverified
4 | InternLM-XComposer2-VL | Accuracy | 14.54 | n/a | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | GPT-4 | Acc | 30.3 | n/a | Unverified