SOTAVerified

Multimodal Reasoning

Reasoning over multimodal inputs.

Papers

Showing 51–100 of 302 papers

Title | Status | Hype
LLaVA-MORE: A Comparative Study of LLMs and Visual Backbones for Enhanced Visual Instruction Tuning | Code | 2
Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? | Code | 2
VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning | Code | 2
Neptune: The Long Orbit to Benchmarking Long Video Understanding | Code | 2
DC3DO: Diffusion Classifier for 3D Objects | Code | 1
Thinking Before Looking: Improving Multimodal LLM Reasoning via Mitigating Visual Hallucination | Code | 1
Will Pre-Training Ever End? A First Step Toward Next-Generation Foundation MLLMs via Self-Improving Systematic Cognition | Code | 1
Stop Reasoning! When Multimodal LLM with Chain-of-Thought Reasoning Meets Adversarial Image | Code | 1
Advancing Multimodal Reasoning via Reinforcement Learning with Cold Start | Code | 1
Accelerating Multimodal Large Language Models via Dynamic Visual-Token Exit and the Empirical Findings | Code | 1
ARB: A Comprehensive Arabic Multimodal Reasoning Benchmark | Code | 1
Towers of Babel: Combining Images, Language, and 3D Geometry for Learning Multimodal Vision | Code | 1
CoMT: A Novel Benchmark for Chain of Multi-modal Thought on Large Vision-Language Models | Code | 1
CofiPara: A Coarse-to-fine Paradigm for Multimodal Sarcasm Target Identification with Large Multimodal Models | Code | 1
Shifting More Attention to Visual Backbone: Query-modulated Refinement Networks for End-to-End Visual Grounding | Code | 1
Code-Vision: Evaluating Multimodal LLMs Logic Understanding and Code Generation Capabilities | Code | 1
SATORI-R1: Incentivizing Multimodal Reasoning with Spatial Grounding and Verifiable Rewards | Code | 1
A Picture Is Worth a Graph: A Blueprint Debate Paradigm for Multimodal Reasoning | Code | 1
Question-Aware Gaussian Experts for Audio-Visual Question Answering | Code | 1
e-SNLI-VE: Corrected Visual-Textual Entailment with Natural Language Explanations | Code | 1
Fine-Grained Visual Entailment | Code | 1
SAKURA: On the Multi-hop Reasoning of Large Audio-Language Models Based on Speech and Audio Information | Code | 1
Variational Causal Inference Network for Explanatory Visual Question Answering | Code | 1
Oasis: One Image is All You Need for Multimodal Instruction Data Synthesis | Code | 1
A Multimodal Framework for the Detection of Hateful Memes | Code | 1
PACS: A Dataset for Physical Audiovisual CommonSense Reasoning | Code | 1
Exploring the Transferability of Visual Prompting for Multimodal Large Language Models | Code | 1
MORSE-500: A Programmatically Controllable Video Benchmark to Stress-Test Multimodal Reasoning | Code | 1
Breaking the Data Barrier -- Building GUI Agents Through Task Generalization | Code | 1
Boosting the Power of Small Multimodal Reasoning Models to Match Larger Models with Self-Consistency Training | Code | 1
MM-BigBench: Evaluating Multimodal Models on Multimodal Content Comprehension Tasks | Code | 1
Boosting MLLM Reasoning with Text-Debiased Hint-GRPO | Code | 1
MicroVQA: A Multimodal Reasoning Benchmark for Microscopy-Based Scientific Research | Code | 1
All in an Aggregated Image for In-Image Learning | Code | 1
MFC-Bench: Benchmarking Multimodal Fact-Checking with Large Vision-Language Models | Code | 1
MM-Verify: Enhancing Multimodal Reasoning with Chain-of-Thought Verification | Code | 1
Math-PUMA: Progressive Upward Multimodal Alignment to Enhance Mathematical Reasoning | Code | 1
MDK12-Bench: A Multi-Discipline Benchmark for Evaluating Reasoning in Multimodal Large Language Models | Code | 1
Beneath the Surface: Unveiling Harmful Memes with Multimodal Reasoning Distilled from Large Language Models | Code | 1
DOMINO: A Dual-System for Multi-step Visual Language Reasoning | Code | 1
Agent-X: Evaluating Deep Multimodal Reasoning in Vision-Centric Agentic Tasks | Code | 1
Enhancing Human-like Multi-Modal Reasoning: A New Challenging Dataset and Comprehensive Framework | Code | 1
Do Language Models Understand Time? | Code | 1
LogicOCR: Do Your Large Multimodal Models Excel at Logical Reasoning on Text-Rich Images? | Code | 1
LLM-CXR: Instruction-Finetuned LLM for CXR Image Understanding and Generation | Code | 1
MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale | Code | 1
3MDBench: Medical Multimodal Multi-agent Dialogue Benchmark | Code | 1
MERLOT: Multimodal Neural Script Knowledge Models | Code | 1
Learning Compact Vision Tokens for Efficient Large Multimodal Models | Code | 1
HaloQuest: A Visual Hallucination Dataset for Advancing Multimodal Reasoning | Code | 1
Page 2 of 7

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | GPT-4V | Accuracy | 24 | - | Unverified
2 | Gemini Pro | Accuracy | 13.2 | - | Unverified
3 | LLaVA-1.5-13B | Accuracy | 1.8 | - | Unverified
4 | LLaVA-1.5-7B | Accuracy | 1.5 | - | Unverified
5 | BLIP2-FLAN-T5-XXL | Accuracy | 0.9 | - | Unverified
6 | Qwen | Accuracy | 0.9 | - | Unverified
7 | CogVLM | Accuracy | 0.9 | - | Unverified
8 | InstructBLIP | Accuracy | 0.6 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | GPT-4V | Accuracy | 22.76 | - | Unverified
2 | Gemini Pro | Accuracy | 17.66 | - | Unverified
3 | Qwen-VL-Max | Accuracy | 15.59 | - | Unverified
4 | InternLM-XComposer2-VL | Accuracy | 14.54 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | GPT-4 | Accuracy | 30.3 | - | Unverified