SOTAVerified

Multimodal Reasoning

Reasoning over multimodal inputs.

Papers

Showing 101–150 of 302 papers

Title | Status | Hype
SToLa: Self-Adaptive Touch-Language Framework with Tactile Commonsense Reasoning in Open-Ended Scenarios | - | 0
X-Reasoner: Towards Generalizable Reasoning Across Modalities and Domains | - | 0
Advancing Conversational Diagnostic AI with Multimodal Reasoning | - | 0
R-Bench: Graduate-level Multi-disciplinary Benchmarks for LLM & MLLM Complex Reasoning Evaluation | - | 0
Reinforced MLLM: A Survey on RL-Based Reasoning in Multimodal Large Language Models | - | 0
MultiMind: Enhancing Werewolf Agents with Multimodal Reasoning and Theory of Mind | - | 0
VideoMultiAgents: A Multi-Agent Framework for Video Question Answering | Code | 1
Skywork R1V2: Multimodal Hybrid Reinforcement Learning for Reasoning | Code | 7
VLMGuard-R1: Proactive Safety Alignment for VLMs via Reasoning-Driven Prompt Optimization | - | 0
GeoSense: Evaluating Identification and Application of Geometric Principles in Multimodal Reasoning | - | 0
Embodied-R: Collaborative Framework for Activating Embodied Spatial Reasoning in Foundation Models via Reinforcement Learning | Code | 2
Structured Graph Representations for Visual Narrative Reasoning: A Hierarchical Framework for Comics | - | 0
SlowFastVAD: Video Anomaly Detection via Integrating Simple Detector and RAG-Enhanced Vision-Language Model | - | 0
Breaking the Data Barrier -- Building GUI Agents Through Task Generalization | Code | 1
VisualPuzzles: Decoupling Multimodal Reasoning Evaluation from Domain Knowledge | - | 0
Draw with Thought: Unleashing Multimodal Reasoning for Scientific Diagram Generation | - | 0
HM-RAG: Hierarchical Multi-Agent Multimodal Retrieval Augmented Generation | Code | 2
NoTeS-Bank: Benchmarking Neural Transcription and Search for Scientific Notes Understanding | - | 0
VLMT: Vision-Language Multimodal Transformer for Multimodal Multi-hop Question Answering | - | 0
VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning | Code | 2
Kimi-VL Technical Report | Code | 5
MDK12-Bench: A Multi-Discipline Benchmark for Evaluating Reasoning in Multimodal Large Language Models | Code | 1
Skywork R1V: Pioneering Multimodal Reasoning with Chain-of-Thought | Code | 7
Why Reasoning Matters? A Survey of Advancements in Multimodal Reasoning (v1) | - | 0
MME-Unify: A Comprehensive Benchmark for Unified Multimodal Understanding and Generation Models | - | 0
Affordable AI Assistants with Knowledge Graph of Thoughts | Code | 3
FortisAVQA and MAVEN: a Benchmark Dataset and Debiasing Framework for Robust Multimodal Reasoning | Code | 2
Agentic Multimodal AI for Hyperpersonalized B2B and B2C Advertising in Competitive Markets: An AI-Driven Competitive Advertising Framework | - | 0
Boosting MLLM Reasoning with Text-Debiased Hint-GRPO | Code | 1
Evolutionary Prompt Optimization Discovers Emergent Multimodal Reasoning Strategies in Vision-Language Models | - | 0
3MDBench: Medical Multimodal Multi-agent Dialogue Benchmark | Code | 1
VisualQuest: A Diverse Image Dataset for Evaluating Visual Recognition in LLMs | - | 0
Training-Free Personalization via Retrieval and Reasoning on Fingerprints | - | 0
Mind with Eyes: from Language Reasoning to Multimodal Reasoning | - | 0
OpenVLThinker: An Early Exploration to Complex Vision-Language Reasoning via Iterative Self-Improvement | Code | 2
Towards Agentic Recommender Systems in the Era of Multimodal Large Language Models | - | 0
EfficientLLaVA: Generalizable Auto-Pruning for Large Vision-language Models | - | 0
LLaVA-MORE: A Comparative Study of LLMs and Visual Backbones for Enhanced Visual Instruction Tuning | Code | 2
Mitigating Visual Forgetting via Take-along Visual Conditioning for Multi-modal Long CoT Reasoning | - | 0
MicroVQA: A Multimodal Reasoning Benchmark for Microscopy-Based Scientific Research | Code | 1
DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding | Code | 2
MPBench: A Comprehensive Multimodal Reasoning Benchmark for Process Errors Identification | - | 0
Will Pre-Training Ever End? A First Step Toward Next-Generation Foundation MLLMs via Self-Improving Systematic Cognition | Code | 1
VERIFY: A Benchmark of Visual Explanation and Reasoning for Investigating Multimodal Reasoning Fidelity | - | 0
Chat-TS: Enhancing Multi-Modal Reasoning Over Time-Series and Natural Language Data | - | 0
How Do Multimodal Large Language Models Handle Complex Multimodal Reasoning? Placing Them in An Extensible Escape Game | Code | 1
R1-Onevision: Advancing Generalized Multimodal Reasoning through Cross-Modal Formalization | Code | 4
VisualPRM: An Effective Process Reward Model for Multimodal Reasoning | - | 0
MindGYM: Enhancing Vision-Language Models via Synthetic Self-Challenging Questions | Code | 0
Oasis: One Image is All You Need for Multimodal Instruction Data Synthesis | Code | 1
Page 3 of 7

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | GPT-4V | Accuracy | 24 | - | Unverified
2 | Gemini Pro | Accuracy | 13.2 | - | Unverified
3 | LLaVa-1.5-13B | Accuracy | 1.8 | - | Unverified
4 | LLaVa-1.5-7B | Accuracy | 1.5 | - | Unverified
5 | BLIP2-FLAN-T5-XXL | Accuracy | 0.9 | - | Unverified
6 | QWEN | Accuracy | 0.9 | - | Unverified
7 | CogVLM | Accuracy | 0.9 | - | Unverified
8 | InstructBLIP | Accuracy | 0.6 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | GPT4V | Accuracy | 22.76 | - | Unverified
2 | Gemini Pro | Accuracy | 17.66 | - | Unverified
3 | Qwen-VL-Max | Accuracy | 15.59 | - | Unverified
4 | InternLM-XComposer2-VL | Accuracy | 14.54 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | GPT-4 | Acc | 30.3 | - | Unverified