SOTAVerified

Multimodal Reasoning

Reasoning over multimodal inputs.

Papers

Showing 101–125 of 302 papers

| Title | Status | Hype |
|---|---|---|
| VideoMultiAgents: A Multi-Agent Framework for Video Question Answering | Code | 1 |
| Visual Abstract Thinking Empowers Multimodal Reasoning | Code | 1 |
| Towers of Babel: Combining Images, Language, and 3D Geometry for Learning Multimodal Vision | Code | 1 |
| Variational Causal Inference Network for Explanatory Visual Question Answering | Code | 1 |
| MDK12-Bench: A Multi-Discipline Benchmark for Evaluating Reasoning in Multimodal Large Language Models | Code | 1 |
| Fine-Grained Visual Entailment | Code | 1 |
| Learning Compact Vision Tokens for Efficient Large Multimodal Models | Code | 1 |
| Will Pre-Training Ever End? A First Step Toward Next-Generation Foundation MLLMs via Self-Improving Systematic Cognition | Code | 1 |
| Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language | Code | 0 |
| Socratic Questioning: Learn to Self-guide Multimodal Reasoning in the Wild | Code | 0 |
| SegSub: Evaluating Robustness to Knowledge Conflicts and Hallucinations in Vision-Language Models | Code | 0 |
| GThinker: Towards General Multimodal Reasoning via Cue-Guided Rethinking | Code | 0 |
| SilVar: Speech Driven Multimodal Model for Reasoning Visual Question Answering and Object Localization | Code | 0 |
| Controllable Contextualized Image Captioning: Directing the Visual Narrative through User-Defined Highlights | Code | 0 |
| FiVL: A Framework for Improved Vision-Language Alignment | Code | 0 |
| Apollo: Zero-shot MultiModal Reasoning with Multiple Experts | Code | 0 |
| APO: Enhancing Reasoning Ability of MLLMs via Asymmetric Policy Optimization | Code | 0 |
| MM-MATH: Advancing Multimodal Math Evaluation with Process Evaluation and Fine-grained Classification | Code | 0 |
| On the generalization capacity of neural networks during generic multimodal reasoning | Code | 0 |
| Enhancing the Reasoning Ability of Multimodal Large Language Models via Mixed Preference Optimization | Code | 0 |
| MM-R5: MultiModal Reasoning-Enhanced ReRanker via Reinforcement Learning for Document Retrieval | Code | 0 |
| MindGYM: Enhancing Vision-Language Models via Synthetic Self-Challenging Questions | Code | 0 |
| MMBoundary: Advancing MLLM Knowledge Boundary Awareness through Reasoning Step Confidence Calibration | Code | 0 |
| Dual Attention Networks for Multimodal Reasoning and Matching | Code | 0 |
| Do Vision-Language Pretrained Models Learn Composable Primitive Concepts? | Code | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GPT-4V | Accuracy | 24 | | Unverified |
| 2 | Gemini Pro | Accuracy | 13.2 | | Unverified |
| 3 | LLaVa-1.5-13B | Accuracy | 1.8 | | Unverified |
| 4 | LLaVa-1.5-7B | Accuracy | 1.5 | | Unverified |
| 5 | BLIP2-FLAN-T5-XXL | Accuracy | 0.9 | | Unverified |
| 6 | QWEN | Accuracy | 0.9 | | Unverified |
| 7 | CogVLM | Accuracy | 0.9 | | Unverified |
| 8 | InstructBLIP | Accuracy | 0.6 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GPT4V | Accuracy | 22.76 | | Unverified |
| 2 | Gemini Pro | Accuracy | 17.66 | | Unverified |
| 3 | Qwen-VL-Max | Accuracy | 15.59 | | Unverified |
| 4 | InternLM-XComposer2-VL | Accuracy | 14.54 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GPT-4 | Acc | 30.3 | | Unverified |