SOTAVerified

Multimodal Reasoning

Reasoning over multimodal inputs.

Papers

Showing 176–200 of 302 papers

| Title | Status | Hype |
| --- | --- | --- |
| Mitigating Object Hallucinations in Large Vision-Language Models via Attention Calibration | | 0 |
| Boosting Multimodal Reasoning with MCTS-Automated Structured Thinking | | 0 |
| Position: Empowering Time Series Reasoning with Multimodal LLMs | | 0 |
| The Jumping Reasoning Curve? Tracking the Evolution of Reasoning Performance in GPT-[n] and o-[n] Models on Multimodal Puzzles | Code | 2 |
| Efficient Reasoning with Hidden Thinking | Code | 2 |
| Can MLLMs Reason in Multimodality? EMMA: An Enhanced MultiModal ReAsoning Benchmark | | 0 |
| DRIVINGVQA: Analyzing Visual Chain-of-Thought Reasoning of Vision Language Models in Real-World Scenarios with Driving Theory Tests | | 0 |
| Socratic Questioning: Learn to Self-guide Multimodal Reasoning in the Wild | Code | 0 |
| EfficientLLaVA: Generalizable Auto-Pruning for Large Vision-language Models | | 0 |
| LININ: Logic Integrated Neural Inference Network for Explanatory Visual Question Answering | Code | 0 |
| Diving into Self-Evolving Training for Multimodal Reasoning | | 0 |
| SilVar: Speech Driven Multimodal Model for Reasoning Visual Question Answering and Object Localization | Code | 0 |
| Progressive Multimodal Reasoning via Active Retrieval | | 0 |
| FiVL: A Framework for Improved Vision-Language Alignment | Code | 0 |
| Cracking the Code of Hallucination in LVLMs with Vision-aware Head Divergence | | 0 |
| Do Language Models Understand Time? | Code | 1 |
| CoMT: A Novel Benchmark for Chain of Multi-modal Thought on Large Vision-Language Models | Code | 1 |
| A Survey of Mathematical Reasoning in the Era of Multimodal Large Language Model: Benchmark, Method & Challenges | | 0 |
| Leveraging Retrieval-Augmented Tags for Large Vision-Language Understanding in Complex Scenes | | 0 |
| Optimizing Vision-Language Interactions Through Decoder-Only Models | | 0 |
| EVLM: Self-Reflective Multimodal Reasoning for Cross-Dimensional Visual Editing | | 0 |
| Neptune: The Long Orbit to Benchmarking Long Video Understanding | Code | 2 |
| MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale | Code | 1 |
| Aguvis: Unified Pure Vision Agents for Autonomous GUI Interaction | Code | 3 |
| Accelerating Multimodal Large Language Models via Dynamic Visual-Token Exit and the Empirical Findings | Code | 1 |
Page 8 of 13

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | GPT-4V | Accuracy | 24 | | Unverified |
| 2 | Gemini Pro | Accuracy | 13.2 | | Unverified |
| 3 | LLaVA-1.5-13B | Accuracy | 1.8 | | Unverified |
| 4 | LLaVA-1.5-7B | Accuracy | 1.5 | | Unverified |
| 5 | BLIP2-FLAN-T5-XXL | Accuracy | 0.9 | | Unverified |
| 6 | Qwen | Accuracy | 0.9 | | Unverified |
| 7 | CogVLM | Accuracy | 0.9 | | Unverified |
| 8 | InstructBLIP | Accuracy | 0.6 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | GPT-4V | Accuracy | 22.76 | | Unverified |
| 2 | Gemini Pro | Accuracy | 17.66 | | Unverified |
| 3 | Qwen-VL-Max | Accuracy | 15.59 | | Unverified |
| 4 | InternLM-XComposer2-VL | Accuracy | 14.54 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | GPT-4 | Acc | 30.3 | | Unverified |