SOTAVerified

Multimodal Reasoning

Reasoning over multimodal inputs.

Papers

Showing 191–200 of 302 papers

| Title | Status | Hype |
|---|---|---|
| Do Language Models Understand Time? | Code | 1 |
| CoMT: A Novel Benchmark for Chain of Multi-modal Thought on Large Vision-Language Models | Code | 1 |
| A Survey of Mathematical Reasoning in the Era of Multimodal Large Language Model: Benchmark, Method & Challenges | | 0 |
| Leveraging Retrieval-Augmented Tags for Large Vision-Language Understanding in Complex Scenes | | 0 |
| Optimizing Vision-Language Interactions Through Decoder-Only Models | | 0 |
| EVLM: Self-Reflective Multimodal Reasoning for Cross-Dimensional Visual Editing | | 0 |
| Neptune: The Long Orbit to Benchmarking Long Video Understanding | Code | 2 |
| MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale | Code | 1 |
| Aguvis: Unified Pure Vision Agents for Autonomous GUI Interaction | Code | 3 |
| Accelerating Multimodal Large Language Models via Dynamic Visual-Token Exit and the Empirical Findings | Code | 1 |
Page 20 of 31

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GPT-4V | Accuracy | 24 | | Unverified |
| 2 | Gemini Pro | Accuracy | 13.2 | | Unverified |
| 3 | LLaVa-1.5-13B | Accuracy | 1.8 | | Unverified |
| 4 | LLaVa-1.5-7B | Accuracy | 1.5 | | Unverified |
| 5 | BLIP2-FLAN-T5-XXL | Accuracy | 0.9 | | Unverified |
| 6 | QWEN | Accuracy | 0.9 | | Unverified |
| 7 | CogVLM | Accuracy | 0.9 | | Unverified |
| 8 | InstructBLIP | Accuracy | 0.6 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GPT4V | Accuracy | 22.76 | | Unverified |
| 2 | Gemini Pro | Accuracy | 17.66 | | Unverified |
| 3 | Qwen-VL-Max | Accuracy | 15.59 | | Unverified |
| 4 | InternLM-XComposer2-VL | Accuracy | 14.54 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GPT-4 | Acc | 30.3 | | Unverified |