SOTAVerified

Multimodal Reasoning

Reasoning over multimodal inputs.

Papers

Showing 101–150 of 302 papers

Title | Status | Hype
Learning Compact Vision Tokens for Efficient Large Multimodal Models | Code | 1
Metis-RISE: RL Incentivizes and SFT Enhances Multimodal Reasoning Model Learning | Code | 1
MFC-Bench: Benchmarking Multimodal Fact-Checking with Large Vision-Language Models | Code | 1
MDK12-Bench: A Multi-Discipline Benchmark for Evaluating Reasoning in Multimodal Large Language Models | Code | 1
Math-PUMA: Progressive Upward Multimodal Alignment to Enhance Mathematical Reasoning | Code | 1
Fine-Grained Visual Entailment | Code | 1
MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale | Code | 1
MicroVQA: A Multimodal Reasoning Benchmark for Microscopy-Based Scientific Research | Code | 1
GThinker: Towards General Multimodal Reasoning via Cue-Guided Rethinking | Code | 0
Controllable Contextualized Image Captioning: Directing the Visual Narrative through User-Defined Highlights | Code | 0
VEglue: Testing Visual Entailment Systems via Object-Aligned Joint Erasing | Code | 0
Visual Goal-Step Inference using wikiHow | Code | 0
UniT: Multimodal Multitask Learning with a Unified Transformer | Code | 0
FiVL: A Framework for Improved Vision-Language Alignment | Code | 0
Apollo: Zero-shot MultiModal Reasoning with Multiple Experts | Code | 0
Understanding the Role of LLMs in Multimodal Evaluation Benchmarks | Code | 0
APO: Enhancing Reasoning Ability of MLLMs via Asymmetric Policy Optimization | Code | 0
MM-MATH: Advancing Multimodal Math Evaluation with Process Evaluation and Fine-grained Classification | Code | 0
USER-VLM 360: Personalized Vision Language Models with User-aware Tuning for Social Human-Robot Interactions | Code | 0
Towards a Unified Multimodal Reasoning Framework | Code | 0
Towards Low-Resource Harmful Meme Detection with LMM Agents | Code | 0
Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language | Code | 0
Socratic Questioning: Learn to Self-guide Multimodal Reasoning in the Wild | Code | 0
Enhancing the Reasoning Ability of Multimodal Large Language Models via Mixed Preference Optimization | Code | 0
SegSub: Evaluating Robustness to Knowledge Conflicts and Hallucinations in Vision-Language Models | Code | 0
SilVar: Speech Driven Multimodal Model for Reasoning Visual Question Answering and Object Localization | Code | 0
Dual Attention Networks for Multimodal Reasoning and Matching | Code | 0
Do Vision-Language Pretrained Models Learn Composable Primitive Concepts? | Code | 0
Do Vision-and-Language Transformers Learn Grounded Predicate-Noun Dependencies? | Code | 0
Don't Buy it! Reassessing the Ad Understanding Abilities of Contrastive Multimodal Models | Code | 0
On the generalization capacity of neural networks during generic multimodal reasoning | Code | 0
M4U: Evaluating Multilingual Understanding and Reasoning for Large Multimodal Models | Code | 0
DMRM: A Dual-channel Multi-hop Reasoning Model for Visual Dialog | Code | 0
Modal-specific Pseudo Query Generation for Video Corpus Moment Retrieval | Code | 0
LININ: Logic Integrated Neural Inference Network for Explanatory Visual Question Answering | Code | 0
LENS: Multi-level Evaluation of Multimodal Reasoning with Large Language Models | Code | 0
MindGYM: Enhancing Vision-Language Models via Synthetic Self-Challenging Questions | Code | 0
Language Models Can See Better: Visual Contrastive Decoding For LLM Multimodal Reasoning | Code | 0
MMBoundary: Advancing MLLM Knowledge Boundary Awareness through Reasoning Step Confidence Calibration | Code | 0
MM-R5: MultiModal Reasoning-Enhanced ReRanker via Reinforcement Learning for Document Retrieval | Code | 0
Measuring Vision-Language STEM Skills of Neural Models | Code | 0
KGAlign: Joint Semantic-Structural Knowledge Encoding for Multimodal Fake News Detection | Code | 0
JourneyBench: A Challenging One-Stop Vision-Language Understanding Benchmark of Generated Images | Code | 0
Infi-Med: Low-Resource Medical MLLMs with Robust Reasoning Evaluation | | 0
Incentivizing Multimodal Reasoning in Large Models for Direct Robot Manipulation | | 0
Improving Pre-trained Vision-and-Language Embeddings for Phrase Grounding | | 0
Improving Multi-Agent Debate with Sparse Communication Topology | | 0
CutPaste&Find: Efficient Multimodal Hallucination Detector with Visual-aid Knowledge Base | | 0
Image-of-Thought Prompting for Visual Reasoning Refinement in Multimodal Large Language Models | | 0
Critique Before Thinking: Mitigating Hallucination through Rationale-Augmented Instruction Tuning | | 0
Page 3 of 7

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | GPT-4V | Accuracy | 24 | | Unverified
2 | Gemini Pro | Accuracy | 13.2 | | Unverified
3 | LLaVa-1.5-13B | Accuracy | 1.8 | | Unverified
4 | LLaVa-1.5-7B | Accuracy | 1.5 | | Unverified
5 | BLIP2-FLAN-T5-XXL | Accuracy | 0.9 | | Unverified
6 | QWEN | Accuracy | 0.9 | | Unverified
7 | CogVLM | Accuracy | 0.9 | | Unverified
8 | InstructBLIP | Accuracy | 0.6 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | GPT4V | Accuracy | 22.76 | | Unverified
2 | Gemini Pro | Accuracy | 17.66 | | Unverified
3 | Qwen-VL-Max | Accuracy | 15.59 | | Unverified
4 | InternLM-XComposer2-VL | Accuracy | 14.54 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | GPT-4 | Acc | 30.3 | | Unverified