
MME

MME is a comprehensive evaluation benchmark for multimodal large language models. It measures both perception and cognition abilities on a total of 14 subtasks, including existence, count, position, color, poster, celebrity, scene, landmark, artwork, OCR, commonsense reasoning, numerical calculation, text translation, and code reasoning.
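The aggregation behind an MME-style score can be sketched as follows. This is a minimal, illustrative implementation, assuming (as described in the MME paper) that each image carries two yes/no questions and that a subtask's score is accuracy plus "accuracy+" (the fraction of images where both questions are answered correctly, giving a maximum of 200 per subtask); the function and record layout here are hypothetical, not an official evaluation harness.

```python
from collections import defaultdict

def mme_score(records):
    """Compute per-subtask MME-style scores.

    records: list of (subtask, image_id, correct) tuples, one per
    yes/no question, where `correct` is a bool.
    """
    # Group answers by (subtask, image) so accuracy+ can check that
    # every question attached to an image was answered correctly.
    per_image = defaultdict(list)
    for subtask, image_id, correct in records:
        per_image[(subtask, image_id)].append(correct)

    per_task = defaultdict(lambda: {"q_total": 0, "q_correct": 0,
                                    "img_total": 0, "img_correct": 0})
    for (subtask, _), answers in per_image.items():
        t = per_task[subtask]
        t["q_total"] += len(answers)        # question-level accuracy
        t["q_correct"] += sum(answers)
        t["img_total"] += 1                 # image-level accuracy+
        t["img_correct"] += all(answers)

    scores = {}
    for subtask, t in per_task.items():
        acc = 100.0 * t["q_correct"] / t["q_total"]
        acc_plus = 100.0 * t["img_correct"] / t["img_total"]
        scores[subtask] = acc + acc_plus    # max 200 per subtask
    return scores
```

For example, a subtask where one image has both questions right and another has one wrong scores 75 (accuracy) + 50 (accuracy+) = 125.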

Papers

Showing 81–90 of 95 papers

Title | Status | Hype
ShareGPT4V: Improving Large Multi-Modal Models with Better Captions | Code | 0
The Use of Symmetry for Models with Variable-size Variables | | 0
Enhancing the Spatial Awareness Capability of Multi-Modal Large Language Model | | 0
Benchmarking and In-depth Performance Study of Large Language Models on Habana Gaudi Processors | | 0
InternLM-XComposer: A Vision-Language Large Model for Advanced Text-image Comprehension and Composition | Code | 0
Domain Adaptation via Minimax Entropy for Real/Bogus Classification of Astronomical Alerts | | 0
Multi-Modal Evaluation Approach for Medical Image Segmentation | | 0
MAAL: Multimodality-Aware Autoencoder-Based Affordance Learning for 3D Articulated Objects | Code | 0
MM-GNN: Mix-Moment Graph Neural Network towards Modeling Neighborhood Feature Distribution | Code | 0
MME-CRS: Multi-Metric Evaluation Based on Correlation Re-Scaling for Evaluating Open-Domain Dialogue | | 0
Page 9 of 10

No leaderboard results yet.