SOTAVerified

Visual Question Answering

MLLM Leaderboard

Papers

Showing 2076–2100 of 2177 papers

| Title | Status | Hype |
|-------|--------|------|
| Detecting Knowledge Boundary of Vision Large Language Models by Sampling-Based Inference | Code | 0 |
| IIU: Independent Inference Units for Knowledge-based Visual Question Answering | Code | 0 |
| Traveling Across Languages: Benchmarking Cross-Lingual Consistency in Multimodal LLMs | Code | 0 |
| Visually Dehallucinative Instruction Generation | Code | 0 |
| II-MMR: Identifying and Improving Multi-modal Multi-hop Reasoning in Visual Question Answering | Code | 0 |
| Treble Counterfactual VLMs: A Causal Approach to Hallucination | Code | 0 |
| Visually Grounded VQA by Lattice-based Retrieval | Code | 0 |
| Securing Vision-Language Models with a Robust Encoder Against Jailbreak and Adversarial Attacks | Code | 0 |
| Visually Interpretable Subtask Reasoning for Visual Question Answering | Code | 0 |
| Barlow constrained optimization for Visual Question Answering | Code | 0 |
| BabelBench: An Omni Benchmark for Code-Driven Analysis of Multimodal and Multistructured Data | Code | 0 |
| Design as Desired: Utilizing Visual Question Answering for Multimodal Pre-training | Code | 0 |
| HumaniBench: A Human-Centric Framework for Large Multimodal Models Evaluation | Code | 0 |
| HRIBench: Benchmarking Vision-Language Models for Real-Time Human Perception in Human-Robot Interaction | Code | 0 |
| AVQACL: A Novel Benchmark for Audio-Visual Question Answering Continual Learning | Code | 0 |
| TUBench: Benchmarking Large Vision-Language Models on Trustworthiness with Unanswerable Questions | Code | 0 |
| Delving Deeper into Cross-lingual Visual Question Answering | Code | 0 |
| Why do These Match? Explaining the Behavior of Image Similarity Models | Code | 0 |
| Towards Flexible Evaluation for Generative Visual Question Answering | Code | 0 |
| Analyzing the Behavior of Visual Question Answering Models | Code | 0 |
| Select, Substitute, Search: A New Benchmark for Knowledge-Augmented Visual Question Answering | Code | 0 |
| Self-Critical Reasoning for Robust Visual Question Answering | Code | 0 |
| Visual Question Answering: A Survey of Methods and Datasets | Code | 0 |
| WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models | Code | 0 |
| How to Determine the Preferred Image Distribution of a Black-Box Vision-Language Model? | Code | 0 |
Page 84 of 88

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | MMCTAgent (GPT-4 + GPT-4V) | GPT-4 score | 74.24 | | Unverified |
| 2 | Qwen2-VL-72B | GPT-4 score | 74 | | Unverified |
| 3 | InternVL2.5-78B | GPT-4 score | 72.3 | | Unverified |
| 4 | GPT-4o + text rationale + IoT | GPT-4 score | 72.2 | | Unverified |
| 5 | Lyra-Pro | GPT-4 score | 71.4 | | Unverified |
| 6 | GLM-4V-Plus | GPT-4 score | 71.1 | | Unverified |
| 7 | Phantom-7B | GPT-4 score | 70.8 | | Unverified |
| 8 | InternVL2.5-38B | GPT-4 score | 68.8 | | Unverified |
| 9 | InternVL2-26B (SGP, token ratio 64%) | GPT-4 score | 65.6 | | Unverified |
| 10 | Baichuan-Omni (7B) | GPT-4 score | 65.4 | | Unverified |