SOTAVerified

Gamified crowd-sourcing of high-quality data for visual fine-tuning

2024-10-05

Shashank Yadav, Rohan Tomar, Garvit Jain, Chirag Ahooja, Shubham Chaudhary, Charles Elkan


Abstract

This paper introduces Gamified Adversarial Prompting (GAP), a framework that crowd-sources high-quality data for visual instruction tuning of large multimodal models. GAP transforms the data collection process into an engaging game, incentivizing players to provide fine-grained, challenging questions and answers that target gaps in the model's knowledge. Our contributions include (1) an approach to capture question-answer pairs from humans that directly address weaknesses in a model's knowledge, (2) a method for evaluating and rewarding players that successfully incentivizes them to provide high-quality submissions, and (3) a scalable, gamified platform that succeeds in collecting this data from over 50,000 participants in just a few weeks. Our implementation of GAP has significantly improved the accuracy of a small multimodal model, namely MiniCPM-Llama3-V-2.5-8B, increasing its GPT score from 0.147 to 0.477 on our dataset, approaching the benchmark set by the much larger GPT-4V. Moreover, we demonstrate that the data generated using MiniCPM-Llama3-V-2.5-8B also enhances its performance across other benchmarks and exhibits cross-model benefits: the same data improves the performance of Qwen2-VL-2B and Qwen2-VL-7B across those benchmarks.
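The GPT score reported in the abstract is a judge-model rating of answer correctness averaged over a dataset. The paper's exact grading prompt and parsing rules are not given here, so the sketch below is a hypothetical minimal version: a stand-in `judge` callable (a real setup would call a judge model such as GPT-4) returns a numeric rating in [0, 1], which is parsed and averaged.

```python
def parse_score(judge_reply: str) -> float:
    """Extract a 0-1 correctness rating from a judge model's reply.

    Assumes the judge was instructed to answer with a single number
    in [0, 1]; unparseable replies count as 0 (incorrect).
    """
    for token in judge_reply.replace(",", " ").split():
        try:
            value = float(token)
        except ValueError:
            continue
        if 0.0 <= value <= 1.0:
            return value
    return 0.0


def gpt_score(examples, judge) -> float:
    """Mean judge rating over (question, reference, model_answer) triples.

    `judge` is any callable mapping a grading prompt to the judge
    model's text reply (hypothetical interface, not the paper's API).
    """
    total = 0.0
    for question, reference, answer in examples:
        prompt = (
            f"Question: {question}\n"
            f"Reference answer: {reference}\n"
            f"Model answer: {answer}\n"
            "Rate the model answer's correctness as a number in [0, 1]."
        )
        total += parse_score(judge(prompt))
    return total / len(examples) if examples else 0.0


# Stub judge for illustration only; a real pipeline would query GPT-4.
fake_judge = lambda prompt: "0.5"
examples = [
    ("What color is the car?", "red", "It is red."),
    ("How many dogs are visible?", "two", "three"),
]
print(gpt_score(examples, fake_judge))  # → 0.5
```

Under this scheme, moving the score from 0.147 to 0.477 corresponds to the judge rating the fine-tuned model's answers roughly three times higher on average.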

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| MM-Vet | Qwen2-VL-7B (fine-tuned on GAP-VQA train) | GPT-4 score | 64.95 | — | Unverified |
| MM-Vet | Qwen2-VL-2B (fine-tuned on GAP-VQA train) | GPT-4 score | 52.43 | — | Unverified |
| MM-Vet | MiniCPM-Llama3-V-2.5-8B (fine-tuned on GAP-VQA train) | GPT-4 score | 51.79 | — | Unverified |
