
Video-based Generative Performance Benchmarking (Temporal Understanding)

The benchmark evaluates a generative Video Conversational Model with respect to Temporal Understanding.

We curate a test set based on the ActivityNet-200 dataset, featuring videos with rich, dense descriptive captions and associated question-answer pairs from human annotations. We develop an evaluation pipeline using the GPT-3.5 model that assigns a relative score to the generated predictions on a scale of 1-5.
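The scoring step of such a pipeline can be sketched as follows. The prompt wording, helper names, and score format are illustrative assumptions, not the benchmark's exact implementation; a real pipeline would send the prompt to GPT-3.5 via a chat-completion call and parse the reply.

```python
import re

def build_judge_prompt(question, answer, prediction):
    # Ask the judge model to grade the predicted answer's temporal
    # understanding against the ground truth on a 1-5 scale.
    # (Prompt wording is an assumption, not the benchmark's template.)
    return (
        "Evaluate the temporal accuracy of the predicted answer.\n"
        f"Question: {question}\n"
        f"Ground-truth answer: {answer}\n"
        f"Predicted answer: {prediction}\n"
        'Reply with a JSON object like {"score": <integer 1-5>}.'
    )

def parse_score(judge_reply):
    # Extract the 1-5 score from the judge model's reply;
    # returns None if no valid score is found.
    match = re.search(r'"score"\s*:\s*([1-5])', judge_reply)
    return int(match.group(1)) if match else None

# Example: parsing a judge reply (a real pipeline would obtain this
# string from a GPT-3.5 API response).
print(parse_score('{"score": 4}'))  # → 4
```

Averaging `parse_score` outputs over all question-answer pairs yields the model's Temporal Understanding score on the 1-5 scale.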

Papers

Showing 11–15 of 15 papers

| Title | Status | Hype |
| --- | --- | --- |
| MVBench: A Comprehensive Multi-modal Video Understanding Benchmark | Code | 2 |
| Chat-UniVi: Unified Visual Representation Empowers Large Language Models with Image and Video Understanding | Code | 2 |
| MovieChat: From Dense Token to Sparse Memory for Long Video Understanding | Code | 2 |
| TS-LLaVA: Constructing Visual Tokens through Thumbnail-and-Sampling for Training-Free Video Large Language Models | Code | 1 |
| BT-Adapter: Video Conversation is Feasible Without Video Instruction Tuning | Code | 1 |

No leaderboard results yet.