SOTAVerified

Video-based Generative Performance Benchmarking

The benchmark evaluates generative video conversational models across five key aspects:

  • Correctness of Information
  • Detail Orientation
  • Contextual Understanding
  • Temporal Understanding
  • Consistency

We curate a test set based on the ActivityNet-200 dataset, featuring videos with rich, dense descriptive captions and associated question-answer pairs from human annotations. We develop an evaluation pipeline using GPT-3.5 that assigns each generated prediction a relative score on a scale of 1-5.
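The scoring step of such a pipeline can be sketched as follows. This is a minimal illustration, not the benchmark's exact implementation: the prompt template, helper names, and the JSON reply format expected from the judge model are all assumptions.

```python
import json

def build_eval_prompt(question, answer, prediction,
                      aspect="Correctness of Information"):
    # Hypothetical prompt template in the spirit of the GPT-3.5 judge:
    # the model compares the prediction against the reference answer
    # for one of the five evaluated aspects.
    return (
        f"Evaluate the predicted answer for {aspect}.\n"
        f"Question: {question}\n"
        f"Correct Answer: {answer}\n"
        f"Predicted Answer: {prediction}\n"
        'Reply with a JSON object of the form {"score": <integer 1-5>}.'
    )

def parse_score(response_text):
    # Extract and validate the 1-5 score from the judge's JSON reply.
    score = json.loads(response_text)["score"]
    if not 1 <= score <= 5:
        raise ValueError(f"score out of range: {score}")
    return score
```

In practice `build_eval_prompt` would be sent to the judge model once per question-answer pair and aspect, and `parse_score` applied to each reply before averaging.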

Papers

Showing 1–10 of 20 papers

  • TS-LLaVA: Constructing Visual Tokens through Thumbnail-and-Sampling for Training-Free Video Large Language Models (Code, Hype: 1)
  • PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance (Code, Hype: 2)
  • SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models (Code, Hype: 3)
  • VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding (Code, Hype: 3)
  • PLLaVA: Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning (Code, Hype: 4)
  • ST-LLM: Large Language Models Are Effective Temporal Learners (Code, Hype: 2)
  • An Image Grid Can Be Worth a Video: Zero-shot Video Question Answering Using a VLM (Code, Hype: 2)
  • LITA: Language Instructed Temporal-Localization Assistant (Code, Hype: 2)
  • CAT: Enhancing Multimodal Large Language Model to Answer Questions in Dynamic Audio-Visual Scenarios (Code, Hype: 2)
  • Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback (Code, Hype: 2)

Benchmark Results

#   Model                  Metric  Claimed  Verified  Status
1   PPLLaVA-7B-dpo         mean    3.73               Unverified
2   VLM-RLAIF              mean    3.49               Unverified
3   TS-LLaVA-34B           mean    3.38               Unverified
4   PLLaVA-34B             mean    3.32               Unverified
5   PPLLaVA-7B             mean    3.32               Unverified
6   SlowFast-LLaVA-34B     mean    3.32               Unverified
7   VideoGPT+              mean    3.28               Unverified
8   IG-VLM-GPT4v           mean    3.17               Unverified
9   ST-LLM-7B              mean    3.15               Unverified
10  VideoChat2_HD_mistral  mean    3.1                Unverified
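A short sketch of how a leaderboard "mean" entry can be derived, assuming it is the unweighted average of the five per-aspect scores; the per-aspect values below are hypothetical and for illustration only.

```python
def mean_score(aspect_scores):
    # aspect_scores: dict mapping each evaluated aspect to its average
    # 1-5 judge score. Assumption: the leaderboard "mean" is the
    # unweighted average across the five aspects, rounded to 2 decimals.
    return round(sum(aspect_scores.values()) / len(aspect_scores), 2)

# Hypothetical per-aspect scores for one model (illustrative values only)
scores = {
    "Correctness of Information": 3.85,
    "Detail Orientation": 3.56,
    "Contextual Understanding": 3.21,
    "Temporal Understanding": 2.93,
    "Consistency": 3.10,
}
```

With these illustrative inputs, `mean_score(scores)` yields a single figure comparable to the Claimed column above.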