SOTAVerified

Video-based Generative Performance Benchmarking (Correctness of Information)

This benchmark evaluates generative video conversational models on the Correctness of Information in their responses.

We curate a test set from the ActivityNet-200 dataset, featuring videos with rich, dense descriptive captions and human-annotated question-answer pairs. We develop an evaluation pipeline using GPT-3.5 that assigns each generated prediction a relative score on a scale of 1 to 5.
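The pipeline above can be sketched in two pieces: building the judge prompt and parsing the score from the judge's reply. The prompt wording and function names below are illustrative assumptions, not the benchmark's exact implementation; the actual GPT-3.5 API call is omitted so the sketch stays self-contained.

```python
import json
import re


def build_eval_prompt(question: str, answer: str, prediction: str) -> str:
    """Assemble a judge prompt asking for a 1-5 factual-correctness score.

    The wording here is a plausible sketch, not the benchmark's exact prompt.
    """
    return (
        "You are evaluating the factual accuracy of a video-QA prediction.\n"
        f"Question: {question}\n"
        f"Correct Answer: {answer}\n"
        f"Predicted Answer: {prediction}\n"
        "Rate the correctness of the predicted answer on a scale of 1 to 5 "
        'and reply with JSON of the form {"score": <int>}.'
    )


def parse_score(reply: str) -> int:
    """Extract the 1-5 score from the judge's reply, tolerating extra prose
    around the JSON object."""
    match = re.search(r'\{[^{}]*"score"[^{}]*\}', reply)
    if not match:
        raise ValueError(f"no score found in reply: {reply!r}")
    score = int(json.loads(match.group(0))["score"])
    if not 1 <= score <= 5:
        raise ValueError(f"score out of range: {score}")
    return score
```

Per-question scores would then be averaged over the test set to produce a single gpt-score of the kind reported in the results table below.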

Papers

Showing 1-10 of 15 papers

Title | Status | Hype
TS-LLaVA: Constructing Visual Tokens through Thumbnail-and-Sampling for Training-Free Video Large Language Models | Code | 1
PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance | Code | 2
SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models | Code | 3
VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding | Code | 3
PLLaVA: Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning | Code | 4
MiniGPT4-Video: Advancing Multimodal LLMs for Video Understanding with Interleaved Visual-Textual Tokens | Code | 4
VTimeLLM: Empower LLM to Grasp Video Moments | Code | 2
MVBench: A Comprehensive Multi-modal Video Understanding Benchmark | Code | 2
Chat-UniVi: Unified Visual Representation Empowers Large Language Models with Image and Video Understanding | Code | 2
BT-Adapter: Video Conversation is Feasible Without Video Instruction Tuning | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | PPLLaVA-7B | gpt-score | 3.85 | — | Unverified
2 | PLLaVA-34B | gpt-score | 3.6 | — | Unverified
3 | TS-LLaVA-34B | gpt-score | 3.55 | — | Unverified
4 | SlowFast-LLaVA-34B | gpt-score | 3.48 | — | Unverified
5 | VideoChat2_HD_mistral | gpt-score | 3.4 | — | Unverified
6 | VideoGPT+ | gpt-score | 3.27 | — | Unverified
7 | ST-LLM | gpt-score | 3.23 | — | Unverified
8 | MiniGPT4-video-7B | gpt-score | 3.08 | — | Unverified
9 | VideoChat2 | gpt-score | 3.02 | — | Unverified
10 | Chat-UniVi | gpt-score | 2.89 | — | Unverified