
Video-based Generative Performance Benchmarking (Temporal Understanding)

The benchmark evaluates generative video conversational models on temporal understanding.

We curate a test set based on the ActivityNet-200 dataset, featuring videos with rich, dense descriptive captions and human-annotated question-answer pairs. We develop an evaluation pipeline using GPT-3.5 that assigns each generated prediction a relative score on a scale of 1 to 5.
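As a rough illustration, the GPT-based scoring step of such a pipeline could look like the minimal sketch below. It assumes the `openai` Python SDK (v1+); the model name, prompt wording, JSON reply format, and the `score_temporal_understanding` helper are illustrative assumptions, not the benchmark's actual implementation.

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def score_temporal_understanding(question: str, answer: str, prediction: str) -> int:
    """Ask GPT-3.5 to rate a model prediction against the reference answer (1-5).

    Hypothetical helper: the prompt and output format are illustrative only.
    """
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": (
                    "You evaluate the temporal understanding of video-based "
                    "question-answer pairs. Compare the predicted answer with "
                    "the correct answer and rate temporal accuracy on an "
                    "integer scale of 1 to 5. Reply only with JSON of the "
                    'form {"score": <int>}.'
                ),
            },
            {
                "role": "user",
                "content": (
                    f"Question: {question}\n"
                    f"Correct Answer: {answer}\n"
                    f"Predicted Answer: {prediction}"
                ),
            },
        ],
        temperature=0,  # deterministic scoring
    )
    return json.loads(response.choices[0].message.content)["score"]
```

A per-model benchmark score would then be the mean of these per-sample ratings over the test set.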

Papers

Showing 1–15 of 15 papers

| Title | Status | Hype |
| --- | --- | --- |
| TS-LLaVA: Constructing Visual Tokens through Thumbnail-and-Sampling for Training-Free Video Large Language Models | Code | 1 |
| PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance | Code | 2 |
| SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models | Code | 3 |
| VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding | Code | 3 |
| PLLaVA : Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning | Code | 4 |
| MiniGPT4-Video: Advancing Multimodal LLMs for Video Understanding with Interleaved Visual-Textual Tokens | Code | 4 |
| VTimeLLM: Empower LLM to Grasp Video Moments | Code | 2 |
| MVBench: A Comprehensive Multi-modal Video Understanding Benchmark | Code | 2 |
| Chat-UniVi: Unified Visual Representation Empowers Large Language Models with Image and Video Understanding | Code | 2 |
| BT-Adapter: Video Conversation is Feasible Without Video Instruction Tuning | Code | 1 |
| MovieChat: From Dense Token to Sparse Memory for Long Video Understanding | Code | 2 |
| Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models | Code | 3 |
| Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding | Code | 4 |
| VideoChat: Chat-Centric Video Understanding | Code | 4 |
| LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model | Code | 5 |

No leaderboard results yet.