
Video-based Generative Performance Benchmarking (Contextual Understanding)

The benchmark evaluates a generative Video Conversational Model with respect to Contextual Understanding.

We curate a test set based on the ActivityNet-200 dataset, featuring videos with rich, dense descriptive captions and human-annotated question-answer pairs. We develop an evaluation pipeline that uses the GPT-3.5 model as a judge, assigning each generated prediction a relative score on a scale of 1-5.
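The GPT-3.5 judging step described above can be sketched as follows. This is a minimal illustration, not the benchmark's actual implementation: the prompt wording, the JSON reply format, and the function names (`build_judge_prompt`, `parse_score`, `mean_score`) are all assumptions, and the actual API call to GPT-3.5 is omitted.

```python
import json


def build_judge_prompt(question: str, correct_answer: str, predicted_answer: str) -> str:
    """Assemble an evaluation prompt for the GPT-3.5 judge (hypothetical wording)."""
    return (
        "You are evaluating the contextual understanding of a video "
        "conversational model.\n"
        f"Question: {question}\n"
        f"Correct Answer: {correct_answer}\n"
        f"Predicted Answer: {predicted_answer}\n"
        'Rate the prediction on a scale of 1-5 and reply as JSON: {"score": <int>}'
    )


def parse_score(judge_reply: str) -> int:
    """Extract the 1-5 score from the judge's JSON reply, clamped to the valid range."""
    score = int(json.loads(judge_reply)["score"])
    return max(1, min(5, score))


def mean_score(scores: list[int]) -> float:
    """Average the per-question scores into the benchmark's final number."""
    return sum(scores) / len(scores)
```

In practice, each prompt would be sent to the GPT-3.5 chat API, and the parsed scores averaged over the whole test set to produce the model's Contextual Understanding score.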

Papers

Showing 11-16 of 16 papers

Title | Status | Hype
MovieChat: From Dense Token to Sparse Memory for Long Video Understanding | Code | 2
Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models | Code | 3
Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding | Code | 4
VideoChat: Chat-Centric Video Understanding | Code | 4
LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model | Code | 5
PLLay: Efficient Topological Layer based on Persistence Landscapes | Code | 1

No leaderboard results yet.