
Zero-Shot Video Question Answer

This task presents zero-shot question-answering results on the TGIF-QA dataset for LLM-powered video conversational models.

Papers

Showing 1–25 of 85 papers

| Title | Status | Hype |
| --- | --- | --- |
| MiniCPM-V: A GPT-4V Level MLLM on Your Phone | Code | 12 |
| Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution | Code | 11 |
| LLaVA-NeXT-Interleave: Tackling Multi-image, Video, and 3D in Large Multimodal Models | Code | 7 |
| Qwen2.5-Omni Technical Report | Code | 7 |
| InternVideo2: Scaling Foundation Models for Multimodal Video Understanding | Code | 7 |
| Mistral 7B | Code | 6 |
| LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model | Code | 5 |
| VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs | Code | 5 |
| Video-LaVIT: Unified Video-Language Pre-training with Decoupled Visual-Motional Tokenization | Code | 4 |
| PLLaVA: Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning | Code | 4 |
| Tarsier: Recipes for Training and Evaluating Large Video Description Models | Code | 4 |
| Video-LLaVA: Learning United Visual Representation by Alignment Before Projection | Code | 4 |
| MiniGPT4-Video: Advancing Multimodal LLMs for Video Understanding with Interleaved Visual-Textual Tokens | Code | 4 |
| Flamingo: a Visual Language Model for Few-Shot Learning | Code | 4 |
| InternVideo: General Video Foundation Models via Generative and Discriminative Learning | Code | 4 |
| VILA: On Pre-training for Visual Language Models | Code | 4 |
| LLaVA-Mini: Efficient Image and Video Large Multimodal Models with One Vision Token | Code | 4 |
| mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality | Code | 4 |
| VideoChat: Chat-Centric Video Understanding | Code | 4 |
| Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding | Code | 4 |
| Long Context Transfer from Language to Vision | Code | 4 |
| Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models | Code | 3 |
| VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding | Code | 3 |
| LongVU: Spatiotemporal Adaptive Compression for Long Video-Language Understanding | Code | 3 |
| SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models | Code | 3 |

No leaderboard results yet.