SOTAVerified

Video Question Answering

Papers

Showing 251–300 of 460 papers

Title | Status | Hype
Noise Estimation Using Density Estimation for Self-Supervised Multimodal Learning | Code | 0
Video Question Answering on Screencast Tutorials | | 0
Open-Ended Long-Form Video Question Answering via Hierarchical Convolutional Self-Attention Networks | | 0
Video Question Answering Using CLIP-Guided Visual-Text Attention | | 0
CRAFT: A Benchmark for Causal Reasoning About Forces and inTeractions | | 0
Overview of the MedVidQA 2022 Shared Task on Medical Video Question-Answering | | 0
Overview of the NLPCC 2025 Shared Task 4: Multi-modal, Multilingual, and Multi-hop Medical Instructional Video Question Answering Challenge | | 0
Overview of TREC 2024 Medical Video Question Answering (MedVidQA) Track | | 0
Video Question Answering Using Language-Guided Deep Compressed-Domain Video Feature | | 0
Parameter-free Video Segmentation for Vision and Language Understanding | | 0
Video Question Answering via Attribute-Augmented Attention Network Learning | | 0
Pegasus-v1 Technical Report | | 0
Perceive, Query & Reason: Enhancing Video QA with Question-Guided Temporal Queries | | 0
Contrastive Video-Language Learning with Fine-grained Frame Sampling | | 0
Perception Test 2023: A Summary of the First Challenge And Outcome | | 0
Perception Test 2024: Challenge Summary and a Novel Hour-Long VideoQA Benchmark | | 0
Continuous Perception Benchmark | | 0
Composing Ensembles of Pre-trained Models via Iterative Consensus | | 0
Commonsense Video Question Answering through Video-Grounded Entailment Tree Reasoning | | 0
PolySmart @ TRECVid 2024 Medical Video Question Answering | | 0
Poze: Sports Technique Feedback under Data Constraints | | 0
CogStream: Context-guided Streaming Video Question Answering | | 0
Prompting Video-Language Foundation Models with Domain-specific Fine-grained Heuristics for Video Question Answering | | 0
QTG-VQA: Question-Type-Guided Architectural for VideoQA Systems | | 0
CogME: A Cognition-Inspired Multi-Dimensional Evaluation Metric for Story Understanding | | 0
Ranking Distillation for Open-Ended Video Question Answering with Insufficient Labels | | 0
Read, Look or Listen? What's Needed for Solving a Multimodal Dataset | | 0
ReasVQA: Advancing VideoQA with Imperfect Reasoning Process | | 0
Recent Advances in Video Question Answering: A Review of Datasets and Methods | | 0
Redundancy-aware Transformer for Video Question Answering | | 0
Video Question Answering with Iterative Video-Text Co-Tokenization | | 0
Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models | | 0
CoCo-BERT: Improving Video-Language Pre-training with Contrastive Cross-modal Matching and Denoising | | 0
Rethinking Multi-Modal Alignment in Video Question Answering from Feature and Sample Perspectives | | 0
Retrieval-based Video Language Model for Efficient Long Video Question Answering | | 0
Retrieving-to-Answer: Zero-Shot Video Question Answering with Frozen Large Language Models | | 0
Co-attentional Transformers for Story-Based Video Understanding | | 0
Video Question Answering with Phrases via Semantic Roles | | 0
Video Question Generation via Cross-Modal Self-Attention Networks Learning | | 0
AdaCM^2: On Understanding Extremely Long-Term Video with Adaptive Cross-Modality Memory Reduction | | 0
Sample then Identify: A General Framework for Risk Control and Assessment in Multimodal Large Language Models | | 0
Zero-Shot Long-Form Video Understanding through Screenplay | | 0
VideoCoCa: Video-Text Modeling with Zero-Shot Transfer from Contrastive Captioners | | 0
SEAL: Semantic Attention Learning for Long Video Representation | | 0
Seed1.5-VL Technical Report | | 0
Self-alignment of Large Video Language Models with Refined Regularized Preference Optimization | | 0
Self-ReS: Self-Reflection in Large Vision-Language Models for Long Video Understanding | | 0
Self-supervised pre-training and contrastive representation learning for multiple-choice video QA | | 0
Semantic-aware Dynamic Retrospective-Prospective Reasoning for Event-level Video Question Answering | | 0
Semi-Parametric Video-Grounded Text Generation | | 0
Page 6 of 10

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LinVT-Qwen2-VL (7B) | Accuracy | 85.5 | | Unverified
2 | InternVL-2.5(8B) | Accuracy | 85.5 | | Unverified
3 | VideoLLaMA3(7B) | Accuracy | 84.5 | | Unverified
4 | PLM-8B | Accuracy | 84.1 | | Unverified
5 | BIMBA-LLaVA-Qwen2-7B | Accuracy | 83.73 | | Unverified
6 | PLM-3B | Accuracy | 83.4 | | Unverified
7 | LLaVA-Video | Accuracy | 83.2 | | Unverified
8 | NVILA(8B) | Accuracy | 82.2 | | Unverified
9 | Oryx-1.5(7B) | Accuracy | 81.8 | | Unverified
10 | Qwen2-VL(7B) | Accuracy | 81.2 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | GPT-2 + CLIP-14 + CLIP-multilingual (Zero-Shot) | Accuracy | 61.2 | | Unverified
2 | GPT-2 + CLIP-32 (Zero-Shot) | Accuracy | 58.4 | | Unverified
3 | VideoCoCa | Accuracy | 56.1 | | Unverified
4 | Mirasol3B | Accuracy | 51.13 | | Unverified
5 | VAST | Accuracy | 50.4 | | Unverified
6 | COSA | Accuracy | 49.9 | | Unverified
7 | MA-LMM | Accuracy | 49.8 | | Unverified
8 | VideoChat2 | Accuracy | 49.1 | | Unverified
9 | VALOR | Accuracy | 48.6 | | Unverified
10 | UMT-L (ViT-L/16) | Accuracy | 47.9 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Seed1.5-VL thinking | Average Accuracy | 63.6 | | Unverified
2 | PLM-8B | Average Accuracy | 63.5 | | Unverified
3 | Seed1.5-VL | Average Accuracy | 61.5 | | Unverified
4 | V-JEPA 2 ViT-g 8B | Average Accuracy | 60.6 | | Unverified
5 | PLM-3B | Average Accuracy | 58.9 | | Unverified
6 | RRPO | Average Accuracy | 56.5 | | Unverified
7 | Tarsier-34B | Average Accuracy | 55.5 | | Unverified
8 | Tarsier2-7B | Average Accuracy | 54.7 | | Unverified
9 | Qwen2-VL-72B | Average Accuracy | 52.7 | | Unverified
10 | IXC-2.5 7B | Average Accuracy | 51.6 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LinVT-Qwen2-VL (7B) | Avg. | 69.3 | | Unverified
2 | Tarsier (34B) | Avg. | 67.6 | | Unverified
3 | InternVideo2 | Avg. | 67.2 | | Unverified
4 | LongVU (7B) | Avg. | 66.9 | | Unverified
5 | Oryx(34B) | Avg. | 64.7 | | Unverified
6 | VideoLLaMA2 (72B) | Avg. | 62 | | Unverified
7 | VideoChat-T (7B) | Avg. | 59.9 | | Unverified
8 | mPLUG-Owl3(7B) | Avg. | 59.5 | | Unverified
9 | PPLLaVA (7b) | Avg. | 59.2 | | Unverified
10 | VideoGPT+ | Avg. | 58.7 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Mirasol3B | Accuracy | 50.42 | | Unverified
2 | VAST | Accuracy | 50.1 | | Unverified
3 | COSA | Accuracy | 49.2 | | Unverified
4 | VALOR | Accuracy | 49.2 | | Unverified
5 | MA-LMM | Accuracy | 48.5 | | Unverified
6 | mPLUG-2 | Accuracy | 48 | | Unverified
7 | FrozenBiLM | Accuracy | 47 | | Unverified
8 | HBI | Accuracy | 46.2 | | Unverified
9 | EMCL-Net | Accuracy | 45.8 | | Unverified
10 | VindLU | Accuracy | 44.6 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | VLAP (4 frames) | Average Accuracy | 67.1 | | Unverified
2 | LLaMA-VQA | Average Accuracy | 65.4 | | Unverified
3 | SeViLA | Average Accuracy | 64.9 | | Unverified
4 | InternVideo | Average Accuracy | 58.7 | | Unverified
5 | GF(sup) | Average Accuracy | 53.94 | | Unverified
6 | GF(uns) | Average Accuracy | 53.86 | | Unverified
7 | MIST | Average Accuracy | 51.13 | | Unverified
8 | Temp[ATP] | Average Accuracy | 48.37 | | Unverified
9 | AnyMAL-70B (0-shot) | Average Accuracy | 48.2 | | Unverified
10 | All-in-one | Average Accuracy | 47.5 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Seed1.5-VL | AVG | 60 | | Unverified
2 | VideoChat-Online (4B) | AVG | 54.9 | | Unverified
3 | Gemini-1.5-Flash | AVG | 50.7 | | Unverified
4 | Qwen2-VL (7B) | AVG | 49.7 | | Unverified
5 | LLaVA-OneVision (7B) | AVG | 49.5 | | Unverified
6 | InternVL2 (7B) | AVG | 48.7 | | Unverified
7 | InternVL2 (4B) | AVG | 44.1 | | Unverified
8 | LongVA (7B) | AVG | 43.6 | | Unverified
9 | LLaMA-VID (7B) | AVG | 41.9 | | Unverified
10 | MiniCPM-V 2.6 (7B) | AVG | 39.1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | GF (sup) - Faster RCNN | Average Accuracy | 55.08 | | Unverified
2 | MIST - CLIP | Average Accuracy | 54.39 | | Unverified
3 | GF (uns) - S3D | Average Accuracy | 53.33 | | Unverified
4 | SViTT | Average Accuracy | 52.7 | | Unverified
5 | MIST - AIO | Average Accuracy | 50.96 | | Unverified
6 | SHG-VQA (trained from scratch) | Average Accuracy | 49.2 | | Unverified
7 | AIO - ViT | Average Accuracy | 48.59 | | Unverified
8 | MMTF | Average Accuracy | 44.36 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Text + Text (no Multimodal Pretext Training) | Accuracy | 93.2 | | Unverified
2 | FrozenBiLM | Accuracy | 86.7 | | Unverified
3 | Just Ask | Accuracy | 84.4 | | Unverified
4 | SeViLA | Accuracy | 83.7 | | Unverified
5 | Hero w/ pre-training | Accuracy | 77.75 | | Unverified
6 | ATP | Accuracy | 65.1 | | Unverified
7 | FrozenBiLM (0-shot) | Accuracy | 58.4 | | Unverified
8 | Just Ask (0-shot) | Accuracy | 51.1 | | Unverified