SOTAVerified

Video Question Answering

Papers

Showing 301–350 of 460 papers (page 7 of 10)

| Title | Status | Hype |
|---|---|---|
| BDIQA: A New Dataset for Video Question Answering to Explore Cognitive Reasoning through Theory of Mind | | 0 |
| YTCommentQA: Video Question Answerability in Instructional Videos | Code | 0 |
| STAIR: Spatial-Temporal Reasoning with Auditable Intermediate Results for Video Question Answering | Code | 0 |
| Answering from Sure to Uncertain: Uncertainty-Aware Curriculum Learning for Video Question Answering | | 0 |
| Language-aware Visual Semantic Distillation for Video Question Answering | | 0 |
| VISTA-LLAMA: Reducing Hallucination in Video Language Models via Equal Distance to Visual Tokens | | 0 |
| On Scaling Up a Multilingual Vision and Language Model | | 0 |
| Cross-Modal Reasoning with Event Correlation for Video Question Answering | | 0 |
| Perception Test 2023: A Summary of the First Challenge And Outcome | | 0 |
| Text-Conditioned Resampler For Long Form Video Understanding | | 0 |
| Vista-LLaMA: Reliable Video Narrator via Equal Distance to Visual Tokens | | 0 |
| MoVQA: A Benchmark of Versatile Question-Answering for Long-Form Movie Understanding | | 0 |
| Retrieval-based Video Language Model for Efficient Long Video Question Answering | | 0 |
| VaQuitA: Enhancing Alignment in LLM-Assisted Video Understanding | | 0 |
| Zero-Shot Video Question Answering with Procedural Programs | | 0 |
| E-ViLM: Efficient Video-Language Model via Masked Video Modeling with Semantic Vector-Quantized Tokenizer | | 0 |
| Characterizing Video Question Answering with Sparsified Inputs | | 0 |
| GPT4Video: A Unified Multimodal Large Language Model for Instruction-Followed Understanding and Safety-Aware Generation | | 0 |
| Vamos: Versatile Action Models for Video Understanding | Code | 0 |
| Mirasol3B: A Multimodal Autoregressive model for time-aligned and contextual modalities | | 0 |
| Long Story Short: a Summarize-then-Search Method for Long Video Question Answering | Code | 0 |
| ACQUIRED: A Dataset for Answering Counterfactual Questions In Real-Life Videos | Code | 0 |
| Modular Blended Attention Network for Video Question Answering | | 0 |
| Harvest Video Foundation Models via Efficient Post-Pretraining | Code | 0 |
| Analyzing Zero-Shot Abilities of Vision-Language Models on Video Understanding Tasks | Code | 0 |
| MMTF: Multi-Modal Temporal Fusion for Commonsense Video Question Answering | | 0 |
| ATM: Action Temporality Modeling for Video Question Answering | | 0 |
| Understanding Video Scenes through Text: Insights from Text-based Video Question Answering | | 0 |
| Distraction-free Embeddings for Robust VQA | | 0 |
| Redundancy-aware Transformer for Video Question Answering | | 0 |
| Keyword-Aware Relative Spatio-Temporal Graph Networks for Video Question Answering | | 0 |
| Traffic-Domain Video Question Answering with Automatic Captioning | | 0 |
| Reading Between the Lanes: Text VideoQA on the Road | Code | 0 |
| Read, Look or Listen? What's Needed for Solving a Multimodal Dataset | | 0 |
| Lightweight Recurrent Cross-modal Encoder for Video Question Answering | Code | 0 |
| Retrieving-to-Answer: Zero-Shot Video Question Answering with Frozen Large Language Models | | 0 |
| Diversifying Joint Vision-Language Tokenization Learning | | 0 |
| VLAB: Enhancing Video Language Pre-training by Feature Adapting and Blending | | 0 |
| TG-VQA: Ternary Game of Video Question Answering | | 0 |
| Is a Video worth n × n Images? A Highly Efficient Approach to Transformer-based Video Question Answering | | 0 |
| Semantic-aware Dynamic Retrospective-Prospective Reasoning for Event-level Video Question Answering | | 0 |
| ANetQA: A Large-scale Benchmark for Fine-grained Compositional Reasoning over Untrimmed Videos | Code | 0 |
| VideoOFA: Two-Stage Pre-Training for Video-to-Text Generation | | 0 |
| A Review of Deep Learning for Video Captioning | | 0 |
| Verbs in Action: Improving verb understanding in video-language models | Code | 0 |
| Language Models are Causal Knowledge Extractors for Zero-shot Video Question Answering | | 0 |
| MaMMUT: A Simple Architecture for Joint Learning for MultiModal Tasks | Code | 0 |
| Structured Video-Language Modeling with Temporal Grouping and Spatial Grounding | | 0 |
| Unmasked Teacher: Towards Training-Efficient Video Foundation Models | Code | 0 |
| MuLTI: Efficient Video-and-Language Understanding with Text-Guided MultiWay-Sampler and Multiple Choice Modeling | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LinVT-Qwen2-VL (7B) | Accuracy | 85.5 | | Unverified |
| 2 | InternVL-2.5 (8B) | Accuracy | 85.5 | | Unverified |
| 3 | VideoLLaMA3 (7B) | Accuracy | 84.5 | | Unverified |
| 4 | PLM-8B | Accuracy | 84.1 | | Unverified |
| 5 | BIMBA-LLaVA-Qwen2-7B | Accuracy | 83.73 | | Unverified |
| 6 | PLM-3B | Accuracy | 83.4 | | Unverified |
| 7 | LLaVA-Video | Accuracy | 83.2 | | Unverified |
| 8 | NVILA (8B) | Accuracy | 82.2 | | Unverified |
| 9 | Oryx-1.5 (7B) | Accuracy | 81.8 | | Unverified |
| 10 | Qwen2-VL (7B) | Accuracy | 81.2 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GPT-2 + CLIP-14 + CLIP-multilingual (Zero-Shot) | Accuracy | 61.2 | | Unverified |
| 2 | GPT-2 + CLIP-32 (Zero-Shot) | Accuracy | 58.4 | | Unverified |
| 3 | VideoCoCa | Accuracy | 56.1 | | Unverified |
| 4 | Mirasol3B | Accuracy | 51.13 | | Unverified |
| 5 | VAST | Accuracy | 50.4 | | Unverified |
| 6 | COSA | Accuracy | 49.9 | | Unverified |
| 7 | MA-LMM | Accuracy | 49.8 | | Unverified |
| 8 | VideoChat2 | Accuracy | 49.1 | | Unverified |
| 9 | VALOR | Accuracy | 48.6 | | Unverified |
| 10 | UMT-L (ViT-L/16) | Accuracy | 47.9 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Seed1.5-VL thinking | Average Accuracy | 63.6 | | Unverified |
| 2 | PLM-8B | Average Accuracy | 63.5 | | Unverified |
| 3 | Seed1.5-VL | Average Accuracy | 61.5 | | Unverified |
| 4 | V-JEPA 2 ViT-g 8B | Average Accuracy | 60.6 | | Unverified |
| 5 | PLM-3B | Average Accuracy | 58.9 | | Unverified |
| 6 | RRPO | Average Accuracy | 56.5 | | Unverified |
| 7 | Tarsier-34B | Average Accuracy | 55.5 | | Unverified |
| 8 | Tarsier2-7B | Average Accuracy | 54.7 | | Unverified |
| 9 | Qwen2-VL-72B | Average Accuracy | 52.7 | | Unverified |
| 10 | IXC-2.5 7B | Average Accuracy | 51.6 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LinVT-Qwen2-VL (7B) | Avg. | 69.3 | | Unverified |
| 2 | Tarsier (34B) | Avg. | 67.6 | | Unverified |
| 3 | InternVideo2 | Avg. | 67.2 | | Unverified |
| 4 | LongVU (7B) | Avg. | 66.9 | | Unverified |
| 5 | Oryx (34B) | Avg. | 64.7 | | Unverified |
| 6 | VideoLLaMA2 (72B) | Avg. | 62 | | Unverified |
| 7 | VideoChat-T (7B) | Avg. | 59.9 | | Unverified |
| 8 | mPLUG-Owl3 (7B) | Avg. | 59.5 | | Unverified |
| 9 | PPLLaVA (7B) | Avg. | 59.2 | | Unverified |
| 10 | VideoGPT+ | Avg. | 58.7 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Mirasol3B | Accuracy | 50.42 | | Unverified |
| 2 | VAST | Accuracy | 50.1 | | Unverified |
| 3 | COSA | Accuracy | 49.2 | | Unverified |
| 4 | VALOR | Accuracy | 49.2 | | Unverified |
| 5 | MA-LMM | Accuracy | 48.5 | | Unverified |
| 6 | mPLUG-2 | Accuracy | 48 | | Unverified |
| 7 | FrozenBiLM | Accuracy | 47 | | Unverified |
| 8 | HBI | Accuracy | 46.2 | | Unverified |
| 9 | EMCL-Net | Accuracy | 45.8 | | Unverified |
| 10 | VindLU | Accuracy | 44.6 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | VLAP (4 frames) | Average Accuracy | 67.1 | | Unverified |
| 2 | LLaMA-VQA | Average Accuracy | 65.4 | | Unverified |
| 3 | SeViLA | Average Accuracy | 64.9 | | Unverified |
| 4 | InternVideo | Average Accuracy | 58.7 | | Unverified |
| 5 | GF (sup) | Average Accuracy | 53.94 | | Unverified |
| 6 | GF (uns) | Average Accuracy | 53.86 | | Unverified |
| 7 | MIST | Average Accuracy | 51.13 | | Unverified |
| 8 | Temp[ATP] | Average Accuracy | 48.37 | | Unverified |
| 9 | AnyMAL-70B (0-shot) | Average Accuracy | 48.2 | | Unverified |
| 10 | All-in-one | Average Accuracy | 47.5 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Seed1.5-VL | AVG | 60 | | Unverified |
| 2 | VideoChat-Online (4B) | AVG | 54.9 | | Unverified |
| 3 | Gemini-1.5-Flash | AVG | 50.7 | | Unverified |
| 4 | Qwen2-VL (7B) | AVG | 49.7 | | Unverified |
| 5 | LLaVA-OneVision (7B) | AVG | 49.5 | | Unverified |
| 6 | InternVL2 (7B) | AVG | 48.7 | | Unverified |
| 7 | InternVL2 (4B) | AVG | 44.1 | | Unverified |
| 8 | LongVA (7B) | AVG | 43.6 | | Unverified |
| 9 | LLaMA-VID (7B) | AVG | 41.9 | | Unverified |
| 10 | MiniCPM-V 2.6 (7B) | AVG | 39.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GF (sup) - Faster RCNN | Average Accuracy | 55.08 | | Unverified |
| 2 | MIST - CLIP | Average Accuracy | 54.39 | | Unverified |
| 3 | GF (uns) - S3D | Average Accuracy | 53.33 | | Unverified |
| 4 | SViTT | Average Accuracy | 52.7 | | Unverified |
| 5 | MIST - AIO | Average Accuracy | 50.96 | | Unverified |
| 6 | SHG-VQA (trained from scratch) | Average Accuracy | 49.2 | | Unverified |
| 7 | AIO - ViT | Average Accuracy | 48.59 | | Unverified |
| 8 | MMTF | Average Accuracy | 44.36 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Text + Text (no Multimodal Pretext Training) | Accuracy | 93.2 | | Unverified |
| 2 | FrozenBiLM | Accuracy | 86.7 | | Unverified |
| 3 | Just Ask | Accuracy | 84.4 | | Unverified |
| 4 | SeViLA | Accuracy | 83.7 | | Unverified |
| 5 | Hero w/ pre-training | Accuracy | 77.75 | | Unverified |
| 6 | ATP | Accuracy | 65.1 | | Unverified |
| 7 | FrozenBiLM (0-shot) | Accuracy | 58.4 | | Unverified |
| 8 | Just Ask (0-shot) | Accuracy | 51.1 | | Unverified |