SOTAVerified

TextVQA

Papers

Showing 1–25 of 47 papers

Title | Status | Hype
CogVLM2: Visual Language Models for Image and Video Understanding | Code | 9
TextMonkey: An OCR-Free Large Multimodal Model for Understanding Document | Code | 5
CogVLM: Visual Expert for Pretrained Language Models | Code | 5
Towards VQA Models That Can Read | Code | 3
LLaVA-UHD: an LMM Perceiving Any Aspect Ratio and High-Resolution Images | Code | 3
Lyra: An Efficient and Speech-Centric Framework for Omni-Cognition | Code | 3
Feast Your Eyes: Mixture-of-Resolution Adaptation for Multimodal Large Language Models | Code | 3
What Kind of Visual Tokens Do We Need? Training-free Visual Token Pruning for Multi-modal Large Language Models from the Perspective of Graph | Code | 2
Parameter-Inverted Image Pyramid Networks for Visual Perception and Multimodal Understanding | Code | 2
Dragonfly: Multi-Resolution Zoom-In Encoding Enhances Vision-Language Models | Code | 2
TAG: Boosting Text-VQA via Text-aware Visual Question-answer Generation | Code | 1
LaTr: Layout-Aware Transformer for Scene-Text VQA | Code | 1
RUArt: A Novel Text-Centered Solution for Text-Based Visual Question Answering | Code | 1
Mitigating Object Hallucinations via Sentence-Level Early Intervention | Code | 1
A First Look: Towards Explainable TextVQA Models via Visual and Textual Explanations | Code | 1
Spatially Aware Multimodal Transformers for TextVQA | Code | 1
Structured Multimodal Attentions for TextVQA | Code | 1
TAP: Text-Aware Pre-training for Text-VQA and Text-Caption | Code | 1
Adversarial Training with OCR Modality Perturbation for Scene-Text Visual Question Answering | Code | 0
OmniFusion Technical Report | Code | 0
Instruction-Aligned Visual Attention for Mitigating Hallucinations in Large Vision-Language Models | Code | 0
Iterative Answer Prediction with Pointer-Augmented Multimodal Transformers for TextVQA | Code | 0
Towards a Unified Multimodal Reasoning Framework | Code | 0
Simple is not Easy: A Simple Strong Baseline for TextVQA and TextCaps | Code | 0
Separate and Locate: Rethink the Text in Text-based Visual Question Answering | Code | 0

No leaderboard results yet.