SOTAVerified

TextVQA

Papers

Showing 26–47 of 47 papers

Title | Status | Hype

Enhancing Instruction-Following Capability of Visual-Language Models by Reducing Image Redundancy | - | 0
EE-MLLM: A Data-Efficient and Compute-Efficient Multimodal Large Language Model | - | 0
FlexAttention for Efficient High-Resolution Vision-Language Models | - | 0
DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effective for LMMs | - | 0
OmniFusion Technical Report | Code | 0
Adversarial Training with OCR Modality Perturbation for Scene-Text Visual Question Answering | Code | 0
VisLingInstruct: Elevating Zero-Shot Learning in Multi-Modal Language Models with Autonomous Instruction Optimization | Code | 0
Towards a Unified Multimodal Reasoning Framework | Code | 0
Multiple-Question Multiple-Answer Text-VQA | - | 0
Exploring Sparse Spatial Relation in Graph Inference for Text-Based VQA | - | 0
Sentence Attention Blocks for Answer Grounding | - | 0
Separate and Locate: Rethink the Text in Text-based Visual Question Answering | Code | 0
Making the V in Text-VQA Matter | - | 0
Locate Then Generate: Bridging Vision and Language with Bounding Box for Scene-Text VQA | - | 0
SceneGATE: Scene-Graph based co-Attention networks for TExt visual question answering | - | 0
Toward 3D Spatial Reasoning for Human-like Text-based Visual Question Answering | - | 0
Towards Escaping from Language Bias and OCR Error: Semantics-Centered Text Visual Question Answering | - | 0
Graph Relation Transformer: Incorporating pairwise object features into the Transformer architecture | - | 0
Winner Team Mia at TextVQA Challenge 2021: Vision-and-Language Representation Learning with Pre-trained Sequence-to-Sequence Model | - | 0
TextOCR: Towards large-scale end-to-end reasoning for arbitrary-shaped scene text | - | 0
Simple is not Easy: A Simple Strong Baseline for TextVQA and TextCaps | Code | 0
Iterative Answer Prediction with Pointer-Augmented Multimodal Transformers for TextVQA | Code | 0
Page 2 of 2

No leaderboard results yet.