
Visual Entailment

Visual Entailment (VE) is a task consisting of image-sentence pairs in which the premise is an image, rather than a natural language sentence as in traditional Textual Entailment tasks. The goal is to predict whether the image semantically entails the text.
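As an illustration of the prediction the task asks for, the sketch below maps an image premise and a text hypothesis to one of the three standard VE labels. The embedding inputs and the similarity thresholds are placeholder assumptions for illustration only; they do not correspond to the method of any paper listed here.

```python
# Toy sketch of the Visual Entailment decision: score the agreement
# between an image embedding (premise) and a text embedding (hypothesis),
# then map the score to one of three labels. The thresholds `hi` and `lo`
# are hypothetical values chosen for illustration.
import math

LABELS = ("entailment", "neutral", "contradiction")

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def classify_ve(image_emb, text_emb, hi=0.6, lo=0.2):
    """Map image-text similarity to a VE label (toy thresholds)."""
    s = cosine(image_emb, text_emb)
    if s >= hi:
        return "entailment"   # image strongly supports the text
    if s <= lo:
        return "contradiction"  # image conflicts with the text
    return "neutral"          # image neither confirms nor denies it
```

For example, `classify_ve([1, 0], [1, 0])` returns `"entailment"`, while `classify_ve([1, 0], [0, 1])` returns `"contradiction"`. Real VE systems replace the cosine-plus-thresholds step with a learned classifier over joint image-text representations.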

Papers

Showing 31–40 of 56 papers

Title | Status | Hype
Answer-Me: Multi-Task Open-Vocabulary Visual Question Answering | — | 0
Visual Spatial Reasoning | Code | 1
Multimodal Adaptive Distillation for Leveraging Unimodal Encoders for Vision-Language Tasks | — | 0
Fine-Grained Visual Entailment | Code | 1
CLIP Models are Few-shot Learners: Empirical Studies on VQA and Visual Entailment | — | 0
NLX-GPT: A Model for Natural Language Explanations in Vision and Vision-Language Tasks | Code | 1
Unsupervised Vision-and-Language Pre-training via Retrieval-based Multi-Granular Alignment | — | 0
OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework | Code | 0
CLIP-TD: CLIP Targeted Distillation for Vision-Language Tasks | — | 0
Logically at Factify 2022: Multimodal Fact Verification | — | 0
Page 4 of 6

No leaderboard results yet.