
Visual Entailment

Visual Entailment (VE) is a task over image-sentence pairs in which the premise is given by an image rather than a natural language sentence, as in traditional Textual Entailment. The goal is to predict whether the image semantically entails the text.
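
Below is a minimal sketch of how a VE prediction head might look, assuming image and sentence embeddings have already been produced by separate encoders. The three-way label set (entailment, neutral, contradiction) follows the common SNLI-VE formulation; the embedding dimension, the `VEHead` name, and the use of randomly initialized placeholder embeddings are illustrative assumptions, not any specific paper's method.

```python
import torch
import torch.nn as nn

LABELS = ["entailment", "neutral", "contradiction"]  # SNLI-VE-style label set

class VEHead(nn.Module):
    """Hypothetical classifier head over precomputed image/text embeddings."""
    def __init__(self, dim: int = 512):
        super().__init__()
        # Concatenate premise (image) and hypothesis (text) embeddings, then classify.
        self.classifier = nn.Sequential(
            nn.Linear(dim * 2, dim),
            nn.ReLU(),
            nn.Linear(dim, len(LABELS)),
        )

    def forward(self, image_emb: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([image_emb, text_emb], dim=-1)
        return self.classifier(fused)

# Usage with placeholder embeddings; a real system would use pretrained
# vision and text encoders (e.g., CLIP-style backbones) to produce these.
head = VEHead()
img = torch.randn(1, 512)   # image (premise) embedding
txt = torch.randn(1, 512)   # sentence (hypothesis) embedding
pred = head(img, txt).argmax(dim=-1).item()
print(LABELS[pred])
```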

Papers

Showing 41–50 of 56 papers

Title | Status | Hype
Answer-Me: Multi-Task Open-Vocabulary Visual Question Answering | - | 0
Multimodal Adaptive Distillation for Leveraging Unimodal Encoders for Vision-Language Tasks | - | 0
CLIP Models are Few-shot Learners: Empirical Studies on VQA and Visual Entailment | - | 0
Unsupervised Vision-and-Language Pre-training via Retrieval-based Multi-Granular Alignment | - | 0
OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework | Code | 0
CLIP-TD: CLIP Targeted Distillation for Vision-Language Tasks | - | 0
Logically at Factify 2022: Multimodal Fact Verification | - | 0
Unified Multimodal Pre-training and Prompt-based Tuning for Vision-Language Understanding and Generation | - | 0
How Much Can CLIP Benefit Vision-and-Language Tasks? | - | 0
Probing Inter-modality: Visual Parsing with Self-Attention for Vision-Language Pre-training | - | 0

No leaderboard results yet.