
Visual Entailment

Visual Entailment (VE) is a task consisting of image–sentence pairs in which the premise is defined by an image, rather than by a natural-language sentence as in traditional Textual Entailment tasks. The goal is to predict whether the image semantically entails the text.
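As a rough illustration of the task interface, the sketch below treats the image premise and text hypothesis as precomputed embedding vectors (the approach taken by the CLIP-based papers listed further down) and maps their similarity to the three-way label set used in SNLI-VE. The `classify` function and its thresholds are arbitrary illustration values invented here, not taken from any published model.

```python
import math

# Toy sketch of the Visual Entailment interface. The premise is an image,
# stood in for here by a precomputed embedding vector, and the hypothesis
# is a sentence embedding. Real VE systems learn this mapping end to end;
# the thresholds below are hypothetical illustration values.

LABELS = ("entailment", "neutral", "contradiction")

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def classify(image_emb, text_emb, hi=0.5, lo=-0.5):
    """Map image/text similarity to a three-way VE label (thresholds are arbitrary)."""
    s = cosine(image_emb, text_emb)
    if s >= hi:
        return "entailment"
    if s <= lo:
        return "contradiction"
    return "neutral"

# A strongly aligned image/text pair classifies as entailment.
print(classify([1.0, 0.0], [0.9, 0.1]))
```

A trained model would replace the fixed thresholds with a classifier head over the joint image–text representation; the sketch only shows the shape of the input/output contract.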

Papers

Showing 41–50 of 56 papers

| Title | Status | Hype |
| --- | --- | --- |
| Distilled Dual-Encoder Model for Vision-Language Understanding | Code | 1 |
| Unified Multimodal Pre-training and Prompt-based Tuning for Vision-Language Understanding and Generation | — | 0 |
| How Much Can CLIP Benefit Vision-and-Language Tasks? | Code | 1 |
| Check It Again: Progressive Visual Question Answering via Visual Entailment | Code | 1 |
| Probing Inter-modality: Visual Parsing with Self-Attention for Vision-Language Pre-training | — | 0 |
| Playing Lottery Tickets with Vision and Language | — | 0 |
| Seeing Out of tHe bOx: End-to-End Pre-training for Vision-Language Representation Learning | Code | 1 |

No leaderboard results yet.