SOTAVerified

Visual Entailment

Visual Entailment (VE) is a task over image-sentence pairs in which the premise is an image rather than a natural-language sentence, as in traditional Textual Entailment. The goal is to predict whether the image semantically entails the text.
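The task interface can be sketched in a few lines. This is a minimal illustration, not any particular model from the papers below: the `VEExample` type, the three-way label set (entailment / neutral / contradiction, as used in SNLI-VE-style VE datasets), and the stand-in `image_text_score` argument are all assumptions for the sketch; a real system would produce that score with a cross-modal model.

```python
from dataclasses import dataclass

# Three-way label set used in SNLI-VE-style Visual Entailment
# (an assumption for this sketch; the definition above states the
# binary question "does the image entail the text?").
LABELS = ("entailment", "neutral", "contradiction")

@dataclass
class VEExample:
    image_id: str    # premise: an image, referenced by id or path
    hypothesis: str  # natural-language sentence to check against the image

def predict(example: VEExample, image_text_score: float) -> str:
    """Map an image-text compatibility score in [0, 1] to a VE label.

    `image_text_score` is a hypothetical stand-in for the output of a
    real image-text model; the two thresholds are illustrative only.
    """
    if image_text_score > 0.66:
        return "entailment"
    if image_text_score > 0.33:
        return "neutral"
    return "contradiction"

example = VEExample("img_001", "A dog is running on the beach")
print(predict(example, 0.9))
```

In practice the score and thresholds are replaced by a classifier head over joint image-text features, but the input/output contract is the same: one image premise, one text hypothesis, one label.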

Papers

Showing 31-40 of 56 papers

Title (Hype)

- Playing Lottery Tickets with Vision and Language (0)
- Pre-training image-language transformers for open-vocabulary tasks (0)
- Probing Inter-modality: Visual Parsing with Self-Attention for Vision-and-Language Pre-training (0)
- Few-shot Multimodal Multitask Multilingual Learning (0)
- Segment-Phrase Table for Semantic Segmentation, Visual Entailment and Paraphrasing (0)
- How Much Can CLIP Benefit Vision-and-Language Tasks? (0)
- Understanding and Constructing Latent Modality Structures in Multi-modal Representation Learning (0)
- Unsupervised Vision-and-Language Pre-training via Retrieval-based Multi-Granular Alignment (0)
- AlignVE: Visual Entailment Recognition Based on Alignment Relations (0)
Page 4 of 6

No leaderboard results yet.