SOTAVerified

Image-text Retrieval

Papers

Showing 201–225 of 248 papers

Title | Status | Hype
Learning to embed semantic similarity for joint image-text retrieval | — | 0
Efficient Multilingual Multi-modal Pre-training through Triple Contrastive Loss | — | 0
Re-Imagen: Retrieval-Augmented Text-to-Image Generator | — | 0
VL-Taboo: An Analysis of Attribute-based Zero-shot Capabilities of Vision-Language Models | Code | 0
Revising Image-Text Retrieval via Multi-Modal Entailment | — | 0
CODER: Coupled Diversity-Sensitive Momentum Contrastive Learning for Image-Text Retrieval | — | 0
VLMAE: Vision-Language Masked Autoencoder | — | 0
Intra-Modal Constraint Loss For Image-Text Retrieval | Code | 0
Dynamic Contrastive Distillation for Image-Text Retrieval | — | 0
VL-BEiT: Generative Vision-Language Pretraining | — | 0
Prompt-based Learning for Unpaired Image Captioning | — | 0
Crossmodal-3600: A Massively Multilingual Multimodal Evaluation Dataset | — | 0
HiVLP: Hierarchical Vision-Language Pre-Training for Fast Image-Text Retrieval | — | 0
Progressive Learning for Image Retrieval with Hybrid-Modality Queries | — | 0
COTS: Collaborative Two-Stream Vision-Language Pre-Training Model for Cross-Modal Retrieval | — | 0
Robust Cross-Modal Representation Learning with Progressive Self-Distillation | — | 0
Image-text Retrieval: A Survey on Recent Research and Development | — | 0
Single-Stream Multi-Level Alignment for Vision-Language Pretraining | Code | 0
LoopITR: Combining Dual and Cross Encoder Architectures for Image-Text Retrieval | — | 0
An Unsupervised Cross-Modal Hashing Method Robust to Noisy Training Image-Text Correspondences in Remote Sensing | Code | 0
CommerceMM: Large-Scale Commerce MultiModal Representation Learning with Omni Retrieval | — | 0
Wukong: A 100 Million Large-scale Chinese Cross-modal Pre-training Benchmark | Code | 0
Negative Sample is Negative in Its Own Way: Tailoring Negative Sentences for Image-Text Retrieval | — | 0
Unified Multimodal Pre-training and Prompt-based Tuning for Vision-Language Understanding and Generation | — | 0
UFO: A UniFied TransfOrmer for Vision-Language Representation Learning | — | 0
Page 9 of 10

No leaderboard results yet.