SOTAVerified

Image-text Retrieval

Papers

Showing 151–175 of 248 papers

Title | Status | Hype
Constructing Phrase-level Semantic Labels to Form Multi-Grained Supervision for Image-Text Retrieval | | 0
Context-Aware Attention Network for Image-Text Retrieval | | 0
Continual learning in cross-modal retrieval | | 0
Contrastive Feature Masking Open-Vocabulary Vision Transformer | | 0
CosmoCLIP: Generalizing Large Vision-Language Models for Astronomical Imaging | | 0
COTS: Collaborative Two-Stream Vision-Language Pre-Training Model for Cross-Modal Retrieval | | 0
CPL: Counterfactual Prompt Learning for Vision and Language Models | | 0
Crossmodal-3600: A Massively Multilingual Multimodal Evaluation Dataset | | 0
A New Fine-grained Alignment Method for Image-text Matching | | 0
CtrlSynth: Controllable Image Text Synthesis for Data-Efficient Multimodal Learning | | 0
DCFormer: Efficient 3D Vision-Language Modeling with Decomposed Convolutions | | 0
Deep Semantic Multimodal Hashing Network for Scalable Image-Text and Video-Text Retrievals | | 0
Direction-Oriented Visual-semantic Embedding Model for Remote Sensing Image-text Retrieval | | 0
VladVA: Discriminative Fine-tuning of LVLMs | | 0
Distill CLIP (DCLIP): Enhancing Image-Text Retrieval via Cross-Modal Transformer Distillation | | 0
DLIP: Distilling Language-Image Pre-training | | 0
Dual Relation Alignment for Composed Image Retrieval | | 0
Dynamic Contrastive Distillation for Image-Text Retrieval | | 0
Efficient Image Captioning for Edge Devices | | 0
Efficient Image-Text Retrieval via Keyword-Guided Pre-Screening | | 0
Efficient Multilingual Multi-modal Pre-training through Triple Contrastive Loss | | 0
Enhancing Conceptual Understanding in Multimodal Contrastive Learning through Hard Negative Samples | | 0
EvdCLIP: Improving Vision-Language Retrieval with Entity Visual Descriptions from Large Language Models | | 0
EVE: Efficient Vision-Language Pre-training with Masked Prediction and Modality-Aware MoE | | 0
Explaining and Mitigating the Modality Gap in Contrastive Multimodal Learning | | 0
Page 7 of 10

No leaderboard results yet.