SOTAVerified

Image-text Retrieval

Papers

Showing 201–248 of 248 papers

Title | Status | Hype
Unicoder-VL: A Universal Encoder for Vision and Language by Cross-modal Pre-training | - | 0
Unified Multimodal Pre-training and Prompt-based Tuning for Vision-Language Understanding and Generation | - | 0
Uni-Mlip: Unified Self-supervision for Medical Vision Language Pre-training | - | 0
UNITER: Learning UNiversal Image-TExt Representations | - | 0
UrbanCross: Enhancing Satellite Image-Text Retrieval with Cross-Domain Adaptation | - | 0
Variance-Aware Loss Scheduling for Multimodal Alignment in Low-Data Settings | - | 0
ViLEM: Visual-Language Error Modeling for Image-Text Retrieval | - | 0
VL-BEiT: Generative Vision-Language Pretraining | - | 0
VLMAE: Vision-Language Masked Autoencoder | - | 0
VL-Match: Enhancing Vision-Language Pretraining with Token-Level and Instance-Level Matching | - | 0
Webly Supervised Joint Embedding for Cross-Modal Image-Text Retrieval | - | 0
XGPT: Cross-modal Generative Pre-Training for Image Captioning | - | 0
Toward Automatic Relevance Judgment using Vision-Language Models for Image-Text Retrieval Evaluation | - | 0
HADA: A Graph-based Amalgamation Framework in Image-text Retrieval | Code | 0
GSSF: Generalized Structural Sparse Function for Deep Cross-modal Metric Learning | Code | 0
VL-Taboo: An Analysis of Attribute-based Zero-shot Capabilities of Vision-Language Models | Code | 0
Object-Aware Query Perturbation for Cross-Modal Image-Text Retrieval | Code | 0
Negative Sample is Negative in Its Own Way: Tailoring Negative Sentences for Image-Text Retrieval | Code | 0
Wukong: A 100 Million Large-scale Chinese Cross-modal Pre-training Benchmark | Code | 0
NAPReg: Nouns As Proxies Regularization for Semantically Aware Cross-Modal Embeddings | Code | 0
Embracing Language Inclusivity and Diversity in CLIP through Continual Language Learning | Code | 0
MultiWay-Adapter: Adapting large-scale multi-modal models for scalable image-text retrieval | Code | 0
Reversed in Time: A Novel Temporal-Emphasized Benchmark for Cross-Modal Video-Text Retrieval | Code | 0
From Unimodal to Multimodal: Scaling up Projectors to Align Modalities | Code | 0
Dissecting Deep Metric Learning Losses for Image-Text Retrieval | Code | 0
RSVG: Exploring Data and Models for Visual Grounding on Remote Sensing Data | Code | 0
Multi-stage Pre-training over Simplified Multimodal Pre-training Models | Code | 0
FiCo-ITR: bridging fine-grained and coarse-grained image-text retrieval for comparative performance analysis | Code | 0
Multilingual Vision-Language Pre-training for the Remote Sensing Domain | Code | 0
Differentiable Outlier Detection Enable Robust Deep Multimodal Analysis | Code | 0
MHSAN: Multi-Head Self-Attention Network for Visual Semantic Embedding | Code | 0
Learning Joint Embedding with Multimodal Cues for Cross-Modal Video-Text Retrieval | Code | 0
Adding simple structure at inference improves Vision-Language Compositionality | Code | 0
Attacking Attention of Foundation Models Disrupts Downstream Tasks | Code | 0
Semantic-Preserving Augmentation for Robust Image-Text Retrieval | Code | 0
Intra-Modal Constraint Loss For Image-Text Retrieval | Code | 0
SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features | Code | 0
Single-Stream Multi-Level Alignment for Vision-Language Pretraining | Code | 0
Stop Pre-Training: Adapt Visual-Language Models to Unseen Languages | Code | 0
An Unsupervised Cross-Modal Hashing Method Robust to Noisy Training Image-Text Correspondences in Remote Sensing | Code | 0
Exposing and Mitigating Spurious Correlations for Cross-Modal Retrieval | Code | 0
Enhancing Image-Text Matching with Adaptive Feature Aggregation | Code | 0
A Vision-Language Foundation Model for Leaf Disease Identification | Code | 0
Integrating Listwise Ranking into Pairwise-based Image-Text Retrieval | Code | 0
The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision | Code | 0
USER: Unified Semantic Enhancement with Momentum Contrast for Image-Text Retrieval | Code | 0
Improving the Consistency in Cross-Lingual Cross-Modal Retrieval with 1-to-K Contrastive Learning | Code | 0
Page 5 of 5

No leaderboard results yet.