SOTAVerified

Retrieval

A methodology for selecting relevant data or examples from a large collection to support tasks such as prediction, learning, or inference. It enhances models by supplying context or additional information, and underlies systems such as retrieval-augmented generation and in-context learning.
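
Below is a minimal sketch of this idea: rank a small corpus against a query and prepend the top hits to a prompt. It assumes scikit-learn's TfidfVectorizer is available and uses a toy corpus for illustration; real retrieval-augmented systems typically use BM25 or learned dense embeddings over much larger collections.

```python
# Minimal sketch: retrieve the top-k most relevant documents for a query,
# then prepend them to a prompt (retrieval-augmented generation / in-context learning).
# TF-IDF + cosine similarity stands in for BM25 or a dense retriever.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "BM25 is a lexical ranking function used in search engines.",
    "Dense retrievers embed queries and documents into a shared vector space.",
    "Diffusion models generate images by iteratively denoising noise.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)  # index the corpus once

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    top_k = scores.argsort()[::-1][:k]
    return [corpus[i] for i in top_k]

context = retrieve("How do dense retrieval models work?")
prompt = "Context:\n" + "\n".join(context) + "\n\nQuestion: How do dense retrieval models work?"
print(prompt)  # the augmented prompt would then be passed to a language model
```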

Papers

Showing 526–550 of 14,297 papers

Title | Status | Hype
RetroMAE v2: Duplex Masked Auto-Encoder For Pre-Training Retrieval-Oriented Language Models | Code | 2
MMDialog: A Large-scale Multi-turn Dialogue Dataset Towards Multi-modal Open-domain Conversation | Code | 2
Body Part-Based Representation Learning for Occluded Person Re-Identification | Code | 2
When Language Model Meets Private Library | Code | 2
Retrieval Oriented Masking Pre-training Language Model for Dense Passage Retrieval | Code | 2
PoseScript: Linking 3D Human Poses and Natural Language | Code | 2
MuGER^2: Multi-Granularity Evidence Retrieval and Reasoning for Hybrid Question Answering | Code | 2
MedCLIP: Contrastive Learning from Unpaired Medical Images and Text | Code | 2
Making a MIRACL: Multilingual Information Retrieval Across a Continuum of Languages | Code | 2
Long-Form Video-Language Pre-Training with Multimodal Temporal Contrastive Learning | Code | 2
Retrieval Augmented Visual Question Answering with Outside Knowledge | Code | 2
Content-Based Search for Deep Generative Models | Code | 2
When and why vision-language models behave like bags-of-words, and what to do about it? | Code | 2
Contrastive Audio-Visual Masked Autoencoder | Code | 2
Diffusion Posterior Sampling for General Noisy Inverse Problems | Code | 2
Multilingual Search with Subword TF-IDF | Code | 2
CLIP-ViP: Adapting Pre-trained Image-Text Model to Video-Language Representation Alignment | Code | 2
Flow-Guided Transformer for Video Inpainting | Code | 2
Simplified State Space Layers for Sequence Modeling | Code | 2
Atlas: Few-shot Learning with Retrieval Augmented Language Models | Code | 2
Tip-Adapter: Training-free Adaption of CLIP for Few-shot Classification | Code | 2
Egocentric Video-Language Pretraining @ EPIC-KITCHENS-100 Multi-Instance Retrieval Challenge 2022 | Code | 2
Comprehending and Ordering Semantics for Image Captioning | Code | 2
Uni-Perceiver-MoE: Learning Sparse Generalist Models with Conditional MoEs | Code | 2
Revealing Single Frame Bias for Video-and-Language Learning | Code | 2
Page 22 of 572

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | BM25S | Queries per second | 183.53 | – | Unverified
2 | Elasticsearch | Queries per second | 21.8 | – | Unverified
3 | BM25-PT | Queries per second | 6.49 | – | Unverified
4 | Rank-BM25 | Queries per second | 1.18 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BM25S | Queries per second | 20.88 | – | Unverified
2 | Elasticsearch | Queries per second | 7.11 | – | Unverified
3 | Rank-BM25 | Queries per second | 0.04 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BM25S | Queries per second | 41.85 | – | Unverified
2 | Elasticsearch | Queries per second | 12.16 | – | Unverified
3 | Rank-BM25 | Queries per second | 0.1 | – | Unverified
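
The tables above report retrieval throughput in queries per second for several BM25 implementations. As a rough illustration of how such a number can be measured, the sketch below times repeated scoring with the rank_bm25 package; the corpus, query set, and hardware here are toy placeholders and not the protocol behind the claimed figures.

```python
# Rough sketch of measuring "queries per second" for a BM25 implementation.
# Uses the rank_bm25 package; corpus and queries are toy placeholders.
import time
from rank_bm25 import BM25Okapi

corpus = ["the cat sat on the mat", "dogs chase cats", "bm25 ranks documents by term statistics"]
queries = ["cat on mat", "ranking documents", "dog"] * 100  # repeat to get a stable timing

tokenized_corpus = [doc.split() for doc in corpus]
bm25 = BM25Okapi(tokenized_corpus)  # build the index once, outside the timed loop

start = time.perf_counter()
for q in queries:
    bm25.get_scores(q.split())  # score every document for the query
elapsed = time.perf_counter() - start

print(f"{len(queries) / elapsed:.2f} queries per second")
```
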
# | Model | Metric | Claimed | Verified | Status
1 | FLMR | Recall@5 | 89.32 | – | Unverified
2 | RA-VQA | Recall@5 | 82.84 | – | Unverified
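
Several results below use Recall@5. Under one common definition, Recall@k is the fraction of queries for which at least one relevant item appears among the top k retrieved results; the helper below is a hypothetical illustration of that computation, not code from any listed paper.

```python
# Minimal sketch of Recall@k: fraction of queries for which at least one
# relevant document appears among the top-k retrieved results.
def recall_at_k(ranked_ids: list[list[str]], relevant_ids: list[set[str]], k: int = 5) -> float:
    hits = sum(
        1 for ranked, relevant in zip(ranked_ids, relevant_ids)
        if relevant & set(ranked[:k])  # non-empty intersection means a hit
    )
    return hits / len(ranked_ids)

# Two queries: the first has a relevant doc in its top 5, the second does not.
ranked = [["d3", "d7", "d1", "d9", "d2"], ["d4", "d6", "d8", "d5", "d0"]]
relevant = [{"d1"}, {"d11"}]
print(recall_at_k(ranked, relevant, k=5))  # 0.5
```
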
# | Model | Metric | Claimed | Verified | Status
1 | PreFLMR | Recall@5 | 62.1 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CLIP-KIS | text-to-video Mean Rank | 30 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CLIP4Outfit | Recall@5 | 7.59 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MetaGen Blended RAG | Accuracy (Top-1) | 82.1 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | COLT | COMP@ | 84.55 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | hello | 0L | 1,121,222 | – | Unverified