SOTAVerified

Retrieval

A methodology for selecting relevant data or examples from a large dataset to support tasks such as prediction, learning, or inference. It enhances models by supplying context or additional information, and is commonly used in systems such as retrieval-augmented generation and in-context learning.
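As a concrete illustration of the idea, retrieval over a toy corpus can be sketched with bag-of-words cosine similarity. This is a minimal, hypothetical sketch (the function names and corpus are invented for the example), not the method of any paper listed below:

```python
from collections import Counter
from math import sqrt

def bow(text):
    """Bag-of-words term counts for a whitespace-tokenized string."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    """Return the k corpus documents most similar to the query."""
    q = bow(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, bow(d)), reverse=True)
    return ranked[:k]

corpus = [
    "neural networks for image classification",
    "bm25 ranking for text retrieval",
    "dense passage retrieval with transformers",
]
# The lexically closest documents come back first.
print(retrieve("text retrieval with bm25", corpus, k=2))
```

In a retrieval-augmented generation system, the retrieved documents would then be placed in the model's prompt as context; here they are simply returned.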

Papers

Showing 3226–3250 of 14297 papers

| Title | Status | Hype |
| --- | --- | --- |
| Cross-Modal Learning with Adversarial Samples | Code | 0 |
| Accelerating Hopfield Network Dynamics: Beyond Synchronous Updates and Forward Euler | Code | 0 |
| Learning a Repression Network for Precise Vehicle Search | Code | 0 |
| CSCPR: Cross-Source-Context Indoor RGB-D Place Recognition | Code | 0 |
| An All-MLP Sequence Modeling Architecture That Excels at Copying | Code | 0 |
| Learning Audio Concepts from Counterfactual Natural Language | Code | 0 |
| Learning Compatible Multi-Prize Subnetworks for Asymmetric Retrieval | Code | 0 |
| Cross-Modality Sub-Image Retrieval using Contrastive Multimodal Image Representations | Code | 0 |
| Learning a metric for class-conditional KNN | Code | 0 |
| AMuRD: Annotated Arabic-English Receipt Dataset for Key Information Extraction and Classification | Code | 0 |
| Cross-Modal Interaction Networks for Query-Based Moment Retrieval in Videos | Code | 0 |
| Learning a Hierarchical Latent-Variable Model of 3D Shapes | Code | 0 |
| Learned k-NN Distance Estimation | Code | 0 |
| Accelerating Generalized Linear Models with MLWeaving: A One-Size-Fits-All System for Any-precision Learning (Technical Report) | Code | 0 |
| Learnable PINs: Cross-Modal Embeddings for Person Identity | Code | 0 |
| Learning a Deep Listwise Context Model for Ranking Refinement | Code | 0 |
| AutoCast++: Enhancing World Event Prediction with Zero-shot Ranking-based Context Retrieval | Code | 0 |
| Cross-modal Embeddings for Video and Audio Retrieval | Code | 0 |
| LCD: Learned Cross-Domain Descriptors for 2D-3D Matching | Code | 0 |
| Authorship verification in the absence of explicit features and thresholds | Code | 0 |
| Learning to compress and search visual data in large-scale systems | Code | 0 |
| Learning Deep Local Features With Multiple Dynamic Attentions for Large-Scale Image Retrieval | Code | 0 |
| Cross-modal Contrastive Learning with Asymmetric Co-attention Network for Video Moment Retrieval | Code | 0 |
| Exploiting Local Indexing and Deep Feature Confidence Scores for Fast Image-to-Video Search | Code | 0 |
| Latent Structured Hopfield Network for Semantic Association and Retrieval | Code | 0 |
Page 130 of 572

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | BM25S | Queries per second | 183.53 | | Unverified |
| 2 | Elasticsearch | Queries per second | 21.8 | | Unverified |
| 3 | BM25-PT | Queries per second | 6.49 | | Unverified |
| 4 | Rank-BM25 | Queries per second | 1.18 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | BM25S | Queries per second | 20.88 | | Unverified |
| 2 | Elasticsearch | Queries per second | 7.11 | | Unverified |
| 3 | Rank-BM25 | Queries per second | 0.04 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | BM25S | Queries per second | 41.85 | | Unverified |
| 2 | Elasticsearch | Queries per second | 12.16 | | Unverified |
| 3 | Rank-BM25 | Queries per second | 0.1 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | FLMR | Recall@5 | 89.32 | | Unverified |
| 2 | RA-VQA | Recall@5 | 82.84 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PreFLMR | Recall@5 | 62.1 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | CLIP-KIS | text-to-video Mean Rank | 30 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | CLIP4Outfit | Recall@5 | 7.59 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MetaGen Blended RAG | Accuracy (Top-1) | 82.1 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MetaGen Blended RAG | Accuracy (Top-1) | 82.1 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | COLT | COMP@ | 84.55 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | hello | 0L | 1,121,222 | | Unverified |
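Several of the tables above report queries per second for BM25 implementations (BM25S, Elasticsearch, BM25-PT, Rank-BM25). For reference, the Okapi BM25 scoring function those libraries implement can be sketched in pure Python. This is an illustrative sketch with the common defaults k1 = 1.5 and b = 0.75, not code from any of the benchmarked systems:

```python
from collections import Counter
from math import log

def bm25_scores(query_tokens, corpus_tokens, k1=1.5, b=0.75):
    """Okapi BM25 score of each tokenized document for a tokenized query."""
    N = len(corpus_tokens)
    avgdl = sum(len(d) for d in corpus_tokens) / N
    # Document frequency: number of documents containing each term.
    df = Counter(t for d in corpus_tokens for t in set(d))
    scores = []
    for doc in corpus_tokens:
        tf = Counter(doc)
        s = 0.0
        for t in query_tokens:
            if t not in tf:
                continue
            # Smoothed inverse document frequency.
            idf = log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            # Term-frequency saturation with length normalization.
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(doc) / avgdl)
            )
        scores.append(s)
    return scores

docs = [["sparse", "retrieval"], ["dense", "vectors"], ["retrieval", "retrieval", "systems"]]
# The document with two occurrences of "retrieval" scores highest.
print(bm25_scores(["retrieval"], docs))
```

The queries-per-second differences in the tables come from how each library batches, vectorizes, or indexes this computation, not from the scoring formula itself.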