SOTAVerified

Passage Retrieval

Passage retrieval is a specialized information retrieval (IR) task that returns a ranked list of relevant passages (short spans of text) rather than whole documents.
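To make the task concrete, here is a minimal, self-contained sketch of passage retrieval (not taken from any paper listed below): documents are split into sentence-window passages, each passage is scored against the query with a simple TF-IDF cosine similarity, and the top-scoring passages are returned directly. All function names and the toy corpus are illustrative assumptions.

```python
# Illustrative passage retrieval: split docs into passages, score each
# passage against the query with TF-IDF cosine similarity, return top-k.
import math
import re
from collections import Counter

def passages(doc, size=2):
    """Split a document into overlapping windows of `size` sentences."""
    sents = [s for s in re.split(r"(?<=[.!?])\s+", doc) if s]
    return [" ".join(sents[i:i + size]) for i in range(len(sents))]

def tf_idf_vectors(texts):
    """Compute a sparse TF-IDF vector (dict) for each text."""
    toks = [Counter(t.lower().split()) for t in texts]
    df = Counter(w for t in toks for w in set(t))
    n = len(texts)
    return [{w: c * math.log((1 + n) / (1 + df[w])) for w, c in t.items()}
            for t in toks]

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Return the k passages most similar to the query."""
    pool = [p for d in docs for p in passages(d)]
    vecs = tf_idf_vectors(pool + [query])
    qvec = vecs[-1]
    scored = sorted(zip(pool, vecs[:-1]),
                    key=lambda pv: cosine(qvec, pv[1]), reverse=True)
    return [p for p, _ in scored[:k]]

docs = [
    "Dense retrievers encode passages as vectors. Sparse retrievers match terms.",
    "Rerankers rescore a candidate list. They run after first-stage retrieval.",
]
print(retrieve("how do dense retrievers encode passages", docs, k=1))
```

Real systems replace the TF-IDF scorer with either a learned sparse model or a dense dual encoder (the dominant approach in the papers below), but the retrieve-passages-not-documents structure is the same.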

Papers

Showing 101–125 of 268 papers

Title | Status | Hype
An Experimental Study on Pretraining Transformers from Scratch for IR | — | 0
Do the Findings of Document and Passage Retrieval Generalize to the Retrieval of Responses for Dialogues? | Code | 1
HYRR: Hybrid Infused Reranking for Passage Retrieval | — | 0
Query-as-context Pre-training for Dense Passage Retrieval | Code | 1
CAPSTONE: Curriculum Sampling for Dense Retrieval with Document Expansion | — | 0
PolQA: Polish Question Answering Dataset | — | 0
MASTER: Multi-task Pre-trained Bottlenecked Masked Autoencoders are Better Dense Retrievers | — | 0
Retrieval as Attention: End-to-end Learning of Retrieval and Reading within a Single Transformer | Code | 1
Open-Domain Conversational Question Answering with Historical Answers | Code | 0
Retrieval Oriented Masking Pre-training Language Model for Dense Passage Retrieval | Code | 2
Bridging the Training-Inference Gap for Dense Phrase Retrieval | — | 0
Cross-document Event Coreference Search: Task, Dataset and Modeling | Code | 1
Entity-Focused Dense Passage Retrieval for Outside-Knowledge Visual Question Answering | — | 0
Revisiting the Roles of "Text" in Text Games | — | 0
Retrieval Augmented Visual Question Answering with Outside Knowledge | Code | 2
Towards Robust Neural Retrieval with Source Domain Synthetic Pre-Finetuning | — | 0
On the Impact of Speech Recognition Errors in Passage Retrieval for Spoken Question Answering | Code | 0
Zero-shot Event Causality Identification with Question Answering | — | 0
LexMAE: Lexicon-Bottlenecked Pretraining for Large-Scale Retrieval | Code | 1
DPTDR: Deep Prompt Tuning for Dense Passage Retrieval | Code | 0
ConTextual Masked Auto-Encoder for Dense Passage Retrieval | Code | 1
Evaluating Dense Passage Retrieval using Transformers | — | 0
Aggretriever: A Simple Approach to Aggregate Textual Representations for Robust Dense Passage Retrieval | Code | 1
SimLM: Pre-training with Representation Bottleneck for Dense Passage Retrieval | — | 0
MIA 2022 Shared Task Submission: Leveraging Entity Representations, Dense-Sparse Hybrids, and Fusion-in-Decoder for Cross-Lingual Question Answering | — | 0
Page 5 of 11

No leaderboard results yet.