SOTAVerified

Reranking

Papers

Showing 251–300 of 586 papers

Title | Status | Hype
TWOLAR: a TWO-step LLM-Augmented distillation method for passage Reranking | Code | 0
InstUPR: Instruction-based Unsupervised Passage Reranking with Large Language Models | Code | 0
A Thorough Comparison of Cross-Encoders and LLMs for Reranking SPLADE | – | 0
Local positional graphs and attentive local features for a data and runtime-efficient hierarchical place recognition pipeline | – | 0
LIST: Learning to Index Spatio-Textual Data for Embedding based Spatial Keyword Queries | – | 0
ToolRerank: Adaptive and Hierarchy-Aware Reranking for Tool Retrieval | – | 0
Assessing generalization capability of text ranking models in Polish | – | 0
Towards Trustworthy Reranking: A Simple yet Effective Abstention Mechanism | Code | 0
EcoRank: Budget-Constrained Text Re-ranking Using Large Language Models | Code | 0
Multi-Query Focused Disaster Summarization via Instruction-Based Prompting | – | 0
Towards Unified Alignment Between Agents, Humans, and Environment | – | 0
Non-autoregressive Generative Models for Reranking Recommendation | – | 0
List-aware Reranking-Truncation Joint Model for Search and Retrieval-augmented Generation | Code | 0
eXplainable Bayesian Multi-Perspective Generative Retrieval | – | 0
RAG-Fusion: a New Take on Retrieval-Augmented Generation | – | 0
Re3val: Reinforced and Reranked Generative Retrieval | – | 0
Reranking individuals: The effect of fair classification within-groups | – | 0
Don't Rank, Combine! Combining Machine Translation Hypotheses Using Quality Estimation | – | 0
Using Natural Language Inference to Improve Persona Extraction from Dialogue in a New Domain | – | 0
Zero-Shot Cross-Lingual Reranking with Large Language Models for Low-Resource Languages | – | 0
Efficient Title Reranker for Fast and Improved Knowledge-Intense NLP | – | 0
Training-free Zero-shot Composed Image Retrieval with Local Concept Reranking | – | 0
Code Search Debiasing: Improve Search Results beyond Overall Ranking Performance | – | 0
Take One Step at a Time to Know Incremental Utility of Demonstration: An Analysis on Reranking for Few-Shot In-Context Learning | – | 0
Aligning Neural Machine Translation Models: Human Feedback in Training and Inference | – | 0
On Elastic Language Models | – | 0
Samsung R&D Institute Philippines at WMT 2023 | – | 0
Affective and Dynamic Beam Search for Story Generation | Code | 0
Strong and Efficient Baselines for Open Domain Conversational Question Answering | – | 0
An Empirical Study of Translation Hypothesis Ensembling with Large Language Models | Code | 0
Alteration Detection of Tensor Dependence Structure via Sparsity-Exploited Reranking Algorithm | – | 0
Quality-Aware Translation Models: Efficient Generation and Quality Estimation in a Single Model | – | 0
Reranking for Natural Language Generation from Logical Forms: A Study based on Large Language Models | – | 0
MBR and QE Finetuning: Training-time Distillation of the Best and Most Expensive Decoding Methods | – | 0
Zero-shot Audio Topic Reranking using Large Language Models | – | 0
Reranking Passages with Coarse-to-Fine Neural Retriever Enhanced by List-Context Information | – | 0
Discrete Conditional Diffusion for Reranking in Recommendation | – | 0
Learning Evaluation Models from Large Language Models for Sequence Generation | Code | 0
Seasonality Based Reranking of E-commerce Autocomplete Using Natural Language Queries | – | 0
Lightweight reranking for language model generations | – | 0
Citations as Queries: Source Attribution Using Language Models as Rerankers | – | 0
Prompting Large Language Models for Zero-Shot Domain Adaptation in Speech Recognition | – | 0
How About Kind of Generating Hedges using End-to-End Neural Models? | Code | 0
T5-SR: A Unified Seq-to-Seq Decoding Strategy for Semantic Parsing | Code | 0
Towards Argument-Aware Abstractive Summarization of Long Legal Opinions with Summary Reranking | Code | 0
EEL: Efficiently Encoding Lattices for Reranking | Code | 0
Graph Exploration Matters: Improving both individual-level and system-level diversity in WeChat Feed Recommender | – | 0
Enhancing the Ranking Context of Dense Retrieval Methods through Reciprocal Nearest Neighbors | Code | 0
Bidirectional Transformer Reranker for Grammatical Error Correction | Code | 0
Accurate Knowledge Distillation with n-best Reranking | – | 0
Page 6 of 12

No leaderboard results yet.