SOTAVerified

Off-policy evaluation

Off-policy Evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
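A minimal sketch of one standard OPE estimator, inverse propensity scoring (IPS): each logged interaction's reward is reweighted by the ratio of the target policy's action probability to the logging policy's. The function name and the toy log below are illustrative assumptions, not drawn from any paper listed on this page.

```python
import numpy as np

def ips_estimate(rewards, logging_probs, target_probs):
    """Estimate the target policy's expected reward from logged data.

    rewards:        observed reward for each logged interaction
    logging_probs:  probability the logging policy assigned to the
                    action it actually took
    target_probs:   probability the target (hypothetical) policy
                    assigns to that same logged action
    """
    # Importance weight corrects for the mismatch between policies.
    weights = target_probs / logging_probs
    return np.mean(weights * rewards)

# Toy log: two actions, logging policy chose uniformly (prob 0.5).
actions = np.array([0, 1, 1, 0])
rewards = np.array([1.0, 0.0, 1.0, 0.0])
logging_probs = np.full(4, 0.5)
# Hypothetical target policy: prob 0.8 on action 1, 0.2 on action 0.
target_probs = np.where(actions == 1, 0.8, 0.2)

print(ips_estimate(rewards, logging_probs, target_probs))  # → 0.5
```

IPS is unbiased when the logging probabilities are known and nonzero for every action the target policy might take, but its variance grows with the policy mismatch; many of the papers listed below study variance-reduction and robustness refinements of this basic idea.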

Papers

Showing 221–230 of 265 papers

Title | Status | Hype
Off-Policy Evaluation with Out-of-Sample Guarantees | Code | 0
Two-way Deconfounder for Off-policy Evaluation in Causal Reinforcement Learning | Code | 0
Control Variates for Slate Off-Policy Evaluation | Code | 0
Robust Generalization despite Distribution Shift via Minimum Discriminating Information | Code | 0
Robust Offline Reinforcement Learning with Heavy-Tailed Rewards | Code | 0
State Relevance for Off-Policy Evaluation | Code | 0
Off-Policy Evaluation and Learning for External Validity under a Covariate Shift | Code | 0
Counterfactual Evaluation of Peer-Review Assignment Policies | Code | 0
A Multi-Agent Reinforcement Learning Framework for Off-Policy Evaluation in Two-sided Markets | Code | 0
Safe Exploration for Optimizing Contextual Bandits | Code | 0
Page 23 of 27

No leaderboard results yet.