
Off-policy evaluation

Off-Policy Evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
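
As a concrete illustration of the kind of estimator the papers below study, here is a minimal sketch of inverse propensity scoring (IPS), a standard OPE baseline in the contextual-bandit setting. IPS reweights each logged reward by the ratio of the evaluation policy's probability of the logged action to the logging policy's, so the weighted average is an unbiased estimate of the evaluation policy's value (assuming the logging propensities are known and nonzero). The function name and toy numbers are illustrative, not taken from any paper listed here.

```python
import numpy as np

def ips_value_estimate(rewards, logging_probs, eval_probs):
    """IPS estimate of an evaluation policy's expected reward.

    rewards       -- observed rewards r_i from the logged data
    logging_probs -- pi_b(a_i | x_i), logging-policy propensities
    eval_probs    -- pi_e(a_i | x_i), evaluation-policy probabilities
    """
    weights = eval_probs / logging_probs  # importance weights
    return float(np.mean(weights * rewards))

# Toy log of five interactions (contexts omitted for brevity;
# all values are made up for illustration).
rewards = np.array([1.0, 0.0, 1.0, 0.0, 1.0])
logging_probs = np.array([0.5, 0.5, 0.5, 0.5, 0.5])  # behavior policy
eval_probs = np.array([0.8, 0.2, 0.8, 0.2, 0.8])     # hypothetical policy

print(ips_value_estimate(rewards, logging_probs, eval_probs))  # 0.96
```

Doubly robust variants, which several of the papers below build on, additionally use a learned reward model as a control variate to reduce the variance of these importance weights.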

Papers

Showing 221–230 of 265 papers

Title | Status | Hype
Interpretable Off-Policy Evaluation in Reinforcement Learning by Highlighting Influential Transitions | | 0
Minimax Value Interval for Off-Policy Evaluation and Policy Optimization | | 0
Safe Exploration for Optimizing Contextual Bandits | Code | 0
Asymptotically Efficient Off-Policy Evaluation for Tabular Reinforcement Learning | | 0
Double Reinforcement Learning for Efficient and Robust Off-Policy Evaluation | | 0
Accountable Off-Policy Evaluation via a Kernelized Bellman Statistics | | 0
More Efficient Off-Policy Evaluation through Regularized Targeted Learning | | 0
Triply Robust Off-Policy Evaluation | | 0
Minimax Weight and Q-Function Learning for Off-Policy Evaluation | | 0
From Importance Sampling to Doubly Robust Policy Gradient | Code | 0
Page 23 of 27

Leaderboard

No leaderboard results yet.