
Off-policy evaluation

Off-policy evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
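As a minimal illustrative sketch (not part of this catalog): the inverse propensity scoring (IPS) estimator is one of the standard OPE estimators, reweighting logged rewards by the ratio of target-policy to logging-policy action probabilities. The data and names below are hypothetical, assuming logged (reward, logging-propensity) pairs plus the target policy's propensities for the logged actions.

    import numpy as np

    def ips_estimate(rewards, logging_probs, target_probs):
        # Importance weights pi_e(a|x) / pi_b(a|x) for the logged actions.
        weights = target_probs / logging_probs
        # IPS value estimate: average of importance-weighted logged rewards.
        return float(np.mean(weights * rewards))

    # Hypothetical logs from 4 rounds of a two-action bandit.
    rewards       = np.array([1.0, 0.0, 1.0, 0.0])
    logging_probs = np.array([0.5, 0.5, 0.5, 0.5])  # behavior policy pi_b
    target_probs  = np.array([0.9, 0.1, 0.1, 0.9])  # evaluation policy pi_e
    print(ips_estimate(rewards, logging_probs, target_probs))  # 0.5

IPS is unbiased when the logging propensities are correct and cover every action the target policy can take; it is the building block for many of the doubly robust and debiased estimators studied in the papers below.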

Papers

Showing 211-220 of 265 papers

Title | Status | Hype
Efficient Evaluation of Natural Stochastic Policies in Offline Reinforcement Learning | - | 0
Causality and Batch Reinforcement Learning: Complementary Approaches To Planning In Unknown Domains | - | 0
Taylor Expansion Policy Optimization | - | 0
Batch Stationary Distribution Estimation | Code | 0
Off-Policy Evaluation and Learning for External Validity under a Covariate Shift | Code | 0
Minimax-Optimal Off-Policy Evaluation with Linear Function Approximation | - | 0
Debiased Off-Policy Evaluation for Recommendation Systems | - | 0
Adaptive Estimator Selection for Off-Policy Evaluation | Code | 0
Double/Debiased Machine Learning for Dynamic Treatment Effects via g-Estimation | - | 0
Confounding-Robust Policy Evaluation in Infinite-Horizon Reinforcement Learning | - | 0

No leaderboard results yet.