
Off-policy evaluation

Off-policy Evaluation (OPE), or offline evaluation in general, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
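
To make the setting concrete, below is a minimal sketch of the classic inverse propensity scoring (IPS) estimator for contextual-bandit log data, together with its self-normalized variant. This is an illustration only, not the method of any particular paper listed below; it assumes the logging policy's action propensities were recorded alongside each reward, and the function names and example data are hypothetical.

import numpy as np

def ips_estimate(rewards, logging_propensities, target_propensities):
    # Importance weight w_i = pi_e(a_i | x_i) / pi_0(a_i | x_i) reweights each
    # logged reward so the average is unbiased for the target policy's value,
    # provided pi_0 puts positive probability on every action pi_e can take.
    weights = target_propensities / logging_propensities
    return float(np.mean(weights * rewards))

def snips_estimate(rewards, logging_propensities, target_propensities):
    # Self-normalized IPS divides by the sum of weights instead of n,
    # trading a small bias for a typically large reduction in variance.
    weights = target_propensities / logging_propensities
    return float(np.sum(weights * rewards) / np.sum(weights))

# Hypothetical logged bandit data: rewards plus both policies' propensities.
rewards = np.array([1.0, 0.0, 1.0, 0.0])
pi0 = np.array([0.5, 0.25, 0.5, 0.25])   # logging policy pi_0(a_i | x_i)
pie = np.array([0.8, 0.1, 0.8, 0.1])     # target policy pi_e(a_i | x_i)
print(ips_estimate(rewards, pi0, pie), snips_estimate(rewards, pi0, pie))

Many of the papers listed below build on this basic estimator, for example by combining it with a learned reward model (doubly robust estimation) or by addressing its variance blow-up over long horizons (the "curse of horizon").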

Papers

Showing 231–240 of 265 papers

Title | Status | Hype
Doubly Robust Bias Reduction in Infinite Horizon Off-Policy Estimation | – | 0
Adaptive Trade-Offs in Off-Policy Learning | – | 0
Understanding the Curse of Horizon in Off-Policy Evaluation via Conditional Importance Sampling | – | 0
Efficiently Breaking the Curse of Horizon in Off-Policy Evaluation with Double Reinforcement Learning | – | 0
Off-Policy Evaluation in Partially Observable Environments | – | 0
Efron-Stein PAC-Bayesian Inequalities | – | 0
Double Reinforcement Learning for Efficient Off-Policy Evaluation in Markov Decision Processes | Code | 0
Doubly robust off-policy evaluation with shrinkage | – | 0
Task Selection Policies for Multitask Learning | – | 0
Expected Sarsa(λ) with Control Variate for Variance Reduction | – | 0
Page 24 of 27

Leaderboard

No leaderboard results yet.