
Off-policy evaluation

Off-policy Evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
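
As a concrete illustration, below is a minimal sketch of inverse propensity scoring (IPS), one of the standard OPE estimators for contextual bandits. The synthetic data, variable names, and the uniform logging policy are all illustrative assumptions, not taken from any specific paper or library on this page.

```python
# Minimal IPS sketch for off-policy evaluation of a contextual bandit policy.
# Everything here (data, policies, dimensions) is synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, k, d = 10_000, 5, 3          # logged samples, actions, context dimension

W = rng.normal(size=(d, k))     # hidden reward model: best action = argmax(x @ W)
X = rng.normal(size=(n, d))     # observed contexts

# Logging policy: uniform random over actions, so every propensity is 1/k.
logged_actions = rng.integers(0, k, size=n)
logging_prop = np.full(n, 1.0 / k)
# Reward of 1 when the logged action happens to be the truly best one.
rewards = (logged_actions == (X @ W).argmax(axis=1)).astype(float)

# Hypothetical target policy: greedy on the true scores
# (probability 1 on the argmax action, 0 elsewhere).
target_probs = np.zeros((n, k))
target_probs[np.arange(n), (X @ W).argmax(axis=1)] = 1.0

# IPS: reweight each logged reward by pi_target(a|x) / pi_logging(a|x).
weights = target_probs[np.arange(n), logged_actions] / logging_prop
ips_value = np.mean(weights * rewards)

print(f"IPS estimate of target policy value: {ips_value:.3f}")  # ~1.0 here
```

Because the logging policy is uniform, the IPS weight is k on exactly the samples where the logged action matches the target policy's choice and 0 elsewhere, so the estimate converges to the target policy's true value (1.0 in this toy setup) as the log grows.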

Papers

Showing 171–180 of 265 papers

Title | Status | Hype
Accountable Off-Policy Evaluation via a Kernelized Bellman Statistics | | 0
Accountable Off-Policy Evaluation With Kernel Bellman Statistics | | 0
Adaptive Trade-Offs in Off-Policy Learning | | 0
A maximum-entropy approach to off-policy evaluation in average-reward MDPs | | 0
An Instrumental Variable Approach to Confounded Off-Policy Evaluation | | 0
A Practical Guide of Off-Policy Evaluation for Bandit Problems | | 0
A Principled Path to Fitted Distributional Evaluation | | 0
A Review of Off-Policy Evaluation in Reinforcement Learning | | 0
A Spectral Approach to Off-Policy Evaluation for POMDPs | | 0
Asymptotically Efficient Off-Policy Evaluation for Tabular Reinforcement Learning | | 0
Page 18 of 27

No leaderboard results yet.