
Off-policy evaluation

Off-policy Evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
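
As a concrete illustration, the simplest OPE estimator is inverse propensity scoring (IPS): logged rewards are reweighted by the ratio of the evaluation policy's action probabilities to the logging policy's. The sketch below is a minimal Python example; the function and variable names are illustrative, not taken from any specific library.

```python
import numpy as np

def ips_estimate(rewards, logging_probs, eval_probs):
    """Inverse propensity scoring (IPS) estimate of an evaluation
    policy's value from logs collected by a different logging policy.

    rewards:       observed rewards for the logged actions
    logging_probs: pi_0(a_i | x_i), probability the logging policy
                   assigned to each logged action
    eval_probs:    pi_e(a_i | x_i), probability the evaluation policy
                   assigns to the same logged actions
    """
    weights = eval_probs / logging_probs   # importance weights
    # Unbiased for the evaluation policy's value when the logging
    # policy has full support over the evaluation policy's actions.
    return np.mean(weights * rewards)

# Toy usage (hypothetical data): logs from a uniform-random logging
# policy over two actions, where action 1 yields reward 1 w.p. 0.7
# and action 0 yields reward 1 w.p. 0.3.
rng = np.random.default_rng(0)
n = 10_000
actions = rng.integers(0, 2, size=n)
rewards = rng.binomial(1, np.where(actions == 1, 0.7, 0.3))
logging_probs = np.full(n, 0.5)                # uniform logging policy
eval_probs = np.where(actions == 1, 0.9, 0.1)  # eval policy favors action 1
print(ips_estimate(rewards, logging_probs, eval_probs))
# ~0.66, matching the true value 0.9 * 0.7 + 0.1 * 0.3 of the
# evaluation policy, estimated without ever running it online.
```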

Papers

Showing 51–60 of 265 papers

Title | Status | Hype
Balancing Immediate Revenue and Future Off-Policy Evaluation in Coupon Allocation | - | 0
Off-policy Evaluation with Deeply-abstracted States | Code | 0
Automated Off-Policy Estimator Selection via Supervised Learning | - | 0
Confident Natural Policy Gradient for Local Planning in q_π-realizable Constrained MDPs | - | 0
Off-Policy Evaluation from Logged Human Feedback | - | 0
RL in Latent MDPs is Tractable: Online Guarantees via Off-Policy Evaluation | - | 0
A Fast Convergence Theory for Offline Decision Making | - | 0
Kernel Metric Learning for In-Sample Off-Policy Evaluation of Deterministic RL Policies | Code | 0
OPERA: Automatic Offline Policy Evaluation with Re-weighted Aggregates of Multiple Estimators | - | 0
Cross-Validated Off-Policy Evaluation | Code | 0
Page 6 of 27

No leaderboard results yet.