
Off-policy evaluation

Off-Policy Evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes and expensive, such as precision medicine and recommender systems.
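
To make the idea concrete, below is a minimal sketch of inverse propensity scoring (IPS), one standard OPE estimator for logged bandit feedback. The function name ips_value and the toy numbers are illustrative assumptions, not taken from any paper listed here.

```python
import numpy as np

def ips_value(rewards, behavior_probs, target_probs):
    """IPS estimate of a target policy's value from logged bandit data.

    rewards:        observed rewards for the logged actions
    behavior_probs: pi_b(a_i | x_i), probability the logging (behavior)
                    policy assigned to each logged action
    target_probs:   pi_e(a_i | x_i), probability the target policy
                    assigns to the same logged action
    """
    # Reweight each logged reward by how much more (or less) likely the
    # target policy was to take the logged action than the behavior policy.
    weights = target_probs / behavior_probs
    return float(np.mean(weights * rewards))

# Toy log: five rounds of bandit feedback (illustrative values only).
rewards = np.array([1.0, 0.0, 1.0, 1.0, 0.0])
behavior_probs = np.array([0.5, 0.5, 0.25, 0.5, 0.25])
target_probs = np.array([0.9, 0.1, 0.8, 0.9, 0.2])
print(ips_value(rewards, behavior_probs, target_probs))
```

Dividing by the sum of the weights instead of the sample size gives the self-normalized variant, which trades a small bias for lower variance.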

Papers

Showing 131-140 of 265 papers

Title | Status | Hype
Understanding the Curse of Horizon in Off-Policy Evaluation via Conditional Importance Sampling |  | 0
Variance-Aware Off-Policy Evaluation with Linear Function Approximation |  | 0
Wasserstein Distributionally Robust Policy Evaluation and Learning for Contextual Bandits |  | 0
Weighted model estimation for offline model-based reinforcement learning |  | 0
Why Should I Trust You, Bellman? Evaluating the Bellman Objective with Off-Policy Data |  | 0
Data-Driven Off-Policy Estimator Selection: An Application in User Marketing on An Online Content Delivery Service |  | 0
Data Poisoning Attacks on Off-Policy Policy Evaluation Methods |  | 0
Debiasing Samples from Online Learning Using Bootstrap |  | 0
Deep Jump Q-Evaluation for Offline Policy Evaluation in Continuous Action Space |  | 0
Page 14 of 27

No leaderboard results yet.