
Off-policy evaluation

Off-policy Evaluation (OPE), or offline evaluation in general, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes and expensive, such as precision medicine and recommender systems.
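As a concrete illustration, many of the methods listed below build on the basic inverse propensity scoring (IPS) estimator, which reweights logged rewards by the ratio of the target policy's action probability to the logging policy's. The following is a minimal sketch, not taken from any of these papers; the array names and toy numbers are illustrative assumptions.

```python
import numpy as np

def ips_estimate(rewards, behavior_probs, target_probs):
    """Inverse propensity scoring (IPS) estimate of a target policy's
    value from logged bandit feedback.

    rewards:        rewards observed for the logged actions
    behavior_probs: logging policy's probability of each logged action
    target_probs:   target policy's probability of the same actions
    """
    weights = target_probs / behavior_probs  # importance weights
    return float(np.mean(weights * rewards))

# Hypothetical log of five rounds: the observed reward plus both
# policies' probabilities of the action the logging policy took.
rewards = np.array([1.0, 0.0, 1.0, 0.0, 1.0])
behavior_probs = np.array([0.5, 0.5, 0.25, 0.25, 0.5])
target_probs = np.array([0.9, 0.1, 0.8, 0.2, 0.9])

print(ips_estimate(rewards, behavior_probs, target_probs))
```

The estimate is unbiased when the logged propensities are exact, but its variance grows with the importance weights; refinements in the papers below, such as doubly robust estimators, combine IPS with a learned reward model to reduce that variance.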

Papers

Showing 251–265 of 265 papers

Title | Status | Hype
--- | --- | ---
Offline Comparison of Ranking Functions using Randomized Data |  | 0
Efficient Counterfactual Learning from Bandit Feedback |  | 0
Off-Policy Evaluation and Learning from Logged Bandit Feedback: Error Reduction via Surrogate Policy |  | 0
Importance Sampling Policy Evaluation with an Estimated Behavior Policy | Code | 0
Counterfactual Mean Embeddings | Code | 0
More Robust Doubly Robust Off-policy Evaluation | Code | 0
Consistent On-Line Off-Policy Evaluation |  | 0
Optimal and Adaptive Off-policy Evaluation in Contextual Bandits | Code | 0
Large-scale Validation of Counterfactual Learning Methods: A Test-Bed |  | 0
Bootstrapping with Models: Confidence Intervals for Off-Policy Evaluation |  | 0
Off-policy evaluation for slate recommendation | Code | 0
Generalized Emphatic Temporal Difference Learning: Bias-Variance Analysis |  | 0
Emphatic TD Bellman Operator is a Contraction |  | 0
Off-policy evaluation for MDPs with unknown structure |  | 0
On Minimax Optimal Offline Policy Evaluation |  | 0

Leaderboard

No leaderboard results yet.