
Off-policy evaluation

Off-policy evaluation (OPE), or offline evaluation in general, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
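
As a concrete illustration of the idea (not tied to any specific paper listed below), here is a minimal sketch of the classic inverse propensity scoring (IPS) estimator for contextual bandits; the function name `ips_estimate` and the toy data are our own, chosen for illustration only.

```python
import numpy as np

def ips_estimate(rewards, logging_probs, target_probs):
    """Inverse propensity scoring (IPS) estimate of a target policy's value.

    rewards        -- observed rewards for the logged actions
    logging_probs  -- probabilities the logging policy gave those actions
    target_probs   -- probabilities the target policy gives the same actions
    """
    rewards = np.asarray(rewards, dtype=float)
    weights = np.asarray(target_probs, dtype=float) / np.asarray(logging_probs, dtype=float)
    # Reweight each logged reward by how much more (or less) likely the
    # target policy is to take the logged action than the logging policy was.
    return float(np.mean(weights * rewards))

# Toy log: three rounds of a two-armed bandit, logged under a uniform policy.
rewards       = [1.0, 0.0, 1.0]   # observed rewards
logging_probs = [0.5, 0.5, 0.5]   # uniform logging policy
target_probs  = [0.9, 0.1, 0.9]   # target policy favors the rewarded action

print(ips_estimate(rewards, logging_probs, target_probs))  # 1.2
```

IPS is unbiased when the logging probabilities are known and nonzero for every action the target policy can take, but its variance grows with the mismatch between the two policies; much of the literature below (doubly robust, variance-reduced, and state-based importance sampling estimators) addresses exactly this trade-off.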

Papers

Showing 101–110 of 265 papers

Title | Status | Hype
A Review of Off-Policy Evaluation in Reinforcement Learning | - | 0
Doubly Robust Kernel Statistics for Testing Distributional Treatment Effects | Code | 0
Low Variance Off-policy Evaluation with State-based Importance Sampling | Code | 0
Counterfactual Learning with General Data-generating Policies | - | 0
Offline Policy Evaluation and Optimization under Confounding | - | 0
Policy-Adaptive Estimator Selection for Off-Policy Evaluation | Code | 0
Counterfactual Learning with Multioutput Deep Kernels | Code | 0
Bayesian Counterfactual Mean Embeddings and Off-Policy Evaluation | - | 0
Beyond the Return: Off-policy Function Estimation under User-specified Error-measuring Distributions | - | 0
Local Metric Learning for Off-Policy Evaluation in Contextual Bandits with Continuous Actions | Code | 0
