
Off-policy evaluation

Off-policy Evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
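To make the idea concrete, here is a minimal sketch (not from the original page) of one standard OPE estimator, Inverse Propensity Scoring (IPS), for contextual-bandit logs. IPS reweights each logged reward by how much more (or less) likely the evaluation policy is to take the logged action, which gives an unbiased value estimate when the logging propensities are known and nonzero. The function name and the toy numbers below are illustrative assumptions.

import numpy as np

def ips_estimate(rewards, logging_propensities, target_propensities):
    """Inverse Propensity Scoring (IPS) estimate of a target policy's value.

    rewards:               observed reward r_i for each logged interaction
    logging_propensities:  pi_0(a_i | x_i), the probability the logging
                           policy assigned to the action it actually took
    target_propensities:   pi_e(a_i | x_i), the probability the evaluation
                           policy assigns to that same logged action
    """
    weights = target_propensities / logging_propensities  # importance weights
    return float(np.mean(weights * rewards))

# Toy log of three interactions (illustrative values only).
rewards = np.array([1.0, 0.0, 1.0])
pi_0 = np.array([0.5, 0.5, 0.25])  # logging policy's propensities
pi_e = np.array([0.9, 0.1, 0.5])   # evaluation policy's propensities
print(ips_estimate(rewards, pi_0, pi_e))  # (1.8*1 + 0.2*0 + 2.0*1) / 3 ~= 1.27

Many of the papers listed below refine this basic recipe, e.g. by reducing the variance of the importance weights or relaxing the assumption that the logged data is unconfounded.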

Papers

Showing papers 51–60 of 265

An Instrumental Variable Approach to Confounded Off-Policy Evaluation
Asymptotically Efficient Off-Policy Evaluation for Tabular Reinforcement Learning
Counterfactual Analysis in Dynamic Latent State Models
Balancing Immediate Revenue and Future Off-Policy Evaluation in Coupon Allocation
Data Poisoning Attacks on Off-Policy Policy Evaluation Methods
Causality and Batch Reinforcement Learning: Complementary Approaches To Planning In Unknown Domains
Counterfactual Learning with General Data-generating Policies
Bayesian Counterfactual Mean Embeddings and Off-Policy Evaluation
A Spectral Approach to Off-Policy Evaluation for POMDPs
A Review of Off-Policy Evaluation in Reinforcement Learning
