
Off-policy evaluation

Off-Policy Evaluation (OPE), or offline evaluation more generally, estimates the performance of a hypothetical policy using only logged offline data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
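One of the simplest OPE estimators is inverse propensity scoring (IPS), which re-weights each logged reward by the ratio of the target policy's probability of the logged action to the logging policy's probability. Below is a minimal sketch in Python; the function name `ips_value_estimate` and the toy bandit log are illustrative assumptions, not from any particular library or paper on this page.

```python
import numpy as np

def ips_value_estimate(rewards, pi_e_probs, pi_b_probs):
    """Inverse propensity scoring (IPS) estimate of a target policy's value.

    rewards    : r_i, rewards observed in the log
    pi_e_probs : pi_e(a_i | x_i), target-policy probability of each logged action
    pi_b_probs : pi_b(a_i | x_i), logging-policy probability of each logged action
    """
    # Importance weights correct for the mismatch between the logging
    # policy that generated the data and the policy being evaluated.
    weights = pi_e_probs / pi_b_probs
    return float(np.mean(weights * rewards))

# Toy log: 5 rounds of a two-armed bandit collected under a uniform logging policy.
rewards    = np.array([1.0, 0.0, 1.0, 0.0, 1.0])
pi_b_probs = np.full(5, 0.5)                      # logging policy: uniform over 2 arms
pi_e_probs = np.array([0.9, 0.1, 0.9, 0.1, 0.9])  # target policy's prob. of each logged arm

print(ips_value_estimate(rewards, pi_e_probs, pi_b_probs))
```

The estimate is unbiased when the logging policy assigns nonzero probability to every action the target policy can take; in practice the weights can have high variance, which motivates the weighted, doubly robust, and minimax variants studied in many of the papers listed below.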

Papers

Showing 101–110 of 265 papers

| Title | Status | Hype |
| --- | --- | --- |
| Bayesian Counterfactual Mean Embeddings and Off-Policy Evaluation |  | 0 |
| Adaptive Trade-Offs in Off-Policy Learning |  | 0 |
| Counterfactual Analysis in Dynamic Latent State Models |  | 0 |
| Balancing Immediate Revenue and Future Off-Policy Evaluation in Coupon Allocation |  | 0 |
| Generalizing Off-Policy Evaluation From a Causal Perspective For Sequential Decision-Making |  | 0 |
| Generalized Emphatic Temporal Difference Learning: Bias-Variance Analysis |  | 0 |
| Balanced off-policy evaluation in general action spaces |  | 0 |
| HOPE: Human-Centric Off-Policy Evaluation for E-Learning and Healthcare |  | 0 |
| Consistent On-Line Off-Policy Evaluation |  | 0 |
| Finite Sample Analysis of Minimax Offline Reinforcement Learning: Completeness, Fast Rates and First-Order Efficiency |  | 0 |

Leaderboard

No leaderboard results yet.