
Off-policy evaluation

Off-policy Evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
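As a concrete illustration, below is a minimal sketch of one classic OPE estimator, inverse propensity scoring (IPS), on synthetic (context-free) bandit logs. The logging policy, target policy, reward model, and all variable names are illustrative assumptions, not the method of any paper listed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic logged data from a behavior (logging) policy pi_b over 3 actions.
# Each record logs: the chosen action, the probability pi_b assigned to it
# (the propensity), and the observed binary reward. All values are made up.
n = 10_000
n_actions = 3
pi_b = np.array([0.5, 0.3, 0.2])              # logging policy (assumed known)
actions = rng.choice(n_actions, size=n, p=pi_b)
true_mean_reward = np.array([0.2, 0.5, 0.8])  # unknown in practice
rewards = rng.binomial(1, true_mean_reward[actions])
logged_propensities = pi_b[actions]

# Hypothetical target (evaluation) policy pi_e we want to evaluate offline.
pi_e = np.array([0.1, 0.2, 0.7])

def ips_estimate(actions, rewards, logged_propensities, pi_e):
    """IPS: reweight each logged reward by pi_e(a) / pi_b(a)."""
    weights = pi_e[actions] / logged_propensities
    return float(np.mean(weights * rewards))

print("IPS estimate of V(pi_e):", ips_estimate(actions, rewards,
                                               logged_propensities, pi_e))
print("Ground-truth V(pi_e):   ", float(pi_e @ true_mean_reward))
```

IPS is unbiased when the logged propensities are correct and the logging policy covers every action the target policy can take, but its variance grows with the mismatch between the two policies; many of the papers below study estimators that trade a little bias for lower variance.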

Papers

Showing 41–50 of 265 papers

Title | Status | Hype
Characterization of Efficient Influence Function for Off-Policy Evaluation Under Optimal Policies | | 0
Concept-driven Off Policy Evaluation | | 0
Confidence Interval for Off-Policy Evaluation from Dependent Samples via Bandit Algorithm: Approach from Standardized Martingales | | 0
Confident Natural Policy Gradient for Local Planning in q_π-realizable Constrained MDPs | | 0
Automated Off-Policy Estimator Selection via Supervised Learning | | 0
Conformal Off-Policy Evaluation in Markov Decision Processes | | 0
Asymptotically Efficient Off-Policy Evaluation for Tabular Reinforcement Learning | | 0
Conformal Off-Policy Prediction in Contextual Bandits | | 0
Confounding-Robust Policy Evaluation in Infinite-Horizon Reinforcement Learning | | 0
Causality and Batch Reinforcement Learning: Complementary Approaches To Planning In Unknown Domains | | 0
Page 5 of 27
