
Off-policy evaluation

Off-policy evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
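
As a concrete illustration, below is a minimal sketch of one of the simplest OPE estimators, inverse propensity scoring (IPS): each logged reward is reweighted by the ratio of target-policy to logging-policy action probabilities. The bandit setup, policies, and reward model in the snippet are synthetic assumptions made for illustration; they are not taken from any of the papers listed here.

    import numpy as np

    # A minimal sketch of off-policy evaluation via inverse propensity
    # scoring (IPS). All data below is synthetic and the policies are
    # illustrative assumptions, not a real logged dataset.

    rng = np.random.default_rng(0)

    n_actions = 3
    n_logs = 10_000

    # Logging (behavior) policy: uniform over actions.
    pi_behavior = np.full(n_actions, 1.0 / n_actions)

    # Target policy we want to evaluate offline.
    pi_target = np.array([0.7, 0.2, 0.1])

    # Simulate logged data: actions drawn from the behavior policy,
    # binary rewards whose mean depends on the action taken.
    actions = rng.choice(n_actions, size=n_logs, p=pi_behavior)
    true_mean_reward = np.array([0.5, 0.3, 0.1])
    rewards = rng.binomial(1, true_mean_reward[actions])

    # IPS estimate: reweight each logged reward by the importance
    # weight pi_target(a) / pi_behavior(a).
    weights = pi_target[actions] / pi_behavior[actions]
    v_ips = np.mean(weights * rewards)

    # Ground-truth value of the target policy, computable here only
    # because the environment is synthetic.
    v_true = np.dot(pi_target, true_mean_reward)

    print(f"IPS estimate: {v_ips:.4f}  (true value: {v_true:.4f})")

IPS is unbiased when the logging propensities are known and nonzero wherever the target policy has support, but its variance grows with the importance weights; much of the literature listed on this page studies estimators that trade off this bias and variance.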

Papers

Showing 91–100 of 265 papers

- Efficient Evaluation of Natural Stochastic Policies in Offline Reinforcement Learning
- Efficiently Breaking the Curse of Horizon in Off-Policy Evaluation with Double Reinforcement Learning
- Efron-Stein PAC-Bayesian Inequalities
- Emphatic TD Bellman Operator is a Contraction
- Empowering Clinicians with Medical Decision Transformers: A Framework for Sepsis Treatment
- Enhancing Offline Model-Based RL via Active Model Selection: A Bayesian Optimization Perspective
- Generalized Emphatic Temporal Difference Learning: Bias-Variance Analysis
- Expected Sarsa(λ) with Control Variate for Variance Reduction
- Finite Sample Analysis of Minimax Offline Reinforcement Learning: Completeness, Fast Rates and First-Order Efficiency
- HOPE: Human-Centric Off-Policy Evaluation for E-Learning and Healthcare

Leaderboard

No leaderboard results yet.