
Off-policy evaluation

Off-Policy Evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
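
As a concrete illustration (not drawn from any paper listed below), here is a minimal sketch of the inverse propensity scoring (IPS) estimator, one of the standard OPE estimators. It assumes the log contains, for each interaction, the observed reward, the logging policy's propensity for the logged action, and the target policy's propensity for that same action; all names and the synthetic data are illustrative.

```python
import numpy as np

def ips_estimate(rewards, logged_propensities, target_propensities):
    """IPS estimate of a target policy's value from logged bandit data.

    rewards: observed rewards r_i for the logged actions.
    logged_propensities: pi_0(a_i | x_i), the logging policy's probability
        of the action it actually took.
    target_propensities: pi(a_i | x_i), the evaluated policy's probability
        of that same logged action.
    """
    weights = target_propensities / logged_propensities  # importance weights
    return float(np.mean(weights * rewards))

# Toy usage with synthetic log data (purely illustrative values).
rng = np.random.default_rng(0)
n = 10_000
logged_p = rng.uniform(0.2, 0.8, size=n)   # logging policy propensities
target_p = rng.uniform(0.2, 0.8, size=n)   # target policy propensities
rewards = rng.binomial(1, 0.3, size=n).astype(float)
print(f"IPS value estimate: {ips_estimate(rewards, logged_p, target_p):.3f}")
```

IPS is unbiased when the logging propensities are known and the logging policy has full support over the target policy's actions, but its variance grows with the importance weights; clipped and doubly robust variants trade a small bias for lower variance.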

Papers

Showing 161–170 of 265 papers

Title | Status | Hype
Generalizing Off-Policy Evaluation From a Causal Perspective For Sequential Decision-Making |  | 0
HOPE: Human-Centric Off-Policy Evaluation for E-Learning and Healthcare |  | 0
Hybrid Value Estimation for Off-policy Evaluation and Offline Reinforcement Learning |  | 0
Hyperparameter Optimization Can Even be Harmful in Off-Policy Learning and How to Deal with It |  | 0
Inference on Time Series Nonparametric Conditional Moment Restrictions Using General Sieves |  | 0
Infinite-Horizon Offline Reinforcement Learning with Linear Function Approximation: Curse of Dimensionality and Algorithm |  | 0
Interpretable Off-Policy Evaluation in Reinforcement Learning by Highlighting Influential Transitions |  | 0
IntOPE: Off-Policy Evaluation in the Presence of Interference |  | 0
Large-scale Validation of Counterfactual Learning Methods: A Test-Bed |  | 0
Off-Policy Evaluation with Online Adaptation for Robot Exploration in Challenging Environments |  | 0

No leaderboard results yet.