
Off-policy evaluation

Off-policy Evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
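For intuition, the simplest OPE estimator is inverse propensity scoring (IPS), which reweights logged rewards by the ratio between the evaluation policy's and the logging policy's action probabilities. Below is a minimal sketch of this idea; the function names (`ips_estimate`, `target_policy`) and the synthetic data are illustrative assumptions, not drawn from any paper listed on this page.

```python
import numpy as np

def ips_estimate(rewards, actions, contexts, logging_propensities, target_policy):
    """Inverse Propensity Scoring (IPS) estimate of a target policy's value
    from logged bandit feedback (context, action, reward, propensity)."""
    # Probability the target policy would have taken each logged action.
    target_probs = np.array([
        target_policy(x, a) for x, a in zip(contexts, actions)
    ])
    # Reweight each logged reward by the importance ratio pi_e(a|x) / pi_0(a|x).
    weights = target_probs / logging_propensities
    return np.mean(weights * rewards)

# Synthetic logs: 1000 rounds, two actions, uniform logging policy.
rng = np.random.default_rng(0)
contexts = rng.normal(size=(1000, 5))
actions = rng.integers(0, 2, size=1000)
rewards = (actions == (contexts[:, 0] > 0)).astype(float)
propensities = np.full(1000, 0.5)

# Evaluation policy: deterministically pick action 1.
value = ips_estimate(rewards, actions, contexts, propensities,
                     target_policy=lambda x, a: float(a == 1))
print(f"IPS value estimate: {value:.3f}")
```

IPS is unbiased when the logging propensities are known and nonzero for every action the target policy can take, but its variance grows with the mismatch between the two policies; doubly-robust estimators, such as those in the papers below, combine it with a reward model to reduce that variance.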

Papers

Showing 41–50 of 265 papers

Title | Status | Hype
Long-term Off-Policy Evaluation and Learning | Code | 0
Hyperparameter Optimization Can Even be Harmful in Off-Policy Learning and How to Deal with It | - | 0
Data Poisoning Attacks on Off-Policy Policy Evaluation Methods | - | 0
Methodology for Interpretable Reinforcement Learning for Optimizing Mechanical Ventilation | - | 0
Doubly-Robust Off-Policy Evaluation with Estimated Logging Policy | - | 0
Predictive Performance Comparison of Decision Policies Under Confounding | Code | 0
Efficient and Sharp Off-Policy Evaluation in Robust Markov Decision Processes | Code | 0
Cramming Contextual Bandits for On-policy Statistical Evaluation | - | 0
Bayesian Off-Policy Evaluation and Learning for Large Action Spaces | - | 0
On the Curses of Future and History in Future-dependent Value Functions for Off-policy Evaluation | - | 0
Page 5 of 27

No leaderboard results yet.