
Off-policy evaluation

Off-policy Evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
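
To make the setup concrete, below is a minimal sketch of inverse propensity scoring (IPS), one standard OPE estimator for logged bandit data. The synthetic data, array names, and uniform propensities are illustrative assumptions, not taken from any paper listed here.

```python
import numpy as np

# Synthetic logged data: for each round, the reward observed and the
# probability the logging policy assigned to the action it took.
rng = np.random.default_rng(0)
n = 10_000
propensities = rng.uniform(0.2, 0.8, size=n)          # pi_0(a_i | x_i)
rewards = rng.binomial(1, 0.5, size=n).astype(float)  # observed rewards

# Probability the *target* policy would have taken the same logged action.
target_probs = rng.uniform(0.2, 0.8, size=n)          # pi_e(a_i | x_i)

def ips_estimate(rewards, propensities, target_probs):
    """IPS estimate of the target policy's expected reward: reweight each
    logged reward by how much more (or less) likely the target policy is
    to take the logged action than the logging policy was."""
    weights = target_probs / propensities
    return float(np.mean(weights * rewards))

print(f"IPS value estimate: {ips_estimate(rewards, propensities, target_probs):.4f}")
```

IPS is unbiased when the logging propensities are correct and every action the target policy can take has nonzero logging probability (full support), but its variance grows as the two policies diverge; that tradeoff motivates the weighted, doubly robust, and density-ratio variants studied in several of the papers below.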

Papers

Showing 71–80 of 265 papers

| Title | Status | Hype |
|---|---|---|
| On the Curses of Future and History in Future-dependent Value Functions for Off-policy Evaluation | | 0 |
| Off-Policy Evaluation in Markov Decision Processes under Weak Distributional Overlap | | 0 |
| Off-Policy Evaluation of Slate Bandit Policies via Optimizing Abstraction | Code | 0 |
| Distributional Off-policy Evaluation with Bellman Residual Minimization | Code | 0 |
| Probabilistic Offline Policy Ranking with Approximate Bayesian Computation | | 0 |
| RoME: A Robust Mixed-Effects Bandit Algorithm for Optimizing Mobile Health Interventions | Code | 0 |
| Marginal Density Ratio for Off-Policy Evaluation in Contextual Bandits | Code | 0 |
| When is Off-Policy Evaluation (Reward Modeling) Useful in Contextual Bandits? A Data-Centric Perspective | Code | 0 |
| Unbiased Offline Evaluation for Learning to Rank with Business Rules | | 0 |
| Robust Offline Reinforcement Learning with Heavy-Tailed Rewards | Code | 0 |
