
Off-policy evaluation

Off-policy evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
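As a rough illustration of the idea, below is a minimal Python sketch of the classic inverse propensity scoring (IPS) estimator, which reweights logged rewards by the ratio of evaluation-policy to logging-policy action probabilities. The function name, signature, and toy data are assumptions made for illustration, not the API of any library or paper listed below.

```python
import numpy as np

def ips_value(rewards, logging_propensities, eval_probs):
    """Inverse propensity scoring (IPS) estimate of an evaluation
    policy's value from logged bandit feedback.

    rewards              -- observed rewards of the logged actions
    logging_propensities -- probability the logging policy assigned
                            to each logged action
    eval_probs           -- probability the hypothetical (evaluation)
                            policy assigns to each logged action
    """
    # Importance weights correct for the mismatch between the
    # logging policy and the evaluation policy.
    weights = eval_probs / logging_propensities
    # Unbiased estimate of the evaluation policy's expected reward,
    # provided the logging propensities are correct and nonzero
    # wherever the evaluation policy has support.
    return float(np.mean(weights * rewards))


# Toy usage: three logged rounds of bandit feedback.
rewards = np.array([1.0, 0.0, 1.0])
logging_propensities = np.array([0.5, 0.25, 0.5])
eval_probs = np.array([0.9, 0.1, 0.8])
print(ips_value(rewards, logging_propensities, eval_probs))
```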

Papers

Showing 21–30 of 265 papers

| Title | Status | Hype |
| --- | --- | --- |
| Open Bandit Dataset and Pipeline: Towards Realistic and Reproducible Off-Policy Evaluation | Code | 1 |
| Counterfactual Evaluation of Slate Recommendations with Sequential Reward Interactions | Code | 1 |
| Off-Policy Evaluation and Learning for the Future under Non-Stationarity | | 0 |
| A Principled Path to Fitted Distributional Evaluation | | 0 |
| Semi-gradient DICE for Offline Constrained Reinforcement Learning | | 0 |
| STITCH-OPE: Trajectory Stitching with Guided Diffusion for Off-Policy Evaluation | | 0 |
| Characterization of Efficient Influence Function for Off-Policy Evaluation Under Optimal Policies | | 0 |
| Stabilizing Temporal Difference Learning via Implicit Stochastic Recursion | | 0 |
| DOLCE: Decomposing Off-Policy Evaluation/Learning into Lagged and Current Effects | Code | 0 |
| Off-Policy Evaluation for Sequential Persuasion Process with Unobserved Confounding | | 0 |

Leaderboard

No leaderboard results yet.