
Off-policy evaluation

Off-Policy Evaluation (OPE), or offline evaluation in general, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
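
To make the setup concrete, below is a minimal sketch (not tied to any paper listed here) of the simplest OPE estimator for the contextual bandit case, inverse propensity scoring (IPS): logged rewards are reweighted by the ratio of the target policy's action probability to the logging policy's. The function name ips_estimate and the toy data are illustrative assumptions, not part of this page.

```python
import numpy as np

def ips_estimate(rewards, logging_probs, target_probs, clip=None):
    """Inverse propensity scoring (IPS) estimate of a target policy's value.

    rewards:       logged rewards r_i for the actions the logging policy took
    logging_probs: pi_0(a_i | x_i), logging policy's probability of each logged action
    target_probs:  pi(a_i | x_i), target (hypothetical) policy's probability of that action
    clip:          optional cap on importance weights, trading bias for lower variance
    """
    # Importance weights correct for the mismatch between the two policies.
    w = np.asarray(target_probs) / np.asarray(logging_probs)
    if clip is not None:
        w = np.minimum(w, clip)
    return float(np.mean(w * np.asarray(rewards)))

# Toy example: 5 logged interactions from a uniform logging policy over 2 actions.
rewards = [1.0, 0.0, 1.0, 0.0, 1.0]
logging_probs = [0.5, 0.5, 0.5, 0.5, 0.5]
target_probs = [0.9, 0.1, 0.9, 0.1, 0.9]  # target policy prefers the rewarded action
print(ips_estimate(rewards, logging_probs, target_probs))  # 1.08
```

IPS is unbiased when the logging propensities are known and nonzero wherever the target policy acts, but its variance grows with the policy mismatch; doubly robust and confidence-interval methods like several papers listed below aim to address exactly this.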

Papers

Showing 191-200 of 265 papers

Title | Status | Hype
Bootstrapping Fitted Q-Evaluation for Off-Policy Inference |  | 0
Bootstrapping with Models: Confidence Intervals for Off-Policy Evaluation |  | 0
CANDOR: Counterfactual ANnotated DOubly Robust Off-Policy Evaluation |  | 0
Causality and Batch Reinforcement Learning: Complementary Approaches To Planning In Unknown Domains |  | 0
Characterization of Efficient Influence Function for Off-Policy Evaluation Under Optimal Policies |  | 0
CoinDICE: Off-Policy Confidence Interval Estimation |  | 0
Combining Parametric and Nonparametric Models for Off-Policy Evaluation |  | 0
Concept-driven Off Policy Evaluation |  | 0
Confidence Interval for Off-Policy Evaluation from Dependent Samples via Bandit Algorithm: Approach from Standardized Martingales |  | 0
Confident Natural Policy Gradient for Local Planning in q_π-realizable Constrained MDPs |  | 0

Leaderboard

No leaderboard results yet.