
Off-policy evaluation

Off-Policy Evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
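
As a concrete illustration, below is a minimal sketch of the inverse propensity scoring (IPS) estimator, one of the standard OPE baselines for logged bandit feedback. The function name and data layout are hypothetical; it assumes the log records each round's reward together with the logging policy's propensity for the chosen action, and that the target policy's action probabilities can be computed on the same data.

```python
import numpy as np

def ips_estimate(rewards, logging_probs, target_probs):
    """Hypothetical IPS estimate of a target policy's value from logged data.

    Each logged round contributes its reward reweighted by the importance
    weight pi_e(a|x) / pi_0(a|x). The estimate is unbiased when the logging
    propensity is positive wherever the target policy has positive mass.
    """
    weights = target_probs / logging_probs  # importance weights
    return float(np.mean(weights * rewards))

# Toy usage: 5 rounds logged by a uniform policy over 2 actions.
rewards = np.array([1.0, 0.0, 1.0, 0.0, 1.0])
logging_probs = np.full(5, 0.5)                     # pi_0(a_i | x_i)
target_probs = np.array([0.9, 0.2, 0.2, 0.9, 0.2])  # pi_e(a_i | x_i)
print(ips_estimate(rewards, logging_probs, target_probs))  # 0.52
```

Much of the OPE literature listed below refines this basic reweighting idea, e.g. to reduce the high variance IPS suffers when importance weights are large.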

Papers

Showing 111–120 of 265 papers

Title | Status | Hype
Beyond the Return: Off-policy Function Estimation under User-specified Error-measuring Distributions | | 0
Large-scale Validation of Counterfactual Learning Methods: A Test-Bed | | 0
Limit Order Book Simulation and Trade Evaluation with K-Nearest-Neighbor Resampling | | 0
Infinite-Horizon Offline Reinforcement Learning with Linear Function Approximation: Curse of Dimensionality and Algorithm | | 0
Interpretable Off-Policy Evaluation in Reinforcement Learning by Highlighting Influential Transitions | | 0
IntOPE: Off-Policy Evaluation in the Presence of Interference | | 0
Bayesian Counterfactual Mean Embeddings and Off-Policy Evaluation | | 0
Off-Policy Evaluation and Learning from Logged Bandit Feedback: Error Reduction via Surrogate Policy | | 0
Deep Jump Q-Evaluation for Offline Policy Evaluation in Continuous Action Space | | 0
Debiasing Samples from Online Learning Using Bootstrap | | 0
Page 12 of 27

Leaderboard

No leaderboard results yet.