SOTAVerified

Off-policy evaluation

Off-policy Evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
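As a concrete illustration, below is a minimal sketch of inverse propensity scoring (IPS), one standard OPE estimator: it reweights logged rewards by the ratio of target-policy to logging-policy action probabilities. The function name and the synthetic logged data are assumptions for illustration, not any particular paper's implementation.

```python
import numpy as np

def ips_estimate(rewards, behavior_probs, target_probs):
    """Inverse propensity scoring (IPS) estimate of a target policy's value.

    rewards:        observed rewards from the logged interactions
    behavior_probs: probability the logging (behavior) policy assigned
                    to each logged action
    target_probs:   probability the hypothetical (target) policy assigns
                    to those same actions
    """
    # Importance weights correct for the mismatch between policies;
    # the estimate is unbiased when the logging policy has full support.
    weights = target_probs / behavior_probs
    return float(np.mean(weights * rewards))

# Hypothetical log of three interactions with binary rewards.
rewards = np.array([1.0, 0.0, 1.0])
behavior_probs = np.array([0.5, 0.25, 0.5])
target_probs = np.array([0.8, 0.10, 0.6])

print(ips_estimate(rewards, behavior_probs, target_probs))
```

The same log can thus score many candidate policies by swapping in different target_probs, which is what makes OPE attractive when online experiments are costly.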

Papers

Showing 161–170 of 265 papers

Title | Status | Hype
Towards Hyperparameter-free Policy Selection for Offline Reinforcement Learning | Code | 0
Off-Policy Evaluation in Partially Observed Markov Decision Processes under Sequential Ignorability | — | 0
Stateful Offline Contextual Policy Evaluation and Learning | — | 0
Why Should I Trust You, Bellman? Evaluating the Bellman Objective with Off-Policy Data | — | 0
A Spectral Approach to Off-Policy Evaluation for POMDPs | — | 0
Accelerating Offline Reinforcement Learning Application in Real-Time Bidding and Recommendation: Potential Use of Simulation | — | 0
Data-Driven Off-Policy Estimator Selection: An Application in User Marketing on An Online Content Delivery Service | — | 0
State Relevance for Off-Policy Evaluation | Code | 0
Debiasing Samples from Online Learning Using Bootstrap | — | 0
Online Learning for Recommendations at Grubhub | — | 0
Page 17 of 27

No leaderboard results yet.