
Off-policy evaluation

Off-Policy Evaluation (OPE), or offline evaluation more generally, estimates the performance of a hypothetical policy using only offline log data collected by some other policy. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
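For concreteness, below is a minimal sketch of the simplest OPE method, the inverse propensity scoring (IPS) estimator, which reweights logged rewards by the ratio of target-policy to behavior-policy action probabilities. The function name, variable names, and synthetic data are illustrative assumptions for this sketch, not drawn from any paper listed below.

```python
import numpy as np

rng = np.random.default_rng(0)

def ips_estimate(rewards, behavior_probs, target_probs):
    """IPS estimate of the target policy's expected reward.

    rewards        : logged rewards r_i
    behavior_probs : propensities pi_b(a_i | x_i) under the logging policy
    target_probs   : probabilities pi_e(a_i | x_i) under the policy to evaluate
    """
    weights = target_probs / behavior_probs  # importance weights
    return float(np.mean(weights * rewards))

# Synthetic log: a uniform behavior policy over 2 actions, action 1 pays more.
n = 10_000
actions = rng.integers(0, 2, size=n)
behavior_probs = np.full(n, 0.5)                             # logged propensities
rewards = rng.binomial(1, np.where(actions == 1, 0.7, 0.3))  # Bernoulli rewards

# Hypothetical target policy: always play action 1.
target_probs = (actions == 1).astype(float)

print(ips_estimate(rewards, behavior_probs, target_probs))   # approx. 0.7
```

IPS is unbiased when the logged propensities are known and nonzero wherever the target policy acts, but its variance grows with the mismatch between the two policies; several of the papers below (e.g., the variance-aware and curse-of-horizon entries) target exactly that trade-off.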

Papers

Showing 161–170 of 265 papers

Title | Status | Hype
Towards Automatic Evaluation of Dialog Systems: A Model-Free Off-Policy Evaluation Approach | | 0
Towards Robust Off-Policy Evaluation via Human Inputs | | 0
Triply Robust Off-Policy Evaluation | | 0
Unbiased Offline Evaluation for Learning to Rank with Business Rules | | 0
Understanding the Curse of Horizon in Off-Policy Evaluation via Conditional Importance Sampling | | 0
Variance-Aware Off-Policy Evaluation with Linear Function Approximation | | 0
Wasserstein Distributionally Robust Policy Evaluation and Learning for Contextual Bandits | | 0
Weighted model estimation for offline model-based reinforcement learning | | 0
Why Should I Trust You, Bellman? Evaluating the Bellman Objective with Off-Policy Data | | 0
Accelerating Offline Reinforcement Learning Application in Real-Time Bidding and Recommendation: Potential Use of Simulation | | 0
