
Off-policy evaluation

Off-policy evaluation (OPE), or offline evaluation more generally, estimates the performance of a hypothetical (target) policy using only logged data collected under a different (logging) policy. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
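For context, the simplest OPE approach is the inverse propensity scoring (IPS) estimator, which reweights logged rewards by the ratio of target-policy to logging-policy action probabilities. Below is a minimal sketch on synthetic bandit logs; the data-generating parameters and function names are illustrative assumptions, not taken from any paper listed here.

```python
import numpy as np

def ips_estimate(rewards, logging_propensities, target_probs):
    """IPS estimate of a target policy's value from logged bandit data.

    rewards: observed rewards under the logging policy
    logging_propensities: probability the logging policy gave each logged action
    target_probs: probability the target policy would give each logged action
    """
    weights = target_probs / logging_propensities  # importance weights
    return np.mean(weights * rewards)

# Hypothetical synthetic logs: two actions, logging policy plays action 1
# with probability 0.7; expected reward is 0.5 for action 1, 0.2 for action 0.
rng = np.random.default_rng(0)
n = 10_000
actions = (rng.random(n) < 0.7).astype(int)
propensities = np.where(actions == 1, 0.7, 0.3)
rewards = rng.binomial(1, np.where(actions == 1, 0.5, 0.2))

# Hypothetical target policy: plays action 1 with probability 0.9.
target_probs = np.where(actions == 1, 0.9, 0.1)

print(ips_estimate(rewards, propensities, target_probs))
```

Under these assumed parameters the true target-policy value is 0.9 * 0.5 + 0.1 * 0.2 = 0.47, so the printed estimate can be sanity-checked against it; IPS is unbiased but its variance grows as the target and logging policies diverge.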

Papers

Showing 141–150 of 265 papers

Title | Status | Hype
Safe Evaluation For Offline Learning: Are We Ready To Deploy? | – | 0
Sample Complexity of Nonparametric Off-Policy Evaluation on Low-Dimensional Manifolds using Deep Networks | – | 0
Sample Complexity of Preference-Based Nonparametric Off-Policy Evaluation with Deep Networks | – | 0
Scalable and Robust Self-Learning for Skill Routing in Large-Scale Conversational AI Systems | – | 0
Scalable and Safe Remediation of Defective Actions in Self-Learning Conversational Systems | – | 0
Scaling Marginalized Importance Sampling to High-Dimensional State-Spaces via State Abstraction | – | 0
Semi-gradient DICE for Offline Constrained Reinforcement Learning | – | 0
STEEL: Singularity-aware Reinforcement Learning | – | 0
Smoothed functional-based gradient algorithms for off-policy reinforcement learning: A non-asymptotic viewpoint | – | 0
Stabilizing Temporal Difference Learning via Implicit Stochastic Recursion | – | 0
Page 15 of 27

No leaderboard results yet.