
Off-policy evaluation

Off-policy evaluation (OPE), or offline evaluation more generally, estimates the performance of a hypothetical (target) policy using only log data collected under a different (logging) policy, with no new online interaction. It is particularly useful in applications where online experimentation is high-stakes or expensive, such as precision medicine and recommender systems.
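
As a concrete illustration, the most basic OPE estimator is inverse propensity scoring (IPS), which reweights each logged reward by the ratio of the target policy's action probability to the logging policy's. The sketch below is a minimal, illustrative example assuming a contextual-bandit log; the function name `ips_estimate`, its inputs, and all numbers are hypothetical and not taken from this page or any listed paper.

```python
import numpy as np

def ips_estimate(rewards, logging_probs, target_probs):
    """Inverse propensity scoring (IPS) value estimate for a target policy.

    rewards       -- rewards observed in the log (one per logged interaction)
    logging_probs -- probability the logging policy assigned to each logged action
    target_probs  -- probability the target policy assigns to the same action
                     in the same context
    """
    rewards = np.asarray(rewards, dtype=float)
    # Importance weights correct for the mismatch between the two policies.
    weights = np.asarray(target_probs, dtype=float) / np.asarray(logging_probs, dtype=float)
    return float(np.mean(weights * rewards))

# Toy usage: a uniform logging policy over two actions and a target policy
# that strongly prefers the first action. All values are made up.
rewards       = [1.0, 0.0, 1.0, 0.0]
logging_probs = [0.5, 0.5, 0.5, 0.5]   # P(logged action | context) under logging policy
target_probs  = [0.9, 0.1, 0.9, 0.1]   # P(same action | context) under target policy
print(ips_estimate(rewards, logging_probs, target_probs))
```

IPS is unbiased when the logging probabilities are known and nonzero wherever the target policy acts, but its variance grows with the policy mismatch; many of the papers listed below study estimators that trade off this bias and variance.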

Papers

Showing 141–150 of 265 papers

| Title | Status | Hype |
| --- | --- | --- |
| Counterfactual Analysis in Dynamic Latent State Models |  | 0 |
| Scalable and Robust Self-Learning for Skill Routing in Large-Scale Conversational AI Systems |  | 0 |
| Off-Policy Evaluation with Online Adaptation for Robot Exploration in Challenging Environments |  | 0 |
| Model-Free and Model-Based Policy Evaluation when Causality is Uncertain | Code | 0 |
| Marginalized Operators for Off-policy Reinforcement Learning |  | 0 |
| Bellman Residual Orthogonalization for Offline Reinforcement Learning |  | 0 |
| Off-Policy Evaluation in Embedded Spaces |  | 0 |
| Off-Policy Evaluation with Policy-Dependent Optimization Response |  | 0 |
| A Multi-Agent Reinforcement Learning Framework for Off-Policy Evaluation in Two-sided Markets | Code | 0 |
| Off-Policy Fitted Q-Evaluation with Differentiable Function Approximators: Z-Estimation and Inference Theory |  | 0 |

Leaderboard

No leaderboard results yet.