
Off-policy evaluation

Off-policy evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
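
The papers listed below develop estimators and guarantees for this problem. As a concrete illustration only (not drawn from any specific paper here), the following is a minimal Python sketch of the classic inverse propensity scoring (IPS) estimator on logged bandit data; the function name `ips_estimate` and the toy behavior/target policies are illustrative assumptions.

```python
import numpy as np

def ips_estimate(rewards, behavior_probs, target_probs):
    """IPS estimate of a target policy's value from logged bandit data.

    Each logged interaction contributes its observed reward, the behavior
    policy's probability of the logged action, and the target policy's
    probability of that same action.
    """
    weights = target_probs / behavior_probs      # importance weights
    return float(np.mean(weights * rewards))     # reweighted average reward

# Toy log: a uniform behavior policy over two actions, where action 1
# yields reward 1 with probability 0.7 and action 0 with probability 0.3.
rng = np.random.default_rng(0)
n = 10_000
actions = rng.integers(0, 2, size=n)
behavior_probs = np.full(n, 0.5)
rewards = rng.binomial(1, np.where(actions == 1, 0.7, 0.3)).astype(float)

# Hypothetical target policy that always plays action 1; its true value is 0.7.
target_probs = (actions == 1).astype(float)
print(ips_estimate(rewards, behavior_probs, target_probs))  # roughly 0.7
```

IPS is unbiased when the logged propensities are known and every action the target policy takes is covered by the behavior policy, but its variance grows with the importance weights; reducing that variance motivates lines of work such as the marginalized importance sampling paper in the list below.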

Papers

Showing 91–100 of 265 papers

Title | Status | Hype
STEEL: Singularity-aware Reinforcement Learning | - | 0
Variational Latent Branching Model for Off-Policy Evaluation | Code | 0
Off-Policy Evaluation for Action-Dependent Non-Stationary Environments | Code | 0
Off-Policy Evaluation with Out-of-Sample Guarantees | Code | 0
Inference on Time Series Nonparametric Conditional Moment Restrictions Using General Sieves | - | 0
An Instrumental Variable Approach to Confounded Off-Policy Evaluation | - | 0
Quantile Off-Policy Evaluation via Deep Conditional Generative Learning | - | 0
Offline Reinforcement Learning for Human-Guided Human-Machine Interaction with Private Information | - | 0
Safe Evaluation For Offline Learning: Are We Ready To Deploy? | - | 0
Scaling Marginalized Importance Sampling to High-Dimensional State-Spaces via State Abstraction | - | 0
Page 10 of 27

Leaderboard

No leaderboard results yet.